Category Archives: Cloud

Moving to the Cloud: 3 Data Integration Facts That Every Enterprise Should Understand

Cloud Data Integration

According to a survey conducted by Dimensional Research and commissioned by Host Analytics, “CIOs continue to grow more and more bullish about cloud solutions, with a whopping 92% saying that cloud provides business benefits, according to a recent survey. Nonetheless, IT execs remain concerned over how to avoid SaaS-based data silos.”

Since the survey was published, many enterprises have, indeed, leveraged the cloud to host business data in both IaaS and SaaS incarnations.  Overall, there seem to be two types of enterprises: First are the enterprises that get the value of data integration.  They leverage the value of cloud-based systems and do not create additional data silos.  Second are the enterprises that build cloud-based data silos without a sound data integration strategy, and thus take a few steps backward in terms of effectively leveraging enterprise data.

There are facts about data integration that most in enterprise IT don’t yet understand, and the use of cloud-based resources only raises the stakes.  The shame of it all is that, with a bit of work and some investment, the value should come back to the enterprise 10 to 20 times over.  Let’s consider the facts.

Fact 1: Implement new systems, such as those being stood up on public cloud platforms, with a sound data integration strategy, and the data integration investment comes back 10 to 20 fold.  When building a data integration strategy and investing in data integration technology, the focus is typically too much on the cost and not enough on the benefit.

Many in enterprise IT point out that their problem domain is unique, and thus their circumstances need special consideration.  While I always perform domain-specific calculations, the patterns of value typically remain the same.  You should determine the metrics that are right for your enterprise, but the positive values will be fairly consistent, with some varying degrees.

Fact 2: It’s not just about data moving from place to place; it’s also about the proper management of data.  This includes a central understanding of data semantics (metadata), and a place to manage a “single version of the truth” when dealing with the massive amounts of distributed data that enterprises must typically manage, data that is now also distributed within public clouds.

Most of those who manage enterprise data, cloud or no-cloud, have no common mechanism to deal with the meaning of the data, or even the physical location of the data.  While data integration is about moving data from place to place to support core business processes, it should come with a way to manage the data as well.  This means understanding, protecting, governing, and leveraging the enterprise data, both locally and within public cloud providers.

Fact 3: Some data belongs on clouds, and some data belongs in the enterprise.  Some in enterprise IT have pushed back on cloud computing, stating that data outside the firewall is a bad idea due to security, performance, legal issues…you name it.  Others try to move all data to the cloud.  The point of value is somewhere in between.

The fact of the matter is that the public cloud is not the right fit for all data.  Enterprise IT must carefully consider the tradeoffs between cloud-based and in-house deployments, including performance, security, compliance, etc.  Finding the best location for the data is the same problem we’ve dealt with for years; now we have cloud computing as an option.  Work from your requirements to the target platform, and you’ll find what I’ve found: Cloud is a fit some of the time, but not all of the time.



3 Ways to Simplify SAP Connectivity Protocols

Simplify SAP Connectivity Protocols

SAP’s Jam social platform has generated a great deal of buzz since its release last May, and for good reason. As detailed by Alan Lepofsky in his coverage for ZDNet, the new Jam reboot included the kinds of features, such as out-of-the-box integration, workflow templates, and simplified developer tools, that make both IT and business users very happy.

However, the complexity of SAP’s Business Suite (or ECC as it’s called) does not easily lend itself to integration with other applications. The code underlying it is built on a proprietary language called ABAP (a combination of COBOL and Open SQL with some object-oriented features) that requires specialized knowledge and skills not easily found outside of the SAP ecosystem. Up until now, the typical integration project required the involvement of a specialized SAP consultant to develop custom ABAP code or map complex BAPI/IDoc structures, as well as a BASIS administrator to transport the ABAP code from development to QA to production. The result was expensive and manually intensive. Integration projects took a few months or longer to complete and were not agile enough to handle ongoing requirements or even field changes.

Today, Informatica Cloud offers business a more innovative approach to SAP data extraction – ultimately, promoting agile development and enabling rapid deployment with the following three important features.

Automatically Generating ABAP Code

At the core of Informatica’s solution is the Cloud Connector for SAP. While the face of the Connector is a simple, wizard-based, drag-and-drop interface, under the hood it uses a Remote Function Call (RFC) to dynamically generate ABAP code (based on user choices) to connect with SAP and access the data through the application layer.
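The idea of turning wizard selections into generated extraction code can be sketched in a few lines. This is a hypothetical illustration only: the table, field names, and template are invented, and Informatica's actual ABAP generator is proprietary.

```python
# Hypothetical sketch: turn wizard-style user choices (a table and some
# fields) into a minimal ABAP extraction snippet, the kind of source an
# RFC could ship to the SAP application layer. Not Informatica's code.

ABAP_TEMPLATE = """\
DATA: lt_result TYPE TABLE OF {table}.

SELECT {fields}
  FROM {table}
  INTO CORRESPONDING FIELDS OF TABLE lt_result
  WHERE {where}.
"""

def generate_abap(table: str, fields: list, where: str = "1 = 1") -> str:
    """Fill the template with the user's selections from the wizard."""
    return ABAP_TEMPLATE.format(
        table=table.upper(),
        fields=" ".join(f.upper() for f in fields),
        where=where,
    )

if __name__ == "__main__":
    # KNA1 (customer master) is used here purely as a familiar example.
    print(generate_abap("kna1", ["kunnr", "name1"], where="land1 = 'US'"))
```

The point is that the user only ever touches the wizard; the code generation happens behind the scenes.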

Drag and Drop Design Palette

The wizard guides the user through the steps necessary to extract the data from SAP and send it to any application where it is needed. Using Informatica Cloud’s drag-and-drop design palette, one can simply choose SAP – like any other application endpoint – and select what is needed to connect to the target, without ever having to write specialized code.

Because of the dynamically generated ABAP code, SaaS application administrators trying to connect to SAP don’t have to deal with SAP transports (and the lengthy development cycle) for each extract, reducing the time for an individual project from months to weeks, as well as the load on the BASIS administrators. The increased agility enables end users to respond to business demands, such as related data extracts, field and feature changes, and additions, in near real time. Informatica reduces the load on both the admin and the server even further by eliminating the need for transports, sending the data in packets, and running the extracts in the background. And since no data is staged or buffered on the SAP server, there is never a risk of compromising the system’s or online users’ performance.
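The packet-based, no-staging behavior described above can be sketched with a simple streaming generator. This is an assumed illustration of the general technique, not Informatica's implementation: rows flow through in fixed-size packets, so memory use stays bounded and nothing accumulates on the source server.

```python
# Sketch (assumed behavior): stream extracted rows in fixed-size packets
# instead of staging the full result set, so at most one packet is ever
# held in memory at a time.
from typing import Iterable, Iterator, List

def packetize(rows: Iterable[dict], packet_size: int = 1000) -> Iterator[List[dict]]:
    """Yield rows in packets of at most `packet_size` records."""
    packet: List[dict] = []
    for row in rows:
        packet.append(row)
        if len(packet) == packet_size:
            yield packet
            packet = []
    if packet:
        yield packet  # final, possibly partial, packet

# Usage: a lazy source of 2,500 rows becomes three packets (1000/1000/500),
# each of which could be sent downstream before the next is built.
packets = list(packetize(({"id": i} for i in range(2500)), packet_size=1000))
```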

Speedy Development Through Vibe Integration Packages

Informatica’s solution also includes a technology bundle to speed up development time and reduce the user’s learning curve. The bundle, or Informatica Vibe integration package, consists of downloadable templates that help the user understand and use the complex SAP interfaces. Future roadmap releases will contain resources for additional SaaS endpoints.

In his review mentioned above in the opening, Lepofsky notes the importance of integration and the partner ecosystem to the Jam platform. The same can be said of the SAP Business Suite and the specialized LOB cloud apps that orbit it. Without ready and real-time access to SAP’s data, even the most feature-rich app is of little use to anyone. With Informatica’s Cloud Connector for SAP, business users, like Informatica customer Addivant, now have a simple and efficient way to solve the most pressing problems presented by SAP to cloud app integration.


Bulletproof Tips to Optimize Data Transformations for Salesforce

Optimize Data Transformations for Salesforce

In the journey from a single-purpose cloud CRM app to the behemoth that it is today, Salesforce has made many smart acquisitions. However, the recent purchase of RelateIQ may have just been its most ingenious. Although a relatively small startup, RelateIQ has gained a big reputation for its innovative use of data science and predictive analytics, which would be highly beneficial to Salesforce customers.

As the acquisition suggests, there is little doubt that the cloud application world is making a tectonic shift toward data science, and the appetite for piecing Big Data together to fuel the highly desired 360-degree view is only growing stronger.

But while looking ahead is certainly important, those of us who live in the present still have much work to accomplish in the here and now. For many, that means figuring out the best way to leverage data integration strategies and the Salesforce platform to gain actionable intelligence for our sales, marketing, and CRM projects today. Until recently, this has involved manual, IT-heavy processes.

We need look no further than three common use cases where typical data integration strategies and technologies fail today’s business users:

Automated Metadata Discovery
The first, and perhaps most frustrating, has to do with discovering related objects. For example, objects such as Accounts don’t exist in a vacuum. They have related objects, such as Contacts, that can provide the business user (be it a salesperson or a customer service manager) with context about the customer.

Now, during the typical data-integration process, these related objects are obscured from view because most integration technologies in the market today cannot automatically recognize the object metadata that could potentially relate all of the customer data together. The result is an incomplete view of the customer in Salesforce, and a lost opportunity to leverage the platform’s capability to strengthen a relationship or close a deal.

The Informatica Cloud platform is engineered from the ground up to be aware of the application ecosystem’s API and to understand its metadata. As a result, our mapping engine can automatically discover metadata and relate objects to one another. This automated metadata discovery gives business users the ability to discover, choose, and bring all of the related objects together into one mapping flow. Now, with just a few clicks, business users can quickly piece together relevant information in Salesforce and take the appropriate action.
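The relationship discovery described here can be illustrated with a toy example: given per-object metadata that lists reference fields (much as Salesforce's describe API returns), a simple graph walk finds every object related to a starting object. The object names and metadata shape below are hypothetical simplifications, not the actual Salesforce or Informatica data structures.

```python
# Toy illustration of automated metadata discovery: walk the reference
# graph between objects to surface everything related to a starting
# object, in either direction. Names and structure are hypothetical.

METADATA = {
    "Account":     {"references": []},
    "Contact":     {"references": ["Account"]},
    "Opportunity": {"references": ["Account", "Contact"]},
    "Case":        {"references": ["Account", "Contact"]},
}

def related_objects(start: str, metadata: dict) -> set:
    """Return all objects connected to `start` via reference fields."""
    related = set()
    stack = [start]
    while stack:
        obj = stack.pop()
        for name, meta in metadata.items():
            # Linked if either object references the other.
            linked = obj in meta["references"] or name in metadata[obj]["references"]
            if linked and name != start and name not in related:
                related.add(name)
                stack.append(name)
    return related

# Starting from Account, the walk surfaces Contact, Opportunity, and Case.
```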

Bulk Preparation of Data
The second instance where most data integration solutions typically fall short is with respect to the bulk preparation of data for analytic runs prior to the data transformation process. With the majority of Line-of-Business (LOB) applications now being run in the cloud, the typical business user has multiple terabytes of cloud data that either need to be warehoused on-premise or in the cloud, for a BI app to perform its analytics.

As a result, the best practice for bringing in data for analytics requires the ability to select and aggregate multiple data records from multiple data sources into a single batch, to speed up transformation and loading and – ultimately – intelligence gathering. Unfortunately, advanced transformations such as aggregations are simply beyond the capabilities of most other integration technologies. Instead, most bring the data in one record or message at a time, inhibiting speedy data loading and delivery of critical intelligence to business users when they need it most.

Alternatively, Informatica has leveraged its intellectual property in the market-leading data integration product, PowerCenter, and developed data aggregation transformation functionality within Informatica Cloud. This enables business users to pick a select group of important data points from multiple data sources – for example, country or dollar size of revenue opportunities – to aggregate and quickly process huge volumes in a single batch.
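A minimal sketch of what an aggregation transformation does is shown below. This is an assumed example, not PowerCenter's engine: records from multiple sources are rolled up by a chosen key in one pass, rather than being moved downstream one record at a time.

```python
# Sketch of an aggregation transformation (assumed example): combine
# records from multiple sources and sum a numeric field grouped by a key,
# producing one compact batch for loading instead of row-by-row traffic.
from collections import defaultdict

def aggregate(sources, key_field, amount_field):
    """Sum `amount_field` across all records from all sources, grouped
    by `key_field`."""
    totals = defaultdict(float)
    for source in sources:
        for record in source:
            totals[record[key_field]] += record[amount_field]
    return dict(totals)

# Hypothetical revenue-opportunity records from two source systems:
crm = [{"country": "US", "amount": 50000.0}, {"country": "DE", "amount": 20000.0}]
erp = [{"country": "US", "amount": 30000.0}]
# aggregate([crm, erp], "country", "amount") → {"US": 80000.0, "DE": 20000.0}
```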

In-App Business Accelerators
Similar to what we’ve experienced with the mobile iOS and Android platforms, there recently has been an explosion of new, single-purpose business applications being developed and released for the cloud. While the platform-specific mobile model is predicated on and has flourished because of the “build it and they will come” premise, the paradigm does not work in the same way for cloud-based applications that are trying to become a platform.

The reason for this is that, unlike with the iOS and Android platforms, in the cloud world business users have a main LOB application that they are familiar with, and they rely on packaged integrations to bring in data from other LOB cloud applications. However, with Salesforce extending its reach to become a cloud platform, the center of data gravity is shifting toward it being used for more than just CRM, and the success of this depends upon these packaged integrations. Up until now, these integrations have consisted of little more than sample code and have been incredibly complex (and IT-heavy) to build and very cumbersome to use. As a result, business users lack the agility to easily customize fields or workflows to match their unique business processes. Ultimately, the packages that were intended to accelerate business ended up inhibiting it.

Informatica Cloud’s Vibe Integration Packages (or VIPs) have made the promise of the integration package as a business accelerator into a reality, for all business users. Unlike sample code, VIPs are sophisticated templates that encapsulate the intelligence or logic of how you integrate the data between apps. While VIPs abstract complexity to give users out-of-the-box integration, their pre-built mapping also provides great flexibility. Now, with just a few keystrokes, business users can map custom fields or incorporate their unique business model into their Salesforce workflows.

A few paragraphs back, I began this discussion with the recent acquisition of RelateIQ by Salesforce. While we can make an educated guess as to what that will bring in the future, no one knows for sure. What we do know is that, at present, Salesforce’s completeness as a platform, and as a source of meaningful analytics, requires something in addition to run your relevant business applications and solutions through it. A complete iPaaS (Integration Platform as a Service) solution such as Informatica Cloud has the power to make that vision a reality. Informatica enables this through a metadata-rich platform for data discovery, industry-leading data and application integration capabilities, and business accelerators that put the power back in the hands of citizen application integrators and business users.

Join our webinar August 14 to learn more: Informatica Cloud Summer 2014: The Most Complete iPaaS


5 Ways Hybrid Integration Redefines Your Cloud Strategy

Hybrid Cloud

The mainstream use of SaaS applications as part of the cloud strategies in many enterprises continues to rise. Initially led by LOB IT (lines of business, apps IT), SaaS deployments now have central IT personnel (such as Integration Competency Centers) extensively involved. This shift stems from the need to develop strategies around hybrid application deployments: environments that include integrations between cloud and on-premise applications.

The entire breadth of cloud-to-cloud and cloud-to-ground integration scenarios necessitates interacting with publicly available APIs, cloud services, and internal web services. The end goal is to enable secure, consistent data access across enterprise apps, wherein any cloud or on-premise application is accessible through a tablet or smartphone in an intuitive, easy-to-use interface.

A key necessity for hybrid application deployments is the concept of “adaptive integration” within any integration platform-as-a-service (iPaaS). Any cloud service integration that claims to have iPaaS capabilities needs to have integration features that connect data, applications, and processes, as well as governance and API management functionality. The iPaaS must also run on a multi-tenant infrastructure and, when needed, be deployable on-premise.

You can learn more about adaptive integration, how the iPaaS impacts it, and hybrid application strategies in our recorded webinar, Enabling Hybrid Application Strategies through Cloud Service Integration, featuring Gartner Vice President and Fellow Massimo Pezzini and Informatica Senior Vice President of Data Integration Ash Kulkarni. Key topics covered include:

  • How SaaS adoption is driving the need for hybrid integration
  • Why the mobilization of the enterprise means a stricter criteria for an iPaaS
  • How Everton Football Club in the English Premier League gained major customer insights by using Informatica Cloud
  • What “Adaptive Integration” and the Internet of Things have in store for us

The State of Salesforce Report: Trends, Data, and What’s Next

A guest post by Jonathan Staley, Product Marketing Manager at Bluewolf Beyond, a consulting practice focused on innovating live cloud environments.

Guest blogger Jonathan Staley

Now in its third year (2012, 2013), The State of Salesforce Annual Review continues to be the most comprehensive report on the Salesforce ecosystem. Based on data from over 1,000 global Salesforce users, this report highlights how companies are using the Salesforce platform, where resources are being allocated, and where industry hype meets reality. Over the past three years, the report has evolved much like the technology, shifting and transforming to address recent advancements, as well as tracking longitudinal trends in the space.

We’ve found that key integration partners like Informatica Cloud continue to grow in importance within the Salesforce ecosystem. Beyond the core platform offerings from Salesforce, third-party apps and integration technologies have received considerable attention as companies look to extend the value of their initial investments and unite systems. Syncing multiple platforms and applications is an emerging need in the Salesforce ecosystem, one that will be highlighted in the 2014 report.

As Salesforce usage expands, so does our approach to survey execution. In line with this evolution, here’s what we’ve learned over the last three years from data collection:

Functions, Departments Make a Difference

Sales, Marketing, IT, and Service all have their own needs and pain points. As Salesforce moves quickly across the enterprise, we want to recognize the values, priorities, and investments of each department. Not only are the primary clouds for each function at different stages of maturity, but the ways in which each department uses its cloud are unique. We anticipate discovering how enterprises are collaborating across functions and clouds.

Focus on Region

As our international data set continues to grow, we are investing in regionalized reports for the US, UK, France, and Australia. While we saw indications of differences between regions in last year’s survey, they were not statistically significant.

Customer Engagement is a Top Priority

Everyone agrees that customer engagement is important, but what are companies actually doing about it? A section on predictive analytics and questions about engagement specific to departments has been included in this year’s survey. We suspect that the recent trend of companies empowering employees with a combination of data and mobile will be validated in the survey results.

Variation Across Industries

As an added bonus, we will build a report targeting specific insights from the Financial Services industry.

We Need Your Help

Our dataset depends on input from Salesforce users spanning all functions, roles, industries, and regions. Every response matters. Please take 15 minutes to share your Salesforce experiences, and you will receive a personalized report, comparing your responses to the aggregate survey results.



How Parallel Data Loading and Amazon Redshift Redefine Data Warehousing Performance

As Informatica Cloud product managers, we spend a lot of our time thinking about things like relational databases. Recently, we’ve been considering their limitations, and, specifically, how difficult and expensive it is to provision an on-premise data warehouse to handle the petabytes of fluid data generated by cloud applications and social media.  As a result, companies often have to make tradeoffs and decide which data is worth putting into their data warehouse.

Certainly, relational databases have enormous value. They’ve been around for several decades and have served as a bulwark for storing and analyzing structured data. Without them, we wouldn’t be able to extract and store data from on-premise CRM, ERP and HR applications and push it downstream for BI applications to consume.

With the advent of cloud applications and social media, however, we are now faced with managing a daily barrage of massive amounts of rapidly changing data, as well as the complexities of analyzing it within the same context as data from on-premise applications. Add to that the stream of data coming from Big Data sources such as Hadoop, which then needs to be organized into a structured format so that various correlation analyses can be run by BI applications, and you can begin to understand the enormity of the problem.

Up until now, the only solution has been to throw development resources at legacy on-premise databases, and hope for the best. But given the cost and complexity, this is clearly not a sustainable long-term strategy.

As an alternative, Amazon Redshift, a petabyte-scale data warehouse service in the cloud, has the right combination of performance and capabilities to handle the demands of social media and cloud app data, without the additional complexity or expense. Its Massively Parallel Processing (MPP) architecture allows for the lightning-fast loading and querying of data. It also features a larger block size, which reduces the number of I/O requests needed to load data and leads to better performance.

By combining Informatica Cloud with Amazon Redshift’s parallel loading architecture, you can make use of push-down optimization algorithms, which process data transformations in the most optimal source or target database engines. Informatica Cloud also offers native connectivity to cloud and social media apps, such as Salesforce, NetSuite, Workday, LinkedIn, and Twitter, to name a few, which makes it easy to funnel data from these apps into your Amazon Redshift cluster at faster speeds.
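The parallel-load pattern that Redshift's MPP architecture favors can be sketched as follows: split an extract into one file per cluster slice so a single COPY command can load all of them in parallel from S3. This is a hedged, self-contained illustration; the bucket, table, and IAM role names are made-up placeholders, and nothing here talks to AWS.

```python
# Sketch of parallel loading into Amazon Redshift: partition an extract
# into as many chunks as the cluster has slices, then issue one COPY
# with a key prefix so every slice loads a file concurrently.

def split_for_slices(rows, num_slices):
    """Partition rows round-robin into `num_slices` roughly equal chunks,
    one chunk per Redshift slice."""
    chunks = [[] for _ in range(num_slices)]
    for i, row in enumerate(rows):
        chunks[i % num_slices].append(row)
    return chunks

def copy_statement(table, bucket, prefix):
    """Build a COPY command; a key prefix matches all chunk files at once.
    The IAM role ARN below is a placeholder."""
    return (f"COPY {table} FROM 's3://{bucket}/{prefix}' "
            f"IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-load' "
            f"DELIMITER '|' GZIP;")

chunks = split_for_slices(range(10), 4)
# chunk sizes: [3, 3, 2, 2]
stmt = copy_statement("sales", "my-example-bucket", "extracts/sales/")
```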

If you’re at the Amazon Web Services Summit today in New York City, then you heard our announcement that Informatica Cloud is offering a free 60-day trial for Amazon Redshift with no limitations on the number of rows, jobs, application endpoints, or scheduling. If you’d like to learn more, please visit our Redshift Trial page or go directly to the trial.


6 Steps to Petabyte-Scale Cloud Data Warehousing with Amazon Redshift and Informatica Cloud

Getting started with Cloud Data Warehousing using Amazon Redshift is now easier than ever, thanks to Informatica Cloud’s 60-day trial for Amazon Redshift. Now, anyone can easily and quickly move data from any on-premise, cloud, Big Data, or relational data source into Amazon Redshift without writing a single line of code and without being a data integration expert. You can use Informatica Cloud’s six-step wizard to quickly replicate your data, or use the productivity-enhancing cloud integration designer to tackle more advanced use cases, such as combining multiple data sources into one Amazon Redshift table. Existing Informatica PowerCenter users can use Informatica Cloud and Amazon Redshift to extend an existing data warehouse through an affordable and scalable approach. If you are currently exploring self-service business intelligence solutions such as Birst, Tableau, or MicroStrategy, the combination of Redshift and Informatica Cloud makes it incredibly easy to prepare the data for analytics by any BI solution.

To get started, execute the following steps:

  1. Go to http://informaticacloud.com/cloud-trial-for-redshift and click on the ‘Sign Up Now’ link
  2. You’ll be taken to the Informatica Marketplace listing for the Amazon Redshift trial. Sign up for a Marketplace account if you don’t already have one, and then click on the ‘Start Free Trial Now’ button
  3. You’ll then be prompted to login with your Informatica Cloud account. If you do not have an Informatica Cloud username and password, register one by clicking the appropriate link and fill in the required details
  4. Once you finish registration and obtain your login details, download the Vibe™ Secure Agent to your Amazon EC2 virtual machine (or to a local Windows or Linux instance), and ensure that it can access your Amazon S3 bucket and Amazon Redshift cluster.
  5. Ensure that your S3 bucket and Redshift cluster are both in the same region
  6. To start using the Informatica Cloud connector for Amazon Redshift, create a connection to your Amazon Redshift nodes by providing your AWS Access Key ID and Secret Access Key, specifying your cluster details, and obtaining your JDBC URL string.
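Step 6 above needs cluster details and a JDBC URL string. This hedged sketch shows how such a URL is typically assembled from the cluster endpoint, port, and database name; the endpoint value below is a made-up placeholder, not a real cluster.

```python
# Minimal sketch: assemble an Amazon Redshift JDBC URL from cluster
# details. Redshift JDBC URLs use the jdbc:redshift:// scheme; 5439 is
# Redshift's default port. The endpoint here is a placeholder.

def redshift_jdbc_url(endpoint: str, port: int, database: str) -> str:
    """Build the JDBC URL string a Redshift connection asks for."""
    return f"jdbc:redshift://{endpoint}:{port}/{database}"

url = redshift_jdbc_url(
    "examplecluster.abc123xyz.us-east-1.redshift.amazonaws.com", 5439, "dev"
)
```

AWS Access Key ID and Secret Access Key values come from your AWS credentials and are entered separately in the connection dialog; never hard-code them in scripts.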

You are now ready to begin moving data to and from Amazon Redshift by creating your first Data Synchronization task (available under Applications). Pick a source, pick your Redshift target, map the fields, and you’re done!

The value of using Informatica Cloud to load data into Amazon Redshift is the application’s ability to move massive amounts of data in parallel.  The Informatica engine optimizes by moving processing close to where the data is, using push-down technology.  Unlike other data integration solutions for Redshift, which perform batch processing with an XML engine that is inherently slow at large data volumes and which lack multitenant architectures that scale well, Informatica Cloud processes over 2 billion transactions every day.

Amazon Redshift has brought agility, scalability, and affordability to petabyte-scale data warehousing, and Informatica Cloud has made it easy to transfer all your structured and unstructured data into Redshift so you can focus on getting data insights today, not weeks from now.
