
The State of Salesforce Report: Trends, Data, and What’s Next

A guest post by Jonathan Staley, Product Marketing Manager at Bluewolf Beyond, a consulting practice focused on innovating live cloud environments.


Now in its third year (2012, 2013), The State of Salesforce Annual Review continues to be the most comprehensive report on the Salesforce ecosystem. Based on data from over 1,000 global Salesforce users, this report highlights how companies are using the Salesforce platform, where resources are being allocated, and where industry hype meets reality. Over the past three years, the report has evolved much like the technology itself, shifting and transforming to address recent advancements as well as tracking longitudinal trends in the space.

We’ve found that key integration partners like Informatica Cloud continue to grow in importance within the Salesforce ecosystem. Beyond the core platform offerings from Salesforce, third-party apps and integration technologies have received considerable attention as companies look to extend the value of their initial investments and unite systems. Syncing multiple platforms and applications is an emerging need in the Salesforce ecosystem, and it will be highlighted in the 2014 report.

As Salesforce usage expands, so does our approach to survey execution. In line with this evolution, here’s what we’ve learned over the last three years from data collection:

Functions, Departments Make a Difference

Sales, Marketing, IT, and Service all have their own needs and pain points. As Salesforce moves quickly across the enterprise, we want to recognize the values, priorities, and investments of each department. Not only are the primary clouds for each function at different stages of maturity, but the ways in which each department uses its cloud are unique. We anticipate discovering how enterprises are collaborating across functions and clouds.

Focus on Region

As our international data set continues to grow, we are investing in regionalized reports for the US, UK, France, and Australia. While we saw indications of differences among regions in last year’s survey, they were not statistically significant.

Customer Engagement is a Top Priority

Everyone agrees that customer engagement is important, but what are companies actually doing about it? This year’s survey includes a section on predictive analytics and department-specific questions about engagement. We suspect the results will validate the recent trend of companies empowering employees with a combination of data and mobile.

Variation Across Industries

As an added bonus, we will build a report targeting specific insights from the Financial Services industry.

We Need Your Help

Our dataset depends on input from Salesforce users spanning all functions, roles, industries, and regions. Every response matters. Please take 15 minutes to share your Salesforce experiences, and you will receive a personalized report comparing your responses to the aggregate survey results.



6 Steps to Petabyte-Scale Cloud Data Warehousing with Amazon Redshift and Informatica Cloud

Getting started with cloud data warehousing using Amazon Redshift is now easier than ever, thanks to Informatica Cloud’s 60-day trial for Amazon Redshift. Now, anyone can easily and quickly move data from any on-premise, cloud, Big Data, or relational data source into Amazon Redshift without writing a single line of code and without being a data integration expert. You can use Informatica Cloud’s six-step wizard to quickly replicate your data, or use the productivity-enhancing cloud integration designer to tackle more advanced use cases, such as combining multiple data sources into one Amazon Redshift table. Existing Informatica PowerCenter users can use Informatica Cloud and Amazon Redshift to extend an existing data warehouse through an affordable and scalable approach. If you are currently exploring self-service business intelligence solutions such as Birst, Tableau, or MicroStrategy, the combination of Redshift and Informatica Cloud makes it incredibly easy to prepare the data for analytics by any BI solution.

To get started, execute the following steps:

  1. Go to http://informaticacloud.com/cloud-trial-for-redshift and click on the ‘Sign Up Now’ link.
  2. You’ll be taken to the Informatica Marketplace listing for the Amazon Redshift trial. Sign up for a Marketplace account if you don’t already have one, and then click on the ‘Start Free Trial Now’ button.
  3. You’ll then be prompted to log in with your Informatica Cloud account. If you do not have an Informatica Cloud username and password, register one by clicking the appropriate link and filling in the required details.
  4. Once you finish registration and obtain your login details, download the Vibe™ Secure Agent to your Amazon EC2 virtual machine (or to a local Windows or Linux instance), and ensure that it can access your Amazon S3 bucket and Amazon Redshift cluster.
  5. Ensure that your S3 bucket and Redshift cluster are both in the same region (see the sketch after this list).
  6. To start using the Informatica Cloud connector for Amazon Redshift, create a connection to your Amazon Redshift cluster by providing your AWS Access Key ID and Secret Access Key, specifying your cluster details, and obtaining your JDBC URL string.
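
If you want to verify step 5 programmatically before creating the connection, here is a minimal sketch using boto3. The bucket and cluster names are placeholders; this is not part of the Informatica trial itself, just an optional pre-flight check that also assembles the JDBC URL you’ll need in step 6.

    # Optional pre-flight check (hypothetical names): confirm the S3 bucket and
    # Redshift cluster share a region, and build the JDBC URL for step 6.
    import boto3

    BUCKET = "my-staging-bucket"        # placeholder: your S3 staging bucket
    CLUSTER_ID = "my-redshift-cluster"  # placeholder: your Redshift cluster ID

    # S3 reports None as the LocationConstraint for us-east-1
    s3 = boto3.client("s3")
    bucket_region = s3.get_bucket_location(Bucket=BUCKET)["LocationConstraint"] or "us-east-1"

    # Look up the cluster in the bucket's region; a region mismatch surfaces here
    # as a ClusterNotFound error, or below via the availability-zone prefix check.
    redshift = boto3.client("redshift", region_name=bucket_region)
    cluster = redshift.describe_clusters(ClusterIdentifier=CLUSTER_ID)["Clusters"][0]
    assert cluster["AvailabilityZone"].startswith(bucket_region), \
        "S3 bucket and Redshift cluster are in different regions"

    endpoint = cluster["Endpoint"]  # dict with 'Address' and 'Port'
    jdbc_url = "jdbc:redshift://{}:{}/{}".format(
        endpoint["Address"], endpoint["Port"], cluster["DBName"])
    print("JDBC URL for the Informatica Cloud connection:", jdbc_url)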

You are now ready to begin moving data to and from Amazon Redshift by creating your first Data Synchronization task (available under Applications). Pick a source, pick your Redshift target, map the fields, and you’re done!

The value of using Informatica Cloud to load data into Amazon Redshift is its ability to move massive amounts of data in parallel. The Informatica engine also optimizes performance by moving processing close to where the data resides, using push-down technology. Other data integration solutions for Redshift perform batch processing with an XML engine, which is inherently slow at large data volumes, and lack multitenant architectures that scale well; Informatica Cloud, by contrast, processes over 2 billion transactions every day.

Amazon Redshift has brought agility, scalability, and affordability to petabyte-scale data warehousing, and Informatica Cloud has made it easy to transfer all your structured and unstructured data into Redshift so you can focus on getting data insights today, not weeks from now.


Cloud Designer and Dynamic Rules Linking in Informatica Cloud Spring 2014

Informatica Cloud Spring 2014: Advanced Integration, Simplified for All!

Once upon a time, database schema changes were rare and handled with scrutiny. The stability of source data led to the traditional Data Integration model, in which a developer pulled a fixed number of source fields into an integration, transformed those fields, and then mapped the data into the appropriate target fields.

The world of data has profoundly changed. Today’s Cloud applications allow an administrator to add custom fields to an object at a moment’s notice. Because source data is increasingly malleable, the traditional Data Integration model is no longer optimal. The Data Integration model must evolve.

Today’s integrations must dynamically adapt to ever-changing environments.

To meet these demands, Informatica has built the Informatica Cloud Mapping Designer. The Mapping Designer provides power and adaptability to integrations through the “link rules” and “incoming field rules” features. Integration developers no longer need to deal with fields on a one-by-one basis. Cloud Designer allows the integration developer to specify a set of dynamic “rules” that tell the mapping how fields need to be handled.

For example, the default rule is “Include all fields”, which is both simple and powerful. The “all fields” rule dynamically resolves to bring in as many fields as exist at the source at run time. Regardless of how many new fields the application developer or database administrator may have thrown into the source after the integration was developed, this simple rule brings all the new fields into the integration dynamically. This dramatically increases developer productivity, as the integration developer is no longer making modifications just to keep up with changes to the integration endpoints. Instead, the integration is “future proofed”.

Link rules can be defined in combination using both “includes” and “excludes” criteria. The rules can be of four types:

  • Include or exclude all fields
  • Include or exclude fields of a particular datatype (for example: string, numeric, decimal, datetime, or blob)
  • Include or exclude fields that fit a name pattern (for example: any field that ends with “__c” or any field that starts with “Shipping_”)
  • Include or exclude fields by a particular name (for example: “Id” or “Name”)

Any combination of the link rules can be put together to create sophisticated dynamic rules for fields to flow.
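
To make the rule semantics concrete, here is a toy Python model of how include/exclude link rules might compose. This is purely illustrative: it is not Informatica’s actual API, and the field names and datatypes are hypothetical.

    # Toy model (not Informatica's API) of composable include/exclude link rules.
    import re

    def apply_link_rules(fields, rules):
        """fields: dict of field name -> datatype; rules: (action, kind, arg) tuples."""
        selected = set()
        for action, kind, arg in rules:
            if kind == "all":
                matched = set(fields)
            elif kind == "datatype":
                matched = {n for n, t in fields.items() if t == arg}
            elif kind == "pattern":
                matched = {n for n in fields if re.search(arg, n)}
            elif kind == "name":
                matched = {n for n in fields if n == arg}
            else:
                raise ValueError("unknown rule kind: " + kind)
            # Rules apply in order, so later rules refine earlier ones.
            selected = selected | matched if action == "include" else selected - matched
        return sorted(selected)

    # Fields as they exist at run time; newly added custom fields ("__c" suffix)
    # are picked up without anyone editing the mapping.
    source = {"Id": "string", "Name": "string", "CreatedDate": "datetime",
              "Region__c": "string", "Discount__c": "decimal"}

    # "Include all custom fields, but exclude Discount__c by name."
    rules = [("include", "pattern", r"__c$"), ("exclude", "name", "Discount__c")]
    print(apply_link_rules(source, rules))  # ['Region__c']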

Each transformation in the integration can specify the set of rules that determine which fields flow into that particular transformation. For example, if I need all custom fields from a Salesforce source to flow into a target, I would simply “Include fields by name pattern: suffixed with ‘__c’”, which is the naming convention for custom field names in Salesforce. In another example, if I need to standardize date formats for all datetime fields in an expression, I can define a rule to “Include fields by datatype: datetime”.

The dynamic nature of the link rules is what empowers a mapping created in Informatica Cloud Designer to be easily converted into a highly reusable integration template through parameterization.

For example, the entire source object can be parameterized so the integration developer can focus on the core integration logic without worrying about individual fields. I can build an integration that loads a slowly changing dimension table in a data warehouse, and that integration can apply to any source object: when it is executed with a different source object substituted for the source parameter, it still works as expected, because the link rules dynamically bring in the fields regardless of the source object’s structure. Suddenly, an integration developer needs to build only one reusable integration template for replicating multiple objects to the data warehouse, not dozens or even hundreds of repeated integration mappings. Needless to say, maintenance is hugely simplified.
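
Continuing the hypothetical model sketched above, the same rule set can be applied to any source object substituted at run time, which is what lets one mapping template serve many replication tasks:

    # Two hypothetical source objects with different shapes.
    accounts = {"Id": "string", "Name": "string", "Industry__c": "string"}
    orders = {"Id": "string", "Amount": "decimal", "ShipDate": "datetime",
              "Priority__c": "string"}

    rules = [("include", "all", None)]  # bring in every field, whatever the object
    for obj in (accounts, orders):
        print(apply_link_rules(obj, rules))
    # ['Id', 'Industry__c', 'Name']
    # ['Amount', 'Id', 'Priority__c', 'ShipDate']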

With the power of logically defining field propagation through an integration, combined with the ability to parameterize just about any part of the integration logic, the Cloud Mapping Designer provides a unique and powerful platform for developing reusable end-to-end integration solutions (such as Opportunity to Order, Accounts load to Salesforce, SAP product catalog to Salesforce, or File load to Amazon Redshift). Such prebuilt end-to-end solutions, or VIPs (Vibe Integration Packages), can be easily customized by any consuming customer to adapt to their unique environments and business needs by tweaking only certain configurations while largely reusing the core integration logic.

What could be better than building integrations? Building far fewer integrations that are reusable and self-adapting.

To learn more, join the upcoming Cloud Spring release Webinar on Thursday, March 13.


Are You Ready for the Massive Wave of Data?

Leo Eweani makes the case that the data tsunami is coming.  “Businesses are scrambling to respond and spending accordingly. Demand for data analysts is up by 92%; 25% of IT budgets are spent on the data integration projects required to access the value locked up in this data “ore” – it certainly seems that enterprise is doing The Right Thing – but is it?”

Data is exploding within most enterprises.  However, most enterprises have no clue how to manage this data effectively.  While you would think that an investment in data integration would be an area of focus, many enterprises don’t have a great track record in making data integration work.  “Scratch the surface, and it emerges that 83% of IT staff expect there to be no ROI at all on data integration projects and that they are notorious for being late, over-budget and incredibly risky.”


My core message: enterprises need to ‘up their game’ when it comes to data integration. This recommendation is based on the data growth we’ve already experienced and the growth we will experience in the near future. Indeed, a “data tsunami” is on the horizon, and most enterprises are ill-prepared for it.

So, how do you get prepared?  While many would say it’s all about buying anything and everything when it comes to big data technology, the best approach is to splurge on planning. This means defining exactly what data assets are in place now and will be in place in the future, and how they should or will be leveraged.

To face the forthcoming wave of data, certain planning aspects and questions about data integration rise to the top:

  • Performance, including data latency. How quickly does the data need to flow from point (or points) A to point (or points) B? As the volume of data quickly rises, the data integration engines have to keep up.
  • Data security and governance. How will the data be protected both at rest and in flight, and how will the data be managed in terms of controls on use and change?
  • Abstraction, and removing data complexity. How will the enterprise remap and repurpose key enterprise data that may not currently exist in a well-defined and functional structure?
  • Integration with cloud-based data. How will the enterprise link existing enterprise data assets with those that exist on remote cloud platforms?

While this may seem like a complex and risky process, if you think through the problems and leverage the right technology, you can remove the risk and complexity. The enterprises that fail at data integration tend to be the ones that don’t follow that advice.

I suspect the explosion of data will be the biggest challenge enterprise IT has faced in many years. While a few will take advantage of their data, most will struggle, at least initially. Which route will you take?


No, Cloud Won’t Chase the Data Analytics Gremlins Away

Hosting Big Data applications in the cloud has compelling advantages. Scale doesn’t become as overwhelming an issue as it is within on-premise systems. IT will no longer feel compelled to throw more disks at burgeoning storage requirements, and performance becomes the contractual obligation of someone else outside the organization.

Cloud may help clear up some of the costlier and thornier problems of attempting to manage Big Data environments, but it also creates some new issues. As Ron Exler of Saugatuck Technology recently pointed out in a new report, cloud-based solutions “can be quickly configured to address some big data business needs, enabling outsourcing and potentially faster implementations.” However, he adds, employing the cloud also brings some risks as well.

Data security is one major risk area, and I could write many posts on this. But management issues present other challenges as well. Too many organizations see cloud as a cure-all for their application and data management ills, but broken processes are never fixed when new technology is applied to them. There are also plenty of risks with the misappropriation of big data, and the cloud won’t make these risks go away. Exler lists some of the risks that stem from over-reliance on cloud technology, from the late delivery of business reports to the delivery of incorrect business information, resulting in decisions based on incorrect source data. Sound familiar? The gremlins that have haunted data analytics and management for years simply won’t disappear behind a cloud.

Exler makes three recommendations for moving big data into cloud environments – note that the solutions he proposes have nothing to do with technology, and everything to do with management:

1) Analyze the growth trajectory of your data and your business. Typically, organizations will have a lot of different moving parts and interfaces. And, as the business grows and changes, it will be constantly adding new data sources.  As Exler notes, “processing integration or hand off points in such piecemeal approaches represent high risk to data in the chain of possession – from collection points to raw data to data edits to data combination to data warehouse to analytics engine to viewing applications on multiple platforms.” Business growth and future requirements should be analyzed and modeled to make sure cloud engagements will be able “to provide adequate system performance, availability, and scalability to account for the projected business expansion,” he states.

2) Address data quality issues as close to the source as possible.  Because both cloud and big data environments have so many moving parts,  “finding the source of a data problem can be a significant challenge,” Exler warns. “Finding problems upstream in the data flow prevent time-consuming and expensive reprocessing that could be needed should errors be discovered downstream.” Such quality issues have a substantial business cost as well. When data errors are found, it becomes “an expensive company-wide fire drill to correct the data,” he says.

3) Build your project management, teamwork and communication skills. Big data and cloud projects involve so many people and components from across the enterprise that they require coordination and interaction between various specialists, subject matter experts, vendors, and outsourcing partners. “This coordination is not simple,” Exler warns. “Each group involved likely has different sets of terminology, work habits, communications methods, and documentation standards. Each group also has different priorities; oftentimes such new projects are delegated to lower priority for supporting groups.” Project managers must be leaders and understand the value of open and regular communications.


Data Integration Is Now A Business Problem – That’s Good

Since the advent of middleware technology in the mid-1990s, data integration has been primarily an IT-led technical problem. Business leaders had their hands full focusing on their individual silos and were happy to delegate the complex task of integrating enterprise data and creating one version of the truth to IT. The problem is that there is now too much data that is highly fragmented across myriad internal systems, customer/supplier systems, cloud applications, mobile devices and automatic sensors. Traditional IT-led approaches, whereby a project is launched involving dozens (or hundreds) of staff to address every new opportunity, are just too slow.


Logitech’s Story: Using Informatica to Overcome Three Salesforce Data Challenges

In recent blogs I’ve shared our customers’ stories from Dreamforce; in particular, how they are building a trusted customer information foundation for salesforce.com to achieve their business objectives.

Logitech uses Informatica PowerCenter, Cloud, Data Quality and MDM to Overcome Three Salesforce Data Challenges

In this blog, I’ll cover the presentation delivered by Steven Perelli-Minetti, Doctor of Business Administration, Logitech’s Senior Manager of Data Engineering.

Logitech is a $2.32B global company that sells consumer electronics to consumers and businesses in 100 countries. Their diverse but focused portfolio of products is sold via direct and indirect sales channels.

Steven explained that Logitech is an Oracle shop. They use Oracle e-Business Suite and Oracle BI. They were using Oracle OWB for data integration until they tried Informatica PowerCenter and realized it was a better fit for their data warehouse support and management.

1. Salesforce Challenge: Integrating Trade Promotion Data into an EDW
Logitech needed to support the business’ need for trade promotion analytics and reporting as well as IT’s need to keep the data safe and secure. They chose Informatica Cloud to replicate data from a custom trade promotion management (TPM) application on Force.com (PaaS), bring it into the Enterprise Data Warehouse (EDW), and combine it with the Soft Dollar Accruals data from Oracle e-Business Suite and POS data.

Steven said the results have been pretty good. They went live in one month. There is little maintenance. It’s subscription-based, which makes it cost-effective. And unlike other offerings, they can scale the solution across other systems.

2.  Salesforce Challenge: Integrating Sales Data into an EDW
It’s critical for Logitech to understand what’s going on in their China business, which is an important emerging market. They use Informatica Cloud to integrate China’s Salesforce data with the EDW so management can gain insight into market share at the store level, for example.

3.  Salesforce Challenge: Managing Trusted Product and Customer Data
Logitech needed to manage a single trusted version of product and customer data across on-premise systems, cloud applications and third party data providers. Now they use Informatica Data Quality and Informatica MDM to consolidate and manage business-critical data on an ongoing basis, which is used by on-premise systems and cloud applications such as Salesforce.

Join us for a live webinar on December 5: Logitech MDM Case Study: Seven Lessons for Mastering Product and Customer Data. Severin Stoll, Senior Business Engagement Manager of Global IT Solutions at Logitech will share seven lessons learned from their global master data management (MDM) implementation.

  • Product data: Logitech uses Informatica MDM to consolidate product data (images, descriptions, features, dimensions) from various sources. With this trusted product data foundation, Logitech can ensure consistency of product information across websites managed by third-party vendors in various languages across the globe, resulting in efficiencies for the marketing team.
  • Customer data: Logitech uses Informatica MDM to consolidate customer data from POS systems via thousands of data providers in many variations, languages and character sets. With this trusted customer data foundation, Logitech has a clearer picture of how products flow through channel partners to end-consumers.



Focus in 2013

For many of us, this time of year means budgeting. Every year is different, and budgeting for 2013 is no exception – so how do I maximize the return on investment for our shareholders?

What makes 2013 different? For many of us, the global economic climate and political change mean uncertainty. In most cases, this leads to cautious optimism and moderate budget increases at best. As I was scouring the IT Leadership Exchange for benchmark data, I found the data to be consistent.


How Migrating Customer Data from Legacy CRM to Salesforce is Like Moving to a New Home

After returning from Dreamforce, I learned that two of my friends were moving to new homes. They used different approaches to moving their belongings and had vastly different experiences once they moved in. This made me think of the different approaches companies use when migrating customer data from a legacy CRM system to Salesforce and the impact these approaches have on their user experience when they go live.

One friend, let’s call her “Jane,” is not the most organized person. Her goal was to get her family’s belongings into the new house as quickly as possible. Room by room, they packed all of their belongings into boxes and shipped them to their new home.



OppenheimerFunds Dreamforce Story: Lay a Foundation of Trusted and Complete Customer Information for Salesforce

Imagine you are rolling Salesforce out to more than 500 users today. What will be their first impression? Will they be annoyed when they encounter duplicate customer records during their first experience? Will they complain when they need to access other systems to get all the relevant customer information they need to do their job? How will that impact your goals for Salesforce adoption?
