Tag Archives: cloud
Now in its third year (2012, 2013), The State of Salesforce Annual Review continues to be the most comprehensive report on the Salesforce ecosystem. Based on data from over 1,000 global Salesforce users, this report highlights how companies are using the Salesforce platform, where resources are being allocated, and where industry hype meets reality. Over the past three years, the report has evolved much like the technology, shifting and transforming to address recent advancements as well as tracking longitudinal trends in the space.
We’ve found that key integration partners like Informatica Cloud continue to grow in importance within the Salesforce ecosystem. Beyond the core platform offerings from Salesforce, third-party apps and integration technologies have received considerable attention as companies look to extend the value of their initial investments and unite systems. The need to sync multiple platforms and applications is growing across the Salesforce ecosystem, a trend that will be highlighted in the 2014 report.
As Salesforce usage expands, so does our approach to survey execution. In line with this evolution, here’s what we’ve learned over the last three years from data collection:
Functions, Departments Make a Difference
Sales, Marketing, IT, and Service all have their own needs and pain points. As Salesforce moves quickly across the enterprise, we want to recognize the values, priorities, and investments of each department. Not only are the primary clouds for each function at different stages of maturity, but the ways in which each department uses its cloud are unique. We expect to discover how enterprises are collaborating across functions and clouds.
Focus on Region
As our international data set continues to grow, we are investing in regionalized reports for the US, UK, France, and Australia. While we saw indications of differences between regions in last year’s survey, they were not statistically significant.
Customer Engagement is a Top Priority
Everyone agrees that customer engagement is important, but what are companies actually doing about it? This year’s survey includes a section on predictive analytics and department-specific questions about engagement. We suspect the survey results will validate the recent trend of companies empowering employees with a combination of data and mobile.
Variation Across Industries
As an added bonus, we will build a report targeting specific insights from the Financial Services industry.
We Need Your Help
Our dataset depends on input from Salesforce users spanning all functions, roles, industries, and regions. Every response matters. Please take 15 minutes to share your Salesforce experiences, and you will receive a personalized report, comparing your responses to the aggregate survey results.
Getting started with Cloud Data Warehousing using Amazon Redshift is now easier than ever, thanks to Informatica Cloud’s 60-day trial for Amazon Redshift. Now, anyone can easily and quickly move data from any on-premises, cloud, Big Data, or relational data source into Amazon Redshift without writing a single line of code and without being a data integration expert. You can use Informatica Cloud’s six-step wizard to quickly replicate your data, or use the productivity-enhancing cloud integration designer to tackle more advanced use cases, such as combining multiple data sources into one Amazon Redshift table. Existing Informatica PowerCenter users can use Informatica Cloud and Amazon Redshift to extend an existing data warehouse through an affordable and scalable approach. If you are currently exploring self-service business intelligence solutions such as Birst, Tableau, or MicroStrategy, the combination of Redshift and Informatica Cloud makes it incredibly easy to prepare the data for analytics by any BI solution.
To get started, execute the following steps:
- Go to http://informaticacloud.com/cloud-trial-for-redshift and click on the ‘Sign Up Now’ link
- You’ll be taken to the Informatica Marketplace listing for the Amazon Redshift trial. Sign up for a Marketplace account if you don’t already have one, and then click on the ‘Start Free Trial Now’ button
- You’ll then be prompted to log in with your Informatica Cloud account. If you do not have an Informatica Cloud username and password, register by clicking the appropriate link and filling in the required details
- Once you finish registration and obtain your login details, download the Vibe™ Secure Agent to your Amazon EC2 virtual machine (or to a local Windows or Linux instance), and ensure that it can access your Amazon S3 bucket and Amazon Redshift cluster.
- Ensure that your S3 bucket and Redshift cluster are both in the same region
- To start using the Informatica Cloud connector for Amazon Redshift, create a connection to your Amazon Redshift nodes by providing your AWS Access Key ID and Secret Access Key, specifying your cluster details, and obtaining your JDBC URL string.
You are now ready to begin moving data to and from Amazon Redshift by creating your first Data Synchronization task (available under Applications). Pick a source, pick your Redshift target, map the fields, and you’re done!
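Behind the scenes, bulk loads into Redshift typically stage files in Amazon S3 and issue a COPY command, which the cluster executes in parallel across its nodes. As a rough illustration of the kind of statement a load tool assembles (the table, bucket, and credentials below are hypothetical placeholders, not real values):

```python
def build_redshift_copy(table, bucket, prefix, access_key, secret_key, region):
    """Assemble a Redshift COPY statement that bulk-loads staged S3 files.

    Redshift splits the files under the S3 prefix across slices and loads
    them in parallel, which is why staged COPY outperforms row-by-row INSERTs.
    """
    return (
        f"COPY {table} "
        f"FROM 's3://{bucket}/{prefix}' "
        f"CREDENTIALS 'aws_access_key_id={access_key};"
        f"aws_secret_access_key={secret_key}' "
        f"REGION '{region}' "
        f"FORMAT AS CSV GZIP;"
    )

# Hypothetical staging location and credentials for illustration only.
stmt = build_redshift_copy(
    "public.orders", "my-staging-bucket", "orders/2014-06/",
    "AKIA_EXAMPLE", "SECRET_EXAMPLE", "us-east-1")
print(stmt)
```

A tool like Informatica Cloud generates and manages this staging and loading for you; the sketch only shows why the S3-plus-COPY pattern is the fast path into Redshift.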
The value of using Informatica Cloud to load data into Amazon Redshift is the application’s ability to move massive amounts of data in parallel. The Informatica engine optimizes by moving processing close to where the data is, using push-down technology. Other data integration solutions for Redshift perform batch processing with an XML engine, which is inherently slow at large data volumes, and lack multitenant architectures that scale well. Informatica Cloud, by contrast, processes over 2 billion transactions every day.
Amazon Redshift has brought agility, scalability, and affordability to petabyte-scale data warehousing, and Informatica Cloud has made it easy to transfer all your structured and unstructured data into Redshift so you can focus on getting data insights today, not weeks from now.
Once upon a time, database schema changes were rare and handled with scrutiny. The stability of source data led to the development of the traditional Data Integration model. In this traditional model, a developer pulled a fixed number of source fields into an integration, transformed these fields, and then mapped the data into appropriate target fields.
The world of data has profoundly changed. Today’s Cloud applications allow an administrator to add custom fields to an object at a moment’s notice. Because source data is increasingly malleable, the traditional Data Integration model is no longer optimal. The Data Integration model must evolve.
Today’s integrations must dynamically adapt to ever-changing environments.
To meet these demands, Informatica has built the Informatica Cloud Mapping Designer. The Mapping Designer provides power and adaptability to integrations through the “link rules” and “incoming field rules” features. Integration developers no longer need to deal with fields on a one-by-one basis. Cloud Designer allows the integration developer to specify a set of dynamic “rules” that tell the mapping how fields need to be handled.
For example, the default rule is “Include all fields”, which is both simple and powerful. The “all fields” rule dynamically resolves to bring in as many fields as exist at the source at run time. Regardless of how many new fields the application developer or database administrator may have thrown into the source after the integration was developed, this simple rule brings all of them into the integration dynamically. This dramatically increases developer productivity, as the integration developer is not making modifications just to keep up with changes to the integration endpoints. Instead, the integration is “future-proofed”.
Link rules can be defined in combination using both “includes” and “excludes” criteria. The rules can be of four types:
- Include or exclude all fields
- Include or exclude fields of a particular datatype (for example: string, numeric, decimal, datetime, or blob)
- Include or exclude fields that fit a name pattern (for example: any field that ends with “__c” or starts with “Shipping_”)
- Include or exclude fields by a particular name (for example: “Id” or “Name”)
Any combination of the link rules can be put together to create sophisticated dynamic rules for fields to flow.
Each transformation in the integration can specify the set of rules that determine which fields flow into that particular transformation. For example, if I need all custom fields from a Salesforce source to flow into a target, I would simply specify “Include fields by name pattern: suffixed with ‘__c’”, which matches the naming convention for custom field names in Salesforce. In another example, if I need to standardize date formats for all datetime fields in an expression, I can define a rule to “Include fields by datatype: datetime”.
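To make the rule semantics concrete, here is a small Python sketch of how ordered include/exclude link rules could resolve against a source schema discovered at run time. The rule representation and the sample schema are invented for illustration; Informatica’s actual implementation is not shown here.

```python
import fnmatch

def apply_link_rules(fields, rules):
    """Resolve include/exclude link rules against a source schema.

    fields: {field_name: datatype}, discovered at run time.
    rules:  ordered (action, kind, value) triples, where action is
            "include" or "exclude" and kind is "all", "datatype",
            "pattern", or "name".
    """
    selected = set()
    for action, kind, value in rules:
        if kind == "all":
            matched = set(fields)
        elif kind == "datatype":
            matched = {n for n, t in fields.items() if t == value}
        elif kind == "pattern":
            matched = {n for n in fields if fnmatch.fnmatch(n, value)}
        elif kind == "name":
            matched = {n for n in fields if n == value}
        else:
            raise ValueError(f"unknown rule kind: {kind}")
        if action == "include":
            selected |= matched
        else:
            selected -= matched
    return sorted(selected)

# A Salesforce-like schema: custom fields carry the "__c" suffix.
schema = {"Id": "string", "Name": "string", "CloseDate": "datetime",
          "Discount__c": "decimal", "Region__c": "string"}

custom_only = apply_link_rules(schema, [("include", "pattern", "*__c")])
no_dates = apply_link_rules(schema, [("include", "all", None),
                                     ("exclude", "datatype", "datetime")])
```

Because the rules run against whatever fields exist at execution time, adding a new custom field to the source changes the result of `custom_only` without any change to the integration itself, which is the point of the feature.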
The dynamic nature of the link rules is what empowers a mapping created in Informatica Cloud Designer to be easily converted into a highly reusable integration template through parameterization.
For example, the entire source object can be parameterized so the integration developer can focus on the core integration logic without having to worry about individual fields. I can build an integration for loading data into a slowly changing dimension table in a data warehouse, and that integration can apply to any source object. When the integration is executed with different source objects substituted for the source parameter, it works as expected, since the link rules dynamically bring in the fields regardless of the source object’s structure. Suddenly, an integration developer needs to build only one reusable integration template for replicating multiple objects to the data warehouse, not dozens or even hundreds of repeated integration mappings. Needless to say, maintenance is hugely simplified.
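As a rough sketch of that parameterization (the object names, stub schemas, and loader below are hypothetical stand-ins for run-time metadata calls), a single template can be run against any source object because the fields are resolved when it executes:

```python
def run_replication(source_object, get_schema, load):
    """One reusable template: the source object is a parameter, and the
    default "include all fields" rule resolves the columns at run time."""
    columns = sorted(get_schema(source_object))  # all fields, whatever they are
    load(source_object, columns)
    return columns

# Stub schemas standing in for run-time metadata lookups:
schemas = {
    "Account": {"Id": "string", "Name": "string"},
    "Opportunity": {"Id": "string", "Amount": "decimal",
                    "CloseDate": "datetime"},
}

loaded = {}
for obj in schemas:
    # The same template replicates every object; no per-object mapping needed.
    run_replication(obj, lambda o: schemas[o],
                    lambda o, cols: loaded.__setitem__(o, cols))
```

Adding a third object, or a new field to an existing one, changes only the metadata the template sees at run time, not the template itself.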
With the power to define field propagation logically through an integration, combined with the ability to parameterize just about any part of the integration logic, the Cloud Mapping Designer provides a unique and powerful platform for developing reusable end-to-end integration solutions (such as Opportunity to Order, Accounts load to Salesforce, SAP product catalog to Salesforce, or File load to Amazon Redshift). Such prebuilt end-to-end solutions, or VIPs (Vibe Integration Packages), can be customized by any consuming customer to adapt to their unique environments and business needs by tweaking only certain configurations while largely reusing the core integration logic.
What could be better than building integrations? Building far fewer integrations that are reusable and self-adapting.
To learn more, join the upcoming Cloud Spring release Webinar on Thursday, March 13.
Leo Eweani makes the case that the data tsunami is coming. “Businesses are scrambling to respond and spending accordingly. Demand for data analysts is up by 92%; 25% of IT budgets are spent on the data integration projects required to access the value locked up in this data ‘ore’ – it certainly seems that enterprise is doing The Right Thing – but is it?”
Data is exploding within most enterprises. However, most enterprises have no clue how to manage this data effectively. While you would think that an investment in data integration would be an area of focus, many enterprises don’t have a great track record in making data integration work. “Scratch the surface, and it emerges that 83% of IT staff expect there to be no ROI at all on data integration projects and that they are notorious for being late, over-budget and incredibly risky.”
My core message is that enterprises need to up their game when it comes to data integration. This recommendation is based on the data growth we’ve already experienced and will experience in the near future. Indeed, a “data tsunami” is on the horizon, and most enterprises are ill-prepared for it.
So, how do you get prepared? While many would say it’s all about buying anything and everything when it comes to big data technology, the best approach is to splurge on planning. This means defining exactly what data assets are in place now and will be in place in the future, and how they should or will be leveraged.
To face the forthcoming wave of data, certain planning aspects and questions about data integration rise to the top:
- Performance, including data latency. Or, how quickly does the data need to flow from point or points A to point or points B? As the volume of data quickly rises, the data integration engines have to keep up.
- Data security and governance. Or, how will the data be protected both at rest and in flight, and how will the data be managed in terms of controls on use and change?
- Abstraction, and removing data complexity. Or, how will the enterprise remap and repurpose key enterprise data that may not currently exist in a well-defined and functional structure?
- Integration with cloud-based data. Or, how will the enterprise link existing enterprise data assets with those that exist on remote cloud platforms?
While this may seem like a complex and risky process, think through the problems, leverage the right technology, and you can remove the risk and complexity. The enterprises that seem to fail at data integration do not follow that advice.
I suspect the explosion of data will be the biggest challenge enterprise IT faces in many years. While a few will take advantage of their data, most will struggle, at least initially. Which route will you take?
Hosting Big Data applications in the cloud has compelling advantages. Scale doesn’t become as overwhelming an issue as it is within on-premise systems. IT will no longer feel compelled to throw more disks at burgeoning storage requirements, and performance becomes the contractual obligation of someone else outside the organization.
Cloud may help clear up some of the costlier and thornier problems of attempting to manage Big Data environments, but it also creates some new issues. As Ron Exler of Saugatuck Technology recently pointed out in a new report, cloud-based solutions “can be quickly configured to address some big data business needs, enabling outsourcing and potentially faster implementations.” However, he adds, employing the cloud also brings some risks as well.
Data security is one major risk area, and I could write many posts on this. But management issues present other challenges too. Too many organizations see cloud as a cure-all for their application and data management ills, but broken processes are never fixed when new technology is applied to them. There are also plenty of risks in the misappropriation of big data, and the cloud won’t make these risks go away. Exler lists some of the risks that stem from over-reliance on cloud technology, from the late delivery of business reports to the delivery of incorrect business information, resulting in decisions based on incorrect source data. Sound familiar? The gremlins that have haunted data analytics and management for years simply won’t disappear behind a cloud.
Exler makes three recommendations for moving big data into cloud environments – note that the solutions he proposes have nothing to do with technology, and everything to do with management:
1) Analyze the growth trajectory of your data and your business. Typically, organizations will have a lot of different moving parts and interfaces. And, as the business grows and changes, it will be constantly adding new data sources. As Exler notes, “processing integration or hand off points in such piecemeal approaches represent high risk to data in the chain of possession – from collection points to raw data to data edits to data combination to data warehouse to analytics engine to viewing applications on multiple platforms.” Business growth and future requirements should be analyzed and modeled to make sure cloud engagements will be able “to provide adequate system performance, availability, and scalability to account for the projected business expansion,” he states.
2) Address data quality issues as close to the source as possible. Because both cloud and big data environments have so many moving parts, “finding the source of a data problem can be a significant challenge,” Exler warns. “Finding problems upstream in the data flow prevent time-consuming and expensive reprocessing that could be needed should errors be discovered downstream.” Such quality issues have a substantial business cost as well. When data errors are found, it becomes “an expensive company-wide fire drill to correct the data,” he says.
3) Build your project management, teamwork and communication skills. Big data and cloud projects involve many people and components from across the enterprise, requiring coordination and interaction between various specialists, subject matter experts, vendors, and outsourcing partners. “This coordination is not simple,” Exler warns. “Each group involved likely has different sets of terminology, work habits, communications methods, and documentation standards. Each group also has different priorities; oftentimes such new projects are delegated to lower priority for supporting groups.” Project managers must be leaders and understand the value of open and regular communications.
Since the advent of middleware technology in the mid-1990s, data integration has been primarily an IT-led technical problem. Business leaders had their hands full focusing on their individual silos and were happy to delegate the complex task of integrating enterprise data and creating one version of the truth to IT. The problem is that there is now too much data, highly fragmented across myriad internal systems, customer/supplier systems, cloud applications, mobile devices, and automatic sensors. Traditional IT-led approaches, whereby a project is launched involving dozens (or hundreds) of staff to address every new opportunity, are just too slow.
OppenheimerFunds Dreamforce Story: Lay a Foundation of Trusted and Complete Customer Information for Salesforce
Imagine you are rolling Salesforce out to more than 500 users today. What will be their first impression? Will they be annoyed when they encounter duplicate customer records during their first experience? Will they complain when they need to access other systems to get all the relevant customer information they need to do their job? How will that impact your goals for Salesforce adoption?