Category Archives: Cloud Application Integration
Informatica’s Redshift connector is a state-of-the-art bulk-load connector that allows users to perform all CRUD operations on Amazon Redshift. It makes use of AWS best practices to load data at high throughput in a safe and secure manner, and it is available on Informatica Cloud and PowerCenter.
Today we are excited to announce support for Amazon’s newly launched custom JDBC and ODBC drivers for Redshift. Both drivers are certified for Linux and Windows environments.
Informatica’s Redshift connector will package the JDBC 4.1 driver, which further enhances our metadata fetch capabilities for tables and views in Redshift and improves our overall design-time responsiveness by over 25%. It also allows us to query multiple tables/views and retrieve the result set using primary and foreign key relationships.
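For illustration, these are the kinds of catalog queries a driver can run to enumerate tables, views, and key relationships in Redshift. The SQL below uses standard Redshift/PostgreSQL system views; it is a hedged sketch, not Informatica’s actual internal queries.

```python
# Illustrative catalog SQL a JDBC/ODBC client might issue to enumerate
# Redshift tables/views and their declared key relationships.

TABLE_QUERY = """
SELECT table_schema, table_name, table_type
FROM information_schema.tables
WHERE table_type IN ('BASE TABLE', 'VIEW')
ORDER BY table_schema, table_name;
"""

# Redshift records primary/foreign keys as informational constraints,
# which is enough for a tool to infer join paths between tables:
KEY_QUERY = """
SELECT tc.table_name, tc.constraint_type, kcu.column_name
FROM information_schema.table_constraints AS tc
JOIN information_schema.key_column_usage AS kcu
  ON tc.constraint_name = kcu.constraint_name
WHERE tc.constraint_type IN ('PRIMARY KEY', 'FOREIGN KEY');
"""
```

Note that Redshift does not enforce these constraints; they exist precisely so that tools can use them for metadata and join inference.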
Amazon’s ODBC driver enhances our Full Push Down Optimization capabilities on Redshift. Key differentiators include support for the SYSDATE variable and for functions such as ADD_TO_DATE(), ASCII(), CONCAT(), LENGTH(), TO_DATE(), and VARIANCE(), none of which were possible before.
Amazon’s ODBC driver is not pre-packaged but can be directly downloaded from Amazon’s S3 store.
Once installed, the user can change the default ODBC System DSN in ODBC Data Source Administrator.
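Once a DSN is set up, a client can connect through the driver with an ODBC library such as pyodbc. The sketch below only assembles the connection string; the driver name, cluster endpoint, and credentials are illustrative assumptions that should match your own configuration.

```python
# Sketch: connecting to Redshift through Amazon's ODBC driver with pyodbc.
# The driver name, host, and credentials below are hypothetical; match
# them to the DSN you configured in ODBC Data Source Administrator.

def build_redshift_conn_str(host, port, database, user, password,
                            driver="Amazon Redshift (x64)"):
    """Assemble a keyword-style ODBC connection string."""
    return (
        f"Driver={{{driver}}};"
        f"Server={host};Port={port};Database={database};"
        f"UID={user};PWD={password};SSL=true"
    )

conn_str = build_redshift_conn_str(
    "examplecluster.abc123.us-west-2.redshift.amazonaws.com",
    5439, "dev", "awsuser", "secret")

# import pyodbc                      # third-party; uncomment to connect
# conn = pyodbc.connect(conn_str)
```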
I have a teenaged daughter. She is interested in her appearance, but not a risk taker. As a result, she waits until a fashion trend has been established by her peers before adopting the new look. This works out well for me because I’m not spending on the bleeding edge of fashion, chasing a trend that may fizzle out before it becomes more widely adopted. This works out well for my daughter because she is able to blend in with her peers. She has developed her own adoption model for fashion.
Healthcare analytics is similar to fashion in this respect: it can be confusing and even overwhelming without a systematic framework to guide your approach and priorities. Fortunately, a group of very smart people has been working on this and has created the Healthcare Analytics Adoption Model, a framework to measure the adoption and meaningful use of data warehouses and analytics in healthcare, similar to the HIMSS Analytics EMRAM model. It should be considered a guide to classifying groups of analytics capabilities, and it provides a methodology for health organizations to adopt analytics.
The Healthcare Analytics Adoption Model proposes that there are three phases of data analysis:
- data collection – systems that are designed specifically for supporting transaction based workflows and data collection. The adoption of the electronic medical record (EMR) is a great example of this phase.
- data sharing – The need for sharing data among members of the workflow team – similar to the capabilities of a health information exchange.
- data analytics – organizations realize that they can start to analyze this collected and shared data through investigating the patterns in the aggregated data.
Once your organization has moved into Phase Three, you are ready to start work on the Healthcare Analytics Adoption Model itself, which comprises eight levels.
Each level of adoption includes progressive expansion of analytic capabilities in four dimensions:
- New data sources – Data content expands as new sources of data are added to your organization
- Complexity – analytic algorithms and data binding become progressively more complex
- Data literacy – this increases among employees, leading to an increasing ability to exploit data as an asset
- Data timeliness – timeliness of data content increases (data latency decreases) leading to a reduction in decision cycles and mean time to improvement.
We’ll spend some time in future weeks talking through the different levels. In the meantime – where do you see your organization?
Strata 2015 – Making Data Work for Everyone with Cloud Integration, Cloud Data Management and Cloud Machine Learning
Are you ready to answer “Yes” to the questions:
a) “Are you Cloud Ready?”
b) “Are you Machine Learning Ready?”
I meet with hundreds of Informatica Cloud customers and prospects every year. While they are investing in Cloud, and seeing the benefits, they also know that there is more innovation out there. They’re asking me, what’s next for Cloud? And specifically, what’s next for Informatica in regards to Cloud Data Integration and Cloud Data Management? I’ll share more about my response throughout this blog post.
The spotlight will be on Big Data and Cloud at the Strata + Hadoop World conference taking place in Silicon Valley from February 17-20 with the theme “Make Data Work”. I want to focus this blog post on two topics related to making data work and business insights:
- How existing cloud technologies, innovations and partnerships can help you get ready for the new era in cloud analytics.
- How you can make data work in new and advanced ways for every user in your company.
Today, Informatica is announcing the availability of its Cloud Integration Secure Agent on Microsoft Azure and Linux Virtual Machines as well as an Informatica Cloud Connector for Microsoft Azure Storage. Users of Azure data services such as Azure HDInsight, Azure Machine Learning and Azure Data Factory can make their data work with access to the broadest set of data sources including on-premises applications, databases, cloud applications and social data. Read more from Microsoft about their news at Strata, including their relationship with Informatica, here.
“Informatica, a leader in data integration, provides a key solution with its Cloud Integration Secure Agent on Azure,” said Joseph Sirosh, Corporate Vice President, Machine Learning, Microsoft. “Today’s companies are looking to gain a competitive advantage by deriving key business insights from their largest and most complex data sets. With this collaboration, Microsoft Azure and Informatica Cloud provide a comprehensive portfolio of data services that deliver a broad set of advanced cloud analytics use cases for businesses in every industry.”
Even more exciting is how quickly any user can deploy a broad spectrum of data services for cloud analytics projects. The fully managed cloud service for building predictive analytics solutions from Azure and the wizard-based, self-service cloud integration and data management user experience of Informatica Cloud help overcome the challenges most users have in making their data work effectively and efficiently for analytics use cases.
The new solution enables companies to bring in data from multiple sources for use in Azure data services including Azure HDInsight, Azure Machine Learning, Azure Data Factory and others – for advanced analytics.
The broad availability of Azure data services, and Azure Machine Learning in particular, is a game changer for startups and large enterprises. Startups can now access cloud-based advanced analytics with minimal cost and complexity and large businesses can use scalable cloud analytics and machine learning models to generate faster and more accurate insights from their Big Data sources.
Success in using machine learning requires not only great analytics models, but also end-to-end cloud integration and data management capabilities that bring in a wide breadth of data sources, ensure that data quality and data views match the requirements for machine learning modeling, and offer an ease of use that facilitates rapid iteration while providing high-performance, scalable data processing.
For example, the Informatica Cloud solution on Azure is designed to deliver on these critical requirements in a complementary approach and support advanced analytics and machine learning use cases that provide customers with key business insights from their largest and most complex data sets.
Using the Informatica Cloud Connector for Microsoft Azure Storage with Informatica Cloud Data Integration enables optimized read and write capabilities for blobs in Azure Storage. Customers can use Azure Storage objects as sources, lookups, and targets in data synchronization tasks and advanced mapping configuration tasks for efficient data management using Informatica’s industry-leading cloud integration solution.
As Informatica fulfills the promise of “making great data ready to use” to our 5,500 customers globally, we continue to form strategic partnerships and develop next-generation solutions to stay one step ahead of the market with our Cloud offerings.
My goal in 2015 is to help each of our customers say that they are Cloud Ready! And collaborating with solutions such as Azure ensures that our joint customers are also Machine Learning Ready!
To learn more, try our free Informatica Cloud trial for Microsoft Azure data services.
In our house when we paint a room, my husband does the big rolling of the walls or ceiling, I do the cut-in work. I am good at prepping the room, taping all the trim and deliberately painting the corners. However, I am thrifty and constantly concerned that we won’t have enough paint to finish a room. My husband isn’t afraid to use enough paint and is extremely efficient at painting a wall in a single even coat. As a result, I don’t do the big rolling and he doesn’t do the cutting in. It took us awhile to figure this out, and a few rooms had to be repainted while we were figuring it out. Now we know what we are good at, and what we need help with.
Payers’ roles are changing. Payers were previously focused on risk assessment, setting and collecting premiums, analyzing claims and making payments, all while optimizing revenues. Payers are pretty good at selling to employers, figuring out the cost/benefit ratio from an employer’s perspective, and ensuring a good, profitable product. With the advent of the Affordable Care Act and a much more transient insured population, payers now must focus more on the individual insured and communicate with individuals more nimbly than in the past.
Individual members will shop for insurance based on consumer feedback and price. They are interested in ease of enrollment and the ability to submit and substantiate claims quickly and intuitively. Payers are discovering that they need to help manage population health at an individual member level. And population health management requires less of a business-data analytics approach and more social media and gaming-style logic to understand patients. In this way, payers can help develop interventions to sustain behavioral changes for better health.
When designing such analytics, payers should consider the following key design steps:
- Extend data warehouses to an analytics appliance
- Invest in a big data platform to absorb patients’ social data
- Build predictive analytics for patient behavior
- Bridge collaborative and behavioral analytics with claims to build revenue and profitability
Due to their mature predictive analytics competencies, payers will have a much easier time in the next generation of population behavior analytics than their provider counterparts. Because clinical content is often unstructured compared to claims data, payers need to pay extra attention to context and semantics when deciphering clinical content submitted by providers. Payers can get help from vendors that understand both unstructured data and individual members, and can then use that data to create fantastic predictive analytics solutions.
It’s no secret that the explosion of software-as-a-service (SaaS) apps has revolutionized the way businesses operate. From humble beginnings, the titans of SaaS today include companies such as Salesforce.com, NetSuite, Marketo, and Workday that have gone public and attained multi-billion dollar valuations. The success of these SaaS leaders has had a domino effect in adjacent areas of the cloud – infrastructure, databases, and analytics.
Amazon Web Services (AWS), which had only six services in 2006 when Amazon EC2 launched, now has over 30, spanning storage, relational databases, data warehousing, Big Data, and more. Salesforce.com’s Wave platform, Tableau Software, and Qlik have made great advances in the cloud analytics arena, giving better visibility to line-of-business users. And as SaaS applications embrace new software design paradigms that extend their functionality, application performance monitoring (APM) analytics has emerged as a specialized field from vendors such as New Relic and AppDynamics.
So, how exactly did the growth of SaaS contribute to these adjacent sectors taking off?
The growth of SaaS coincided with the growth of powerful smartphones and tablets. Seeing this form factor as important to the end user, SaaS companies rushed to produce mobile apps that offered core functionality on their mobile device. Measuring adoption of these mobile apps was necessary to ensure that future releases met all the needs of the end user. Mobile apps contain a ton of information such as app responsiveness, features utilized, and data consumed. As always, there were several types of users, with some preferring a laptop form factor over a smartphone or tablet. With the ever increasing number of data points to measure within a SaaS app, the area of application performance monitoring analytics really took off.
Simultaneously, the growth of the SaaS titans cemented their reputation as not just applications for a certain line-of-business, but into full-fledged platforms. This growth emboldened a number of SaaS startups to develop apps that solved specialized or even vertical business problems in healthcare, warranty-and-repair, quote-to-cash, and banking. To get started quickly and scale rapidly, these startups leveraged AWS and its plethora of services.
The final sector that has taken off thanks to the growth of SaaS is the area of cloud analytics. SaaS grew by leaps and bounds because of its ease of use, and rapid deployment that could be achieved by business users. Cloud analytics aims to provide the same ease of use for business users when providing deep insights into data in an interactive manner.
In all these different sectors, what’s common is the fact that SaaS growth has created an uptick in the volume of data and the technologies that serve to make it easier to understand. During Informatica’s Data Mania event (March 4th, San Francisco) you’ll find several esteemed executives from Salesforce, Amazon, Adobe, Microsoft, Dun & Bradstreet, Qlik, Marketo, and AppDynamics talk about the importance of data in the world of SaaS.
In my discussions with CIOs, their opinions differ widely about the go-forward nature of the CIO role. While most feel the CIO role will remain an important function, they also feel a sea change is in process. According to Tim Crawford, a former CIO and strategic advisor to CIOs, “CIOs are getting out of the data center business”. In my discussions, not all yet see the complete demise of their data centers. However, it is becoming more common for CIOs to see themselves “becoming an orchestrator of business services versus a builder of new operational services”. One CIO put it this way: “the building stuff is now really table stakes. Cloud and loosely oriented partnerships are bringing vendor management to the forefront”.
As more and more of the service portfolio is provided by third parties in either infrastructure as a service (IaaS) or software as a service (SaaS) modes, the CIO needs to take on what will become an increasingly important role: the service broker. An element of the service broker role that will have increasing importance is the ability to glue together business systems, whether they are on premise, cloud managed (IaaS), or software as a service (SaaS). Regardless of who creates or manages the applications of the enterprise, it is important to remember that integration is to a large degree the nervous system that connects applications into business capabilities. As such, the CIO’s team has a critical and continuing role in managing this linkage. For example, spaghetti-code integrations can easily touch 20 or more systems for ERP or expense management.
Brokering integration services
As CIOs start to consider the move to cloud, they need to determine how this nervous system is connected, maintained, and improved. In particular, they need to determine, perhaps for the first time, how to integrate their cloud systems with the rest of their enterprise systems. They can clearly continue to do so by building and maintaining hand-coded integrations or by using their existing ETL tools. This can work where one takes on an infrastructure-as-a-service model, but it falls apart when looking at the total cost of ownership of managing change in a SaaS model. This fact raises an interesting question: shouldn’t the advantages of SaaS apply to integration as well? Shouldn’t there be Cloud Data Management (integration as a service) options? The answer is yes. Instead of investing in maintaining integrations of SaaS systems, which because of agile methodologies can change more frequently than traditional software, why not have someone else manage this mess for you?
The advantage of the SaaS model is total cost of ownership and faster time to value. Instead of managing the integration between SaaS and historical environments yourself, the integration between SaaS applications and historical applications can be maintained by the Cloud Data Management vendor. This saves both cost and time, and it frees you to focus your team’s energy on cleaning up the integrations among historical systems. This is a big advantage for organizations trying to get on the SaaS bandwagon without incurring significantly increased costs as a result.
Infrastructure as a Service (IaaS)—Provides processor, databases, etc. remotely but you control and maintain what goes on them
Software as a Service (SaaS)—Provides software applications and underlying infrastructure as a service
Cloud Data Management—Provides Integration of applications in particular SaaS applications as a service
CIOs are embarking upon big changes. Building stuff is becoming less and less relevant. However, even as more and more services are managed remotely (even by other parties), it remains critical that CIOs and their teams manage the glue between applications. With SaaS application in particular, this is where Cloud Data Management can really help you control integrations with less time and cost.
Author Twitter: @MylesSuer
As reviewed by Loraine Lawson, a MeriTalk survey about cloud adoption found that “in the latest survey of 150 federal executives, nearly one in five say one-quarter of their IT services are fully or partially delivered via the cloud.”
For the most part, the shifts are more tactical in nature. These federal managers are shifting email (50 percent), web hosting (45 percent) and servers/storage (43 percent). Most interesting is that they’re not moving traditional business applications, custom business apps, or middleware. Why? Data, and data integration issues.
“Federal agencies are worried about what happens to data in the cloud, assuming they can get it there in the first place:
- 58 percent of executives fret about cloud-to-legacy system integration as a barrier.
- 57 percent are worried about migration challenges, suggesting they’re not sure the data can be moved at all.
- 54 percent are concerned about data portability once the data is in the cloud.
- 53 percent are worried about ‘contract lock-in.’ ”
The reality is that the government does not get much out of the movement to cloud without committing core business applications and thus core data. While e-mail, Web hosting, and some storage are a good start, the real cloud computing money is made when moving away from expensive hardware and software. Failing to do that, you fail to find the value and, in this case, spend more taxpayer dollars than you should.
Data issues are not just a concern in the government. Most larger enterprises have the same issues as well. However, a few are able to get around these issues with good planning approaches and the right data management and data integration technology. It’s just a matter of making the initial leap, which most Federal IT executives are unwilling to do.
In working with CIOs of Federal agencies over the last few years, the larger issue is that of funding. While everyone understands that moving to cloud-based systems will save money, getting there means hiring government integrators and living with redundant systems for a time. That involves some major money. If most of the existing budget goes to existing IT operations, then the move may not be practical. Thus, funds should be made available for the cloud projects with the greatest potential to reduce spending and increase efficiencies.
The shame of this situation is that the government was pretty much on the leading edge with cloud computing back in 2008 and 2009. The CIO of the US Government, Vivek Kundra, promoted the use of cloud computing, and NIST drove the initial definitions of “The Cloud,” including IaaS, SaaS, and PaaS. But when it came down to making the leap, most agencies balked at the opportunity, citing issues with data.
Now that the technology has evolved even more, there is really no excuse for the government to delay migration to cloud-based platforms. The clouds are ready, and the data integration tools have cloud integration capabilities baked in. It’s time to see some more progress.
The technology you use in your business can either help or hinder your business objectives.
In the past, slow and manual processes had an inhibiting effect on customer services and sales interactions, thus dragging down the bottom line.
Now, with cloud technology and customers interacting at record speeds, companies expect greater returns from each business outcome. What do I mean when I say business outcome?
Well according to Bluewolf’s State of Salesforce Report, you can split these into four categories: acquisition, expansion, retention and cost reduction.
With the right technology and planning, a business can speedily acquire more customers, expand to new markets, increase customer retention and ensure it is doing all of this efficiently and cost-effectively. But what happens when the data, or the way you’re interacting with these technologies, grows unchecked or becomes corrupted and unreliable?
With data being the new fuel for decision-making, you need to make sure it’s clean, safe and reliable.
With clean data, Salesforce customers, in the above-referenced Bluewolf survey, reported efficiency and productivity gains (66%), improved customer experience (34%), revenue growth (32%) and cost reduction (21%) in 2014.
It’s been said that it costs a business 10X more to acquire new customers than it does to retain existing ones. But, despite the additional cost, real continued growth requires the acquisition of new customers.
Gaining new customers, however, requires a great sales team who knows what and to whom they’re selling. With Salesforce, you have that information at your fingertips, and the chance to let your sales team be as good as they can possibly be.
And this is where having good data fits in and becomes critically important. Because, well, you can have great technology, but it’s only going to be as good as the data you’re feeding it.
The same “garbage in, garbage out” maxim holds true for practically any data-driven or –reliant business process or outcome, whether it’s attracting new customers or building a brand. And with the Salesforce Sales Cloud and Marketing Cloud you have the technology to both attract new customers and build great brands, but if you’re feeding your Clouds with inconsistent and fragmented data, you can’t trust that you’ve made the right investments or decisions in the right places.
The combination of good data and technology can help to answer so many of your critical business questions. How do I target my audience without knowledge of previous successes? What does my ideal customer look like? What did they buy? Why did they buy it?
For better or worse, but mainly better, answering those questions with just your intuition and/or experience is pretty much out of the question. Without the tools to look at, for example, past campaigns and sales, and to combine those views to see who your real market is, you’ll never be fully effective.
The same is true for sales. Without the right Leads and the ability to interact with them effectively (the right contact details, the right company, and the knowledge that there’s only one version of each record), the discovery process can be long and painful.
But customer acquisition isn’t the only place where data plays a vital role.
When expanding to new markets or upselling and cross selling to existing customers, it’s the data you collect and report on that will help inform where you should focus your efforts.
Knowing what existing relationships you can leverage can make the difference between proactively offering solutions to your customers and losing them to a competitor. With Salesforce’s Analytics Cloud, visibility that used to take weeks or months to assemble can now be put together in a matter of minutes. But how do you make strategic decisions on what market to tap into or what relationships to leverage if you can only see one or two regions? What if you could truly visualize how you interact with your customers? Or see beyond the hairball of interconnected business hierarchies and interactions to know definitively what subsidiary, household or distributor has what? Seeing the connections you have with your customers can help uncover the white space you could tap into.
Naturally this entire process means nothing if you’re not actually retaining these customers. Again, this is another area that is fuelled by data. Knowing who your customers are, what issues they’re having and what they could want next could help ensure you are always providing your customer with the ultimate experience.
Last, but by no means least, there is cost reduction. Only by ensuring that all of this data is clean — and continuously cleansed — and your Cloud technologies are being fully utilized, can you then help ensure the maximum return on your Cloud investment.
Learn more about how Informatica Cloud can help you maximize your business outcomes through ensuring your data is trusted in the Cloud.
A friend of mine recently reached out to me for advice on CRM solutions in the market. Though I have not worked for a CRM vendor, I’ve had plenty of relevant experience, from working at companies that implemented such solutions to my current role discussing the data requirements behind ongoing application investments with large and small organizations across industries. As we spoke, memories started to surface from when he and I had worked on implementing Salesforce.com (SFDC) many years ago: memories we wanted to forget, but important to call out given his new situation.
We worked together for a large mortgage lending software vendor selling loan origination solutions to brokers and small lenders, mainly through email and snail mail marketing. He was responsible for Marketing Operations, and I ran Product Marketing. The company looked at Salesforce.com to help streamline our sales operations and improve how we marketed to and serviced our customers. The existing CRM system was from the early 90’s, and though it did what the company needed it to do, it was heavily customized, costly to operate, and had served out its life. It was time to upgrade to help grow the business, improve business productivity, and enhance customer relationships.
After 90 days of rolling out SFDC, we ran into some old familiar problems across the business. Sales reps still struggled to know who was a current customer using our software, marketing managers could not create quality mailing lists for prospecting, and call center reps could not tell whether the person on the other end was a customer or a prospect. Everyone wondered why this was happening given that we had adopted the best CRM solution in the market. You can imagine the heartburn and ulcers we all had after making such a huge investment in our new CRM solution. C-level executives questioned our decisions and blamed the application. The truth was, the issues were not related to SFDC. It was the data we had migrated into the system, along with the lack of proper governance and of a capable information architecture to support the required data management and integration between systems, that caused these significant headaches.
During the implementation phase, IT imported our entire customer database of 200K+ unique customer entities from the old system into SFDC. Unfortunately, the mortgage industry was very transient: on average there were roughly 55K licensed mortgage brokers and lenders in the market, and because no one had ever validated who was really a current customer versus someone who had ever bought our product, we had serious data quality issues, including:
- Trial users whose purchased evaluation copies of our products had expired were tagged as current customers
- Duplicate records, caused by manual data entry errors, in which companies with similar names entered slightly differently but sharing the same business address were tagged as unique customers
- Subsidiaries of parent companies in different parts of the country that were each tagged as a unique customer
- Lastly, marketing contacts imported from the prospect database that were incorrectly recorded as customers in the new system
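A small data quality audit could have flagged issues like these before the migration. The sketch below (hypothetical record fields and sample data) normalizes company names and addresses and groups records that collide on the same key, one common way to surface duplicates of the second kind.

```python
# Sketch: a lightweight duplicate check over customer records.
# Record fields, suffix list, and sample data are hypothetical.
import re
from collections import defaultdict

def normalize(value):
    """Lowercase, strip punctuation and common corporate suffixes."""
    value = re.sub(r"[^\w\s]", "", value.lower())
    value = re.sub(r"\b(inc|llc|corp|co)\b", "", value)
    return " ".join(value.split())

def find_duplicates(records):
    """Group records sharing a normalized (company, address) key."""
    groups = defaultdict(list)
    for rec in records:
        key = (normalize(rec["company"]), normalize(rec["address"]))
        groups[key].append(rec["id"])
    return [ids for ids in groups.values() if len(ids) > 1]

records = [
    {"id": 1, "company": "Acme Lending, Inc.", "address": "1 Main St."},
    {"id": 2, "company": "ACME Lending",       "address": "1 Main St"},
    {"id": 3, "company": "Basel Brokers",      "address": "9 Oak Ave"},
]
print(find_duplicates(records))  # -> [[1, 2]]
```

Real dedupe tools go much further (fuzzy matching, survivorship rules), but even a key-collision pass like this would have caught many of the duplicates described above.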
We also failed to integrate real-time purchasing data and information from our procurement systems, which sales and support needed to handle customer requests. Instead of integrating that data in real time with proper technology, IT manually loaded these records at the end of each week via FTP, resulting in incorrect billing information, statement processing errors, and a ton of complaints from customers through our call center. The price we paid for not paying attention to our data quality and integration requirements before we rolled out Salesforce.com was significant for a company of our size. For example:
- Marketing got hit pretty hard. Each quarter we mailed evaluation copies of new products to our customer database of 200K, each piece costing the company $12 to produce and mail, for a total cost of $2.4M annually. Because we had such bad data, 60% of our mailings came back due to invalid addresses or wrong contact information. The cost of bad data to marketing: $1.44M annually.
- Next, Sales struggled miserably when trying to upgrade customers by running cold-call campaigns using the names in the database. As a result, sales productivity dropped by 40%, and we experienced over 35% sales turnover that year. Within a year of using SFDC, our head of sales was let go. Not good!
- Customer support used SFDC to service customers; our average call times were 40 minutes per service ticket. We believed that was “business as usual” until we surveyed how reps were spending their time each day, and over 50% said it was dealing with billing issues caused by bad contact information in the CRM system.
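Taking the post’s marketing figures at face value, the math works out like this:

```python
# The mailing-cost arithmetic above, worked through. Numbers come
# straight from the post; only the variable names are mine.
mailings = 200_000        # customer records mailed
cost_per_piece = 12       # dollars to produce and mail each copy
return_rate_pct = 60      # share of mailings returned due to bad data

total_cost = mailings * cost_per_piece            # $2.4M annually
wasted = total_cost * return_rate_pct // 100      # cost of bad data

print(f"total: ${total_cost:,}  wasted: ${wasted:,}")
# total: $2,400,000  wasted: $1,440,000
```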
At the end of our conversation, this was my advice to my friend:
- Conduct a data quality audit of the systems that will interact with the CRM system. Audit how complete your critical master and reference data is, including names, addresses, customer IDs, etc.
- Do this before you invest in a new CRM system. You may find that many of the challenges faced with your existing applications are caused by data gaps rather than by the legacy application.
- If you have a data governance program, involve that team in the CRM initiative to ensure they understand your requirements and can see how they can help.
- However, if you do decide to modernize, collaborate with and involve your IT teams, especially your Application Development teams and Enterprise Architects, to ensure all of the best options are considered for your data sharing and migration needs.
- Lastly, consult with your technology partners, including your new CRM vendor; they may be working with solution providers to help address these data issues, as you are probably not the only one in this situation.
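As a rough illustration of the first piece of advice, a completeness audit can start as something as simple as measuring fill rates on critical fields. The field names and sample records below are hypothetical, not from any particular CRM:

```python
# Minimal sketch of a data quality completeness audit for CRM-bound records.
# Field names (name, address, customer_id) are illustrative placeholders.
def completeness_report(records, required=("name", "address", "customer_id")):
    """Return the fill rate (0.0-1.0) for each critical field across all records."""
    total = len(records)
    report = {}
    for field in required:
        filled = sum(1 for r in records if r.get(field))
        report[field] = filled / total if total else 0.0
    return report

# Hypothetical sample data with gaps, as you might find in a legacy system
customers = [
    {"name": "Acme Corp", "address": "1 Main St", "customer_id": "C001"},
    {"name": "Globex",    "address": "",          "customer_id": "C002"},
    {"name": "",          "address": "9 Elm Ave", "customer_id": ""},
]
print(completeness_report(customers))  # each critical field is about 67% complete
```

A real audit would extend this with duplicate detection and address validation, but even fill rates per field quickly show where the data gaps are before migration begins.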
CRM systems have come a long way in today’s Big Data and Cloud era. Many firms are adopting more flexible solutions offered through the cloud, like Salesforce.com, Microsoft Dynamics, and others. Regardless of how old or new, on premise or in the cloud, companies invest in CRM not just to serve their sales teams or increase marketing conversion rates, but to improve their business relationships with their customers. Period! It’s about ensuring the data in these systems is trustworthy, complete, up to date, and actionable, so it can improve customer service and help drive sales of new products and services to increase wallet share. So how do you maximize the business potential of these critical business applications?
Whether you are adopting your first CRM solution or upgrading an existing one, keep in mind that Customer Relationship Management is a business strategy, not just a software purchase. It’s also about having a sound and capable data management and governance strategy supported by people, processes, and technology to ensure you can:
- Access and migrate data from old systems to new ones, avoiding development cost overruns and project delays.
- Identify, detect, and distribute transactional and reference data from existing systems into your front line business application in real-time!
- Manage data quality errors, including duplicate records and invalid names and contact information, through proper data governance and proactive data quality monitoring and measurement during and after deployment.
- Govern and share authoritative master records of customer, contact, product, and other master data between systems in a trusted manner.
Will your data be ready for your new CRM investments? To learn more:
- Download Salesforce Integration for Dummies
- Download a new Whitepaper on how to Maximize Integration ROI with a Hybrid Approach
- Consolidating Multiple Salesforce Orgs: A Best Practice Guide
- Sign up for a 30 Day Trial of Informatica Cloud Integration
Follow me on Twitter @DataisGR8
With the Winter 2015 Release, Informatica Cloud Advances Real Time and Batch Integration for Citizen Integrators Everywhere
The first of these is in the area of connectivity and brings a whole new set of features and capabilities to those who use our platform to connect with Salesforce, Amazon Redshift, NetSuite and SAP.
Starting with Amazon, the Winter 2015 release leverages the new Redshift Unload Command, giving any user the ability to securely perform bulk queries, and quickly scan and place multiple columns of data in the intended target, without the need for ODBC or JDBC connectors. We are also ensuring the data is encrypted at rest on the S3 bucket while loading data into Redshift tables; this provides an additional layer of security around your data.
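For readers unfamiliar with the underlying command, a Redshift UNLOAD statement of roughly the following shape is what moves query results into S3. The bucket, IAM role, and encryption clause below are illustrative placeholders, and the exact clauses available depend on your Redshift version and key configuration; this sketch only renders the SQL text:

```python
def build_unload(query, s3_path, iam_role, encrypted=True):
    """Render a Redshift UNLOAD statement.

    Illustrative sketch: paths and role ARN are placeholders, and real
    deployments should check the AWS docs for the encryption options
    their cluster supports.
    """
    sql = (
        f"UNLOAD ('{query}') "
        f"TO '{s3_path}' "
        f"IAM_ROLE '{iam_role}'"
    )
    if encrypted:
        sql += " ENCRYPTED"  # request encrypted files for the data landed on S3
    return sql + ";"

# Hypothetical bucket and role, for illustration only
print(build_unload("select id, email from customers",
                   "s3://my-bucket/exports/customers_",
                   "arn:aws:iam::123456789012:role/RedshiftUnload"))
```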
For SAP, we’ve added the ability to balance the load across all applications servers. With the new enhancement, we use a Type B connection to route our integration workflows through a SAP messaging server, which then connects with any available SAP application server. Now if an application server goes down, your integration workflows won’t go down with it. Instead, you’ll automatically be connected to the next available application server.
Additionally, we’ve expanded the capability of our SAP connector by adding support for ECC5. While our connector came out of the box with ECC6, ECC5 is still used by a number of our enterprise customers. The expanded support now provides them with the full coverage they and many other larger companies need.
Finally, for Salesforce, we’re updating to the newest versions of their APIs (Version 31) to ensure you have access to the latest features and capabilities. The upgrades are part of an aggressive roadmap strategy, which places updates of connectors to the latest APIs on our development schedule the instant they are announced.
The second major platform enhancement for the Winter 2015 release has to do with our Cloud Mapping Designer and is sure to please those familiar with PowerCenter. With the new release, PowerCenter users can perform secure hybrid data transformations – and sharpen their cloud data warehousing and data analytic skills – through a familiar mapping and design environment and interface.
Specifically, the new enhancement enables you to take a mapplet you’ve built in PowerCenter and bring it directly into the Cloud Mapping Designer, without any additional steps or manipulations. With the PowerCenter mapplets, you can perform multi-group transformations on objects, such as BAPIs. When you access the Mapplet via the Cloud Mapping Designer, the groupings are retained, enabling you to quickly visualize what you need, and navigate and map the fields.
Additional productivity enhancements to the Cloud Mapping Designer extend the lookup and sorting capabilities and give you the ability to upload or delete data automatically based on specific conditions you establish for each target. And with the new feature supporting fully parameterized, unconnected lookups, you’ll have increased flexibility to do your configurations at runtime.
The third and final major Winter release enhancement is to our Real Time capability. Most notable is the addition of three new features that improve the usability and functionality of the Process Designer.
The first of these is a new “Wait” step type. This new feature applies to both processes and guides and enables the user to add a time-based condition to an action within a service or process call step, and indicate how long to wait for a response before performing an action.
When used in combination with the Boundary timer event variation, the Wait step can be added to a service call step or sub-process step to interrupt the process or enable it to continue.
The second is a new select feature in the Process Designer which lets users create their own service connectors. Now when a user is presented with multiple process objects created when the XML or JSON is returned from a service, he or she can select the exact ones to include in the connector.
An additional Generate Process Objects feature automates the creation of objects, eliminating the tedious task of replicating whole service responses containing hierarchical XML and JSON data for large structures. These can now be conveniently auto-generated when testing a Service Connector, saving integration developers a lot of time.
The final enhancement for the Process Designer makes it simpler to work with XML-based services. The new “Simplified XML” feature for the “Get From” field treats attributes as children, removing the namespaces and making sibling elements into an object list. Now if a user only needs part of the returned XML, they just have to indicate the starting point for the simplified XML.
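As a rough sketch of what such a simplification might do — the rules below approximate the behavior described (attributes as children, namespaces removed, repeated siblings as a list) and are not Informatica's actual implementation:

```python
import xml.etree.ElementTree as ET

def local(tag):
    # Strip any namespace: '{uri}name' -> 'name'
    return tag.split('}', 1)[-1]

def simplify(elem):
    """Approximate 'Simplified XML' rules: attributes become children,
    namespaces are dropped, and repeated sibling elements become a list."""
    result = dict(elem.attrib)  # treat attributes as child values
    for child in elem:
        has_structure = len(child) or child.attrib
        value = simplify(child) if has_structure else (child.text or "")
        name = local(child.tag)
        if name in result:
            # Repeated sibling elements collapse into an object list
            if not isinstance(result[name], list):
                result[name] = [result[name]]
            result[name].append(value)
        else:
            result[name] = value
    return result

root = ET.fromstring(
    '<o:order xmlns:o="urn:x" id="7"><item sku="a"/><item sku="b"/></o:order>'
)
print(simplify(root))  # {'id': '7', 'item': [{'sku': 'a'}, {'sku': 'b'}]}
```

Picking a starting element for the simplification, as the feature allows, would simply mean calling `simplify` on a sub-element rather than the document root.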
While those conclude the major enhancements, additional improvements include:
- A JMS Enqueue step is now available to submit an XML or JSON message to a JMS queue or topic accessible via the Secure Agent.
- Dequeuing of XML or JSON request payloads from queues and topics is now fully supported.