Tag Archives: cloud

How Great Data in the Cloud Can Make for Greater Business Outcomes

Great Cloud Data Improves Business Outcomes

The technology you use in your business can either help or hinder your business objectives.

In the past, slow and manual processes had an inhibiting effect on customer services and sales interactions, thus dragging down the bottom line.

Now, with cloud technology and customers interacting at record speeds, companies expect greater returns from each business outcome. What do I mean when I say business outcome?

Well, according to Bluewolf’s State of Salesforce Report, you can split these into four categories: acquisition, expansion, retention and cost reduction.

With the right technology and planning, a business can speedily acquire more customers, expand to new markets, increase customer retention and ensure it is doing all of this efficiently and cost effectively. But what happens when the data, or the way you’re interacting with these technologies, grows unchecked or becomes corrupted and unreliable?

With data being the new fuel for decision-making, you need to make sure it’s clean, safe and reliable.

With clean data, Salesforce customers, in the above-referenced Bluewolf survey, reported efficiency and productivity gains (66%), improved customer experience (34%), revenue growth (32%) and cost reduction (21%) in 2014.

It’s been said that it costs a business 10X more to acquire new customers than it does to retain existing ones. But, despite the additional cost, real continued growth requires the acquisition of new customers.

Gaining new customers, however, requires a great sales team who knows what and to whom they’re selling. With Salesforce, you have that information at your fingertips, and the chance to let your sales team be as good as they can possibly be.

And this is where having good data fits in and becomes critically important. Because, well, you can have great technology, but it’s only going to be as good as the data you’re feeding it.

The same “garbage in, garbage out” maxim holds true for practically any data-driven or data-reliant business process or outcome, whether it’s attracting new customers or building a brand. And with the Salesforce Sales Cloud and Marketing Cloud you have the technology to both attract new customers and build great brands, but if you’re feeding your Clouds with inconsistent and fragmented data, you can’t trust that you’ve made the right investments or decisions in the right places.

The combination of good data and technology can help to answer so many of your critical business questions. How do I target my audience without knowledge of previous successes? What does my ideal customer look like? What did they buy? Why did they buy it?

For better or worse, but mainly better, answering those questions with just your intuition and/or experience is pretty much out of the question. Without the tools to look at, for example, past campaigns and sales, and to combine those views into a picture of who your real market is, you’ll never be fully effective.

The same is true for sales. Without the right Leads, and without the ability to interact with those Leads effectively, i.e., having the right contact details and company information, and knowing there’s only one version of that record, the discovery process can be long and painful.

But customer acquisition isn’t the only place where data plays a vital role.

When expanding to new markets or upselling and cross selling to existing customers, it’s the data you collect and report on that will help inform where you should focus your efforts.

Knowing what existing relationships you can leverage can make the difference between proactively offering solutions to your customers and losing them to a competitor. With Salesforce’s Analytics Cloud, this visibility that used to take weeks and months to view can now be put together in a matter of minutes. But how do you make strategic decisions on what market to tap into or what relationships to leverage, if you can only see one or two regions? What if you could truly visualize how you interact with your customers?  Or see beyond the hairball of interconnected business hierarchies and interactions to know definitively what subsidiary, household or distributor has what? Seeing the connections you have with your customers can help uncover the white space that you could tap into.

Naturally this entire process means nothing if you’re not actually retaining these customers. Again, this is another area that is fuelled by data. Knowing who your customers are, what issues they’re having and what they could want next could help ensure you are always providing your customer with the ultimate experience.

Last, but by no means least, there is cost reduction. Only by ensuring that all of this data is clean — and continuously cleansed — and your Cloud technologies are being fully utilized, can you then help ensure the maximum return on your Cloud investment.

Learn more about how Informatica Cloud can help you maximize your business outcomes through ensuring your data is trusted in the Cloud.

Cloud Integration Issues? Look to the Enterprise Architects

Cloud Integration and Enterprise Architects

According to an article by Jan Stafford, “When enterprises adopt cloud computing, many of their legacy methods of software integration are instantly obsolete. Hanging on to old integration methods is like trying to fit square pegs into round holes…”

It’s true.  Data integration is a whole new game, compared to five years ago, or, in some organizations, five minutes ago.  The right approaches to data integration continue to evolve around a few principal forces: First, the growth of cloud computing, as pointed out by Stafford.  Second, the growing use of big data systems, and the emerging use of data as a strategic asset for the business.

These forces combine to drive us to the understanding that old approaches to data integration won’t provide the value that they once did.  As someone who was a CTO of three different data integration companies, I’ve seen these patterns change over the time that I was building technology, and that change has accelerated in the last 7 years.

The core opportunities lie with the enterprise architect, and their ability to drive an understanding of the value of data integration, as well as drive change within their organization.  After all, they, or the enterprise’s CTOs and CIOs (whoever makes decisions about technological approaches), are supposed to drive the organization in the right technical directions that will provide the best support for the business.  While most enterprise architects follow the latest hype, such as cloud computing and big data, many have missed the underlying data integration strategies and technologies that will support these changes.

“The integration challenges of cloud adoption alone give architects and developers a once in a lifetime opportunity to retool their skillsets for a long-term, successful career, according to both analysts. With the right skills, they’ll be valued leaders as businesses transition from traditional application architectures, deployment methodologies and sourcing arrangements.”

The problem is that, while most agree that data integration is important, they typically don’t understand what it is, and the value it can bring.  These days, many developers live in a world of instant updates.  With emerging DevOps approaches and infrastructure, they really don’t get the need, or the mechanisms, required to share data between application or database silos.  In many instances, they resort to coding interfaces between source and target systems.  This leads to brittle and unreliable integration solutions, and thus hurts, rather than helps, new cloud application and big data deployments.

The message is clear: Those charged with defining technology strategies within enterprises need to also focus on data integration approaches, methods, patterns, and technologies.  Failing to do so means that the investments made in new and emerging technology, such as cloud computing and big data, will fail to provide the anticipated value.  At the same time, enterprise architects need to be empowered to make such changes.  Most enterprises are behind on this effort.  Now it’s time to get to work.

Salesforce Lightning Connect and OData: What You Need to Know

Salesforce Lightning Connect and OData

Last month, Salesforce announced that they are democratizing integration through the introduction of Salesforce1 Lightning Connect. This new capability makes it possible to work with data that is stored outside of Salesforce using the same force.com constructs (SOQL, Apex, VisualForce, etc.) that are used with Salesforce objects. The important caveat is that the external data has to be available through the OData protocol, and the provider of that protocol has to be accessible from the internet.

I think this new capability, Salesforce Lightning Connect, is an innovative development and gives OData, an OASIS standard, a leg-up on its W3C-defined competitor Linked Data. OData is a REST-based protocol that provides access to data over the web. The fundamental data model is relational and the query language closely resembles what is possible with stripped-down SQL. This is much more familiar to most people than the RDF-based model used by Linked Data or its SPARQL query language.

Standardization of OData has been going on for years (they are working on version  4), but it has suffered from a bit of a chicken-egg problem. Applications haven’t put a large priority on supporting the consumption of OData because there haven’t been enough OData providers, and data providers haven’t prioritized making their data available through OData because there haven’t been enough consumers. With Salesforce, a cloud leader declaring that they will consume OData, the equation changes significantly.

But these things take time – what does a user of Salesforce (or any other OData consumer) do if most of the data sources they have cannot be accessed through an OData provider? It is the old last-mile problem faced by any communications or integration technology. It is fine to standardize, but how do you get all the existing endpoints to conform to the standard? You need someone to do the labor-intensive work of converting lots of endpoints to the standard representation.

Informatica has been in the last-mile business for years. As it happens, the canonical model that we always used has been a relational model that lines up very well with the model used by OData. For us to host an OData provider for any of the data sources that we already support, we only needed to do one conversion from the internal format that we’ve always used to the OData standard. This OData provider capability will be available soon.

But there is also the firewall issue. The consumer of the OData has to be able to access the OData provider. So, if you want Salesforce to be able to show data from your Oracle database, you would have to open up a hole in your firewall that provides access to your database. Not many people are interested in doing that – for good reason.

Informatica Cloud’s Vibe secure agent architecture is a solution to the firewall issue that will also work with the new OData provider. The OData provider will be hosted on Informatica’s Cloud servers, but will have access to any installed secure agents. Agents require a one-time install on-premise, but are thereafter managed from the cloud and are automatically kept up-to-date with the latest version by Informatica. An agent doesn’t require a port to be opened, but instead opens up an outbound connection to the Informatica Cloud servers through which all communication occurs. The agent then has access to any on-premise applications or data sources.

OData is especially well suited to reading external data. However, there are better ways to create or update external data. One problem is that Lightning Connect only handles reads, but even when writes are possible, it isn’t usually appropriate to add data to most applications by just inserting records in tables. Usually a collection of related information must be provided in order for the update to make sense. To facilitate this, applications provide APIs that offer a higher level of abstraction for updates. Informatica Cloud Application Integration can be used now to read or write data to external applications from within Salesforce through the use of guides that can be displayed from any Salesforce screen. Guides make it easy to generate a friendly user interface that shows exactly the data you want your users to see and to guide them through the collection of new or updated data that needs to be written back to your app.

With the Winter 2015 Release, Informatica Cloud Advances Real Time and Batch Integration for Citizen Integrators Everywhere

Informatica Cloud Winter 2015 Release

For those who work in tech, or even have a passing interest in the latest computing trends, it was hard to miss the buzz coming out of Dreamforce and Amazon re:Invent. As a partner to both companies, engaged on a parallel path, Informatica Cloud is equally excited about these new developments. With the upcoming Winter 2015 release, we have three new platform enhancements that will take those capabilities even further.

The first of these is in the area of connectivity and brings a whole new set of features and capabilities to those who use our platform to connect with Salesforce, Amazon Redshift, NetSuite and SAP.

Starting with Amazon, the Winter 2015 release leverages the new Redshift Unload Command, giving any user the ability to securely perform bulk queries, and quickly scan and place multiple columns of data in the intended target, without the need for ODBC or JDBC connectors.  We are also ensuring the data is encrypted at rest on the S3 bucket while loading data into Redshift tables; this provides an additional layer of security around your data.

For SAP, we’ve added the ability to balance the load across all application servers. With the new enhancement, we use a Type B connection to route our integration workflows through an SAP message server, which then connects with any available SAP application server. Now if an application server goes down, your integration workflows won’t go down with it. Instead, you’ll automatically be connected to the next available application server.

Additionally, we’ve expanded the capability of our SAP connector by adding support for ECC5. While our connector came out of the box with ECC6, ECC5 is still used by a number of our enterprise customers. The expanded support now provides them with the full coverage they and many other larger companies need.

Finally, for Salesforce, we’re updating to the newest versions of their APIs (Version 31) to ensure you have access to the latest features and capabilities. The upgrades are part of an aggressive roadmap strategy, which places updates of connectors to the latest APIs on our development schedule the instant they are announced.

The second major platform enhancement for the Winter 2015 release has to do with our Cloud Mapping Designer and is sure to please those familiar with PowerCenter. With the new release, PowerCenter users can perform secure hybrid data transformations – and sharpen their cloud data warehousing and data analytic skills – through a familiar mapping and design environment and interface.

Specifically, the new enhancement enables you to take a mapplet you’ve built in PowerCenter and bring it directly into the Cloud Mapping Designer, without any additional steps or manipulations. With the PowerCenter mapplets, you can perform multi-group transformations on objects, such as BAPIs. When you access the Mapplet via the Cloud Mapping Designer, the groupings are retained, enabling you to quickly visualize what you need, and navigate and map the fields.

Additional productivity enhancements to the Cloud Mapping Designer extend the lookup and sorting capabilities and give you the ability to upload or delete data automatically based on specific conditions you establish for each target. And with the new feature supporting fully parameterized, unconnected lookups, you’ll have increased flexibility in runtime to do your configurations.

The third and final major Winter release enhancement is to our Real Time capability. Most notable is the addition of three new features that improve the usability and functionality of the Process Designer.

The first of these is a new “Wait” step type. This new feature applies to both processes and guides and enables the user to add a time-based condition to an action within a service or process call step, and indicate how long to wait for a response before performing an action.

When used in combination with the Boundary timer event variation, the Wait step can be added to a service call step or sub-process step to interrupt the process or enable it to continue.

The second is a new select feature in the Process Designer which lets users create their own service connectors. Now when a user is presented with multiple process objects created when the XML or JSON is returned from a service, he or she can select the exact ones to include in the connector.

An additional Generate Process Objects feature automates the creation of objects, thus eliminating the tedious task of replicating whole service responses containing hierarchical XML and JSON data for large structures. These can now be conveniently auto-generated when testing a Service Connector, saving integration developers a lot of time.

The final enhancement for the Process Designer makes it simpler to work with XML-based services. The new “Simplified XML” feature for the “Get From” field treats attributes as children, removing the namespaces and making sibling elements into an object list. Now if a user only needs part of the returned XML, they just have to indicate the starting point for the simplified XML.

While those conclude the major enhancements, additional improvements include:

  • A JMS Enqueue step is now available to submit an XML or JSON message to a JMS Queue or Topic accessible via a secure agent.
  • Dequeuing (queue and topics) of XML or JSON request payloads is now fully supported.

Amazon re:Invent 2014 Recap: “Cloud has Become the New Normal”

It’s amazing how fast a year goes by. Last year, Informatica Cloud exhibited at Amazon re:Invent for the very first time where we showcased our connector for Amazon Redshift. At the time, customers were simply kicking the tires on Amazon’s newest cloud data warehousing service, and trying to learn where it might make sense to fit Amazon Redshift into their overall architecture. This year, it was clear that customers had adopted several AWS services and were truly “all-in” on the cloud. In the words of Andy Jassy, Senior VP of Amazon Web Services, “Cloud has become the new normal”.

During Day 1 of the keynote, Andy outlined several areas of growth across the AWS ecosystem, such as a 137% YoY increase in data transfer to and from Amazon S3, and a 99% YoY increase in Amazon EC2 instance usage. On Day 2 of the keynote, Werner Vogels, CTO of Amazon, made the case that there has never been a better time to build apps on AWS because of all the enterprise-grade features. Several customers came on stage during both keynotes to demonstrate their use of AWS:

  • Major League Baseball’s Statcast application consumed 17PB of raw data
  • Philips Healthcare used over a petabyte a month
  • Intuit revealed their plan to move the rest of their applications to AWS over the next few years
  • Johnson & Johnson outlined their use of Amazon’s Virtual Private Cloud (VPC) and referred to their use of hybrid cloud as the “borderless datacenter”
  • Omnifone illustrated how AWS has the network bandwidth required to deliver their hi-res audio offerings
  • The Weather Company scaled AWS across 4 regions to deliver 15 billion forecast publications a day

Informatica was also mentioned on stage by Andy Jassy as one of the premier ISVs that had built solutions on top of the AWS platform. Indeed, after having just one connector in the AWS ecosystem last year (for Amazon Redshift), Informatica has since released native connectors for Amazon DynamoDB, Elastic MapReduce (EMR), S3, Kinesis, and RDS.

With so many customers using AWS, it becomes hard for them to track their usage on a more granular level – this is especially true with enterprise companies using AWS because of the multitude of departments and business units using several AWS services. Informatica Cloud and Tableau developed a joint solution which was showcased at the Amazon re:Invent Partner Theater, where it was possible for an IT Operations individual to drill down into several dimensions to find out the answers they need around AWS usage and cost. IT Ops personnel can point out the relevant data points in their data model, such as availability zone, rate, and usage type, to name a few, and use Amazon Redshift as the cloud data warehouse to aggregate this data. Informatica Cloud’s Vibe Integration Packages combined with its native connectivity to Amazon Redshift and S3 allow the data model to be reflected as the correct set of tables in Redshift. Tableau’s robust visualization capabilities then allow users to drill down into the data model to extract whatever insights they require. Look for more to come from Informatica Cloud and Tableau on this joint solution in the upcoming weeks and months.

Remembering Big Data Gravity – PART 2

I ended my previous blog wondering if awareness of Data Gravity should change our behavior. While Data Gravity adds Value to Big Data, I find that the application of that Value is under-explained.

Exponential growth of data has naturally led us to want to categorize it into facts, relationships, entities, etc. This sounds very elementary. While this happens so quickly in our subconscious minds as humans, it takes significant effort to teach this to a machine.

A friend tweeted this to me last week: I paddled out today, now I look like a lobster. Since this tweet, Twitter has inundated my friend and me with promotions from Red Lobster. It is because the machine deconstructed the tweet: paddled <PROPULSION>, today <TIME>, like <PREFERENCE> and lobster <CRUSTACEANS>. While putting these together, the machine decided that the keyword was lobster. You and I both know that my friend was not talking about lobsters.

You may think that this may be just a funny edge case. You can confuse any computer system if you try hard enough, right? Unfortunately, this isn’t an edge case. 140 characters has not just changed people’s tweets, it has changed how people talk on the web. More and more information is communicated in smaller and smaller amounts of language, and this trend is only going to continue.

When will the machine understand that “I look like a lobster” means I am sunburned?

I believe the reason that there are not hundreds of companies exploiting machine-learning techniques to generate a truly semantic web is the lack of weighted edges in publicly available ontologies. Keep reading, it will all make sense in about 5 sentences. Lobster and Sunscreen are 7 hops away from each other in DBpedia – way too many to draw any correlation between the two. For that matter, any article in Wikipedia is connected to any other article within about 14 hops, and that’s the extreme. Completely unrelated concepts are often just a few hops from each other.

But by analyzing massive amounts of both written and spoken English text from articles, books, social media, and television, it is possible for a machine to automatically draw a correlation and create a weighted edge between the Lobsters and Sunscreen nodes that effectively short circuits the 7 hops necessary. Many organizations are dumping massive amounts of facts without weights into our repositories of total human knowledge because they are naïvely attempting to categorize everything without realizing that the repositories of human knowledge need to mimic how humans use knowledge.

For example – if you hear the name Babe Ruth, what is the first thing that pops to mind? Roman Catholics from Maryland born in the 1800s or Famous Baseball Player?

If you look in Wikipedia today, he is categorized under 28 categories, each of them with the same level of attachment. 1895 births | 1948 deaths | American League All-Stars | American League batting champions | American League ERA champions | American League home run champions | American League RBI champions | American people of German descent | American Roman Catholics | Babe Ruth | Baltimore Orioles (IL) players | Baseball players from Maryland | Boston Braves players | Boston Red Sox players | Brooklyn Dodgers coaches | Burials at Gate of Heaven Cemetery | Cancer deaths in New York | Deaths from esophageal cancer | Major League Baseball first base coaches | Major League Baseball left fielders | Major League Baseball pitchers | Major League Baseball players with retired numbers | Major League Baseball right fielders | National Baseball Hall of Fame inductees | New York Yankees players | Providence Grays (minor league) players | Sportspeople from Baltimore, Maryland | Vaudeville performers.

Now imagine how confused a machine would get when the distance of unweighted edges between nodes is used as a scoring mechanism for relevancy.

If I were to design an algorithm that uses weighted edges (on a scale of 1-5, with 5 being the highest), the same search would yield a much more obvious result.

1895 births [2]| 1948 deaths [2]| American League All-Stars [4]| American League batting champions [4]| American League ERA champions [4]| American League home run champions [4]| American League RBI champions [4]| American people of German descent [2]| American Roman Catholics [2]| Babe Ruth [5]| Baltimore Orioles (IL) players [4]| Baseball players from Maryland [3]| Boston Braves players [4]| Boston Red Sox players [5]| Brooklyn Dodgers coaches [4]| Burials at Gate of Heaven Cemetery [2]| Cancer deaths in New York [2]| Deaths from esophageal cancer [1]| Major League Baseball first base coaches [4]| Major League Baseball left fielders [3]| Major League Baseball pitchers [5]| Major League Baseball players with retired numbers [4]| Major League Baseball right fielders [3]| National Baseball Hall of Fame inductees [5]| New York Yankees players [5]| Providence Grays (minor league) players [3]| Sportspeople from Baltimore, Maryland [1]| Vaudeville performers [1].

Now the machine starts to think more like a human. The above example forces us to ask ourselves about the relevancy, a.k.a. the Value, of the response. This is where I think Data Gravity becomes relevant.

You can contact me on twitter @bigdatabeat with your comments.

Amazon Web Services and Informatica Deliver Data-Ready Cloud Computing Infrastructure for Every Business

At re:Invent 2014 in Las Vegas,  Informatica and AWS announced a broad strategic partnership to deliver data-ready cloud computing infrastructure to any type or size of business.

Informatica’s comprehensive portfolio of Informatica Cloud and PowerCenter solutions connects to multiple AWS Data Services including Amazon Redshift, RDS, DynamoDB, S3, EMR and Kinesis – the broadest pre-built connectivity available to AWS Data Services. Informatica and AWS offerings are pre-integrated, enabling customers to rapidly and cost-effectively implement data warehousing, large scale analytics, lift and shift, and other key use cases in cloud-first and hybrid IT environments. Now, any company can use Informatica’s portfolio to get a plug-and-play on-ramp to the cloud with AWS.

Economical and Flexible Path to the Cloud

As business information needs intensify and data environments become more complex, the combination of AWS and Informatica enables organizations to increase the flexibility and reduce the costs of their information infrastructures through:

  • More cost-effective data warehousing and analytics – Customers benefit from lower costs and increased agility when unlocking the value of their data with no on-premise data warehousing/analytics environment to design, deploy and manage.
  • Broad, easy connectivity to AWS – Customers gain full flexibility in integrating data from any Informatica-supported data source (the broadest set of sources supported by any integration vendor) through the use of pre-built connectors for AWS.
  • Seamless hybrid integration – Hybrid integration scenarios across Informatica PowerCenter and Informatica Cloud data integration deployments are able to connect seamlessly to AWS services.
  • Comprehensive use case coverage – Informatica solutions for data integration and warehousing, data archiving, data streaming and big data across cloud and on-premise applications mesh with AWS solutions such as RDS, Redshift, Kinesis, S3, DynamoDB, EMR and other AWS ecosystem services to drive new and rapid value for customers.

New Support for AWS Services

Informatica introduced a number of new Informatica Cloud integrations with AWS services, including connectors for Amazon DynamoDB, Amazon Elastic MapReduce (Amazon EMR) and Amazon Simple Storage Service (Amazon S3), to complement the existing connectors for Amazon Redshift and Amazon Relational Database Service (Amazon RDS).

Additionally, the latest Informatica PowerCenter release for Amazon Elastic Compute Cloud (Amazon EC2) includes support for:

  • PowerCenter Standard Edition and Data Quality Standard Edition
  • Scaling options – Grid, high availability, pushdown optimization, partitioning
  • Connectivity to Amazon RDS and Amazon Redshift
  • Domain and repository DB in Amazon RDS per the current database PAM (Product Availability Matrix)

To learn more, try our 60-day free Informatica Cloud trial for Amazon Redshift.

If you’re in Vegas, please come by Booth #1031 at re:Invent, Nov. 11-14, in the Venetian / Sands Expo Hall.

Big Data Driving Data Integration at the NIH

The National Institutes of Health announced new grants to develop big data technologies and strategies.

“The NIH multi-institute awards constitute an initial investment of nearly $32 million in fiscal year 2014 by NIH’s Big Data to Knowledge (BD2K) initiative and will support development of new software, tools and training to improve access to these data and the ability to make new discoveries using them, NIH said in its announcement of the funding.”

The grants will address issues around Big Data adoption, including:

  • Locating data and the appropriate software tools to access and analyze the information.
  • Lack of data standards, or low adoption of standards across the research community.
  • Insufficient policies to facilitate data sharing while protecting privacy.
  • Unwillingness to collaborate that limits the data’s usefulness in the research community.

Among the tasks funded is the creation of a “Perturbation Data Coordination and Integration Center.”  The center will provide support for data science research that focuses on interpreting and integrating data from different data types and databases.  In other words, it will make sure the data moves to where it should move, in order to provide access to information that’s needed by the research scientist.  Fundamentally, it’s data integration practices and technologies.

This is very interesting from the standpoint that the movement into big data systems often drives a reevaluation of, or even new interest in, data integration.  As the data becomes strategically important, the need to provide core integration services becomes even more important.

The project at the NIH will be interesting to watch, as it progresses.  These are the guys who come up with the new paths to us being healthier and living longer.  The use of Big Data provides the researchers with the advantage of having a better understanding of patterns of data, including:

  • Patterns of symptoms that lead to the diagnosis of specific diseases and ailments.  Doctors may get these data points one at a time.  When unstructured or structured data exists, researchers can find correlations, and thus provide better guidelines to physicians who see the patients.
  • Patterns of cures that are emerging around specific treatments.  The ability to determine what treatments are most effective, by looking at the data holistically.
  • Patterns of failure.  When the outcomes are less than desirable, what seems to be a common issue that can be identified and resolved?

Of course, the uses of big data technology are limitless, when considering the value of knowledge that can be derived from petabytes of data.  However, it’s one thing to have the data, and another to have access to it.

Data integration should always be systemic to all big data strategies, and the NIH clearly understands this to be the case.  Thus, they have funded data integration along with the expansion of their big data usage.

Most enterprises will follow much the same path in the next 2 to 5 years.  Information provides a strategic advantage to businesses.  In the case of the NIH, it’s information that can save lives.  Can’t get much more important than that.

What is the Silver Lining in Cloud for Financial Services?

This was a great week of excitement and innovation here in San Francisco, starting with the San Francisco Giants winning the National League Pennant for the 3rd time in 5 years on the same day Salesforce’s Dreamforce 2014 wrapped up its largest customer conference, with 140K+ attendees from all over the world talking about the new Customer Success Platform.

Salesforce has come a long way from its humble beginnings as the new kid on the cloud front for CRM. The integrated sales, marketing, support, collaboration, application, and analytics capabilities of the Salesforce Customer Success Platform exemplify innovation and significant business value upside for various industries; I see it as especially promising for today’s financial services industry. However, as with any new business application, the value a business gains from it depends on having the right data available for the business.

The reality is, SaaS adoption by financial institutions has not been as quick as other industries due to privacy concerns, regulations that govern what data can reside in public infrastructures, ability to customize to fit their business needs, cultural barriers within larger institutions that critical business applications must reside on-premise for control and management purposes, and the challenges of integrating data to and from existing systems with SaaS applications.  However, experts are optimistic that the industry may have turned the corner. Gartner (NYSE:IT) asserts more than 60 percent of banks worldwide will process the majority of their transactions in the cloud by 2016.  Let’s take a closer look at some of the challenges and what’s required to overcome these obstacles when adopting cloud solutions to power your business.

Challenge #1:  Integrating and sharing data between SaaS and on-premise must not be taken lightly

Most banks and insurance companies considering new SaaS-based CRM, marketing, and support applications from Salesforce and others must consider the importance of migrating and sharing data between cloud and on-premise applications in their investment decisions.  Migrating existing customer, account, and transaction history data is often done by IT staff through the use of custom extracts, scripts, and manual data validations, which can carry over invalid information from legacy systems, making these new application investments useless in many cases.

For example, customer type descriptions from one or many existing systems may be correct in their respective databases; collapsing them into a common field in the target application seems easy to do. Unfortunately, these transformation rules can be complex, and that complexity increases when dealing with tens if not hundreds of applications during the migration and synchronization phase. Having capable solutions to support the testing, development, quality management, validation, and delivery of existing data from old to new is not only good practice, but a proven way of avoiding costly workarounds and business pain in the future.

Challenge #2:  Managing and sharing a trusted source of shared business information across the enterprise

As new SaaS applications are adopted, it is critical to understand how to best govern and synchronize common business information, such as customer contact information (e.g. address, phone, email), across the enterprise. Most banks and insurance companies have multiple systems that create and update critical customer contact information, many of which reside on-premise. For example, an insurance customer who updates contact information such as a phone number or email address while filing a claim will often result in the claims specialist entering or updating only the claims system, given the siloed nature of many traditional banking and insurance companies. This is the power of Master Data Management, which is purposely designed to identify changes to master data, including customer records, in one or many systems, update the customer master record, and share that update across the other systems that house and require it; this is essential for business continuity and success.

In conclusion, SaaS adoption will continue to grow in financial services and across other industries. The silver lining in the cloud is your data and the technology that supports the consumption and distribution of it across the enterprise. Banks and insurance companies investing in new SaaS solutions will operate in a hybrid environment made up of cloud applications and core transaction systems that reside on-premise. To ensure these investments yield value for the business, it is important to invest in a capable and scalable data integration platform to integrate, govern, and share data in a hybrid ecosystem. To learn more about how to deal with these challenges, click here and download a complimentary copy of the new “Salesforce Integration for Dummies”.

Informatica Cloud Powers a New Era in Cloud Analytics with Salesforce Wave Analytics Cloud at Dreamforce 2014

We are halfway through Dreamforce and it’s been an eventful and awesome couple of days so far. The biggest launch by far was the announcement of Wave, the Salesforce Analytics Cloud, Salesforce’s new entry into Cloud analytics and business intelligence. Informatica has been the integration leader for enterprise analytics for 20 years, and our leadership continues with Cloud analytics, as our Informatica Cloud portfolio is the only solution that Completes Salesforce Analytics Cloud for Big Data, fully enabling companies to use Salesforce Analytics Cloud to understand their customers like never before. But don’t take our word for it, view the Analytics Cloud Keynote from Dreamforce 2014, and see Alex Dayon uniquely call out Informatica as their key integration partner during his keynote.

DIY Great Customer Data

The Informatica Cloud Portfolio delivers a broad set of analytics-centric services for the Salesforce Analytics Cloud, including bulk and real time application integration, data integration, data preparation, test data management, data quality and master data management (MDM) services. The portfolio is designed for high volume data sets from transactional applications such as SAP, cloud applications like Workday and new data sources such as Hadoop, Microsoft Azure and Amazon Web Services.

We have a great booth in the Analytics Zone, Moscone West, 3rd floor, where you can see demos of Informatica Cloud for Salesforce Wave Analytics and get lots more details from product experts.

And, you don’t need to wait till Dreamforce is over to try out Informatica Cloud for Salesforce Analytics. The free trial of Informatica Cloud, including Springbok, for Salesforce Analytics Cloud is available now. Trial users have unlimited usage of Informatica Cloud capabilities for Salesforce Analytics Cloud for 60 days, free of charge.

Aside from new product launches, and tons of partner activities going on, we’ve also got some great customers speaking at DF. Today, we have a great session on “Get Closer to Your Customers Using Agile Data Management with Salesforce” with executive speakers from BT, Dolby and Travel Corporation explaining how they achieve customer insight with use cases ranging from integrating 9 Salesforce orgs into a single business dashboard to unifying 30+ acquired travel brands into a single customer view.

On Monday, we had Qualcomm and Warranty Group present how their companies have moved to the Cloud using Salesforce and Informatica Cloud to meet the agility needs of their businesses while simultaneously resolving the challenges of data scaling, organization complexity and evolving technology strategy to make it all happen.

Win $10k from Informatica!

Drop by our main booth in Moscone North, N1216 to see live demos showcasing solutions for Customer Centricity, Salesforce Data Lifecycle and Analytics Cloud. If you want a preview of our Informatica Cloud solutions for the Salesforce ecosystem, click here.

During Dreamforce, we also announced a significant milestone for Informatica Cloud, which now processes over 100 Billion transactions per month, on behalf of our 3,000+ joint customers with Salesforce.

Oh, and one more thing we announced at DF: the Informatica Cloud Data Wizard, our next-generation data loader for Salesforce, which delivers a beautifully simple user experience natively inside Salesforce, letting non-technical business analysts and admins easily bring external data into Salesforce with a one-touch UI. Really!

For more information on how you can connect with Informatica at Dreamforce 2014, get all the details at informaticacloud.com/dreamforce
