Category Archives: Cloud

Salesforce Lightning Connect and OData: What You Need to Know

Last month, Salesforce announced that it is democratizing integration through the introduction of Salesforce1 Lightning Connect. This new capability makes it possible to work with data that is stored outside of Salesforce using the same Force.com constructs (SOQL, Apex, Visualforce, etc.) that are used with Salesforce objects. The important caveat is that the external data has to be available through the OData protocol, and the provider of that protocol has to be accessible from the internet.

I think this new capability, Salesforce Lightning Connect, is an innovative development that gives OData, an OASIS standard, a leg up on its W3C-defined competitor, Linked Data. OData is a REST-based protocol that provides access to data over the web. The fundamental data model is relational, and the query language closely resembles stripped-down SQL. This is much more familiar to most people than the RDF-based model used by Linked Data or its SPARQL query language.
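
To make the comparison concrete, here is a minimal sketch of what an OData read looks like on the wire, using Python's requests library against a hypothetical endpoint; the service URL, entity set, and field names are illustrative and not taken from any particular provider.

```python
import requests
from urllib.parse import quote

# Hypothetical OData endpoint; entity set and field names are illustrative.
BASE_URL = "https://example.com/odata/Orders"

# OData expresses queries as URL options that read like stripped-down SQL:
# $select ~ column list, $filter ~ WHERE clause, $orderby ~ ORDER BY, $top ~ LIMIT.
query = (
    "?$select=OrderId,CustomerName,Total"
    "&$filter=" + quote("Total gt 1000 and Status eq 'Open'")
    + "&$orderby=" + quote("Total desc")
    + "&$top=10"
)

response = requests.get(BASE_URL + query, headers={"Accept": "application/json"})
response.raise_for_status()

# An OData JSON response wraps the result rows in a "value" array.
for row in response.json().get("value", []):
    print(row["OrderId"], row["CustomerName"], row["Total"])
```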

Standardization of OData has been going on for years (work is now on version 4), but it has suffered from a bit of a chicken-and-egg problem. Applications haven’t put a high priority on supporting the consumption of OData because there haven’t been enough OData providers, and data providers haven’t prioritized making their data available through OData because there haven’t been enough consumers. With Salesforce, a cloud leader, declaring that it will consume OData, the equation changes significantly.

But these things take time – what does a user of Salesforce (or any other OData consumer) do if most of their data sources cannot be accessed through an OData provider? It is the old last-mile problem faced by any communications or integration technology: it is fine to standardize, but how do you get all the existing endpoints to conform to the standard? You need someone to do the labor-intensive work of converting lots of endpoints to the standard representation.

Informatica has been in the last-mile business for years. As it happens, the canonical model we have always used is relational and lines up very well with the model used by OData. For us to host an OData provider for any of the data sources we already support, we only needed to build one conversion, from our long-standing internal format to the OData standard. This OData provider capability will be available soon.

But there is also the firewall issue. The consumer of the OData has to be able to access the OData provider. So, if you want Salesforce to be able to show data from your Oracle database, you would have to open up a hole in your firewall that provides access to your database. Not many people are interested in doing that – for good reason.

Informatica Cloud’s Vibe secure agent architecture is a solution to the firewall issue that will also work with the new OData provider. The OData provider will be hosted on Informatica’s Cloud servers, but will have access to any installed secure agents. Agents require a one-time install on-premise, but are thereafter managed from the cloud and are automatically kept up to date with the latest version by Informatica. An agent doesn’t require a port to be opened; instead, it opens an outbound connection to the Informatica Cloud servers through which all communication occurs. The agent then has access to any on-premise applications or data sources.

OData is especially well suited to reading external data, but there are better ways to create or update external data. One problem is that Lightning Connect currently handles only reads; but even if it handled writes, it usually isn’t appropriate to add data to an application by simply inserting records into tables. A collection of related information usually must be provided for the update to make sense. To facilitate this, applications provide APIs that offer a higher level of abstraction for updates. Informatica Cloud Application Integration can be used today to read or write data to external applications from within Salesforce through the use of guides that can be displayed from any Salesforce screen. Guides make it easy to generate a friendly user interface that shows exactly the data you want your users to see and to walk them through the collection of new or updated data that needs to be written back to your app.

Take These Steps to Avoid Wasting Your Marketing Technology Budget

This year, the irresistible pull of digital marketing met an unstoppable force: Girl Scout cookies. The cookie program is an $800 million-a-year fundraiser that is only expected to grow with the newly announced addition of digital sales.

The New York Times reports that beginning this month and continuing into January, for the first time, the Girl Scouts of America will be able to sell Thin Mints and other favorites online through invite-only websites. The websites will be accompanied by a mobile app, giving customers new digital options.

The Girl Scouts’ move from a purely door-to-door approach to one that includes a digital program is just one more sign of where marketing trends are heading.

From digital cookies to digital marketing technology:

If 2014 is the year of the digital cookie, then 2015 will be the year of marketing technology. Here are just a few of the strongest indicators:

  • A study found that 67% of marketing departments plan to increase spending on technology over the next two years, according to the Harvard Business Review.
  • Gartner predicts that by 2017, CMOs will outspend CIOs on IT-related expenses.
  • Also by 2017, one-third of the total marketing budget will be dedicated to digital marketing, according to survey results from Teradata.
  • A new LinkedIn/Salesforce survey found that 56% of marketers see their relationships with the CIO as very important or critical.
  • Social media is a mainstream channel for marketers, making technology for measuring and managing this channel of paramount importance. This is not just true of B2C companies: among high-level executive B2B buyers, 75% used social media to make purchasing decisions, according to a 2014 survey by market research firm IDC.

From social to analytics to email marketing, much of what marketers see in technology offerings is often labeled as “cloud-based.” While cloud technology has many features and benefits, what are we really saying when we talk about the cloud?

What the cloud means… to marketers.

Beginning around 2012, multitudes of businesses in many industries began attaching “the cloud” to their products or services as a feature or a benefit. Whether or not the business truly was cloud-based was not always clear, which led to the term “cloudwashing.” We hear so much about the cloud that it is easy to overlook what it really means and what the benefits really are.

The cloud is more than a buzzword – and in particular, marketers need to know what it truly means to them.

For marketers, “the cloud” has many benefits. A cloud-based offering gives you tremendous flexibility and choice in the way you use a product or service:

  • A cloud-enabled product or service can be integrated into your existing systems. For marketers, this can include integration with websites, marketing automation systems, CRMs, point-of-sale platforms, and any other business application.
  • You don’t have to learn a new system, the way you might when adopting a new application, software package, or other enterprise system. You won’t have to set aside a lot of time and effort for new training for you or your staff.
  • Due to the flexibility that lets you integrate anywhere, you can deploy a cloud-based product or service across all of your organization’s applications or processes, increasing efficiencies and ensuring that all of your employees have access to the same technology tools at the same time.
  • There’s no need to worry about ongoing system updates, as those happen automatically behind the scenes.

In 2015, marketers should embrace the convenience of cloud-based services, as they help put the focus on benefits instead of spending time managing the technology.

Are you using data quality in the cloud?

If you are planning to move data out of an on-premise application or software to a cloud-based service, you can take advantage of this ideal time to ensure these data quality best practices are in place.

Verify and cleanse your data first, before it is moved to the cloud. Since it’s likely that your move to the cloud will make this data available across your organization — within marketing, sales, customer service, and other departments — applying data quality best practices first will increase operational efficiency and bring down costs from invalid or unusable data.

Depending on the nature of your own business there may be more to add to this list, but at a minimum, make sure that:

  • Postal addresses are valid, accurate, current and complete
  • Email addresses are valid
  • Telephone numbers are valid, accurate, and current
  • All data fields are consistent and every individual data element is clearly defined, to increase the effectiveness of future data analysis
  • Missing data is filled in
  • Duplicate contact and customer records are removed

Once you have cleansed and verified your existing data and moved it to the cloud, use a verification and cleansing solution at the point of entry or point of collection, in real time, to ensure good data quality across your organization on an ongoing basis.
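
As a rough illustration of the kind of point-of-entry checks described above, the sketch below validates and deduplicates contact records in Python. The field names and rules are hypothetical and far simpler than what a real verification and cleansing service performs.

```python
import re

# Illustrative field names and rules; real verification services do far more
# (postal address validation, phone line-type checks, reference data, etc.).
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
PHONE_RE = re.compile(r"^\+?[0-9 ()\-]{7,20}$")

def validate_contact(record: dict) -> list[str]:
    """Return a list of data-quality problems found in a contact record."""
    problems = []
    if not EMAIL_RE.match(record.get("email", "")):
        problems.append("invalid email")
    if not PHONE_RE.match(record.get("phone", "")):
        problems.append("invalid phone")
    for field in ("street", "city", "postal_code", "country"):
        if not record.get(field):
            problems.append(f"missing {field}")
    return problems

def deduplicate(records: list[dict]) -> list[dict]:
    """Collapse records that share a normalized email address."""
    seen, unique = set(), []
    for record in records:
        key = record.get("email", "").strip().lower()
        if key and key in seen:
            continue
        seen.add(key)
        unique.append(record)
    return unique
```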

The biggest roadblock to effective marketing technology is bad data.

Budgeting for marketing technology is going to become a bigger and bigger piece of the pie (or cookie, if you prefer) for B2C and B2B organizations alike. The first step all marketers need to take to make sure those investments fully pay off and don’t go to waste is getting great customer data.

Marketing technology is fueled by data. A recent Harvard Business Review article listed some of the most important marketing technologies. They included tools for analytics, conversion, email, search engine marketing, remarketing, mobile, and marketing automation.

What do they all have in common? These tools all drive customer communication, engagement, and relationships, all of which require valid and actionable customer data to work at all.

You can’t plan your marketing strategy around data that tells you the wrong things about who your customers are, how they prefer to be contacted, and which messages work best. Make data quality a major part of your 2015 marketing technology planning to get the most from your investment.

Marketing technology is going to be big in 2015 — where do you start?

With all of this in mind, how can marketers prepare for their technology needs in 2015? Get started with this free virtual conference from MarketingProfs that is totally focused on marketing technology.

This great event includes a keynote from Teradata’s CMO, Lisa Arthur, on “Using Data to Build Strong Marketing Strategies.” Register here for the December 12 Marketing Technology Virtual Conference from MarketingProfs.

Even if you can’t make it live that day at the virtual conference, it’s still smart to sign up so you receive on-demand recordings from the sessions when the event ends. Register now!

With the Winter 2015 Release, Informatica Cloud Advances Real Time and Batch Integration for Citizen Integrators Everywhere

For those who work in tech, or even have a passing interest in the latest computing trends, it was hard to miss the buzz coming out of Dreamforce and Amazon re:Invent. As a partner to both companies, engaged on a parallel path, Informatica Cloud is equally excited about these new developments. With the upcoming Winter 2015 release, we have three new platform enhancements that will take those capabilities even further.

The first of these is in the area of connectivity and brings a whole new set of features and capabilities to those who use our platform to connect with Salesforce, Amazon Redshift, NetSuite and SAP.

Starting with Amazon, the Winter 2015 release leverages the new Redshift UNLOAD command, giving any user the ability to securely perform bulk queries and quickly scan and place multiple columns of data in the intended target, without the need for ODBC or JDBC connectors. We also ensure the data is encrypted at rest in the S3 bucket while loading data into Redshift tables, which provides an additional layer of security around your data.
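
For readers curious about the underlying Redshift mechanics, here is a minimal sketch of an UNLOAD statement issued from Python via psycopg2, writing encrypted files to S3. The cluster endpoint, credentials, bucket, and table are placeholders, the exact authorization and encryption options depend on your Redshift release and key setup, and this is not a description of how the Informatica connector is implemented internally.

```python
import psycopg2

# Placeholder cluster endpoint, database, and credentials; not a real cluster.
conn = psycopg2.connect(
    host="examplecluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="admin",
    password="********",
)

# UNLOAD performs a bulk, parallel export of a query result to S3. The ENCRYPTED
# option (with a master symmetric key in the credentials string) writes encrypted
# files, so the exported data stays encrypted at rest in the bucket.
unload_sql = """
    UNLOAD ('SELECT order_id, customer_id, total FROM orders')
    TO 's3://example-bucket/exports/orders_'
    CREDENTIALS 'aws_access_key_id=<key>;aws_secret_access_key=<secret>;master_symmetric_key=<base64-key>'
    ENCRYPTED
    DELIMITER ','
    GZIP;
"""

with conn, conn.cursor() as cur:
    cur.execute(unload_sql)
conn.close()
```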

For SAP, we’ve added the ability to balance the load across all applications servers. With the new enhancement, we use a Type B connection to route our integration workflows through a SAP messaging server, which then connects with any available SAP application server. Now if an application server goes down, your integration workflows won’t go down with it. Instead, you’ll automatically be connected to the next available application server.

Additionally, we’ve expanded the capability of our SAP connector by adding support for ECC5. While our connector supported ECC6 out of the box, ECC5 is still used by a number of our enterprise customers. The expanded support now provides them, and many other large companies, with the full coverage they need.

Finally, for Salesforce, we’re updating to the newest versions of their APIs (Version 31) to ensure you have access to the latest features and capabilities. The upgrades are part of an aggressive roadmap strategy, which places updates of connectors to the latest APIs on our development schedule the instant they are announced.

The second major platform enhancement for the Winter 2015 release has to do with our Cloud Mapping Designer and is sure to please those familiar with PowerCenter. With the new release, PowerCenter users can perform secure hybrid data transformations – and sharpen their cloud data warehousing and data analytic skills – through a familiar mapping and design environment and interface.

Specifically, the new enhancement enables you to take a mapplet you’ve built in PowerCenter and bring it directly into the Cloud Mapping Designer, without any additional steps or manipulations. With the PowerCenter mapplets, you can perform multi-group transformations on objects, such as BAPIs. When you access the Mapplet via the Cloud Mapping Designer, the groupings are retained, enabling you to quickly visualize what you need, and navigate and map the fields.

Additional productivity enhancements to the Cloud Mapping Designer extend the lookup and sorting capabilities and give you the ability to upload or delete data automatically based on specific conditions you establish for each target. And with the new feature supporting fully parameterized, unconnected lookups, you’ll have increased flexibility in runtime to do your configurations.

The third and final major Winter release enhancement is to our Real Time capability. Most notable is the addition of three new features that improve the usability and functionality of the Process Designer.

The first of these is a new “Wait” step type. This new feature applies to both processes and guides and enables the user to add a time-based condition to an action within a service or process call step, and indicate how long to wait for a response before performing an action.

When used in combination with the Boundary timer event variation, the Wait step can be added to a service call step or sub-process step to interrupt the process or enable it to continue.

The second is a new select feature in the Process Designer which lets users create their own service connectors. Now when a user is presented with multiple process objects created when the XML or JSON is returned from a service, he or she can select the exact ones to include in the connector.

An additional Generate Process Objects feature automates the creation of objects, eliminating the tedious task of replicating whole service responses containing hierarchical XML and JSON data for large structures. These can now be conveniently auto-generated when testing a Service Connector, saving integration developers a lot of time.

The final enhancement for the Process Designer makes it simpler to work with XML-based services. The new “Simplified XML” feature for the “Get From” field treats attributes as children, removing the namespaces and making sibling elements into an object list. Now if a user only needs part of the returned XML, they just have to indicate the starting point for the simplified XML.
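
To illustrate the kind of transformation described here (attributes become children, namespaces are dropped, repeated siblings become a list), below is a rough Python sketch. It is not the Process Designer's actual implementation, just an approximation of the described behavior.

```python
import xml.etree.ElementTree as ET

def simplify(element):
    """Roughly mimic the described behavior: strip namespaces, treat
    attributes as child values, and turn repeated siblings into lists."""
    result = {}
    # Attributes become ordinary children.
    for name, value in element.attrib.items():
        result[name.split("}")[-1]] = value
    for child in element:
        tag = child.tag.split("}")[-1]  # drop the namespace prefix
        value = simplify(child) if (len(child) or child.attrib) else (child.text or "").strip()
        if tag in result:  # repeated siblings become an object list
            if not isinstance(result[tag], list):
                result[tag] = [result[tag]]
            result[tag].append(value)
        else:
            result[tag] = value
    return result or (element.text or "").strip()

doc = ET.fromstring(
    '<ns:order xmlns:ns="http://example.com" id="42">'
    '<ns:item sku="A1"/><ns:item sku="B2"/></ns:order>'
)
print(simplify(doc))  # {'id': '42', 'item': [{'sku': 'A1'}, {'sku': 'B2'}]}
```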

While those conclude the major enhancements, additional improvements include:

  • A JMS Enqueue step is now available to submit an XML or JSON message to a JMS Queue or Topic accessible via a secure agent.
  • Dequeuing (queue and topics) of XML or JSON request payloads is now fully supported.

Amazon re:Invent 2014 Recap: “Cloud has Become the New Normal”

It’s amazing how fast a year goes by. Last year, Informatica Cloud exhibited at Amazon re:Invent for the very first time where we showcased our connector for Amazon Redshift. At the time, customers were simply kicking the tires on Amazon’s newest cloud data warehousing service, and trying to learn where it might make sense to fit Amazon Redshift into their overall architecture. This year, it was clear that customers had adopted several AWS services and were truly “all-in” on the cloud. In the words of Andy Jassy, Senior VP of Amazon Web Services, “Cloud has become the new normal”.

During Day 1 of the keynote, Andy outlined several areas of growth across the AWS ecosystem, such as a 137% YoY increase in data transfer to and from Amazon S3 and a 99% YoY increase in Amazon EC2 instance usage. On Day 2 of the keynote, Werner Vogels, CTO of Amazon, made the case that there has never been a better time to build apps on AWS because of all the enterprise-grade features. Several customers came on stage during both keynotes to demonstrate their use of AWS:

  • Major League Baseball’s Statcast application consumed 17PB of raw data
  • Philips Healthcare used over a petabyte a month
  • Intuit revealed their plan to move the rest of their applications to AWS over the next few years
  • Johnson & Johnson outlined their use of Amazon’s Virtual Private Cloud (VPC) and referred to their use of hybrid cloud as the “borderless datacenter”
  • Omnifone illustrated how AWS has the network bandwidth required to deliver their hi-res audio offerings
  • The Weather Company scaled AWS across 4 regions to deliver 15 billion forecast publications a day

Informatica was also mentioned on stage by Andy Jassy as one of the premier ISVs that had built solutions on top of the AWS platform. Indeed, from having one connector in the AWS ecosystem last year (for Amazon Redshift), Informatica has released native connectors for Amazon DynamoDB, Elastic MapReduce (EMR), S3, Kinesis, and RDS.

With so many customers using AWS, it becomes hard for them to track their usage at a granular level. This is especially true for enterprises, where a multitude of departments and business units each use several AWS services. Informatica Cloud and Tableau developed a joint solution, showcased at the Amazon re:Invent Partner Theater, that lets an IT operations analyst drill down into several dimensions to answer questions about AWS usage and cost. IT ops personnel identify the relevant data points in their data model, such as availability zone, rate, and usage type, and use Amazon Redshift as the cloud data warehouse to aggregate this data. Informatica Cloud’s Vibe Integration Packages, combined with its native connectivity to Amazon Redshift and S3, allow the data model to be reflected as the correct set of tables in Redshift. Tableau’s robust visualization capabilities then let users drill into the data model to extract whatever insights they require. Look for more to come from Informatica Cloud and Tableau on this joint solution in the upcoming weeks and months.
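
As a rough sketch of the kind of aggregation such a solution might run once billing detail lands in Redshift, the query below rolls usage up by availability zone and usage type. The table and column names are placeholders, not the joint solution's actual schema.

```python
import psycopg2

# Illustrative query over a detailed-billing-style table staged in Redshift;
# the table and column names are placeholders.
USAGE_BY_DIMENSION = """
    SELECT availability_zone,
           usage_type,
           SUM(usage_amount)        AS total_usage,
           SUM(usage_amount * rate) AS estimated_cost
    FROM aws_detailed_billing
    WHERE usage_start_date >= DATE '2014-10-01'
    GROUP BY availability_zone, usage_type
    ORDER BY estimated_cost DESC;
"""

conn = psycopg2.connect(
    host="examplecluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="analyst", password="********",
)
with conn, conn.cursor() as cur:
    cur.execute(USAGE_BY_DIMENSION)
    # Print the 20 most expensive zone/usage-type combinations.
    for zone, usage_type, total_usage, cost in cur.fetchmany(20):
        print(zone, usage_type, total_usage, cost)
conn.close()
```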

Remembering Big Data Gravity – PART 2

I ended my previous blog wondering whether awareness of Data Gravity should change our behavior. While Data Gravity adds Value to Big Data, I find that how that Value is applied is underexplained.

Exponential growth of data has naturally led us to want to categorize it into facts, relationships, entities, etc. This sounds very elementary. While this happens so quickly in our subconscious minds as humans, it takes significant effort to teach this to a machine.

A friend tweeted this to me last week: I paddled out today, now I look like a lobster. Since this tweet, Twitter has inundated my friend and me with promotions from Red Lobster. It is because the machine deconstructed the tweet: paddled <PROPULSION>, today <TIME>, like <PREFERENCE> and lobster <CRUSTACEANS>. While putting these together, the machine decided that the keyword was lobster. You and I both know that my friend was not talking about lobsters.

You may think this is just a funny edge case. You can confuse any computer system if you try hard enough, right? Unfortunately, this isn’t an edge case. The 140-character limit has not just changed people’s tweets; it has changed how people talk on the web. More and more information is communicated in smaller and smaller amounts of language, and this trend is only going to continue.

When will the machine understand that “I look like a lobster” means I am sunburned?

I believe the reason there are not hundreds of companies exploiting machine-learning techniques to generate a truly semantic web is the lack of weighted edges in publicly available ontologies. Keep reading; it will all make sense in about five sentences. Lobster and Sunscreen are 7 hops away from each other in dbPedia – way too many to draw any correlation between the two. For that matter, any article in Wikipedia is connected to any other article within about 14 hops, and that’s the extreme. Completely unrelated concepts are often just a few hops from each other.

But by analyzing massive amounts of both written and spoken English text from articles, books, social media, and television, it is possible for a machine to automatically draw a correlation and create a weighted edge between the Lobster and Sunscreen nodes that effectively short-circuits the 7 hops otherwise necessary. Many organizations are dumping massive amounts of facts without weights into our repositories of total human knowledge because they are naïvely attempting to categorize everything, without realizing that the repositories of human knowledge need to mimic how humans use knowledge.

For example – if you hear the name Babe Ruth, what is the first thing that pops to mind? Roman Catholics from Maryland born in the 1800s or Famous Baseball Player?

If you look in Wikipedia today, he is categorized under 28 categories, each of them with the same level of attachment: 1895 births | 1948 deaths | American League All-Stars | American League batting champions | American League ERA champions | American League home run champions | American League RBI champions | American people of German descent | American Roman Catholics | Babe Ruth | Baltimore Orioles (IL) players | Baseball players from Maryland | Boston Braves players | Boston Red Sox players | Brooklyn Dodgers coaches | Burials at Gate of Heaven Cemetery | Cancer deaths in New York | Deaths from esophageal cancer | Major League Baseball first base coaches | Major League Baseball left fielders | Major League Baseball pitchers | Major League Baseball players with retired numbers | Major League Baseball right fielders | National Baseball Hall of Fame inductees | New York Yankees players | Providence Grays (minor league) players | Sportspeople from Baltimore | Maryland | Vaudeville performers.

Now imagine how confused a machine would get when the distance of unweighted edges between nodes is used as a scoring mechanism for relevancy.

If I were to design an algorithm that uses weighted edges (on a scale of 1-5, with 5 being the highest), the same search would yield a much more obvious result.

1895 births [2]| 1948 deaths [2]| American League All-Stars [4]| American League batting champions [4]| American League ERA champions [4]| American League home run champions [4]| American League RBI champions [4]| American people of German descent [2]| American Roman Catholics [2]| Babe Ruth [5]| Baltimore Orioles (IL) players [4]| Baseball players from Maryland [3]| Boston Braves players [4]| Boston Red Sox players [5]| Brooklyn Dodgers coaches [4]| Burials at Gate of Heaven Cemetery [2]| Cancer deaths in New York [2]| Deaths from esophageal cancer [1]| Major League Baseball first base coaches [4]| Major League Baseball left fielders [3]| Major League Baseball pitchers [5]| Major League Baseball players with retired numbers [4]| Major League Baseball right fielders [3]| National Baseball Hall of Fame inductees [5]| New York Yankees players [5]| Providence Grays (minor league) players [3]| Sportspeople from Baltimore [1]| Maryland [1]| Vaudeville performers [1].
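
As a toy illustration of why the weights matter, the Python sketch below ranks a handful of the categories above by their 1-5 edge weights. With unweighted edges every association counts equally, while the weighted ranking surfaces the baseball-related nodes first; the weights used are the illustrative values from the list above.

```python
# A handful of the Babe Ruth categories above, with the illustrative 1-5 weights.
weighted_categories = {
    "1895 births": 2,
    "American Roman Catholics": 2,
    "Sportspeople from Baltimore": 1,
    "Vaudeville performers": 1,
    "American League All-Stars": 4,
    "Boston Red Sox players": 5,
    "Major League Baseball pitchers": 5,
    "New York Yankees players": 5,
    "National Baseball Hall of Fame inductees": 5,
}

def top_associations(categories: dict, n: int = 5) -> list:
    """Rank associations by edge weight instead of treating them all equally."""
    return sorted(categories, key=categories.get, reverse=True)[:n]

# Unweighted edges: every category is equally "relevant", so any ordering
# (here, simply the first five encountered) is as good as any other.
print(list(weighted_categories)[:5])

# Weighted edges: the famous-baseball-player nodes rise to the top,
# which is much closer to how a human recalls "Babe Ruth".
print(top_associations(weighted_categories))
```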

Now the machine starts to think more like a human. The above example forces us to ask ourselves about the relevancy – a.k.a. the Value – of the response. This is where I think Data Gravity becomes relevant.

You can contact me on twitter @bigdatabeat with your comments.

Amazon Web Services and Informatica Deliver Data-Ready Cloud Computing Infrastructure for Every Business

At re:Invent 2014 in Las Vegas,  Informatica and AWS announced a broad strategic partnership to deliver data-ready cloud computing infrastructure to any type or size of business.

Informatica’s comprehensive portfolio across Informatica Cloud and PowerCenter solutions connects to multiple AWS Data Services, including Amazon Redshift, RDS, DynamoDB, S3, EMR and Kinesis – the broadest pre-built connectivity available to AWS Data Services. Informatica and AWS offerings are pre-integrated, enabling customers to rapidly and cost-effectively implement data warehousing, large-scale analytics, lift-and-shift, and other key use cases in cloud-first and hybrid IT environments. Now, any company can use Informatica’s portfolio to get a plug-and-play on-ramp to the cloud with AWS.

Economical and Flexible Path to the Cloud

As business information needs intensify and data environments become more complex, the combination of AWS and Informatica enables organizations to increase the flexibility and reduce the costs of their information infrastructures through:

  • More cost-effective data warehousing and analytics – Customers benefit from lower costs and increased agility when unlocking the value of their data with no on-premise data warehousing/analytics environment to design, deploy and manage.
  • Broad, easy connectivity to AWS – Customers gain full flexibility in integrating data from any Informatica-supported data source (the broadest set of sources supported by any integration vendor) through the use of pre-built connectors for AWS.
  • Seamless hybrid integration – Hybrid integration scenarios across Informatica PowerCenter and Informatica Cloud data integration deployments are able to connect seamlessly to AWS services.
  • Comprehensive use case coverage – Informatica solutions for data integration and warehousing, data archiving, data streaming and big data across cloud and on-premise applications mesh with AWS solutions such as RDS, Redshift, Kinesis, S3, DynamoDB, EMR and other AWS ecosystem services to drive new and rapid value for customers.

New Support for AWS Services

Informatica introduced a number of new Informatica Cloud integrations with AWS services, including connectors for Amazon DynamoDB, Amazon Elastic MapReduce (Amazon EMR) and Amazon Simple Storage Service (Amazon S3), to complement the existing connectors for Amazon Redshift and Amazon Relational Database Service (Amazon RDS).

Additionally, the latest Informatica PowerCenter release for Amazon Elastic Compute Cloud (Amazon EC2) includes support for:

  • PowerCenter Standard Edition and Data Quality Standard Edition
  • Scaling options – Grid, high availability, pushdown optimization, partitioning
  • Connectivity to Amazon RDS and Amazon Redshift
  • Domain and repository database in Amazon RDS, for databases listed on the current PAM (Product Availability Matrix)

To learn more, try our 60-day free Informatica Cloud trial for Amazon Redshift.

If you’re in Vegas, please come by our booth at re:Invent, Nov. 11-14: Booth #1031, Venetian / Sands Hall.

Big Data Driving Data Integration at the NIH

The National Institutes of Health announced new grants to develop big data technologies and strategies.

“The NIH multi-institute awards constitute an initial investment of nearly $32 million in fiscal year 2014 by NIH’s Big Data to Knowledge (BD2K) initiative and will support development of new software, tools and training to improve access to these data and the ability to make new discoveries using them,” NIH said in its announcement of the funding.

The grants will address issues around Big Data adoption, including:

  • Locating data and the appropriate software tools to access and analyze the information.
  • Lack of data standards, or low adoption of standards across the research community.
  • Insufficient policies to facilitate data sharing while protecting privacy.
  • Unwillingness to collaborate that limits the data’s usefulness in the research community.

Among the tasks funded is the creation of a “Perturbation Data Coordination and Integration Center.”  The center will provide support for data science research that focuses on interpreting and integrating data from different data types and databases.  In other words, it will make sure the data moves where it needs to move, in order to provide access to the information the research scientists need.  Fundamentally, that is data integration practice and technology.

This is very interesting from the standpoint that the movement into big data systems often drives a reevaluation of, or even new interest in, data integration.  As the data becomes strategically important, the need to provide core integration services becomes even more important.

The project at the NIH will be interesting to watch, as it progresses.  These are the guys who come up with the new paths to us being healthier and living longer.  The use of Big Data provides the researchers with the advantage of having a better understanding of patterns of data, including:

  • Patterns of symptoms that lead to the diagnosis of specific diseases and ailments.  Doctors may get these data points one at a time.  When unstructured or structured data exists, researchers can find correlations, and thus provide better guidelines to physicians who see the patients.
  • Patterns of cures that are emerging around specific treatments.  The ability to determine what treatments are most effective, by looking at the data holistically.
  • Patterns of failure.  When the outcomes are less than desirable, what seems to be a common issue that can be identified and resolved?

Of course, the uses of big data technology are limitless, when considering the value of knowledge that can be derived from petabytes of data.  However, it’s one thing to have the data, and another to have access to it.

Data integration should always be systemic to all big data strategies, and the NIH clearly understands this to be the case.  Thus, they have funded data integration along with the expansion of their big data usage.

Most enterprises will follow much the same path in the next 2 to 5 years.  Information provides a strategic advantage to businesses.  In the case of the NIH, it’s information that can save lives.  Can’t get much more important than that.

What is the Silver Lining in Cloud for Financial Services?

This was a great week of excitement and innovation here in San Francisco, starting with the San Francisco Giants winning the National League Pennant for the 3rd time in 5 years on the same day that Salesforce’s Dreamforce 2014 wrapped up its largest customer conference, with over 140,000 attendees from all over the world talking about the new Customer Success Platform.

Salesforce has come a long way from its humble beginnings as the new kid on the cloud front for CRM. The integrated sales, marketing, support, collaboration, application, and analytics capabilities of the Salesforce Customer Success Platform exemplify innovation and significant business value for various industries, and I see it as especially promising for today’s financial services industry. However, as with any new business application, the value a business gains from it depends on having the right data available for the business.

The reality is that SaaS adoption by financial institutions has not been as quick as in other industries, due to privacy concerns, regulations that govern what data can reside in public infrastructures, the ability to customize to fit business needs, cultural barriers within larger institutions that insist critical business applications reside on-premise for control and management purposes, and the challenges of integrating data to and from existing systems with SaaS applications.  However, experts are optimistic that the industry may have turned the corner. Gartner (NYSE:IT) asserts more than 60 percent of banks worldwide will process the majority of their transactions in the cloud by 2016.  Let’s take a closer look at some of the challenges and what’s required to overcome these obstacles when adopting cloud solutions to power your business.

Challenge #1:  Integrating and sharing data between SaaS and on-premise must not be taken lightly

Most banks and insurance companies considering new SaaS-based CRM, marketing, and support applications, with solutions from Salesforce and others, must weigh the importance of migrating and sharing data between cloud and on-premise applications in their investment decisions.  Migrating existing customer, account, and transaction history data is often done by IT staff through custom extracts, scripts, and manual data validations, which can carry over invalid information from legacy systems, making these new application investments useless in many cases.

For example, customer type descriptions from one or many existing systems may be correct in their respective databases, and collapsing them into a common field in the target application seems easy to do. Unfortunately, these transformation rules can be complex, and that complexity increases when dealing with tens if not hundreds of applications during the migration and synchronization phase. Having capable solutions to support the testing, development, quality management, validation, and delivery of existing data from old to new is not only good practice, but a proven way of avoiding costly workarounds and business pain in the future.

Challenge #2:  Managing and sharing a trusted source of business information across the enterprise

As new SaaS applications are adopted, it is critical to understand how to best govern and synchronize common business information, such as customer contact information (e.g. address, phone, email), across the enterprise. Most banks and insurance companies have multiple systems that create and update critical customer contact information, many of which reside on-premise. For example, when an insurance customer updates contact information such as a phone number or email address while filing a claim, the claims specialist will often enter or update it only in the claims system, given the siloed nature of many traditional banking and insurance companies. This is the power of Master Data Management, which is purposely designed to identify changes to master data, including customer records, in one or many systems, update the customer master record, and share that update with the other systems that house and require it – something essential for business continuity and success.
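
As a highly simplified sketch of that propagation pattern, the Python snippet below captures a change from one system, applies it to the master record, and pushes it to the other subscribing systems. The system names and fields are illustrative; a real MDM hub adds matching, survivorship rules, history, and governance.

```python
# Illustrative system names and fields; real MDM adds matching,
# survivorship rules, history, and governance workflows.
master_records = {
    "CUST-1001": {"phone": "555-0100", "email": "pat@example.com"},
}
subscribing_systems = {"crm": {}, "claims": {}, "billing": {}}

def propagate_contact_update(customer_id: str, source: str, changes: dict) -> None:
    """Apply a change detected in one system to the master record,
    then share the updated golden record with every other subscribing system."""
    master = master_records.setdefault(customer_id, {})
    master.update(changes)
    for name, store in subscribing_systems.items():
        if name != source:
            store.setdefault(customer_id, {}).update(master)

# A claims specialist updates a phone number in the claims system...
propagate_contact_update("CUST-1001", source="claims", changes={"phone": "555-0199"})
# ...and the CRM and billing systems now see the same contact data.
print(subscribing_systems["crm"]["CUST-1001"]["phone"])   # 555-0199
```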

In conclusion, SaaS adoption will continue to grow in financial services and across other industries. The silver lining in the cloud is your data and the technology that supports its consumption and distribution across the enterprise. Banks and insurance companies investing in new SaaS solutions will operate in a hybrid environment made up of cloud and core transaction systems that reside on-premise. To ensure these investments yield value for the business, it is important to invest in a capable and scalable data integration platform to integrate, govern, and share data in a hybrid ecosystem. To learn more about how to deal with these challenges, click here and download a complimentary copy of the new “Salesforce Integration for Dummies.”

Informatica and the Shellshock Security Vulnerability

The security of information systems is a complex, shared responsibility between infrastructure, system and application providers. Informatica doesn’t take lightly the responsibility our customers have entrusted to us in this complex risk equation.

As Informatica’s Chief Information Security Officer, I’d like to share three important security updates with our customers:

  1. What you need to know about Informatica products and services relative to the latest industry-wide security concern,
  2. What you need to do to secure Informatica products against the ShellShock vulnerability, and
  3. How to contact Informatica if you have questions about Informatica product security.

1 – What you need to know

On September 24, 2014, a serious new cluster of vulnerabilities affecting Linux/Unix distributions was announced, classified as CVE-2014-6271, CVE-2014-7169, CVE-2014-7186, CVE-2014-7187, CVE-2014-6277 and CVE-2014-6278, aka “Shellshock” or “Bashdoor.” What makes Shellshock so impactful is that it requires relatively low effort or expertise to exploit and gain privileged access to vulnerable systems.

Informatica’s cloud-hosted products, including Informatica Cloud Services (ICS) and our recently-launched Springbok beta, have already been patched to address this issue. We continue to monitor for relevant updates to both vulnerabilities and available patches.

Because this vulnerability is a function of the underlying Operating System, we encourage administrators of potentially vulnerable systems to assess their risk levels and apply patches and/or other appropriate countermeasures.
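
For administrators who want a quick local indicator, the widely published probe for the original CVE-2014-6271 issue places a crafted function definition in an environment variable and checks whether bash executes the trailing command. The Python wrapper below is a minimal sketch of that check; it covers only the first CVE in the cluster, so treat it as a hint rather than a full assessment.

```python
import os
import subprocess

def bash_looks_vulnerable() -> bool:
    """Run the widely published CVE-2014-6271 probe: a function definition in an
    environment variable with a trailing command. A vulnerable bash executes it."""
    env = dict(os.environ, x="() { :;}; echo vulnerable")
    result = subprocess.run(
        ["bash", "-c", "echo shellshock test"],
        env=env, capture_output=True, text=True,
    )
    # A patched bash prints only the test string (possibly with a warning);
    # a vulnerable bash also prints "vulnerable".
    return "vulnerable" in result.stdout

if __name__ == "__main__":
    print("bash looks vulnerable" if bash_looks_vulnerable() else "no sign of CVE-2014-6271")
```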

Informatica’s Information Security team coordinated an internal response with product developers to assess the vulnerability and make recommendations necessary for our on-premise products. Specific products and actions are listed below.

2 – What you need to do

Informatica products themselves require no patches to address the Shellshock vulnerability; they are not directly impacted. However, Informatica strongly recommends that you apply your OS vendors’ patches as they become available, since some applications allow customers to use shell scripts in their pre- and post-processing steps. Specific Informatica products and remediations are listed below:

Cloud services:

  • Springbok (Beta): No action necessary. The Springbok infrastructure has been patched by Informatica Cloud Operations.
  • ActiveVOS/Cloud (all versions): No action necessary. The ActiveVOS/Cloud infrastructure has been patched by Informatica Cloud Operations.
  • Cloud/ICS (all versions): Customers should apply OS patches to all of their machines running a Cloud agent. Relevant Cloud/ICS hosted infrastructure has already been patched by Informatica Cloud Operations.

On-premise products:

  • PowerCenter, IDQ, MM/BG/IDE, PC Express, and Data Services / Mercury stack (all versions): No direct impact. Customers who use shell scripts within their pre-/post-processing steps should apply OS patches to mitigate this vulnerability.
  • PWX mainframe & CDC, UM, VDS, IDR, IFC, B2B DT/UDT/hparser/Atlantic, Data Archive, Dynamic Data Masking, IDV, SAP Nearline, TDM, MDM, and IR / name3: No direct impact. Informatica recommends customers apply OS patches to all machines with Informatica products installed.
  • B2B DX / DIH (all versions): DX and DIH customers on Red Hat should apply OS patches; customers on other operating systems are also advised to apply OS patches.
  • PIM (all versions): PIM core and Procurement are not directly impacted. Informatica recommends Media Manager customers apply OS patches to all machines with Informatica products installed.
  • ActiveVOS (all versions): No direct impact for the on-premise ActiveVOS product. Cloud-realtime has already been patched.
  • Address Doctor (all versions): No direct impact for AD services run on Windows. The Procurement service has already been patched by Informatica Cloud Operations.
  • StrikeIron (all versions): No direct impact.

3 – How to contact Informatica about security

Informatica takes the security of our customers’ data very seriously. Please consult the Informatica Knowledge Base (article ID 301574), or contact our Global Customer Support team if you have any questions or concerns. The Informatica support portal is always available at http://mysupport.informatica.com.

If you are a security researcher and have identified a potential vulnerability in an Informatica product or service, please follow our Responsible Disclosure Program.

Thank you,

Bill Burns, VP & Chief Information Security Officer
