Tag Archives: B2B

Data Mania: Using REST APIs and Native Connectors to Separate Yourself from the SaaS Pack

Data Mania: An Event for SaaS & ISV Leaders

With Informatica’s Data Mania on Wednesday, I’ve been thinking a lot lately about REST APIs. In particular, I’ve been considering how and why they’ve become so ubiquitous, especially for SaaS companies. Today they are the prerequisite for any company looking to connect with other ecosystems, accelerate adoption and, ultimately, separate themselves from the pack.

Let’s unpack why.

To trace the rise of the REST API, we’ll first need to take a look at the SOAP web services protocol that preceded it. SOAP is still very much in play and remains important to many application integration scenarios. But it doesn’t receive much use, or love, from the thousands of SaaS applications that simply want to get data from, or place data in, one another or one of the large SaaS ecosystems like Salesforce.

Why this is the case has more to do with the needs and demands of a SaaS business than with the capabilities of SOAP web services. SOAP, as it turns out, is perfectly fine for making and receiving web service calls, but it does require work on behalf of both the calling application and the producing application. And therein lies the rub.

SOAP web service calls are by their very nature highly structured arrangements, with specifications that must be clearly defined by both parties. Only after both the calling and producing applications have their frameworks in place can the call be validated. While the contract within SOAP WSDLs makes SOAP more robust, it also makes it rigid and less adaptable to change. Today’s apps need a more agile, more loosely defined API framework that requires less work to consume and can adapt to the inevitable and frequent changes demanded by cloud applications.

Enter REST APIs

REST APIs are the perfect vehicle for today’s SaaS businesses and mash-up applications. Sure, they’re more loosely defined than SOAP, but when all you want to do is send and receive some data, now, in the context you need, nothing is easier or better for the job than a REST API.

With a REST API, calls are made as simple HTTP requests with a loosely defined structure, and they don’t require a lot of mechanics from the calling application or effort on behalf of the producing application.
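
To make this concrete, here is a minimal sketch of what consuming a REST API can look like in Java, using the standard java.net.http client. The endpoint, path and token are hypothetical placeholders; the point is that the entire “contract” is a URL, an HTTP verb and a couple of headers:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestCallExample {
    public static void main(String[] args) throws Exception {
        // A single HTTP GET: no WSDL, no envelope, no code generation.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/v1/accounts?limit=10")) // hypothetical endpoint
                .header("Accept", "application/json")
                .header("Authorization", "Bearer <access-token>") // placeholder credential
                .GET()
                .build();

        // The response is loosely structured JSON that the caller parses as needed.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}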

SaaS businesses prefer REST APIs because they are easy to consume. They also make it easy to onboard new customers and extend the use of the platform to other applications. The latter is important because it is primarily through integration that SaaS applications get to become part of an enterprise business process and gain the stickiness needed to accelerate adoption and growth.

Without APIs of any sort, integration can only be done through manual data movement, which opens the application and the enterprise up to the potential errors caused by fat-fingered data entry. That typically produces the opposite of stickiness, and is to be avoided at all costs.

While publishing an API as a way to get and receive data from other applications is a great start, it is just a means to an end. If you’re a SaaS business with greater ambitions, you may want to consider taking the next step of building native connectors to other apps using an integration system such as Informatica Cloud. A connector can provide a nice layer of abstraction on the APIs so that the data can be accessed as application data objects within business processes. Clearly, stickiness with any SaaS application improves in direct proportion to the number of business processes or other applications that it is integrated with.

The Informatica Cloud Connector SDK is Java-based and makes it easy to cut and paste the code necessary to create connectors. Informatica Cloud’s SDKs are also richer and make it possible to adapt a REST API into something any business user will want to use, which is a huge advantage.

In addition to making your app stickier, native connectors have the added benefit of increasing your portability. Without this layer of abstraction, a structural change to a REST API breaks every data flow that calls it directly; those flows cannot keep working unless they are changed too. Building a native connector makes you more agile, and insulates your custom-built integrations from breaking.
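
To illustrate the abstraction a connector provides (a hypothetical sketch of the pattern, not the actual Informatica Cloud Connector SDK), picture data flows written against stable application data objects, with the REST details hidden behind an interface:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/**
 * Hypothetical sketch of the connector idea (not the real Informatica SDK):
 * data flows are written against stable application data objects, and only
 * the connector knows how the underlying REST API is shaped today.
 */
interface AccountConnector {
    List<Map<String, Object>> readAccounts();
}

/** If the vendor restructures its REST API, only this class changes. */
class RestAccountConnector implements AccountConnector {
    @Override
    public List<Map<String, Object>> readAccounts() {
        // A real version would GET https://api.example.com/v2/accounts and map
        // the JSON fields (whatever the API calls them) to stable object keys.
        List<Map<String, Object>> accounts = new ArrayList<>();
        accounts.add(Map.of("id", "a-100", "name", "Acme Corp"));
        return accounts;
    }
}

public class ConnectorDemo {
    public static void main(String[] args) {
        AccountConnector connector = new RestAccountConnector();
        // The data flow below never sees URLs, JSON or HTTP status codes.
        connector.readAccounts().forEach(a ->
                System.out.println(a.get("id") + " -> " + a.get("name")));
    }
}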

Building your connectors with Informatica Cloud also provides you with some other advantages. One of the most important is entrance to a community that includes all of the major cloud ecosystems and the thousands of business apps that orbit them. As a participant, you’ll become part of an interconnected web of applications that make up the business processes for the enterprises that use them.

Another ancillary benefit is access to integration templates that you can easily customize to connect with any number of known applications. The templates abstract the complexity from complicated integrations, can be quickly customized with just a few composition screens, and are easily invoked using Informatica Cloud’s APIs.

The best part of all this is that you can use Informatica Cloud’s integration technology to become a part of any business process without stepping outside of your application.

For those interested in continuing the conversation and learning more about how leading SaaS businesses are using REST APIs and native connectors to separate themselves from the pack, I invite you to join me at Data Mania, March 4th in San Francisco. Hope to see you there.


Internet of Things (IoT) Changes the Data Integration Game in 2015

As reported by the Economic Times, “In the coming years, enormous volumes of machine-generated data from the Internet of Things (IoT) will emerge. If exploited properly, this data – often dubbed machine or sensor data, and often seen as the next evolution in Big Data – can fuel a wide range of data-driven business process improvements across numerous industries.”

We can all see this happening in our personal lives. Our thermostats are connected now, our cars have been for years, and even my toothbrush has a Bluetooth connection to my phone. On the industrial side, devices have also been connected for years, generating megabytes of data per day that have typically been used for monitoring, with the data discarded as quickly as it appears.

So, what changed? With the advent of big data, cheap cloud, and on-premises storage, we now have the ability to store the machine and sensor data spinning out of industrial machines, airliners, health diagnostic devices, etc., and leverage that data for new and valuable uses.

For example, consider the ability to determine the likelihood that a jet engine will fail, based upon the sensor data gathered and how that data compares with known patterns of failure. Instead of getting an engine failure light on the flight deck, the pilots can see that the engine has a 20 percent likelihood of failure, and get the engine serviced before it fails completely.
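
As a back-of-the-envelope illustration of the idea (a toy calculation, not a real predictive-maintenance model), imagine scoring the current engine readings against a known failure signature. All sensor values and patterns below are invented:

/**
 * Toy illustration only: score how closely current engine sensor readings
 * match a known failure signature versus a known normal signature.
 */
public class EngineFailureScore {
    // Hypothetical readings: exhaust gas temp (C), vibration (mm/s), oil pressure (psi)
    static final double[] KNOWN_FAILURE_PATTERN = {650.0, 12.5, 28.0};
    static final double[] NORMAL_PATTERN        = {520.0,  4.0, 45.0};

    static double similarity(double[] reading, double[] pattern) {
        double sumSq = 0;
        for (int i = 0; i < reading.length; i++) {
            double d = (reading[i] - pattern[i]) / pattern[i]; // relative distance
            sumSq += d * d;
        }
        return 1.0 / (1.0 + Math.sqrt(sumSq)); // 1.0 = identical, tends to 0 as they diverge
    }

    public static void main(String[] args) {
        double[] current = {610.0, 9.8, 33.0};
        double failScore = similarity(current, KNOWN_FAILURE_PATTERN);
        double normScore = similarity(current, NORMAL_PATTERN);
        // Normalize the two scores into a rough likelihood for display.
        double likelihood = failScore / (failScore + normScore);
        System.out.printf("Failure likelihood: %.0f%%%n", likelihood * 100);
    }
}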

The problem with all of this very cool stuff is that we need to once again rethink data integration.  Indeed, if the data can’t get from the machine sensors to a persistent data store for analysis, then none of this has a chance of working.

That’s why those who are moving to IoT-based systems need to do two things. First, they must create a strategy for extracting data from devices, such as industrial robots or an Audi A8. Second, they need a strategy to take all of this disparate data that’s firing out of devices at megabytes per second, and put it where it needs to go, in the right native structure (or in an unstructured data lake), so it can be leveraged in useful ways, and in real time.

The challenge is that machines and devices are not traditional IT systems. I’ve built connectors for industrial applications in my career. The fact is, you need to adapt to the way that the machines and devices produce data, and not the other way around. Data integration technology needs to adapt as well, making sure that it can deal with streaming and unstructured data, including the many instances where the data needs to be processed in flight as it moves from the device to the database.
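
Here is a minimal sketch of what that in-flight processing might look like: parse each device message as it streams past, route well-formed readings to a structured store, and divert anything unparseable to a raw data-lake bucket. The pipe-delimited message format and device names are invented for illustration:

import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch of in-flight processing with an invented message format.
 * Real sources would be MQTT, Kafka, device gateways, etc.
 */
public class SensorStreamRouter {
    record Reading(String deviceId, long timestamp, double value) {}

    public static void main(String[] args) {
        List<Reading> structuredStore = new ArrayList<>();
        List<String> rawLake = new ArrayList<>();

        // Stand-in for a live stream of device messages.
        String[] stream = {
            "robot-17|1425000000|72.4",
            "<<corrupt frame>>",
            "audi-a8|1425000005|98.1"
        };

        for (String msg : stream) {
            String[] parts = msg.split("\\|");
            if (parts.length == 3) {
                // Processed in flight: structure it before it lands anywhere.
                structuredStore.add(new Reading(parts[0],
                        Long.parseLong(parts[1]), Double.parseDouble(parts[2])));
            } else {
                rawLake.add(msg); // keep the raw bytes for later exploration
            }
        }
        System.out.println(structuredStore.size() + " structured, "
                + rawLake.size() + " raw");
    }
}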

This becomes a huge opportunity for data integration providers who understand the special needs of IoT, as well as the technology that those who build IoT-based systems can leverage. However, the larger value is for those businesses that learn how to leverage IoT to provide better services to their customers by offering insights that were previously impossible. Be it jet engine reliability, the fuel efficiency of my car, or feedback to my physician from sensors on my body, this is game-changing stuff. At the heart of its ability to succeed is the ability to move data from place to place.


Analytics-Casual Versus Analytics-Driven

What does it take to be an analytics-driven business? That’s a question that requires a long answer. Recently, Gartner research director Lisa Kart took on this question, noting that the key to becoming an analytics-driven business lies in breaking down data silos.

So, the secret of becoming an analytics-driven business is to bust down the silos — easier said than done, of course. The good news, as Kart tells it, is that one doesn’t need to cast a wide net across the world in search of the right data for the right occasion. The biggest opportunities lie in connecting the data you already have, she says.

Taking Kart’s differentiation of just-using-analytics versus analytics-driven culture a step further, here is a brief rundown of how businesses just using analytics approach the challenge, versus their more enlightened counterparts:

Business just using analytics: Lots of data, but no one really understands how much is around, or what to do with it.

Analytics-driven business: The enterprise has a vision and strategy, supported from the top down, closely tied to the business strategy. Management also recognizes that existing data has great value to the business.

Business just using analytics: Every department does its own thing, with varying degrees of success.

Analytics-driven business: Makes connections between all the data – of all types — floating around the organization. For example, gets a cross-channel view of a customer by digging deeper and connecting the silos together to transform the data into something consumable.

Business just using analytics: Some people in marketing have been collecting customer data and making recommendations to their managers.

Analytics-driven business: Marketing departments, through analytics, engage and interact with customers, Kart says. An example would be creating high-end, in-store customer experiences that give customers greater intimacy and interaction.

Business just using analytics: The CFO’s staff crunches numbers within their BI tools and arrives at what-if scenarios.

Analytics-driven business: Operations and finance departments share online data to improve performance using analytics. For example, a company may tap into a variety of data, including satellite images, weather patterns, and other factors that may shape business conditions, Kart says.

Business just using analytics: Some quants in the organization pore over the data and crank out reports.

Analytics-driven business: Encourages maximum opportunities for innovation by putting analytics in the hands of all employees. Analytics-driven businesses recognize that more innovation comes from front-line decision-makers than the executive suite.

Business just using analytics: Decision makers put in report requests to IT for analysis.

Analytics-driven business: Decision makers can go to an online interface that enables them to build and display reports with a click (or two).

Business just using analytics: Analytics spits out standard bar charts, perhaps a scattergram.

Analytics-driven business: Decision makers can quickly visualize insights through 3D graphics, also reflecting real-time shifts.


Big Data is Nice to Have, But Big Culture is What Delivers Success

Despite more than $30 billion in annual spending on Big Data, successful big data implementations elude most organizations. That’s the sobering assessment of a recent Capgemini study of 226 senior executives, which found that only 13 percent feel they have truly made any headway with their big data efforts.

The reasons for Big Data’s lackluster performance include the following:

  • Data is in silos or legacy systems, scattered across the enterprise
  • No convincing business case
  • Ineffective alignment of Big Data and analytics teams across the organization
  • Most data locked up in petrified, difficult to access legacy systems
  • Lack of Big Data and analytics skills

Actually, there is nothing new about any of these issues – in fact, the perceived issues with Big Data initiatives so far map closely to the failed expectations of many other technology-driven initiatives. First, there’s the hype that tends to get way ahead of any actual well-functioning case studies. Second, there’s the notion that managers can simply take a solution of impressive magnitude and drop it on top of their organizations, expecting overnight delivery of profits and enhanced competitiveness.

Technology, and Big Data itself, is but a tool that supports the vision, well-designed plans and hard work of forward-looking organizations. Those managers seeking transformative effects need to look deep inside their organizations, at how deeply innovation is allowed to flourish, and in turn, how their employees are allowed to flourish. Think about it: if line employees suddenly have access to alternative ways of doing things, would they be allowed to run with it? If someone discovers through Big Data that customers are using a product differently than intended, do they have the latitude to promote that new use? Or do they have to go through chains of approval?

Big Data may be what everybody is after, but Big Culture is the ultimate key to success.

For its part, Capgemini provides some high-level recommendations for better baking in transformative values as part of Big Data initiatives, based on their observations of best-in-class enterprises:

The vision thing: “It all starts with vision,” says Capgemini’s Ron Tolido. “If the company executive leadership does not actively, demonstrably embrace the power of technology and data as the driver of change and future performance, nothing digitally convincing will happen. We have not even found one single exception to this rule. The CIO may live and breathe Big Data and there may even be a separate Chief Data Officer appointed – expect more of these soon – if they fail to commit their board of executives to data as the engine of success, there will be a dark void beyond the proof of concept.”

Establish a well-defined organizational structure: “Big Data initiatives are rarely, if ever, division-centric,” the Capgemini report states. “They often cut across various departments in an organization. Organizations that have clear organizational structures for managing rollout can minimize the problems of having to engage multiple stakeholders.”

Adopt a systematic implementation approach: Surprisingly, even the largest and most sophisticated organizations, which run everything else by well-defined process, don’t necessarily approach Big Data this way, the report states. “Intuitively, it would seem that a systematic and structured approach should be the way to go in large-scale implementations. However, our survey shows that this philosophy and approach are rare. Seventy-four percent of organizations did not have well-defined criteria to identify, qualify and select Big Data use-cases. Sixty-seven percent of companies did not have clearly defined KPIs to assess initiatives. The lack of a systematic approach affects success rates.”

Adopt a “venture capitalist” approach to securing buy-in and funding: “The returns from investments in emerging digital technologies such as Big Data are often highly speculative, given the lack of historical benchmarks,” the Capgemini report points out. “Consequently, in many organizations, Big Data initiatives get stuck due to the lack of a clear and attributable business case.” To address this challenge, the report urges that Big Data leaders manage investments “by using a similar approach to venture capitalists. This involves making multiple small investments in a variety of proofs of concept, allowing rapid iteration, and then identifying PoCs that have potential and discarding those that do not.”

Leverage multiple channels to secure skills and capabilities: “The Big Data talent gap is something that organizations are increasingly coming face-to-face with. Closing this gap is a larger societal challenge. However, smart organizations realize that they need to adopt a multi-pronged strategy. They not only invest more on hiring and training, but also explore unconventional channels to source talent.” Capgemini advises reaching out to partner organizations for the skills needed to develop Big Data initiatives. These can be employee exchanges, or “setting up innovation labs in high-tech hubs such as Silicon Valley.” Startups may also be another source of Big Data talent.


Finding Love with Big Data Analytics

Getting Ready for Love by Getting Big Data Ready for Analytics

You might think this year’s Valentine’s Day is no different than any other.  But make no mistake – Valentine’s Day has never been more powered by technology or more fueled by data.

It doesn’t get a lot of press coverage, but did you know the online dating industry is now a billion-dollar business globally? As technology empowers us all to live and work in nearly any place, we can no longer rely on geographic proximity to find our friends and soul mates. So, it’s no surprise that online dating and social networking grew in popularity as the smartphone revolution happened over the last eight years. Dating and networking are no longer just about running into people at the local bar on a Friday night. People are connecting with one another on every consumer device they have available, from their computers to their tablets to their phones. This mass consumerization of devices and the online social applications we run on them have fundamentally changed how we all connect with one another in the offline world.

There’s a lot of talk about big data in the tech industry, but there isn’t a lot of understanding of how big data actually affects the real world. Online dating serves as a fantastic example here. Did you know that about 1 out of 6 people in the United States is single, and that about 1 in 3 of those singles is a member of an online dating site?[1] With tens of millions of members just in the United States, the opportunity to meet new people online has never been more compelling. But the challenge is helping people sift through a “big data” store of millions of profiles. The solution to this has been the use of predictive recommendation engines. We are all familiar with recommendation engines on e-commerce sites that give us suggestions for new items to buy. The exact same analytics are now being applied to people and their preferences to help them find new friends and companions. Big data analytics is not just some fancy technology used by retailers and Wall Street. The proof is in the math: one-sixth times one-third means 1 of 18 people in the United States is using big data analytics today to fulfill the most human need of all, finding companionship.
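
For the curious, the matching math is less exotic than it sounds. Below is a toy sketch of the kind of similarity scoring a recommendation engine might start from; the preference dimensions and values are invented, not taken from any real dating site:

/**
 * Toy sketch of recommendation-style matching: score two members'
 * preference vectors with cosine similarity. Dimensions are invented.
 */
public class MatchScore {
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb)); // 1.0 = identical tastes
    }

    public static void main(String[] args) {
        // Dimensions: outdoorsy, foodie, night owl, bookish (0..1 each)
        double[] alice = {0.9, 0.7, 0.2, 0.8};
        double[] bob   = {0.8, 0.6, 0.3, 0.9};
        System.out.printf("Match score: %.2f%n", cosine(alice, bob));
    }
}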

However, this everyday revolution of big data analytics is not just limited to online dating. US News and World Report[2] estimates that a whopping $19 billion will be spent around Valentine’s Day this year. This spending includes gifts, flowers, candy, jewelry and other forms of celebration. The online sites that sell these products are no strangers to big data either. Organizations with e-commerce sites, many of whom are Informatica customers, are collecting real-time weblog and clickstream information to dynamically offer the best possible customer experiences. Big data is not only helping to build relationships between consumers, but is also building them between enterprises and consumers.

In an increasingly data-driven world, you cannot connect people unless you can connect data. The billion-dollar online dating industry and the $19 billion Valentine’s Day industry would not exist if they were not fueled by the ability to quickly derive meaning from data assets and turn them into compelling analytical outcomes. Informatica is powering this data-driven world of connected data and connected people by making all types of data ready for analytics. Our leading technologies have helped customers collect data quickly, re-direct workloads easily, and perfect datasets completely. So on this Valentine’s Day, I invite you to connect with your customers and help them to connect with one another by first connecting with your data. There is no better way to get ready for love than by getting big data ready for analytics!

[1] http://www.statisticbrain.com/online-dating-statistics/

[2] http://www.usnews.com/news/blogs/data-mine/2015/02/11/valentines-day-spending-to-approach-19-billion


Asia-Pacific E-Commerce Surpassed Europe and North America

With a total B2C e-commerce turnover of $567.3bn in 2013, Asia-Pacific was the strongest e-commerce region in the world, surpassing Europe ($482.3bn) and North America ($452.4bn). Online sales in Asia-Pacific are expected to have reached $799.2 billion in 2014, according to the latest report from the Ecommerce Foundation.

Revenue: China, followed by Japan and Australia
As a matter of fact, China was the second-largest e-commerce market in the world, behind only the US ($419.0 billion), and for 2014 it is estimated that China even surpassed the US ($537.0 billion vs. $456.0 billion). In terms of B2C e-commerce turnover within the region, Japan ($136.7 billion) ranked second, followed by Australia ($35.7 billion), South Korea ($20.2 billion) and India ($10.7 billion).

On average, Asian-Pacific e-shoppers spent $1,268 online in 2013
Ecommerce Europe’s research reveals that 235.7 million consumers in Asia-Pacific purchased goods and services online in 2013. On average, APAC online consumers each spent $1,268 online in 2013, slightly below the global average of $1,304. At $2,167, Australian e-shoppers were the biggest spenders online, followed by the Japanese ($1,808) and the Chinese ($1,087).

Mobile: Japan and Australia lead the pack
In the frequency of mobile purchasing, Japan shows the highest adoption, followed by Australia. An interesting fact is that 50% of transactions are done at home, 20% at work and 10% on the go.

[Chart: frequency of mobile shopping in APAC]

You can download the full report here. What does this mean for your business opportunity? Read more on the omnichannel trends 2015, which are driving customer experience. Happy to discuss @benrund.


Data Governance, Transparency and Lineage with Informatica and Hortonworks

Informatica users leveraging HDP are now able to see a complete end-to-end visual data lineage map of everything done through the Informatica platform. In this blog post, Scott Hedrick, director of Big Data Partnerships at Informatica, tells us more about end-to-end visual data lineage.

Hadoop adoption continues to accelerate within mainstream enterprise IT and, as always, organizations need the ability to govern their end-to-end data pipelines for compliance and visibility purposes. Working with Hortonworks, Informatica has extended the metadata management capabilities in Informatica Big Data Governance Edition to include data lineage visibility of data movement, transformation and cleansing beyond traditional systems to cover Apache Hadoop.

Informatica users are now able to see a complete end-to-end visual data lineage map of everything done through Informatica: sources outside the Hortonworks Data Platform (HDP) being loaded into HDP; data integration, parsing and data quality transformations running on Hortonworks; and curated data sets then being loaded onto data warehouses, analytics tools and operational systems outside Hadoop.
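
Conceptually, a lineage map is a directed graph: datasets are nodes, and the movements and transformations between them are edges, so any curated data set can be traced back to its sources. The sketch below illustrates the idea with an invented, simplified model; it is not Informatica’s metadata format:

import java.util.List;
import java.util.Map;

/**
 * Conceptual sketch of a lineage graph (invented model): each dataset
 * records the upstream datasets it was derived from, so lineage is a
 * simple upstream walk from any curated set back to its raw sources.
 */
public class LineageSketch {
    static final Map<String, List<String>> UPSTREAM = Map.of(
        "crm_extract", List.of(),
        "weblogs_raw", List.of(),
        "hdp_landing", List.of("crm_extract", "weblogs_raw"),
        "curated_customers", List.of("hdp_landing"),
        "warehouse_mart", List.of("curated_customers")
    );

    static void trace(String dataset, String indent) {
        System.out.println(indent + dataset);
        for (String src : UPSTREAM.getOrDefault(dataset, List.of())) {
            trace(src, indent + "  ");
        }
    }

    public static void main(String[] args) {
        trace("warehouse_mart", ""); // prints the full upstream lineage tree
    }
}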

Regulated industries such as banking, insurance and healthcare are required to have detailed histories of data management for audit purposes. Without tools to provide data lineage, compliance with regulations and gathering the required information for audits can prove challenging.

With Informatica, data scientists and analysts can now visualize data lineage and the detailed history of data transformations, providing unprecedented transparency into their data analysis. They can be more confident in their findings based on this visibility into the origins and quality of the data they are working with to create valuable insights for their organizations. Web-based access to visual data lineage for analysts also facilitates team collaboration on challenging and evolving data analytics and operational system projects.

The Informatica and Hortonworks partnership brings together leading enterprise data governance tools with open source Hadoop leadership to extend governance to this new platform. Deploying Informatica for data integration, parsing, data quality and data lineage on Hortonworks reduces risk to deployment schedules.

A demo of Informatica’s end-to-end metadata management capabilities on Hadoop and beyond is available here.

Learn More

  • A free trial of Informatica Big Data Edition in the Hortonworks Sandbox is available here.

Informatica University offers New Flexible Pricing for Private Training

Customers often inquire about the best way to get their team up to speed on Informatica solutions. The question Informatica University hears most frequently is whether a team should attend our public scheduled courses or hold a private training event. The number of resources to be skilled on the products will help determine which option to choose. If your team, or multiple teams within your company, has 7 or more resources that need to get up to speed on Informatica products, then a private training event is the recommended choice.

Seven (7) attendees for a remote instructor, or nine (9) for an onsite instructor, is the break-even point at which a private training event becomes the most cost-efficient delivery option for a team. In addition to the cost benefit, customers who have taken this option value the daily access to their team members, which keeps business operations humming along, and the opportunity to involve key team members who are not attending by letting them provide input on the project perspective.

These reserved events can also be adapted to a customer’s needs: course materials can be tailored to highlight the topics that will be key to a project’s implementation, providing creative options for getting a team up to speed on the Informatica projects at hand.

With Informatica University’s new flexible pricing, hosting a Private Training event is easy.  All it takes is:

  • A conference room
  • Training PCs or laptops for participants
  • Access to the Internet
  • An LCD projector, screen, white board, and appropriate markers

Private training events provide the opportunity to get your resources comfortable and efficient with Informatica solutions, and they have a positive impact on the success of your projects.

To understand more about Informatica’s New Flexible Pricing, contact trainingsales@informatica.com


Great Data for Great Analytics – Evolving Best Practices for Data Management

By Philip Russom, TDWI, Research Director for Data Management.

I recently broadcast a really interesting Webinar with David Lyle, a vice president of product strategy at Informatica Corporation. David and I had a “fireside chat” where we discussed one of the most pressing questions in data management today, namely: How can we prepare great data for great analytics, while still leveraging older best practices in data management? Please allow me to summarize our discussion.

Both old and new requirements are driving organizations toward analytics. David and I started the Webinar by talking about prominent trends:

  • Wringing value from big data – The consensus today says that advanced analytics is the primary path to business value from big data and other types of new data, such as data from sensors, devices, machinery, logs, and social media.
  • Getting more value from traditional enterprise data – Analytics continues to reveal customer segments, sales opportunities, and threats for risk, fraud, and security.
  • Competing on analytics – The modern business is run by the numbers – not just gut feel – to study markets, refine differentiation, and identify competitive advantages.

The rise of analytics is a bit confusing for some data people. As experienced data professionals do more work with advanced forms of analytics (enabled by data mining, clustering, text mining, statistical analysis, etc.), they can’t help but notice that the requirements for preparing analytic data are similar to, but different from, those of their other projects, such as ETL for a data warehouse that feeds standard reports.

Analytics and reporting are two different practices. In the Webinar, David and I talked about how the two involve pretty much the same data management practices, but in different orders and priorities:

  • Reporting is mostly about entities and facts you know well, represented by highly polished data that you know well. Squeaky clean report data demands elaborate data processing (for ETL, quality, metadata, master data, and so on). This is especially true of reports that demand numeric precision (about financials or inventory) or will be published outside the organization (regulatory or partner reports).
  • Advanced analytics, in general, enables the discovery of new facts you didn’t know, based on the exploration and analysis of data that’s probably new to you. Preparing raw source data for analytics is simple, though often at high levels of scale. With big data and other new data, preparation may be as simple as collocating large datasets on Hadoop or another platform suited to data exploration. When using modern tools, users can further prepare the data as they explore it, by profiling, modeling, aggregating, and standardizing data on the fly, as the sketch after this list illustrates.
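
As a simple illustration of that on-the-fly preparation (a sketch with invented columns and records, not any particular tool’s behavior), a single pass over raw data can already profile null counts and distinct values per column:

import java.util.HashMap;
import java.util.Map;

/**
 * Minimal sketch of profiling data "on the fly" during exploration:
 * one pass over raw records counts nulls and distinct values per column.
 */
public class QuickProfile {
    public static void main(String[] args) {
        String[] columns = {"customer_id", "region", "churn_flag"};
        String[][] rows = {
            {"c001", "west", "Y"},
            {"c002", null,   "N"},
            {"c003", "west", null}
        };

        for (int c = 0; c < columns.length; c++) {
            int nulls = 0;
            Map<String, Integer> distinct = new HashMap<>();
            for (String[] row : rows) {
                if (row[c] == null) nulls++;
                else distinct.merge(row[c], 1, Integer::sum);
            }
            System.out.printf("%s: %d nulls, %d distinct values%n",
                    columns[c], nulls, distinct.size());
        }
    }
}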

Operationalizing analytics brings reporting and analysis together in a unified process. For example, once an epiphany is discovered through analytics (e.g., the root cause of a new form of customer churn), that discovery should become a repeatable BI deliverable (e.g., metrics and KPIs that enable managers to track the new form of churn in dashboards). In these situations, the best practices of data management apply to a lesser degree (perhaps on the fly) during the early analytic steps of the process, but then are applied fully during the operationalization steps.

Architectural ramifications ensue from the growing diversity of data and workloads for analytics, reporting, multi-structured data, real time, and so on. For example, modern data warehouse environments (DWEs) include multiple tools and data platforms, from traditional relational databases to appliances and columnar databases to Hadoop and other NoSQL platforms. Some are on premises and others are on clouds. On the downside, this results in high complexity, with data strewn across multiple platforms. On the upside, users get great data for great analytics by moving data to a platform within the DWE that’s optimized for a particular data type, analytic workload, price point, or data management best practice.

For example, a number of data architecture use cases have emerged successfully in recent years, largely to assure great data for great analytics:

  • Leveraging new data warehouse platform types gives analytics the high performance it needs. Toward this end, TDWI has seen many users successfully adopt new platforms based on appliances, columnar data stores, and a variety of in-memory functions.
  • Offloading data and its processing to Hadoop frees up capacity on EDWs. And it also gives unstructured and multi-structured data types a platform that is better suited to their management and processing, all at a favorable cost point.
  • Virtualizing data assets yields greater agility and simpler data management. Multi-platform data architectures too often entail a lot of data movement among the platforms. But this can be mitigated by federated and virtual data management practices, as well as by emerging practices for data lakes and enterprise data hubs.

If you’d like to hear more of my discussion with Informatica’s David Lyle, please replay the Webinar from the Informatica archive.


Announcing the New Formation of the Informatica Data Security Group

The Informatica Data Security Group

The technology world has changed, and continues to change, rapidly in front of our eyes. One of the areas where this change has become most obvious is security, and in particular the approach to security. Network and perimeter-based security controls alone are insufficient as data proliferates outside the firewall to social networks, outsourced and offshore resources and mobile devices. Organizations are more focused on understanding and protecting their data, which is the most prized asset they have, versus all the infrastructure around it. Informatica is poised to lead this transformation of the security market toward a data-centric security approach.

The Ponemon Institute stated that the biggest concern for security professionals is that they do not know where sensitive data resides.  Informatica’s Intelligent Data Platform provides data security professionals with the technology required to discover, profile, classify and assess the risk of confidential and sensitive data.
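
To make the discovery problem concrete, here is an illustrative sketch of the simplest possible approach: scanning sampled field values against known patterns to flag where sensitive data may reside. The patterns and samples are invented for illustration, and real classification engines are far richer than this:

import java.util.Map;
import java.util.regex.Pattern;

/**
 * Illustrative sketch only (not Informatica's discovery engine): flag
 * sampled values that match simple sensitive-data patterns.
 */
public class SensitiveDataScan {
    static final Map<String, Pattern> CLASSIFIERS = Map.of(
        "US SSN",      Pattern.compile("\\d{3}-\\d{2}-\\d{4}"),
        "Credit card", Pattern.compile("\\d{4}([ -]?\\d{4}){3}"),
        "Email",       Pattern.compile("[\\w.+-]+@[\\w-]+\\.[\\w.]+")
    );

    public static void main(String[] args) {
        String[] samples = {"123-45-6789", "4111 1111 1111 1111", "jane@example.com", "hello"};
        for (String value : samples) {
            CLASSIFIERS.forEach((label, pattern) -> {
                if (pattern.matcher(value).matches()) {
                    System.out.println("Flagged as " + label + ": " + value);
                }
            });
        }
    }
}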

Last year, we began making significant investments in data security R&D to support this initiative. This year, we continue the commitment by organizing around the vision. I am thrilled to be leading the Informatica Data Security Group, a newly formed business unit comprised of a team dedicated to data security innovation. The business unit includes the former Application ILM business unit, which consists of data masking, test data management and data archive technologies from previous acquisitions, including Applimation, ActiveBase, and TierData.

By having a dedicated business unit and engineering resources applying Informatica’s Intelligent Data Platform technology to a security problem, we believe we can make a significant difference addressing a serious challenge for enterprises across the globe.  The newly formed Data Security Group will focus on new innovations in the data security intelligence market, while continuing to invest and enhance our existing data-centric security solutions such as data masking, data archiving and information lifecycle management solutions.
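
For readers new to the space, here is a minimal sketch of the data masking idea itself (illustrative only, not Informatica’s implementation): replace the sensitive digits while preserving the value’s format, so downstream test systems still see realistic-looking data:

/**
 * Minimal sketch of format-preserving masking (illustrative only):
 * hide all but the last four digits while keeping separators intact.
 */
public class MaskDemo {
    static String maskCardNumber(String card) {
        StringBuilder out = new StringBuilder();
        int digitsSeen = 0, totalDigits = card.replaceAll("\\D", "").length();
        for (char ch : card.toCharArray()) {
            if (Character.isDigit(ch)) {
                digitsSeen++;
                // Keep only the last four digits visible.
                out.append(digitsSeen > totalDigits - 4 ? ch : 'X');
            } else {
                out.append(ch); // keep separators so the format survives
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(maskCardNumber("4111 1111 1111 1234"));
        // -> XXXX XXXX XXXX 1234
    }
}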

The world of data is transforming around us, and we are committed to transforming the data security industry to keep our customers’ data clean, safe and connected.

For more details regarding how these changes will be reflected in our products, message and support, please refer to the FAQs listed below:

Q: What is the Data Security Group (DSG)?

A: Informatica has created a newly formed business unit, the Informatica Data Security Group, as a dedicated team focusing on data security innovation to meet the needs of our customers while leveraging the Informatica Intelligent Data Platform.

Q: Why did Informatica create a dedicated Data Security Group business unit?

A: Reducing risk is among the top three business initiatives for our customers in 2015. Data security is a top IT and business initiative for just about every industry and organization that stores sensitive, private, regulated or confidential data. Data security is a boardroom topic. By building upon our success with the Application ILM product portfolio and the Intelligent Data Platform, we can address these pressing issues while solving mission-critical challenges that matter to most of our customers.

Q: Is this the same as the Application ILM Business Unit?

A: The Informatica Data Security Group is a business unit that includes the former Application ILM business unit products (data masking, data archive and test data management, from previous acquisitions including Applimation, ActiveBase, and TierData), plus additional resources developing and supporting Informatica’s data security products and go-to-market, such as Secure@Source.

Q: How big is the Data Security market opportunity?

A: The data security software market is estimated to be a $3B market in 2015, according to Gartner. Total information security spending will grow a further 8.2 percent in 2015 to reach $76.9 billion.[1]

Q: Who would be most interested in this announcement and why?

A: All leaders are impacted when a data breach occurs. Understanding the risk of sensitive data is a board room topic.  Informatica is investing and committing to securing and safeguarding sensitive, private and confidential data. If you are an existing customer, you will be able to leverage your existing skills on the Informatica platform to address a challenge facing every team who manages or handles sensitive or confidential data.

Q: How does this announcement impact the Application ILM products – Data Masking, Data Archive and Test Data Management?

A: The existing Application ILM products are foundational to the Data Security Group product portfolio.  These products will continue to be invested in, supported and updated.  We are building upon our success with the Data Masking, Data Archive and Test Data Management products.

Q: How will this change impact my customer experience?

A: The Informatica product website will reflect this new organization by listing the Data Masking, Data Archive, and Test Data Management products under the Data Security product category.  The customer support portal will reference Data Security as the top level product category.  Older versions of the product and corresponding documentation will not be updated and will continue to reflect Application ILM nomenclature and messaging.

[1] http://www.gartner.com/newsroom/id/2828722
