Tag Archives: Hadoop

Lessons From Kindergarten: The ABC’s of Data

People are obsessed with data. Data captured from our smartphones. Internet data showing how we shop and search — and what marketers do with that data. Big Data, which I loosely define as people throwing every conceivable data point into a giant Hadoop cluster with the hope of figuring out what it all means.

Too bad all that attention stems from fear, uncertainty and doubt about the data that defines us. I blame the technology industry, which, in the immortal words of Cool Hand Luke, has had a “failure to communicate.” For decades we’ve talked the language of IT and left it up to our direct customers to explain the proper care and feeding of data to their business users. Small wonder it’s way too hard for regular people to understand what we, as an industry, are doing. After all, how can we expect others to explain the do’s and don’ts of data management when we haven’t clearly explained it ourselves?

I say we need to start talking about the ABC’s of handling data in a way that’s easy for anyone to understand. I’m convinced we can because, if you think about it, everything you learned about data you learned in kindergarten: it has to be clean, connected and safe. Here’s what I mean:

Clean

Data cleanliness has always been important, but it takes on real urgency with the move toward Big Data. I blame Hadoop, the underlying technology that makes Big Data possible. On the plus side, Hadoop gives companies a cost-effective way to store, process and analyze petabytes of nearly every imaginable data type. And that’s also the problem: companies dump in everything they can, then face the enormous time suck of cataloging and organizing those vast stores of data. Put bluntly, big data can be a swamp.

The question is, how to make it potable. This isn’t always easy, but it’s always, always necessary. It begins, naturally, with ensuring the data is accurate, de-duplicated and complete.
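For a concrete, if simplified, picture of what “accurate, de-duplicated and complete” looks like in practice, here is a minimal pandas sketch. The file name and column names are hypothetical, not tied to any particular system:

```python
import pandas as pd

# Hypothetical customer extract; the columns are illustrative only.
df = pd.read_csv("customers.csv")

# Complete: drop records missing the fields we can't work without.
df = df.dropna(subset=["customer_id", "email"])

# Accurate: normalize obvious formatting noise before any matching.
df["email"] = df["email"].str.strip().str.lower()
df["state"] = df["state"].str.upper()

# De-duplicated: one row per customer, keeping the most recent record.
df = (df.sort_values("last_updated")
        .drop_duplicates(subset=["customer_id"], keep="last"))

print(f"{len(df)} clean rows ready to load")
```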

Connected

Now comes the truly difficult part: knowing where that data originated, where it has been, and how it relates to other data; in short, its lineage. That data provenance is absolutely vital in our hyper-connected world, where one company’s data interacts with data from suppliers, partners and customers. Someone else’s dirty data, regardless of origin, can ruin reputations and drive down sales faster than you can say “Target breach.” In fact, we now know that hackers entered Target’s point-of-sale terminals through a supplier’s project management and electronic billing system. We won’t know for a while the full extent of the damage. We do know the hack affected one-third of the entire U.S. population. Which brings us to:
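Lineage doesn’t require exotic tooling to get started; even a simple record of where each dataset came from and what has been done to it goes a long way. A rough Python sketch of the idea (the fields and step names are illustrative, not any particular product’s metadata model):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Lineage:
    """Minimal provenance record carried alongside a dataset."""
    source: str                                 # where the data originated
    steps: list = field(default_factory=list)   # what has been done to it

    def record(self, step: str):
        self.steps.append((datetime.now(timezone.utc).isoformat(), step))

# Example: track a supplier feed as it moves through our pipeline.
feed = Lineage(source="supplier_x/billing_extract")
feed.record("loaded into staging")
feed.record("de-duplicated and standardized")
feed.record("joined with internal customer master")
print(feed)
```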

Safe

Obviously, being safe means keeping data out of the hands of criminals. But it doesn’t stop there. That’s because today’s technologies make it oh so easy to misuse the data we have at our disposal. If we’re really determined to keep data safe, we have to think long and hard about responsibility and governance. We have to constantly question the data we use, and how we use it. Questions like:

  • How much of our data should be accessible, and by whom?
  • Do we really need to include personal information, like Social Security numbers or medical data, in our Hadoop clusters?
  • When do we go the extra step of making that data anonymous? (A rough sketch of one approach follows this list.)
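To make that last question less abstract, here is one rough way to anonymize sensitive fields before they ever land in a cluster. The salted hashing and generalization shown are illustrative choices, not a complete de-identification scheme:

```python
import hashlib

SALT = b"replace-with-a-secret-salt"   # hypothetical; keep out of source control

def pseudonymize(value: str) -> str:
    """Replace an identifier with a salted, one-way hash."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()[:16]

def generalize_dob(dob: str) -> str:
    """Keep only the birth year; drop month and day."""
    return dob[:4]

record = {"ssn": "123-45-6789", "dob": "1975-06-02", "zip": "94025"}
safe_record = {
    "ssn": pseudonymize(record["ssn"]),
    "dob": generalize_dob(record["dob"]),
    "zip": record["zip"][:3] + "XX",   # coarsen the ZIP code
}
print(safe_record)
```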

And as I think about it, I realize that everything we learned in kindergarten boils down to the ethics of data: How, for example, do we know if we’re using data for good or for evil?

That question is especially relevant for marketers, who have a tendency to use data to scare people, for crass commercialism, or to violate our privacy just because technology makes it possible. Use data ethically, and we can help change how it’s used.

In fact, I believe that the ethics of data is such an important topic that I’ve decided to make it the title of my new blog.

Stay tuned for more musings on The Ethics of Data.


Big Data, So Mom Can Understand

Dear Mom,

I’m glad to hear you feel comfortable explaining data to your friends, and I completely understand why you’ll avoid discussing metadata with them. You’re in great company – most business leaders also avoid discussing metadata at all costs! You mentioned during our last call that you keep reading articles in the New York Times about this thing called “Big Data,” so, as promised, I’ll try to explain it as best I can. (more…)


Data Warehouse Optimization: Not All Data is Created Equal

Data Warehouse Optimization (DWO) is becoming a popular term that describes how an organization optimizes its data storage and processing for cost and performance while data volumes continue to grow from an ever-increasing variety of data sources.

Data warehouses are reaching their capacity much too quickly, as the demand for more data and more types of data forces IT organizations into very costly upgrades. Further compounding the problem, many organizations don’t have a strategy for managing the lifecycle of their data. It is not uncommon for much of the data in a data warehouse to be unused or infrequently used, or for too much compute capacity to be consumed by extract-load-transform (ELT) processing. This is sometimes the result of business requests for one-off reports that are no longer used, or of staging raw data in the data warehouse.

A large global bank’s data warehouse was exploding with 200TB of data, forcing it to consider an upgrade that would cost $20 million. The bank discovered that much of the data was no longer being used and could be archived to lower-cost storage, thereby avoiding the upgrade and saving millions. The same bank continues to retire data monthly, resulting in ongoing savings of $2-3 million annually. A large healthcare insurance company discovered that fewer than 2% of its ELT scripts were consuming 65% of its data warehouse CPU capacity. That company is now looking at Hadoop as a staging platform to offload the storage of raw data and the ELT processing, freeing up its data warehouse to support hundreds of concurrent business users. And a global media and entertainment company saw its data increase 20x per year, and the associated costs triple within six months, as it on-boarded more data such as web clickstream data from thousands of websites and in-game telemetry.

In this era of big data, not all data is created equal: most raw data, whether it originates from machine log files, social media, or years of original transactions, is considered to be of lower value, at least until it has been prepared and refined for analysis. This raw data should be staged in Hadoop to reduce storage and data preparation costs, while data warehouse capacity should be reserved for refined, curated and frequently used datasets. It’s time, therefore, to consider optimizing your data warehouse environment to lower costs, increase capacity, optimize performance, and establish an infrastructure that can support growing data volumes from a variety of data sources. Informatica has a complete solution available for data warehouse optimization.

The first step in the optimization process, as illustrated in Figure 1 below, is to identify inactive and infrequently used data, along with ELT performance bottlenecks, in the data warehouse. Step 2 is to offload the data and ELT processing identified in step 1 to Hadoop. PowerCenter customers have the advantage of Vibe, which allows them to map once and deploy anywhere, so ELT processing executed through PowerCenter pushdown capabilities can be converted to ETL processing on Hadoop with a simple configuration step during deployment. Most raw data, such as original transaction data, log files (e.g., Internet clickstream), social media, sensor device and machine data, should be staged in Hadoop, as noted in step 3. Informatica provides near-universal connectivity to all types of data so that you can load data directly into Hadoop; you can even replicate entire schemas and files into Hadoop, capture just the changes, and stream millions of transactions per second, such as machine data, into Hadoop. The Informatica PowerCenter Big Data Edition makes every PowerCenter developer a Hadoop developer, without having to learn Hadoop, so that all ETL, data integration and data quality processing can be executed natively on Hadoop using readily available skills, increasing productivity up to 5x over hand-coding. Informatica also provides data discovery and profiling tools on Hadoop to help data science teams collaborate and understand their data. The final step is to move the high-value, frequently used datasets prepared and refined on Hadoop into the data warehouse that supports your enterprise BI and analytics applications.
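For a rough sense of the staging pattern described above (raw data lands in Hadoop, and only refined datasets move to the warehouse), here is a minimal PySpark sketch. It is a generic illustration rather than Informatica’s Vibe or PowerCenter, and the HDFS path, table name and JDBC settings are placeholders:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dwo-staging-sketch").getOrCreate()

# Step 3: raw clickstream data stays in Hadoop (schema-on-read).
clicks = spark.read.json("hdfs:///staging/clickstream/2013/11/*.json")

# Prepare and refine on the cluster: distinct sessions per page per day.
refined = (clicks
           .withColumn("day", F.to_date("event_time"))
           .groupBy("day", "page")
           .agg(F.countDistinct("session_id").alias("sessions")))

# Final step: promote only the refined, frequently used dataset to the warehouse.
(refined.write
        .format("jdbc")
        .option("url", "jdbc:postgresql://warehouse:5432/analytics")  # placeholder
        .option("dbtable", "web_page_sessions_daily")
        .option("user", "etl_user")
        .option("password", "etl_password")
        .mode("append")
        .save())
```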

To get started, Informatica has teamed up with Cloudera to deliver a reference architecture for data warehouse optimization, so organizations can lower infrastructure and operational costs, optimize performance and scalability, and ensure enterprise-ready deployments that meet business SLAs. To learn more, please join the webinar A Big Data Reference Architecture for Data Warehouse Optimization on Tuesday, November 19, at 8:00 a.m. PST.


Figure 1:  Process steps for Data Warehouse Optimization


Strata and Dinner with my French Neighbor: Part deux

I missed Strata this year, so I can only report back what I heard from my team. I was out on the road talking with customers while the gang was at Strata, talking to customers and prospective customers. That said, the conversations they had with the cool new Hadoop companies and my own conversations were quite similar: lots of talk about Hadoop trials, but outside of the big internet firms, a handful of startups focused on solving “big data” problems and some Wall Street firms, most companies are still kicking the Hadoop tires.

Which reminds me of a picture my neighbor took of a presentation that he saw on Hadoop.  The presenter had a slide with a rehash of an old joke that went something like this (I am paraphrasing here as I don’t have the exact quote):

“Hadoop is a lot like teenage sex. Everyone says they do it, but most are not. And for those who are doing it, most of them aren’t very good at it yet.”

So if you haven’t gotten started on your Hadoop project, don’t worry, you aren’t as far behind as you think.


Combining Cloud Integration and Big Data in the Cloud to Accelerate Analytics

An explosion in mobile devices and social media usage has been the driving force behind large brands using big data solutions for deep, insightful analytics.  In fact, a recent mobile consumer survey found that 71% of people used their mobile devices to access social media.

With social media becoming a major avenue for advertising, and mobile devices the medium of access, there are numerous data points that global brands can cross-reference to get a more complete picture of their consumers and their buying propensities. Analyzing these multitudes of data points is the reason behind the rise of big data solutions such as Hadoop.

Related: Informatica Cloud’s Marketplace connectors for Cloudera Hadoop CDH 4.1 and Hortonworks Hadoop HDP 1.1

However, Hadoop itself is only one Big Data framework, and it comes in several different flavors. Facebook, which called itself the owner of the world’s largest Hadoop cluster, at 100 petabytes, outgrew Hadoop’s capabilities and is looking into technology that would allow it to abstract its Hadoop workloads across several geographically dispersed datacenters.

When it comes to analytics projects that require intensive data warehousing, there is no one-size-fits-all answer for Big Data, as the use cases can be extremely varied, ranging from short-term to long-term. Deploying Hadoop clusters requires specialized skills and proper capacity planning. In contrast, Big Data solutions in the cloud, such as Amazon Redshift, allow users to provision database nodes on demand in a matter of minutes, without large upfront outlays for infrastructure such as servers and datacenter space. As a result, cloud-based Big Data can be a viable alternative for short-term analytics projects, as well as for sandbox environments used to test out larger Big Data integration projects. Cloud-based Big Data may also make sense when only a subset of the data is required for analysis, as opposed to the entire dataset.
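As a rough illustration of how quickly a cloud warehouse can be provisioned, here is a boto3 sketch that spins up a small Redshift cluster. The identifier, node type, node count and credentials are placeholders, and a real deployment would also need networking and security configuration:

```python
import boto3

redshift = boto3.client("redshift", region_name="us-west-2")

# Provision a small cluster on demand; parameters are illustrative only.
redshift.create_cluster(
    ClusterIdentifier="analytics-sandbox",
    ClusterType="multi-node",
    NodeType="dc2.large",
    NumberOfNodes=2,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME_Str0ngPass",
    DBName="analytics",
)

# Block until the cluster is ready to accept connections.
waiter = redshift.get_waiter("cluster_available")
waiter.wait(ClusterIdentifier="analytics-sandbox")
print("cluster is available")
```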

With cloud integration, much of the complexity of connecting to data sources and targets is abstracted away. Consequently, when a cloud-based Big Data deployment is combined with a cloud integration solution, it can result in even more time and cost savings and get the projects off the ground much faster.

We’ll be discussing several use cases around cloud-based Big Data in our webinar on August 22nd, Big Data in the Cloud with Informatica Cloud and Amazon Redshift, with special guests from Amazon joining the event.


Avoiding Big Data, and Big Data Integration Confusion

We discussed Big Data and Big Data integration last month, but the rise of Big Data and the systemic use of data integration approaches and technology continues to be a source of confusion.  As with any evolution of technology, assumptions are being made that could get many enterprises into a great deal of trouble as they move to Big Data.

Case in point: the rise of big data gave many people the impression that data integration is not needed when implementing big data technology. The notion is that if we consolidate all of the data into a single cluster of servers, then the integration is systemic to the solution. Not the case.

As you may recall, we made many of the same mistakes around the rise of service oriented architecture (SOA).  Don’t let history repeat itself with the rise of cloud computing.  Data integration, if anything, becomes more important as new technology is layered within the enterprise.

Hadoop’s storage approach leverages a distributed file system that maps data wherever it sits in a cluster.  This means that massive amounts of data reside in these clusters, and you can map and remap the data to any number of structures.  Moreover, you’re able to work with both structured and unstructured data.
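A small PySpark sketch of that “map and remap” idea: the same raw files in HDFS can be read as unstructured text, or given a structure at query time without rewriting them. The path and field names below are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.appName("schema-on-read-sketch").getOrCreate()
path = "hdfs:///data/raw/app_logs/"   # placeholder location

# View 1: treat the files as unstructured text, one line per row.
lines = spark.read.text(path)
print(lines.count(), "raw log lines")

# View 2: remap the very same files onto a structure, without rewriting them.
schema = StructType([
    StructField("ts", StringType()),
    StructField("user_id", LongType()),
    StructField("action", StringType()),
])
events = spark.read.csv(path, sep="\t", schema=schema)
events.groupBy("action").count().show()
```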

As covered in a recent ReadWrite article, the movement to Big Data does indeed come with built-in business value.  “Hadoop, then, allows companies to store data much more cheaply. How much more cheaply? In 2012, Rainstor estimated that running a 75-node, 300TB Hadoop cluster would cost $1.05 million over three years. In 2008, Oracle sold a database with a little over half the storage (168TB) for $2.33 million – and that’s not including operating costs. Throw in the salary of an Oracle admin at around $95,000 per year, and you’re talking an operational cost of $2.62 million over three years – 2.5 times the cost, for just over half of the storage capacity.”
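A quick back-of-the-envelope check of the figures quoted above, using only the numbers given in the quote:

```python
# All figures in dollars, taken from the quote above.
hadoop_3yr = 1_050_000              # 75-node, 300 TB Hadoop cluster over 3 years
oracle_license = 2_330_000          # 168 TB Oracle database
oracle_admin_3yr = 95_000 * 3       # DBA salary over 3 years
oracle_3yr = oracle_license + oracle_admin_3yr   # 2,615,000, i.e. ~$2.62M

print(f"Oracle 3-year cost: ${oracle_3yr:,}")
print(f"Cost ratio: {oracle_3yr / hadoop_3yr:.1f}x")              # ~2.5x the cost
print(f"Storage ratio: {168 / 300:.0%} of the Hadoop capacity")   # 56%, 'just over half'
```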

Thus, if these data points are indeed correct, Hadoop clearly enables companies to hold all of their data on a single cluster of servers.  Moreover, this data really has no fixed structure.  “Fixed assumptions don’t need to be made in advance. All data becomes equal and equally available, so business scenarios can be run with raw data at any time as needed, without limitation or assumption.”

While this process may look like data integration to some, the heavy lifting of supplying these clusters with data is always a data integration job, and it requires the right enabling technology. Indeed, consider what’s required to move data into Big Data systems and you’ll realize why additional strain is placed on the data integration solution. A Big Data strategy that leverages Big Data technology increases, not decreases, the need for a solid data integration strategy and a sound data integration technology solution.

Big Data is a killer application that most enterprises should at least consider. The strategic business benefits are crystal clear, and the movement toward finally being able to see and analyze all of your business data in real time is underway across most of the Global 2000 and the government. However, you won’t achieve these objectives without a sound approach to data integration, and a solid plan to leverage the right data integration technology.


Two Sci-Fi Big Data Predictions From Over 50 Years Ago

Science fiction represents some of the most impactful stories I’ve read in my life. By impactful, I mean the ideas have stuck with me for the 30 years since I last read them. I recently recalled two of these stories and realized they represent two very different paths for Big Data. One path, quite literally, was toward enlightenment. Let’s just say the other path went in a different direction. The amazing thing is that both of these stories were written 50 to 60 years ago. (more…)


Informatica’s Vibe virtual data machine can streamline big data work and allow data scientists to be more efficient

Informatica introduced an embeddable Vibe engine not only for transformation, but also for data quality, data profiling, data masking and a host of other data integration tasks. It will have a meaningful impact on the data scientist shortage.

Some clear economic facts are already apparent in the current world of data. Hadoop provides a significantly less expensive platform for gathering and analyzing data, and cloud computing is (potentially) a more economical place to run workloads than on-premises infrastructure, if managed well. These are clearly positive developments. On the other hand, the human resources required to exploit these new opportunities are actually quite expensive. When demand for a hot product exceeds what can be met in the short term, suppliers put customers “on allocation” to manage distribution to the most strategic customers.

This is the situation with “data scientists,” a new breed of experts with quantitative skills, data management skills, presentation skills and deep domain expertise. Current estimates are that there are 60,000-120,000 unfilled positions in the US alone. Naturally, data scientists are “allocated” to the most critical (economically lucrative) efforts, and their time is limited to those tasks that most completely leverage their unique skills.

To address this shortage, the industry turns to universities to develop curricula that manufacture data scientists, but this will take time. In the meantime, salaries for data scientists are very high. Unfortunately, most data science work involves a great deal of effort that does not require data science skills, especially managing the data before the insightful analytics can begin. Some estimates are that data scientists spend 50-80% of their time finding and cleaning data, managing their computing platforms and writing programs. Reducing this effort with better tools not only makes data scientists more effective, it also has an impact on the most expensive component of big data: human resources.

Informatica today introduced Vibe, its embeddable virtual data machine, to do exactly that. Informatica has, for over 20 years, provided tools that allow developers to design and execute data transformations without the need to write or maintain code. With Vibe, this capability is extended to include data quality, masking and profiling, and the engine itself can be embedded in the platforms where the work is performed. In addition, the engine can generate separate code for each target platform from a single data management design.

In the case of Hadoop, Informatica designers can continue to operate in the familiar design studio and have Vibe generate the code for whatever platform is needed. In this way, it is possible for an Informatica developer to build these data management routines for Hadoop without learning Hadoop or writing code in Java. And the real advantage is that the data scientist is freed from work that can be performed by those in lower pay grades, and that work can be parallelized too: multiple programmers and integration developers supporting one data scientist.

Vibe is a major innovation for Informatica that provides many interesting opportunities for its customers. Easing the data scientist problem is only one.

———————————

Neil Raden

This is a guest blog penned by Neil Raden, a well-known industry figure as an author, lecturer and practitioner. He has in-depth experience as a developer, consultant and analyst in all areas of Analytics and Decision Services, including Big Data strategy and implementation, Business Intelligence, Data Warehousing, Statistical/Predictive Modeling, Decision Management, and IT systems integration, spanning assessment, architecture, planning, project management and execution. Neil has authored dozens of sponsored white papers and articles, is a blogger, and is co-author of “Smart (Enough) Systems” (Prentice Hall, 2007). He has 25 years of experience as an actuary, software engineer and systems integrator.


The Safe On-Ramp to Big Data

The hype around big data is certainly top of mind with executives at most companies today, but what I am really seeing is companies finally making the connection between innovation and data. Data as a corporate asset is now getting the respect it deserves, as part of a business strategy to introduce innovative new products and services and improve business operations. The most advanced companies have C-level executives responsible for delivering top- and bottom-line results by managing their data assets to their maximum potential. The Chief Data Officer and Chief Analytics Officer own this responsibility and report directly to the CEO. (more…)


Aligning Big Data with MDM

In my recent blog posts, we have looked at ways that master data management can become an integral component of the enterprise architecture, and I would be remiss if I did not look at how MDM dovetails with an emerging data management imperative: big data and big data analytics. Fortunately, identity resolution and MDM have the potential both to contribute to performance improvement and to enable efficient entity extraction and recognition. (more…)
