Category Archives: Big Data

The Streetlight Is Watching You

We are hugely dependent upon technology and sometimes take it for granted. It is always worth reminding ourselves where it all began so we can fully appreciate how lucky we are. Take light-emitting diodes (LEDs), for example. They have come a long way in a relatively short time. When they were first used as low-intensity indicator lights in electronic devices, it would have been difficult to believe anyone could foresee them one day lighting our homes.

The future of lighting may first be peeking through at Newark Liberty Airport in New Jersey. The airport has installed 171 new LED-based light fixtures that include a variety of sensors to detect and record what’s going on in the airport, as reported by Diane Cardwell in The New York Times. Together they make up a network of devices that communicates wirelessly and allows authorities to scan the license plates of passing cars, watch for lines and delays, and check travelers for suspicious activity.

I get the feeling that Newark’s new gear will not be the last lighting-based digital network. Over the last few years, LED street lights have gone from something cities would love to have to the industry standard. That the market has shifted so swiftly is thanks to the efforts of early movers such as the City of Los Angeles, which last year completed the world’s largest LED street light replacement project, with LED fixtures installed on 150,000 streetlights.

Los Angeles is certainly not alone in making the switch to LED street lighting. In March 2013, Las Vegas outfitted 50,000 streetlights with LED fixtures. One month later, Austin, TX, announced plans to install 35,000 LED street lights. Not to be outdone, New York City is planning to go all-LED by 2017, which would save $14 million and many tons of carbon emissions each year.

The impending switch to LEDs is an excellent opportunity for LED light fixture makers and Big Data software vendors like Informatica. These fixtures can be equipped with a wide variety of sensors, tailored to whatever the user wants to detect: temperature, humidity, seismic activity, radiation, audio, and video, among other things. The sensors could even detect and triangulate the source of a gunshot.
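To make that last claim concrete, here is a minimal sketch of how a handful of audio-equipped fixtures could triangulate a gunshot from arrival times alone; the sensor positions, timings, and least-squares solution below are illustrative assumptions, not a description of any vendor’s implementation.

```python
# A toy acoustic-multilateration sketch: estimate a sound source from
# the times four hypothetical fixtures heard the same bang.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

sensors = np.array([[0.0, 0.0], [120.0, 0.0],
                    [0.0, 120.0], [120.0, 120.0]])  # fixture positions (m)
arrivals = np.array([0.35, 0.21, 0.42, 0.30])       # reported arrival times (s)

def residuals(params):
    # For the true source, distance/speed should differ from each arrival
    # time by a single unknown emission time t0.
    x, y, t0 = params
    dist = np.linalg.norm(sensors - np.array([x, y]), axis=1)
    return dist / SPEED_OF_SOUND + t0 - arrivals

fit = least_squares(residuals, x0=[60.0, 60.0, 0.0])
print("estimated shot location (m):", fit.x[:2])
```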

This steady stream of real-time data collected from these fixtures can be transformed into torrents of small messages and events with unprecedented agility using Informatica Vibe Data Stream. Analyzed data can then be distributed to various governmental and non-governmental agencies, such as law enforcement, environmental monitors, and retailers.
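Purely for illustration (this is not the Vibe Data Stream API), a fixture turning raw readings into a stream of small, self-describing events might look something like the following sketch; the collector address and message format are invented.

```python
# A generic sketch of publishing sensor readings as small JSON events
# over UDP; the collector endpoint is a made-up stand-in.
import json
import socket
import time

COLLECTOR = ("127.0.0.1", 9999)  # hypothetical stream collector
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def publish(fixture_id, sensor, value):
    # One compact, self-describing event per reading.
    event = {"fixture": fixture_id, "sensor": sensor,
             "value": value, "ts": time.time()}
    sock.sendto(json.dumps(event).encode("utf-8"), COLLECTOR)

publish("EWR-0042", "temperature_c", 21.7)
publish("EWR-0042", "audio_db", 64.2)
```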

If I had to guess at the number of streetlights in the world, I would say 4 billion. Upgrading them is a “once-in-a-generation opportunity” to harness “lots of data” – that is, sensory big data.


Download the Informatica Big Data Edition Trial and Unleash the Power of Hadoop

Big Data Edition Trial Sandbox for Cloudera

Come and get it.  For developers hungry to get their hands on Informatica on Hadoop, a downloadable free trial of Informatica Big Data Edition was launched today on the Informatica Marketplace.  See for yourself the power of the killer app on Hadoop from the leader in data integration and quality.

Thanks to the generous help of our partners, the Informatica Big Data team has preinstalled the Big Data Edition inside the sandbox VMs of the two leading Hadoop distributions.  This empowers Hadoop and Informatica developers to easily try the codeless, GUI-driven Big Data Edition to build and execute ETL and data integration pipelines natively on Hadoop for Big Data analytics.

Informatica Big Data Edition is the most complete and powerful suite for Hadoop data pipelines and can increase productivity by up to five times. Developers can leverage hundreds of pre-built Informatica transforms and connectors for structured and unstructured data processing on Hadoop.  With the Informatica Vibe Virtual Data Machine running directly on each node of the Hadoop cluster, the Big Data Edition can profile, parse, transform and cleanse data at any scale to prepare data for data science, business intelligence and operational analytics.
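For a rough sense of the work such pipelines do, here is a minimal hand-coded Hadoop Streaming mapper that parses and cleanses hypothetical customer records; it is exactly the kind of logic the Big Data Edition is meant to let you build visually instead of writing by hand.

```python
#!/usr/bin/env python
# A hand-coded parse/cleanse step for Hadoop Streaming; the three-field
# record layout (id, name, email) is a hypothetical example.
import re
import sys

EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

for line in sys.stdin:
    fields = line.rstrip("\n").split(",")
    if len(fields) != 3:
        continue                      # parse: drop malformed records
    cust_id, name, email = fields
    email = email.strip().lower()     # cleanse: standardize the email
    if EMAIL.match(email):            # validate before emitting
        print("%s\t%s,%s" % (cust_id, name, email))
```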

The Informatica Big Data Edition Trial Sandbox VMs will have a 60-day trial version of the Big Data Edition preinstalled inside a 1-node Hadoop cluster.  The trials include sample data and mappings as well as getting-started documentation and videos.  It is possible to try your own data with the trials, but processing is limited to the 1-node Hadoop cluster and the machine you have it running on.  Any mappings you develop in the trial can be easily moved onto a production Hadoop cluster running the Big Data Edition.  The Informatica Big Data Edition also supports the MapR and Pivotal Hadoop distributions; however, the trial is currently available only for Cloudera and Hortonworks.

Big Data Edition Trial Sandbox for Hortonworks

Accelerate your ability to bring Hadoop from the sandbox into production by leveraging Informatica’s Big Data Edition. Informatica’s visual development approach means that more than one hundred thousand existing Informatica developers are now Hadoop developers, without having to learn Hadoop or new hand-coding techniques and languages. Informatica can help organizations easily integrate Hadoop into their enterprise data infrastructure and bring the PowerCenter data pipeline mappings running on traditional servers onto Hadoop clusters with minimal modification. Informatica Big Data Edition reduces the risk of Hadoop projects and increases agility by enabling more of your organization to interact with the data in your Hadoop cluster.

To get the Informatica Big Data Edition Trial Sandbox VMs and more information, please visit the Informatica Marketplace.


Is the Internet of Things relevant for the government?

Get connected. Be connected. Make connections. Find connections. The Internet of Things (IoT) is all about connecting people, processes, data and, as the name suggests, things. The recent social media frenzy surrounding the ALS Ice Bucket Challenge has certainly reminded everyone of the power of social media, the Internet and a willingness to answer a challenge. Fueled by personal and professional connections, the craze has transformed fundraising for at least one charity. Similarly, IoT may prove transformational to the business of the public sector, should government step up to the challenge.

Government is struggling with the concept and reality of how IoT really relates to the business of government, and perhaps rightfully so. For commercial enterprises, IoT is far more tangible and simply more fun. Gaming, televisions, watches, Google Glass, smartphones and tablets are all about delivering over-the-top, new and exciting consumer experiences. Industry is delivering transformational innovations, which are connecting people to places, data and other people at a record pace.

It’s time to accept the challenge. Government agencies need to keep pace with their commercial counterparts and harness the power of the Internet of Things. The end game is not to deliver new, faster, smaller, cooler electronics; the end game is to create solutions that let devices connected to the Internet interact and share data, regardless of their location, manufacturer or format, and make or find connections that may have been previously undetectable. For some, this concept is as foreign or scary as pouring ice water over their heads. For others, the new opportunity to transform policy, service delivery, leadership, legislation and regulation is fueling a transformation in government. And it starts with one connection.

One way to start could be linking previously siloed systems together or creating a golden record of all citizen interactions through a Master Data Management (MDM) initiative. It could start with a big data and analytics project to determine and mitigate risk factors in education, or with linking sensor data across multiple networks to increase intelligence about potential hacking or breaches. Agencies could stop waste, fraud and abuse before it happens by linking critical payment, procurement and geospatial data together in real time.
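As a toy sketch of the golden-record idea, the snippet below links citizen records from two hypothetical siloed systems by matching on birth date plus fuzzy name agreement; a real MDM initiative uses far richer matching and survivorship rules.

```python
# A toy record-matching pass over two siloed citizen lists; the data
# and the matching threshold are purely illustrative.
from difflib import SequenceMatcher

def similar(a, b, threshold=0.85):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

system_a = [{"id": "A1", "name": "Jon Smith", "dob": "1980-04-02"}]
system_b = [{"id": "B7", "name": "John Smith", "dob": "1980-04-02"}]

golden = []
for rec_a in system_a:
    for rec_b in system_b:
        # Match on exact birth date plus fuzzy name agreement.
        if rec_a["dob"] == rec_b["dob"] and similar(rec_a["name"], rec_b["name"]):
            golden.append({"sources": [rec_a["id"], rec_b["id"]],
                           "name": rec_b["name"], "dob": rec_a["dob"]})

print(golden)  # one linked "golden" citizen record spanning both systems
```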

This is the Internet of Things for government. This is the challenge. This is transformation.

This article was originally published on www.federaltimes.com. Please view the original listing here

 


In a Data First World, IT must Empower Business Change!

You probably know this already, but I’m going to say it anyway: It’s time you changed your infrastructure. I say this because most companies are still running infrastructure optimized for ERP, CRM and other transactional systems. That’s all well and good for running IT-intensive, back-office tasks. Unfortunately, this sort of infrastructure isn’t great for today’s business imperatives of mobility, cloud computing and Big Data analytics.

Virtually all of these imperatives are fueled by information gleaned from potentially dozens of sources to reveal our users’ and customers’ activities, relationships and likes. Forward-thinking companies are using such data to find new customers, retain existing ones and increase their market share. The trick lies in translating all this disparate data into useful meaning. And to do that, IT needs to move beyond focusing solely on transactions, and instead shine a light on the interactions that matter to their customers, their products and their business processes.

They need what we at Informatica call a “Data First” perspective. You can check out my first blog about being Data First here.

A Data First POV changes everything from product development, to business processes, to how IT organizes itself and — most especially — the impact IT has on your company’s business. That’s because cloud computing, Big Data and mobile app development shift IT’s responsibilities away from running and administering equipment, onto aggregating, organizing and improving myriad data types pulled in from internal and external databases, online posts and public sources. And that shift makes IT a more empowering force for business change. Think about it: The ability to connect and relate the dots across data from multiple sources finally gives you real power to improve entire business processes, departments and organizations.

I like to say that the role of IT is now “big I, little t,” with that lowercase “t” representing both technology and transactions. But that role requires a new set of priorities. They are:

  1. Think about information infrastructure first and application infrastructure second.
  2. Create great data by design. Architect for connectivity, cleanliness and security. Check out the eBook Data Integration for Dummies.
  3. Optimize for speed and ease of use – SaaS and mobile applications change often. Click here to try Informatica Cloud for free for 30 days.
  4. Make data a team sport. Get tools into your users’ hands so they can prepare and interact with it.

I never said this would be easy, and there’s no blueprint for how to go about doing it. Still, I recognize that a little guidance will be helpful. In a few weeks, Informatica’s CIO Eric Johnson and I will talk about how we at Informatica practice what we preach.


Competing on Analytics: A Follow Up to Thomas H. Davenport’s Post in HBR

If you ask a CIO today about the importance of data to their enterprises, they will likely tell you about the need to “compete on analytics” and to enable faster business decisions. At the same time, CIOs believe they “need to provide the intelligence to make better business decisions”. One CIO said it was in fact their personal goal to get the business to a new place faster, to enable them to derive new business insights, and to “get to the gold at the end of the rainbow”.

Similarly, another CIO said that Big Data and Analytics were her highest priorities. “We have so much knowledge locked up in the data, it is just huge. We need the data cleaning and analytics to pull this knowledge out of data”. At the same time the CIOs that we talked to see their organizations as “entering an era of ubiquitous computing where users want all data on any device when they need it.”

Why does faster, better data really matter to the enterprise?

So why does it matter? Thomas H. Davenport says, “at a time when firms in many industries offer similar products and use comparable technologies, business processes are among the last remaining points of differentiation.” A CIO that we have talked to concurred, saying, “today, we need to move from management by exception to management by observation”. Derek Abell amplified this idea when he said in his book Managing with Dual Strategies, “for control to be effective, data must be timely and provided at intervals that allow effective intervention”.

Davenport explains why timely data matters in this way: “analytics competitors wring every last drop of value from those processes”. Given this, “they know what products their customers want, but they also know what prices those customers will pay, how many items each will buy in a lifetime, and what triggers will make people buy more. Like other companies, they know compensation costs and turnover rates, but they can also calculate how much personnel contribute to or detract from the bottom line and how salary levels relate to individuals’ performance. Like other companies, they know when inventories are running low, but they can also predict problems with demand and supply chains, to achieve low rates of inventory and high rates of perfect orders”.

What then prevents businesses from competing on analytics?

Moving to what Davenport imagines requires more than just a visualization tool. It involves fixing what is ailing IT’s systems. One CIO suggested this process can be thought of like an athlete building the muscles they need to compete. He said that businesses really need the same thing. In his eyes, data cleaning, data security, data governance, and master data management represent the muscles needed to compete effectively on analytics. Unless you do these things, you cannot truly compete on analytics. At UMass Memorial Health, for example, they “had four independent patient registration systems supporting the operations of their health system, with each of these having its own means of identifying patients, assigning medical record numbers, and recording patient care and encounter information”. As a result, “UMass lacked an accurate, reliable, and trustworthy picture of how many unique patients were being treated by its health system”. To fix things, UMass needed to “resolve patient, provider and encounter data quality problems across 11 source systems to allow aggregation and analysis of data”. Prior to fixing its data management, “UMass lacked a top-down, comprehensive view of clinical and financial performance across its extended healthcare enterprise”.

UMass demonstrates how IT needs to fix its data management in order to improve the organization’s information intelligence and drive real and substantial business advantage. Fixing data management clearly involves delivering the good data that business users can safely use to make business decisions. It also involves ensuring that the data created is protected. CFOs that we have talked to say Target was a watershed event for them—something that they expect will receive more and more auditing attention.

Once our data is good and safe, we need to connect current data sources and new data sources. And this must not take as long as it did in the past. The delivery of data needs to happen fast enough that business problems can be recognized as they occur and be solved before they become systemic.  For this reason, users need to get access to data when and where it is needed.

With data management fixed, data intelligence is needed so that business users can make sense of things faster. Business users need to be able to search for and find data. They need self-service so they can combine existing and new unstructured data sources to test data interrelationship hypotheses. This means the ability to assemble data from different sources at different times. Simply put, this is all about data orchestration without any preconceived process. And lastly, they need the intelligence to automatically sense and respond to changes as new data is collected.

Some parting thoughts

The next question may be whether competing upon data actually pays business dividends. Alvin Toffler says, “Tiny insights can yield huge outputs”. In other words, the payoff can be huge. And those that compete on data will increasingly have the “right to win” against their competitors as they use information to wring every last drop of value from their business processes.

Related links

Solution Brief: The Intelligent Data Platform

Related Blogs

Is Big Data Destined To Become Small And Vertical?
Big Data Why?
The Business Case for Better Data Connectivity
What is big data and why should your business care?
Twitter: @MylesSuer


Malcolm Gladwell, Big Data and What’s to be Done About Too Much Information

Malcolm Gladwell wrote an article in The New Yorker magazine in January, 2007 entitled “Open Secrets.” In the article, he pointed out that a national-security expert had famously made a distinction between puzzles and mysteries.

New Yorker writer Malcolm Gladwell

Osama bin Laden’s whereabouts were, for many years, a puzzle. We couldn’t find him because we didn’t have enough information. The key to the puzzle, it was assumed, would eventually come from someone close to bin Laden, and until we could find that source, bin Laden would remain at large. In fact, that’s precisely what happened. Al-Qaida’s No. 3 leader, Khalid Sheikh Mohammed, gave authorities the nickname of one of bin Laden’s couriers, who then became the linchpin of the CIA’s efforts to locate bin Laden.

By contrast, the problem of what would happen in Iraq after the toppling of Saddam Hussein was a mystery. It wasn’t a question that had a simple, factual answer. Mysteries require judgments and the assessment of uncertainty, and the hard part is not that we have too little information but that we have too much.

This was written before “Big Data” was a household word, and it raises the very interesting question of whether organizations and corporations that are, by anyone’s standards, totally deluged with data, are facing puzzles or mysteries. Consider the amount of data that a company like Western Union deals with.

Western Union is a 160-year old company. Having built scale in the money transfer business, the company is in the process of evolving its business model by enabling the expansion of digital products, growth of web and mobile channels, and a more personalized online customer experience. Sounds good – but get this: the company processes more than 29 transactions per second on average. That’s 242 million consumer-to-consumer transactions and 459 million business payments in a year – more than 700 million transactions in all! As my six-year-old might say, that number is big enough “to go to the moon and back.” Layer on top of that the fact that the company operates in 200+ countries and territories, and conducts business in 120+ currencies. Senior Director and Head of Engineering Abhishek Banerjee has said, “The data is speaking to us. We just need to react to it.” That implies a puzzle, not a mystery – but only if data scientists are able to conduct statistical modeling and predictive analysis, systematically noting trends in sending and receiving behaviors. Check out what Banerjee and Western Union CTO Sanjay Saraf have to say about it here.

Or consider General Electric’s aggressive and pioneering move into what’s dubbed as the industrial internet. In a white paper entitled “The Case for an Industrial Big Data Platform: Laying the Groundwork for the New Industrial Age,” GE reveals some of the staggering statistics related to the industrial equipment that it manufactures and supports (services comprise 75% of GE’s bottom line):

  • A modern wind turbine contains approximately 50 sensors and control loops which collect data every 40 milliseconds.
  • A farm controller then receives more than 30 signals from each turbine at 160-millisecond intervals.
  • Every second, the farm monitoring software processes 200 raw sensor data points, with various associated properties, from each turbine.
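A quick back-of-the-envelope calculation makes those figures concrete; the farm size below is a hypothetical assumption.

```python
# Data rates implied by the white paper's turbine figures.
sensors_per_turbine = 50
sensor_period_s = 0.040        # one reading every 40 milliseconds
turbine_rate = sensors_per_turbine / sensor_period_s
print("per turbine: %d readings/sec" % turbine_rate)                  # 1250

signals_per_turbine = 30
controller_period_s = 0.160    # farm controller polls every 160 ms
controller_rate = signals_per_turbine / controller_period_s
print("controller: %.1f signals/sec per turbine" % controller_rate)   # 187.5

farm_size = 100                # hypothetical 100-turbine farm
print("farm-wide: %d readings/sec" % (turbine_rate * farm_size))      # 125000
```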

Phew! I’m no electricity operations expert, and you probably aren’t either. And most of us will get no further than simply wrapping our heads around the simple fact that GE turbines are collecting a LOT of data. But what the paper goes on to say should grab your attention in a big way: “The key to success for this wind farm lies in the ability to collect and deliver the right data, at the right velocity, and in the right quantities to a wide set of well-orchestrated analytics.” And the paper goes on to recommend that anyone involved in the Industrial Internet revolution strongly consider its talent requirements, with the suggestion that Chief Data Officers and/or Data Scientists may be the next critical hires.

Which brings us back to Malcolm Gladwell. In the aforementioned article, Gladwell goes on to pull apart the Enron debacle, and argues that it was a prime example of the perils of too much information. “If you sat through the trial of (former CEO) Jeffrey Skilling, you’d think that the Enron scandal was a puzzle. The company, the prosecution said, conducted shady side deals that no one quite understood. Senior executives withheld critical information from investors…We were not told enough—the classic puzzle premise—was the central assumption of the Enron prosecution.” But in fact, that was not true. Enron employed complicated – but perfectly legal – accounting techniques used by companies that engage in complicated financial trading. Many journalists and professors have gone back and looked at the firm’s regulatory filings, and have come to the conclusion that, while complex and difficult to identify, all of the company’s shenanigans were right there in plain view. Enron cannot be blamed for covering up the existence of its side deals. It didn’t; it disclosed them. As Gladwell summarizes:

“Puzzles are ‘transmitter-dependent’; they turn on what we are told. Mysteries are ‘receiver dependent’; they turn on the skills of the listener.”

Wind turbines, jet engines and other machinery sensors generate unprecedented amounts of data

I would argue that this extremely complex, fast-moving and seismic shift that we call Big Data will favor those who have developed the ability to attune, to listen and to make sense of the data. Winners in this new world will recognize what looks like an overwhelming and intractable mystery, break it down into small and manageable chunks, and demystify the landscape to uncover the important nuggets of truth and significance.


Informatica Cloud Summer ’14 Release Breaks Down Barriers with Unified Data Integration and Application Integration for Real Time and Bulk Patterns

This past week, Informatica Cloud marked an important milestone with the Summer 2014 release of the Informatica Cloud platform. This was the 20th Cloud release, and I am extremely proud of what our team has accomplished.

“SDL’s vision is to help our customers use data insights to create meaningful experiences, regardless of where or how the engagement occurs. It’s multilingual, multichannel and on a global scale. Being able to deliver the right information at the right time to the right customer with Informatica Cloud Summer 2014 is critical to our business and will continue to set us apart from our competition.”

– Paul Harris, Global Business Applications Director, SDL plc

When I joined Informatica Cloud, I knew that it had the broadest cloud integration portfolio in the marketplace: leading data integration and analytic capabilities for bulk integration, comprehensive cloud master data management and test data management, and over a hundred connectors for cloud apps, enterprise systems and legacy data sources – all delivered in a self-service design with point-and-click wizards for citizen integrators, without the need for complex and costly manual custom coding.

But, I also learned that our broad portfolio belies another structural advantage: because of Informatica Cloud’s unique, unified platform architecture, it has the ability to surface application (or real time) integration capabilities alongside its data integration capabilities with shared metadata across real time and batch workflows.

With the Summer 2014 release, we’ve brought our application integration capabilities to the forefront. We now provide the most complete cloud app integration capability in the marketplace. With a design environment that’s meant not just for developers but also for line-of-business IT, app admins can now build real time process workflows that cut across on-premise and cloud and include built-in human workflows. And with the capability to translate these process workflows instantly into mobile apps for iPhone and Android devices, we’re not just setting ourselves apart but also giving customers the unique capabilities they need for their increasingly mobile employees.

Informatica Cloud Summer Release Webinar Replay

“Schneider’s strategic initiative to improve front-office performance relied on recording and measuring sales person engagement in real time on any mobile device or desktop. The enhanced real time cloud application integration features of Informatica Cloud Summer 2014 makes it all possible and was key to the success of a highly visible and transformative initiative.”

– Mark Nardella, Global Sales Process Director, Schneider Electric SE

With this release, we’re also giving customers the ability to create workflows around data sharing that mix and match batch and real time integration patterns. This is really important. Unlike the past, where you had to choose between batch and real time, in today’s world of on-premise, cloud-based, transactional and social data, you’re more than ever having to deal with both real time interactions and the processing of large volumes of data. For example, let’s consider a typical scenario these days at high-end retail stores. Using a clienteling iPad app, the sales rep looks up bulk purchase history and inventory availability data in SAP, confirms availability and delivery date, and then processes the customer’s order via real time integration with NetSuite. And if you ask any customer, having a single workflow to unify all of that for instant and actionable insights is a huge advantage.
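To make the pattern concrete, here is a hypothetical sketch of a single workflow that mixes a scheduled bulk step with a per-customer real time step; every function and identifier below is a stand-in for illustration, not an actual Informatica, SAP, or NetSuite API.

```python
# A toy workflow mixing a batch refresh with a real-time order call.
import datetime

INVENTORY_CACHE = {}  # refreshed in bulk, read in real time

def batch_refresh_inventory():
    # Bulk pattern: a scheduled job pulls a full inventory snapshot
    # (here a hard-coded stand-in for an SAP extract).
    snapshot = {"SKU-100": 12, "SKU-200": 0}
    INVENTORY_CACHE.clear()
    INVENTORY_CACHE.update(snapshot)

def place_order(customer_id, sku, qty):
    # Real-time pattern: called per customer from the clienteling app.
    if INVENTORY_CACHE.get(sku, 0) < qty:
        return {"status": "backordered", "sku": sku}
    INVENTORY_CACHE[sku] -= qty
    order_id = "NS-" + datetime.datetime.now().strftime("%H%M%S")  # fake order id
    return {"status": "confirmed", "order": order_id}

batch_refresh_inventory()                  # nightly bulk step
print(place_order("C-42", "SKU-100", 2))   # interactive real time step
```

The point is simply that the scheduled refresh and the per-customer call can live in one workflow rather than in two disconnected tools.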

“Our industry demands absolute efficiency, speed and trust when dealing with financial information, and the new cloud application integration feature in the latest release of Informatica Cloud will help us service our customers more effectively by delivering the data they require in a timely fashion. Keeping call-times to a minimum and improving customer satisfaction in real time.”

– Kimberly Jansen, Director CRM, Misys PLC

We’ve also included some exciting new Vibe Integration packages or VIPs. VIPs deliver pre-built business process mappings between front-office and back-office applications. The Summer 2014 release includes new bidirectional VIPs for Siebel to Salesforce and SAP to Salesforce that make it easier for customers to connect their Salesforce with these mission-critical business applications.

And last but not least, the release includes a critical upgrade to our API Framework that provides the Informatica Cloud iPaaS end-to-end support for connectivity to any company’s internal or external APIs. With the newly available API creation, definition and consumption patterns, developers or citizen integrators can now easily expose integrations as APIs, and users can consume them via integration workflows or apps, without the need for any additional custom code.
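As a generic illustration of the expose-an-integration-as-an-API idea (not the actual Informatica API Framework), here is a minimal sketch that wraps a hypothetical integration task in a small REST endpoint.

```python
# A tiny REST facade over a stand-in integration task, using Flask.
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_integration(name, params):
    # Stand-in for invoking a prebuilt integration workflow by name.
    return {"task": name, "params": params, "status": "completed"}

@app.route("/api/v1/integrations/<name>", methods=["POST"])
def invoke(name):
    result = run_integration(name, request.get_json(silent=True) or {})
    return jsonify(result)

if __name__ == "__main__":
    app.run(port=8080)
```

A consuming workflow or app would then simply POST its parameters to the endpoint instead of embedding custom integration code.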

The features and capabilities released this summer are available to all existing Informatica Cloud customers, and everyone else through our free 30-day trial offer.


Driving Third Wave Businesses: Ensuring Your Business Has The Right To Win

As adjunct university faculty, I get to talk to students about how business strategy increasingly depends upon understanding how to leverage information. To make the discussion more concrete, I share with students the work of Alvin Toffler. In The Third Wave, Toffler asserts that we live in a world where competition will increasingly turn on the currency and usability of information.

In a recent interview, Toffler said that “given the acceleration of change; companies, individuals, and governments base many of their daily decisions on obsoledge—knowledge whose shelf life has expired.” He continues by stating that “companies everywhere are trying to put a price on certain forms of intellectual property. But if…knowledge is at the core of the money economy, then we need to understand knowledge much better than we do now. And tiny insights can yield huge outputs”.

Driving better information management in the information age

To me, this leads to three salient conclusions for information-age businesses:

  1. Information needs to drive further down organizations because top decision makers do not have the background to respond at the pace of change.
  2. Information needs to be available faster, which means that we need to reduce the processing time for structured and unstructured information sources.
  3. Information needs to be available when the organization is ready for it. For multinational enterprises this means “Always On” 24/7 across multiple time zones on any device.

Effective managers today are effective managers of people and information

Effective managers today are effective managers of information. Because processing may take too much time, Toffler’s remarks suggest to me we need to consider human information—the ideas and communications we share every day—within the mix of getting access to the right information when it is needed and where it is needed. Now more than ever is the time for enterprises to ensure their decision makers have the timely information to make better business decisions when they are relevant. This means that unstructured data, a non-trivial majority of business information, needs to be made available to business users and related to existing structured sources of data.

Derek Abell says that “for (management) control to be effective, data must be timely and provided at intervals that allow effective intervention”. Today this is a problem for most information-age businesses. As I see it, information optimization is the basis of powering the enterprise through “Third Wave” business competition. Organizations that have the “right to win” will have as a core capability best-in-class access to current information for decision makers.

Putting in place a winning information management strategy

If you talk to CIOs today, they will tell you that they are currently facing four major information-age challenges.

  • Mobility—Enabling their users to view data anytime, anyplace, and on any device
  • Information Trust—Making data dependable enough for business decisions as well as governing data across all business systems.
  • Competing on Analytics—Getting information to business users fast enough to avoid Toffler’s Obsoledge.
  • New and Big Data Sources—Connecting existing data to new value added sources of data.

Some information age

Lots of things, however, get in the way of delivering on the promises of the Information Age. Our current data architecture is siloed, fragile, and built upon layer after layer of spaghetti-code integrations. Think about what is involved just to cobble together data on a company’s supply chain. A morass of structured data systems has vendor and transaction records locked up in application databases and data warehouses all over the extended enterprise. So it is not surprising that enterprises struggle to put together current, relevant data to run their businesses. Functions like finance depend largely upon manual extracts being massaged and integrated in spreadsheets because of concern over the quality of data being provided by financial systems. Some information age!

How do we connect to new sources of data?

At the same time, many are trying today to extend the information architecture to add social media data, mobile location data, and even machine data. Much of this data is not put together in the same way as data in an application database or data warehouse. However, being able to relate this data to existing data sources can yield significant benefits. Think about the potential benefit of being able to relate social interactions and mobile location data to sales data or to relate machine data to compliance data.

A big problem is that many of these new data types potentially have even more data quality gaps than historical structured data systems, so the signal-to-noise ratio of this data can be very low. But this data can be invaluable to business decision making. For this reason, it needs to be cleaned up and related to older data sources. Finally, it needs to be provided to business users in whatever manner they want to consume it.
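As a small illustration of relating a new, noisy source to an older one, the sketch below pairs made-up mobile check-ins with sales records after first dropping the check-ins that cannot be matched to anyone.

```python
# Relating a noisy new source (mobile check-ins) to an existing one
# (sales records); all data here is invented for illustration.
checkins = [
    {"customer": "C-42", "store": "NYC-01", "ts": "2014-09-03T10:05"},
    {"customer": None,   "store": "NYC-01", "ts": "2014-09-03T10:07"},  # noise
]
sales = [
    {"customer": "C-42", "store": "NYC-01", "ts": "2014-09-03T10:21", "total": 118.50},
]

# Cleanse: drop records that cannot be related to anything.
usable = [c for c in checkins if c["customer"] is not None]

# Relate: pair each sale with an earlier check-in by the same customer/store.
for sale in sales:
    for visit in usable:
        same_place = ((visit["customer"], visit["store"]) ==
                      (sale["customer"], sale["store"]))
        if same_place and visit["ts"] <= sale["ts"]:
            print("customer %s browsed at %s and spent $%.2f at %s"
                  % (sale["customer"], visit["ts"], sale["total"], sale["ts"]))
```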

How then do we fix the Information Age?

Enabling the kind of Information Age that Toffler imagined requires two things: enterprises must fix their data management, and they must enable the information intelligence needed to drive real business competitive advantage. Fixing data management involves delivering good data that business users can safely make decisions from. It also involves ensuring that data, once created, is protected. CFOs that we have talked to say Target was a watershed event for them—something that they expect will receive more and more auditing attention.

We need at the same time to build the connection between old data sources and new data sources. And connecting data must not take as long as it did in the past. Delivery needs to happen faster so business problems can be recognized and solved more quickly.  Users need to get access to data when and where they need it.

With data management fixed, data intelligence needs to provide business users the ability to make sense of the things they find in the data. Business users need as well to be able to search for and find data. They also need self-service so they can combine existing and new unstructured data sources to test data interrelationship hypotheses. This means the ability to assemble data from different sources at different times. Simply put, this is about data orchestration without any preconceived process. And lastly, business users need the intelligence to automatically sense and respond to changes as new data is collected.

Tiny insights can yield huge outputs

Obviously, there is a cost to solving our information age issues, but it is important to remember what Toffler says: “Tiny insights can yield huge outputs”. In other words, the payoff is huge for shaking off the shackles of our early information-age business architecture. And those that do this will increasingly have the “right to win” against their competitors as they use information to wring every last drop of value from their business processes.

Related links
Solution Brief: The Intelligent Data Platform

The Data-Driven CMO: A Q&A with Glenn Gow (CEO of Crimson Marketing)

Q&A with Crimson Marketing

I recently had the opportunity to have a very interesting discussion with Glenn Gow, the CEO of Crimson Marketing.  I was impressed by what an interesting and smart guy he is, and by the tremendous insight he has into the marketing discipline.  He consults with over 150 CMOs every year, and has a pretty solid understanding of the pains they are facing, the opportunities in front of them, and the approaches that the best-of-the-best are taking that are leading them towards new levels of success.

I asked Glenn if he would be willing to do a Q&A in order to share some of his insight.  I hope you find his perspective as interesting as I did!

______________________________________________

Q: What do you believe is the single biggest advantage that marketers have today?

A: Being able to use data in marketing is absolutely your single biggest competitive advantage as a marketer.  And therefore your biggest challenge is capturing, leveraging and rationalizing that data.  The marketers we speak with tend to fall into two buckets.

  1. Those who understand that the way they manage data is critical to their marketing success.  These marketers use data to inform their decisions, and then rely on it to measure their effectiveness.
  2. Those who haven’t yet discovered that data is the key to their success. Often these people start with systems in mind – marketing automation, CRM, etc.  But after implementing and beginning to use these systems, they almost always come to the realization that they have a data problem.

______________________________________________

Q:  How has this world of unprecedented data sources and volumes changed the marketing discipline?

A:  In short… dramatically.  The shift has really happened in the last two years. The big impetus for this change has really been the availability of data.  You’ve probably heard this figure, but Google’s Eric Schmidt likes to say that every two days now, we create as much information as we did from the dawn of civilization until 2003.

We believe this is a massive opportunity for marketers.  The question is, how do we leverage this data?  How do we pull out the golden nuggets that will help us do our jobs better?  Marketers now have access to information they’ve never had access to or even contemplated before.  This gives them the ability to become more effective marketers. And by the way… they have to!  Customers expect them to!

For example, ad re-targeting.  Customers expect to be shown ads that are relevant to them, and if marketers don’t successfully do this, they can actually damage their brand.

In addition, competitors are taking full advantage of data, and are getting better every day at winning the hearts and minds of their customers – so marketers need to act before their competitors do.

Marketers have a tremendous opportunity – rich data is available, and the technology to harness it is here now, so they can win a war they never could before.

______________________________________________

Q:  Where are the barriers they are up against in harnessing this data?

A:  I’d say that barriers can really be broken down into 4 main buckets: existing architecture, skill sets, relationships, and governance.

  • Existing Architecture: The way that data has historically been collected and stored doesn’t have the CMO’s needs in mind.  The CMO has an abundance of data theoretically at their fingertips, but they cannot do what they want with it.  The CMO needs to insist on, and work together with the CIO to build, an overarching data strategy that meets their needs – both today and tomorrow – because the marketing profession and tool sets are rapidly changing.  That means the CMO and their team need to step into a conversation they’ve never had before with the CIO and his/her team.  And it’s not about systems integration; it’s about data integration.
  • Existing Skill Sets:  The average marketer today is a right-brained individual.  They entered the profession because they are naturally gifted at branding, communications, and outbound perspectives.  And that requirement doesn’t go away – it’s still important.  But today’s marketer now needs to grow their left-brained skills, so they can take advantage of inbound information, marketing technologies, data, etc.  It’s hard to ask a right-brained person to suddenly be effective at managing this data.  The CMO needs to fill this skillset gap primarily by bringing in people that understand it, but they cannot ignore it themselves.  The CMO needs to understand how to manage a team of data scientists and operations people to dig through and analyze this data.  Some CMOs have actually learned to love data analysis themselves (in fact, your CMO at Informatica, Marge Breya, is one of them).
  • Existing Relationships:  In a data-driven marketing world, relationships with the CIO become paramount.  They have historically determined what data is collected, where it is stored, what it is connected to, and how it is managed.  Today’s CMO isn’t just going to the CIO with a simple task, as in asking them to build a new dashboard.  They have to collectively work together to build a data strategy that will work for the organization as a whole.  And marketing is the “new kid on the block” in this discussion – the CIO has been working with finance, manufacturing, etc. for years, so it takes some time (and great data points!) to build that kind of cohesive relationship.  But most CIOs understand that it’s important, if for no other reason than that they see budgets increasingly shifting to marketing and the rest of the lines of business.
  • Governance:  Who is ultimately responsible for the data that lives within an organization?  It’s not an easy question to answer.  And since marketing is a relatively new entrant into the data discussion, there are often a lot of questions left to answer. If marketing wants access to the customer data, what are we going to let them do with it? Read it?  Append to it?  How quickly does this happen? Who needs to author or approve changes to a data flow?  Who manages opt-ins/opt-outs and regulatory blacklists?  And how does that impact our responsibility as an organization?  This is a new set of conversations for the CMO – but they’re absolutely critical.

______________________________________________

Q:  Are the CMOs you speak with concerned with measuring marketing success?

A:  Absolutely.  CMOs are feeling tremendous pressure from the CEO to quantify their results.  There was a recent Duke University study of CMOs that asked if they were feeling pressure from the CEO or board to justify what they’re doing.  64% of the respondents said that they do feel this pressure, and 63% say this pressure is increasing.

CMOs cannot ignore this.  They need to have access to the right data that they can trust to track the effectiveness of their organizations.  They need to quantitatively demonstrate the impact that their activities have had on corporate revenue – not just ROI or Marketing Qualified Leads.  They need to track data points all the way through the sales cycle to close and revenue, and to show their actual impact on what the CEO really cares about.

______________________________________________

Q:  Do you think marketers who undertake marketing automation products without a solid handle on their data first are getting solid results?

A:  That is a tricky one.  Ideally, yes, they’d have their data in great shape before undertaking a marketing automation process.  The vast majority of companies who have implemented the various marketing technology tools have encountered dramatic data quality issues, often coming to light during the process of implementing their systems. So data quality and data integration is the ideal first step.

But the truth is, solving a company’s data problem isn’t a simple, straightforward challenge.  It takes time and it’s not always obvious how to solve the problem.  Marketers need to be part of this conversation.  They need to drive how they’re going to be managing data moving forward.  And they need to involve people who understand data well, whether they be internal (typically in IT) or external (consulting companies like Crimson, and technology providers like Informatica).

So the reality for a CMO, is that it has to be a parallel path.  CMOs need to get involved in ensuring that data is managed in a way they can use effectively as a marketer, but in the meantime, they cannot stop doing their day-to-day job.  So, sure, they may not be getting the most out of their investment in marketing automation, but it’s the beginning of a process that will see tremendous returns over the long term.

______________________________________________

Q:  Is anybody really getting it “right” yet?

A:  This is the best part… yes!  We are starting to see more and more forward-thinking organizations really harnessing their data for competitive advantage, and using technology in very smart ways to tie it all together and make sense of it.  In fact, we are in the process of writing a book entitled “Moneyball for Marketing” that features eleven different companies whose marketing strategies and execution plans we feel are leading their industries.

______________________________________________

So readers, what do you think?  Who do you think is getting it “right” by leveraging their data with smart technology and truly getting meaningful and impactful results?
