Category Archives: Business Impact / Benefits

Wearables Hackfest: What to do with that Data?

I had the opportunity to participate in the Boulder edition of the “Where are your Wearables” Hackfest last week hosted by Quick Left.


With over 100 people showing up and lots of first-time hackers, we saw eight teams form with a mix of talents. Being a non-coder myself, I simply picked a group that was not too big and offered my product skills in talking through concepts. More on what we built in a moment.

First, a little bit about the hackfest: we were provided access to the Fitbit API and the Under Armour API, and had a SparkFun LilyPad kit, with the goal of building something in three hours.

Observations

Hackfests almost always surface integration issues. Unless someone on the team already knows a lot about an API, the first hurdle is simply getting set up.

  • Connecting to data. Most teams used only the Fitbit API, though a couple brought another device and used it as a data source. Most teams spent their time just getting API access set up, getting OAuth working, and then dumping their data into a client for the app (if they got that far) – a minimal sketch of that step follows this list.
  • Security. OAuth was a requirement for the Fitbit API, which is pretty standard these days. The team I worked on had some issues that slowed us down in getting this to work correctly.
  • Clean data & real-time data. Given that most teams had a basic scenario, we were all working with only a single data set, but even so, getting the data to the client app in real time added complexity to the solution. In the real world, for wearables to break through the basic health examples we see today, they are going to need to blend multiple data sets to provide value to the end user.
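To make the "connecting to data" hurdle concrete, here is a minimal sketch of the kind of call most teams were trying to get working. It assumes you have already completed the OAuth flow and hold an access token; the endpoint and response fields follow the public Fitbit Web API documentation, but treat the details as illustrative rather than production-ready.

    import requests

    ACCESS_TOKEN = "YOUR_OAUTH_ACCESS_TOKEN"  # the token the OAuth flow eventually yields

    def daily_activity_summary(date_str):
        """Fetch one day's activity summary (steps, calories, etc.) from the Fitbit Web API."""
        url = f"https://api.fitbit.com/1/user/-/activities/date/{date_str}.json"
        resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
        resp.raise_for_status()
        return resp.json()

    summary = daily_activity_summary("2015-02-20")
    print(summary["summary"]["steps"], "steps,",
          summary["summary"]["caloriesOut"], "calories out")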

A few thoughts on the solutions that were built

Most of the teams just tried to get one integration working and then add a secondary calculation or evaluation that would provide value to the user. With only three hours, the KISS principle was generally followed by the best examples of the night.

In no particular order, these were some of the solutions that stood out to me.

WaterGoals: Wearable integrated: Fitbit. This seemed like a very practical solution. The idea was that a lot of people cannot read the display screen on a typical watch while they are exercising, so why not integrate the data and add visuals that convey things like heart rate and water consumption? For example, the data could be integrated into the clothing they are wearing, and either a single visual element or the entire garment could change color based on the person being below or above their heart rate goal range. Another example was adding a touch pad that would log a quantity of water consumed – say one cup – when the person pushed it while exercising, because it's not easy to fumble with your wearable mid-workout. A sketch of that color-mapping logic follows.
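The color-changing garment boils down to a simple mapping from heart rate to a zone. The zone boundaries, colors and the one-cup water tap below are hypothetical values of my own, not the team's, just a sketch of the logic the concept implies.

    def garment_color(heart_rate, goal_low=120, goal_high=150):
        """Map the current heart rate to a display color relative to the wearer's goal range."""
        if heart_rate < goal_low:
            return "blue"    # below the target zone: pick up the pace
        if heart_rate > goal_high:
            return "red"     # above the target zone: ease off
        return "green"       # in the zone

    cups_of_water = 0

    def log_water_tap():
        """One press of the touch pad logs one cup of water."""
        global cups_of_water
        cups_of_water += 1
        return cups_of_water

    print(garment_color(162))   # -> "red"
    print(log_water_tap())      # -> 1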

Fitbeer: Wearable integrated: Fitbit. I worked on this team; the idea was to provide an easy way for someone to track consumption of any liquid, though we used beer for the example. By tracking an activity type on a Fitbit and identifying arm movement, the goal was to count the number of times a person picked up their glass and to track, in real time, the calories being burned by the activity against the calories being consumed. In addition, we planned a Twitter integration so someone could share their results as a social component. The back-of-the-napkin math looks something like the sketch below.
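The sip-per-glass and calorie figures here are assumptions for illustration, not what we actually built in three hours; the point is just how little logic the core tally needs once the arm-lift events are flowing.

    KCAL_PER_GLASS = 150   # assumed calories in roughly one 12 oz beer
    SIPS_PER_GLASS = 10    # assumed arm lifts per glass

    def fitbeer_tally(arm_lift_events, kcal_burned):
        """Compare calories consumed (inferred from arm-lift events) with calories burned."""
        glasses = arm_lift_events / SIPS_PER_GLASS
        kcal_consumed = glasses * KCAL_PER_GLASS
        return {"glasses": round(glasses, 1),
                "kcal_consumed": round(kcal_consumed),
                "kcal_burned": kcal_burned,
                "net": round(kcal_consumed - kcal_burned)}

    print(fitbeer_tally(arm_lift_events=23, kcal_burned=90))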

Where’s the damn remote: Wearable integrated: Myo armband. This team used a Myo to integrate with an Apple TV so they could select shows, movies and music with hand gestures. This was interesting since they had to define the hand gestures and then build the integration to map them to the desired actions on the Apple TV. This was the most complete demo of the night in terms of actually working.

My main takeaway from the event was that people are still searching for ways wearables can be dead simple for the user and still provide lots of value. Most of the generation-1 solutions provide some type of health measurement and now also provide access to the Internet (e.g., Google Glass), but finding ways to combine multiple data sets into a life-changing solution is what will tell us we are starting to see generation-2 solutions. And for enterprise IT, wearables still seem a long way off; they remain very much a consumer-oriented solution today, with some things to work out before we see anything but very early experimentation in the enterprise.

Posted in Big Data, Business Impact / Benefits | Tagged | Leave a comment

Managing a Vast Amount of Data Successfully

I have two kids. In school. They generate a remarkable amount of paper. From math worksheets, permission slips, book reports (now called reading responses) to newsletters from the school. That’s a lot of paper. All of it is presented in different forms with different results – the math worksheets tell me how my child is doing in math, the permission slips tell me when my kids will be leaving school property and the book reports tell me what kind of books my child is interested in reading. I need to put the math worksheet information into a storage space so I can figure out how to prop up my kid if needed on the basic geometry constructs. The dates that permission slips are covering need to go into the calendar. The book reports can be used at the library to choose the next book.

We are facing a similar problem (albeit on a MUCH larger scale) in the insurance market. We are getting data from clinicians. Many of you are developing and deploying mobile applications to help patients manage their care, locate providers and improve their health. You may capture licensing data to help pharmaceutical companies identify patients for inclusion in clinical trials. You have advanced analytics systems for fraud detection and to check the accuracy and consistency of claims. Possibly you are at the point of near real-time claim authorization.

The amount of data generated in our world is expected to increase significantly in the coming years. There are an estimated 500 petabytes of data in the healthcare realm, which is predicted to grow by a factor of 50 to 25,000 petabytes by 2020. Healthcare payers already store and analyze some of this data. However, in order to capture, integrate and interrogate large information sets, the scope of payer information will have to increase significantly to include provider data, social data, government data, pharmaceutical and medical product manufacturers’ data, and information aggregator data.

Right now, you probably depend on a traditional data warehouse model and structured data analytics to access some of your data. This has worked adequately up to now, but with the amount of data that will be generated in the future, you need the processing capability to load and query multi-terabyte datasets in a timely fashion. You also need the ability to manage both semi-structured and unstructured data. A quick illustration of what "semi-structured" looks like in practice is sketched below.
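As a toy illustration (the claim records below are invented): the same claim feed can carry nested and optional fields that a fixed warehouse schema handles poorly, yet flattening them for analysis is straightforward once your tooling expects that shape.

    import pandas as pd

    claims = [
        {"claim_id": "C-1001", "member": {"id": "M-17", "plan": "PPO"},
         "diagnosis_codes": ["E11.9"], "amount": 185.50},
        {"claim_id": "C-1002", "member": {"id": "M-43", "plan": "HMO"},
         "diagnosis_codes": ["I10", "E78.5"], "amount": 92.00,
         "pharmacy": {"ndc": "00071-0155-23"}},   # a nested field only some claims carry
    ]

    flat = pd.json_normalize(claims)              # nested fields become member.id, pharmacy.ndc, ...
    print(flat[["claim_id", "member.id", "member.plan", "amount"]])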

Fortunately, a set of emerging technologies (called “Big Data”) may provide the technical foundation of a solution. Big Data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage and process within a tolerable amount of time. While some existing technology may prove inadequate to future tasks, many of the information management methods of the past will prove to be as valuable as ever. Assembling successful Big Data solutions will require a fusion of new technology and old-school disciplines.

Which of these technologies and disciplines do you have? Which can integrate with both on-premises AND cloud-based solutions? And on which does your organization have knowledgeable resources who can use the capabilities to take advantage of Big Data?

Posted in 5 Sales Plays, Big Data, Business Impact / Benefits, Data Integration, Data Warehousing, Healthcare, Master Data Management | Tagged , , , , , | Leave a comment

How I Learned to Stop Worrying and Love the Data Mania – A Marketer’s Tale


Data Mania Event for SaaS Leaders

I have always loved making connections: between people, between a product and its message, between partner companies and their messages. Coming from a creative agency background where I worked with our clients, created messaging, found images, and wrote copy all day, what I did not love was cold hard data. In fact, I’m embarrassed to admit I rarely thought about it. I handed off my creative work and let the client worry about the boring details. As long as they kept coming back, all was well.

Enter 2008. I decided to go in-house for an IaaS provider with a laser focus on SaaS companies. In many ways it was an easy transition, with one glaring difference – METRICS. I used every trick in my book to escape tracking and reporting. I was “too busy” with “more important” things. Needless to say, this did not go over well. But my background, along with our incompatible (as I saw it at the time) systems, had me up nights worrying about the reports I should be doing. In truth, I was too busy to spend an extra several hours pulling information from three systems, sifting through it and manually mashing it together in a spreadsheet to get the report I needed. And by the end I always had a huge headache and wasn’t even sure my information was correct. But none of that got me out of doing the work I hated.

Then came the first time I was able to prove a program’s worth; there was a spark of excitement – an awakening to the power of data. For the next several years, through both start-up and enterprise environments, I had a love/hate relationship with data. No company I worked for had integrated SaaS/software systems, and reporting took hours of manual work for me and my teams. The desire was there, even the occasional win – but it was laden with bitter feelings, from the pain of wasted time and uncertain results.

Everything changed last year when I joined Informatica. For the first time, my marketing automation was integrated with my CRM, which was integrated with my… you know the rest. And reporting? Even that was now easy. For a company that lives and breathes data integration, obviously this makes sense. As a person who had never experienced this before, I had no idea what a relief it would be until I lived it.

Now imagine unlocking this ease of use not only for your employees (very important), but also for your customers (maybe even more important). I’d like to invite you to Informatica’s first SaaS ecosystem event where data-driven executives from Salesforce.com, AWS, Tableau, Marketo, AppDynamics, D&B, Adobe, NewRelic, and more will share their stories around data and the difference it’s made in their competitive differentiation.

Data Mania is a private event for SaaS leaders, March 4, in San Francisco. Right now, it’s the stealth version of Dreamforce or Oracle OpenWorld. And like any A-list after party, it’s drawing a who’s-who of SaaS & data industry insiders. It is the event to attend if you are a product management, engineering, professional services or customer success executive at a SaaS company and want to know the data story behind some of the most successful companies in your space.

Planned sessions and panels include something for everyone.

For customer success management, we offer the chance to learn firsthand how native connectors quickly onboard new customers, improve business processes and establish connectivity with other best-of-breed applications.

Engineers and developers – and anyone involved with R&D – will hear how their peers have figured out a way to refocus their attention on developing new products and enhanced features while still providing the data integration required for mass adoption.

And, finally, for product management, we offer freedom — to consider all the potential opportunities and applications that open up when you quit worrying about how “to make the data work and scale” and instead focus on “all the ways data can make your product better” and provide your customers with greater insights and value.

Leading up to Data Mania, we’re also holding Connect-a-thon, a hackathon-like event to get you connected to hundreds of your customers’ cloud and on-premises apps. Connect-a-thon will give your dev team direct access to Informatica Cloud R&D resources – at no cost – to help them develop connectors and custom mappings to make these connections. And if your company is under $5M in annual bookings, and you choose to embed Informatica Cloud, we have a very special offer* for you (think free software and services). Then come to the show for advice on the next steps from your peers and data-driven leaders.

In the end, if you think Salesforce, Adobe, Amazon Web Services, Tableau, Qlik, Dun & Bradstreet and Informatica have something to say about connection and data — and the role they play helping to create the customer-driven enterprise – then you want to be at Data Mania to hear it.

I’m proud of the event we’ve put together and I know you won’t be disappointed. Conceiving and producing Data Mania with a small team here has been my chance to come full circle back to my love of making connections in the SaaS community, using my creative background AND working with the data and metrics I’ve learned to love. I’m counting down the days to the event on March 4th, and I hope you’ll join me. I’ll be the Data Maniac with the biggest smile.

*Offer applies to the first 25 participants

Posted in B2B, Business Impact / Benefits, Data Services, SaaS | Tagged , , , | Leave a comment

Big Data Is Neither – Part II

You Say Big Dayta, I Say Big Dahta

Some say Big Data is a great challenge while others say Big Data creates new opportunities. Where do you stand? For most companies concerned with their Big Data challenges, it shouldn’t be so difficult – at least on paper. Computing costs (both hardware and software) have shrunk dramatically. Databases and storage techniques have become more sophisticated and scale massively, and companies such as Informatica have made connecting and integrating all the “big” and disparate data sources much easier, helping companies achieve a sort of “big data synchronicity.”

In the process of creating solutions to Big Data problems, humans (and the supra-species known as IT Sapiens) have a tendency to use theories based on linear thinking and the scientific method. There is data as our systems know it and data as our systems don’t. The reality, in my opinion, is that “Really Big Data” problems now and in the future will have complex correlations and unintuitive relationships that require mathematical disciplines, data models and algorithms that haven’t even been discovered or invented yet – and that, when they are, will make current database science look positively primordial.

At some point in the future, machines will be able to predict, based on big and perhaps as-yet-unknown data types, when someone is having a good or bad day – or, more importantly, whether a person may behave in a good or bad way. Many people do this now when they take a glance at someone across a room and infer how that person is feeling or what they will do next. They see eyes that are shiny or dull, crinkles around eyes or the sides of a mouth, hear the “tone” in a voice, and then their neurons put it all together: this is a person who is having a bad day and needs a hug. Quickly. No one knows exactly how the human brain does this, but it does what it does, we go with it, and we are usually right.


And some day, Big Data will be able to derive this; it will be an evolution point, and it will also be a big business opportunity. Through bigger and better data ingestion and integration techniques and more sophisticated math and data models, a machine will do this fast and, relatively speaking, cheaply. The vast majority won’t understand why or how it’s done, but it will work and it will be fairly accurate.

And my question to you all is this.

Do you see any other alternate scenarios regarding the future of big data? Is contextual computing an important evolution, and will big data integration be more or less of a problem in the future?

P.S. Oh yeah, one last thing to chew on concerning Big Data… If Big Data becomes big enough, does that spell the end of modeling as we know it?

Posted in Big Data, Business Impact / Benefits, Business/IT Collaboration, CMO, Complex Event Processing, Data Integration Platform, Hadoop, Intelligent Data Platform | Tagged , , , | Leave a comment

Cloud & BigData: Days of Future Past

A lot of the trends we are seeing in enterprise integration today are being driven by the adoption of cloud-based technologies, from IaaS and PaaS to SaaS. I was just reading a story about a recent survey on cloud adoption and thought that a lot of it sounds very similar to things we have seen before in enterprise IT.

Why discuss this? What can we learn? A couple of competing quotes come to mind.

Those who forget the past are bound to repeat it. – Edmund Burke

We are doomed to repeat the past no matter what. – Kurt Vonnegut

While every enterprise has to deal with its own complexities, there are several past technology adoption patterns that can be used to frame today’s issues and drive decisions about how a company designs and deploys its enterprise cloud architecture. Flexibility in design should be a key goal, in addition to satisfying current business and technical requirements. So, what are the big patterns of the last 25 years that have shaped the cloud integration discussion?

1. 90s: Migration and replacement at the solution or application level. A big trend of the 90s was replacing older home-grown systems or mainframe-based solutions with new packaged software. SAP really started a lot of this with ERP, and then we saw the rise of additional solutions for CRM, SCM, HRM, etc.

This kept a lot of people who do data integration very busy. From my point of view, this era was focused on replacing technologies, which drove a lot of focus on data migration. While there were some scenarios where data integration was used to leave solutions in place, these tended to be systems that required transactional integrity and a high level of messaging, or back-office solutions. For classic front-office solutions, enterprises in large numbers did rip-and-replace migrations to new solutions.

2. 00s: Embrace and extend existing solutions with web applications. The rise of the Internet browser, combined with a popular and powerful standard programming language in Java, shaped and drove enterprise integration in this period. In addition, due to many of the mistakes and issues IT groups had in the 90s, there was a very strong drive to extend existing investments rather than rip and replace. IT and businesses were trying to figure out how to add new solutions to what they had in place. A lot of enterprise integration, service bus, and what we now consider classic application development and deployment solutions came to market and were put in place.

3. 00s: Adoption of new web-application-based packaged solutions. A big part of this trend was driven by .Net and Java becoming more or less the de facto languages of enterprise IT. Software vendors not on these platforms were for the most part forced to re-platform or lose customers. New software vendors in many ways had an advantage because enterprises were already looking at large data migrations to upgrade the solutions they had in place. In either case, IT shops were looking to be either a .Net or a Java shop, and it caused a lot of churn.

4. 00s: First-generation cloud applications and platforms. The first adoption of cloud applications and platforms was driven by projects and specific company needs: Salesforce.com being used just for sales management before it became a platform, Amazon being used as just a runtime to develop and deploy applications before it became a full-scale platform, and an ever-growing list of examples as every vendor wants to be the cloud platform of choice. The integration needs were originally on the light side because so many enterprises treated cloud as an experiment at first, or a one-off for a specific set of users. This has changed a lot in the last 10 years as many companies repeated their on-premises data silo problems in the cloud while their usage went from one cloud app to 2, 5, 10+, etc. In fact, if you strip away where a solution happens to be deployed (on premises or cloud), the reality is that an enterprise that previously had a poorly planned on-premises architecture and solution portfolio probably has a just as poorly planned cloud architecture and portfolio. Adding them together just leads to disjointed solutions that are hard to integrate, hard to maintain and hard to evolve – in other words, the opposite of the flexibility goal.

5. 10s: Consolidation of technology and the battle of the cloud platforms. It appears we are just getting started in the next great market consolidation, and every enterprise IT group is going to need to decide its own criteria for balancing current and future investments. Today we have Salesforce, Amazon, Google, Apple, SAP and a few others. In 10 years some of these will either not exist as they do today or be marginalized. No one can say which ones for sure, which is why flexibility should be prioritized in any architecture for cloud adoption.

For me, the main takeaways from the past 25 years of technology adoption trends, for anyone who thinks about enterprise and data integration, would be the following.

a) It all starts and ends with data. Yes, applications, processes, and people are important, but it’s about the data.

b) Coarse-grained and loosely coupled approaches to integration are the most flexible (e.g., avoid point-to-point at all costs); a minimal sketch follows this list.

c) Design with the knowledge of what data is critical and what data might or should be accessible or movable.

d) Identify data and applications that might have to stay where they are no matter what (e.g., the mainframe is never dying).

e) Make sure your integration and application groups have access to, or include, someone who understands security. A lot of integration developers think they understand security; it’s usually after the fact that you find out they really do not.
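As the sketch promised in point (b): the names below (the hub class, the topic) are hypothetical, but they show the difference in shape between point-to-point integration and a coarse-grained publish/subscribe approach – the producer publishes once and never changes as consumers are added or retired.

    class DataHub:
        """A toy publish/subscribe hub standing in for an integration layer."""
        def __init__(self):
            self.subscribers = {}

        def subscribe(self, topic, handler):
            self.subscribers.setdefault(topic, []).append(handler)

        def publish(self, topic, record):
            for handler in self.subscribers.get(topic, []):
                handler(record)

    hub = DataHub()
    hub.subscribe("customer.updated", lambda r: print("CRM sync:", r))
    hub.subscribe("customer.updated", lambda r: print("Warehouse load:", r))  # added later, producer untouched

    # The producer publishes once and knows nothing about who consumes the record.
    hub.publish("customer.updated", {"id": 42, "email": "jane@example.com"})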

So, it is possible to shape your cloud adoption and architecture future by at least understanding how past technology and solution adoption has shaped the present. For me, it is important to remember that it is all about the data, and to prioritize flexibility as a technology requirement at the same level as features and functions. Good luck.

Posted in Business Impact / Benefits, Cloud Computing, Data Integration | Tagged , , , , | Leave a comment

Big Data Is Neither – Part I

I’ve been having some interesting conversations with work colleagues recently about the Big Data hubbub, and I’ve come to the conclusion that “Big Data” as hyped is neither, really. In fact, both terms are relative. “Big” 20 years ago to many may have been 1 terabyte. “Data” 20 years ago may have meant flat files or Sybase, Oracle, Informix, SQL Server or DB2 tables. Fast-forward to today: “Big” is now exabytes (millions of terabytes), and “Data” has expanded to include events, sensors, messages, RFID, telemetry, GPS, accelerometers, magnetometers, IoT/M2M and other new and evolving data classifications.

And then there’s social and search data.

Surely you would classify Google’s data as really, really big data – when I do a search and get 487,464,685 answers within a fraction of a second, I can tell they have gotten a handle on their big data speeds and feeds. However, it’s also telling that nearly all of those bazillion results are not actually relevant to what I am searching for.

My conclusion is that if you have the right algorithms, invest in and use the right hardware and software technology, and make sure to measure the pertinent data sources, harnessing big data can yield speedy and “big” results.

So what’s the rub then?

It usually boils down to having larger and more sophisticated data stores and still not understanding their structure, OR not being able to integrate the data into cohesive formats, OR there being important hidden meaning in the data that we don’t have the wherewithal to derive, see or understand à la Google. So how DO you find the timely and important information in your company’s big data (AKA the needle in the haystack)?


More to the point, how do you better ingest, integrate, parse, analyze, prepare, and cleanse your data to get the speed, but also the relevancy in a Big Data world?

Hadoop-related tools are among the current technologies of choice for solving Big Data problems, and as an Informatica customer, you can leverage these tools regardless of whether it’s Big Data or Not-So-Big Data, fast data or slow data. In fact, it astounds me that many IT professionals would go back to hand coding on Hadoop just because they don’t know that the tools to avoid it are right under their nose, installed and running in their familiar Informatica user interface (AND working with Hadoop right out of the box).

So what does your company get out of using Informatica in conjunction with Hadoop tools? Namely, better customer service and responsiveness, better operational efficiencies, more effective supply chains, better governance, service assurance, and the ability to discover previously unknown opportunities as well as stopping problems when they are an issue – not after the fact. In other words, Big Data done right can be a great advantage to many of today’s organizations.

Much more to say on this subject as I delve into the future of Big Data. For more, see Part II.

Posted in Big Data, Business Impact / Benefits, Complex Event Processing, Intelligent Data Platform | Tagged , , , , | Leave a comment

Stop Asking Your IT for a Roadmap


IT Roadmap

Knowing that business trends and needs change frequently, why is it that we plan multi-year, IT-driven roadmaps?

Understandably, IT managers have honed their skills in working with the line to predict business needs. They have learned to spend money and time wisely and to have the right infrastructure in place to meet the business’s needs – whether that is launching in a new market, implementing a new technology, or one of the many other areas where IT can help its firm find a competitive advantage.

Not so long ago, IT was so complex and unwieldy that it needed specially trained professionals to source, build, and run almost every aspect of it. When line managers had scant understanding of which technology would suit their activities best, making a plan based on long-term business goals was a good approach.

Today, we talk of IT as a utility: just like electricity, you press a button and IT turns “on.” That is not quite the case, but the extent to which IT has saturated day-to-day business life means line managers are now better placed to determine how technology should be used to achieve the company’s objectives.

In the next five years, the economic climate will change, customer preferences will shift, and new competitors will threaten the business. Innovations in technology will provide new opportunities to explore, and new leadership could send the firm in a new direction. While most organizations have long-term growth targets, their strategies constantly evolve.

This new scenario has caused those in the enterprise architecture (EA) function to ask whether long-term road mapping is still a valuable investment.

EAs admit that long-term IT-led road mapping is no longer feasible. If the business does not have a detailed and stable five-year plan, these architects argue, how can IT develop a technology roadmap to help them achieve it? At best, creating long-term roadmaps is a waste of effort, a never-ending cycle of updates and revisions.

Without a long-range vision of business technology demand, IT has started to focus purely on the supply side. These architects focus on existing systems, identifying ways to reduce redundancies or improve flexibility. However, without a clear connection to business plans, they struggle to secure funding to make their plans a reality.

These architects have turned their focus to the near term, trying to influence the small decisions made every day in their organizations. They believe they can have greater impact by serving as advisors to IT and business stakeholders, guiding them to make cost-efficient, enterprise-aligned technology decisions.

Rather than taking a top-down perspective, shaping architecture through one master plan, they work from the bottom-up, encouraging more efficient working by influencing the myriad technology decisions being made each day.

Twitter @bigdatabeat

Posted in Architects, Business Impact / Benefits, Business/IT Collaboration | Tagged , , , , | Leave a comment

Federal Migration to Cloud Computing, Hindered by Data Issues


Moving towards Cloud Computing

As reviewed by Loraine Lawson, a MeriTalk survey about cloud adoption found that “In the latest survey of 150 federal executives, nearly one in five say one-quarter of their IT services are fully or partially delivered via the cloud.”

For the most part, the shifts are more tactical in nature.  These federal managers are shifting email (50 percent), web hosting (45 percent) and servers/storage (43 percent).  Most interesting is that they’re not moving traditional business applications, custom business apps, or middleware. Why? Data, and data integration issues.

“Federal agencies are worried about what happens to data in the cloud, assuming they can get it there in the first place:

  • 58 percent of executives fret about cloud-to-legacy system integration as a barrier.
  • 57 percent are worried about migration challenges, suggesting they’re not sure the data can be moved at all.
  • 54 percent are concerned about data portability once the data is in the cloud.
  • 53 percent are worried about ‘contract lock-in.’ ”

The reality is that the government does not get much out of the movement to cloud without committing core business applications and thus core data. While e-mail, Web hosting, and some storage are good, the real cloud computing savings come from moving away from expensive hardware and software. Failing to do that, you fail to find the value and, in this case, spend more taxpayer dollars than you should.

Data issues are not just a concern in the government. Most large enterprises have the same issues as well. However, a few are able to get around these issues with good planning approaches and the right data management and data integration technology. It’s just a matter of making the initial leap, which most Federal IT executives are unwilling to do.

In working with CIOs of Federal agencies in the last few years, the larger issue is that of funding. While everyone understands that moving to cloud-based systems will save money, getting there means hiring government integrators and living with redundant systems for a time. That involves some major money. If most of the existing budget goes to existing IT operations, then the move may not be practical. Thus, there should be funds made available to work on the cloud projects with the greatest potential to reduce spending and increase efficiencies.

The shame of this situation is that the government was pretty much on the leading edge of cloud computing back in 2008 and 2009. The CIO of the US government, Vivek Kundra, promoted the use of cloud computing, and NIST drove the initial definitions of “The Cloud,” including IaaS, SaaS, and PaaS. But when it came down to making the leap, most agencies balked at the opportunity, citing issues with data.

Now that the technology has evolved even more, there is really no excuse for the government to delay migration to cloud-based platforms. The clouds are ready, and the data integration tools have cloud integration capabilities baked in. It’s time to see some more progress.

Posted in Business Impact / Benefits, Cloud, Cloud Application Integration, Cloud Data Integration, Data Integration, Data Integration Platform | Tagged , , , | 2 Comments

The Quality of the Ingredients Makes the Dish – and That Applies to Data Quality


Data Quality Leads to Other Integrated Benefits

In a previous life, I was a pastry chef in a now-defunct restaurant. One of the things I noticed while working there (and frankly while cooking at home) is that the better the ingredients, the better the final result. If we used poor quality apples in the apple tart, we ended up with a soupy, flavorless mess with a chewy crust.

The same analogy can be applied to data analytics. With poor quality data, you get poor results from your analytics projects. We all know that the companies that implement fantastic analytic solutions providing near real-time access to consumer trends are the same companies that run successful, up-to-the-minute targeted marketing campaigns. The Data Warehousing Institute estimates that data quality problems cost U.S. businesses more than $600 billion a year.

The business impact of poor data quality should not be underestimated. If not identified and corrected early on, defective data can contaminate all downstream systems and information assets, jacking up costs, jeopardizing customer relationships, and causing imprecise forecasts and poor decisions.

  • To help you quantify: Let’s say your company receives 2 million claims per month with 377 data elements per claim. Even at an error rate of 0.001, the claims data contains more than 754,000 errors per month and more than 9.04 million errors per year! If you determine that 10 percent of the data elements are critical to your business decisions and processes, you still must fix almost 1 million errors each year.
  • What is your exposure to these errors? Let’s estimate the risk at $10 per error (including staff time required to fix the error downstream after a customer discovers it, the loss of customer trust and loyalty, and erroneous payouts). Your company’s risk exposure to poor-quality claims data is roughly $10 million a year. The arithmetic is spelled out in the sketch after this list.
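Spelling out the arithmetic from the two bullets above (the claim volume, error rate and $10-per-error figures are the illustrative assumptions already stated):

    claims_per_month = 2_000_000
    elements_per_claim = 377
    error_rate = 0.001
    critical_fraction = 0.10
    cost_per_error = 10          # dollars per critical error

    errors_per_month = claims_per_month * elements_per_claim * error_rate   # 754,000
    errors_per_year = errors_per_month * 12                                 # ~9.05 million
    critical_errors = errors_per_year * critical_fraction                   # ~905,000 ("almost 1 million")
    exposure = critical_errors * cost_per_error                             # ~$9 million; roughly $10M with rounding

    print(f"{errors_per_year:,.0f} errors/year, ${exposure:,.0f} annual exposure")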

Once your company values quality data as a critical resource, it is much easier to perform high-value analytics that have an impact on your bottom line. Start with the creation of a Data Quality program. Data is a critical asset in the information economy, and the quality of a company’s data is a good predictor of its future success.

Posted in Business Impact / Benefits, Cloud Data Integration, Customer Services, Data Aggregation, Data Integration, Data Quality, Data Warehousing, Database Archiving, Healthcare, Master Data Management, Profiling, Scorecarding, Total Customer Relationship | Tagged , , , , | 2 Comments

British Cycling: A Big Data Champion?


British Cycling: A Big Data Champion?

I think I may have gone to too many conferences in 2014 in which the potential of big data was discussed.  After a while all the stories blurred into two main themes:

  1. Companies have gone bankrupt at a time when demand for their core products increased.
  2. Data from mobile phones, cars and other machines house a gold mine of value – we should all be using it.

My main takeaway from the 2014 conferences was that no amount of data can compensate for a poor strategy, or for a lack of organisational agility to adapt business processes in times of disruption. However, I still feel that as an industry our stories are stuck in the “Big Data hype” phase, while most organisations are beyond the hype and need practicalities, guidance and inspiration to turn their big data projects into successes. This is possibly due to the limited number of big data projects in production, or perhaps it is too early to measure the long-term results of existing projects. Another possibility is that the projects are delivering significant competitive advantage, so the stories will remain under wraps for the time being.

However, towards the end of 2014 I stumbled across a big data success story in an unexpected place. It did (literally) provide competitive advantage, and since it has been running for a number of years, the results are plain to see. It started with a book recommendation from a friend. ‘Faster’ by Michael Hutchinson is written as a self-propelled investigation into the difference between world champion and world-class athletes. It promised to satisfy my slightly geeky tendency to enjoy facts, numerical details and statistics. It did this – but it also really struck me as a ‘how-to’ guide for big data projects.

Mr Hutchinson’s book is an excellent read as an insight into professional cycling by a professional cyclist. It is stacked with interesting facts and well-written anecdotes, and I highly recommend reading it. Since the big data aspect was a sub-plot, I will pull out the highlights without distracting from the main story.

Here are the five steps I extracted for big data project success:

1. Have a clear vision and goal for your project

The Sydney Olympics in 2000 produced only four medals across all cycling disciplines for British cyclists. With a home Olympics set for 2012, British Cycling desperately wanted to improve on this performance. Specific targets were clearly set across all disciplines, stated as the times an athlete would need to achieve in order to win a race.

2. Determine the data required to support these goals

Unlike many big data projects, which start with a data set and then wonder what to do with it, British Cycling worked the other way around. They worked out what they needed to measure in order to establish what influenced their goal (track time) and set about gathering that information. In their case this involved gathering wind tunnel data to compare and contrast equipment, as well as physiological data from athletes and data from all cycling activities.

3. Experiment in order to establish causality

Most big data projects involve experimentation: changing the environment whilst gathering a subset of data points. The number of variables to adjust in cycling is large, but all were embraced. Data (including video) was gathered on the effects of small changes in each component: bike, clothing, and athlete (training and nutrition). A toy illustration of this kind of comparison follows.
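As a hypothetical illustration of this step (the lap times below are invented, and a two-sample t-test merely stands in for whatever analysis British Cycling actually used), the question is always the same: is the change in the measured goal bigger than the noise?

    from scipy import stats

    standard_suit = [61.42, 61.55, 61.30, 61.61, 61.48, 61.39]            # lap times in seconds
    plastic_finished_suit = [61.05, 61.18, 60.97, 61.22, 61.10, 61.01]    # same rider, new clothing

    t_stat, p_value = stats.ttest_ind(standard_suit, plastic_finished_suit)
    gain = (sum(standard_suit) / len(standard_suit)
            - sum(plastic_finished_suit) / len(plastic_finished_suit))
    print(f"mean gain: {gain:.2f} s per lap, p-value: {p_value:.4f}")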

4. Guide your employees on how to use the results of the data

Like many employees, cyclists and coaches were convinced of the ‘best way’ to achieve results based on their own personal experience. Analysis of the data in some cases showed that the perceived best way was in fact not the best way. Coaching staff trusted the data and convinced the athletes to change aspects of both training and nutrition. This was not necessarily easy to do, as it could mean fundamental changes in an athlete’s lifestyle.

5. Embrace innovation

Cycling is a very conservative sport by nature, with many of the key innovations coming from adjacent sports such as triathlon. Data, however, is not steeped in tradition and has no preconceived ideas about what equipment should look like, or what constitutes an excellent recovery drink. What made British Cycling’s big data initiatives successful is that they allowed themselves to be guided by the data and put the recommendations into practice. Plastic-finished skin suits are probably not the most obvious choice of clothing, but they proved to be the biggest advantage a cyclist could get – far more than tinkering with the bike. (In fact they provided so much advantage that they were banned shortly after the 2008 Olympics.)

The results: British Cycling won four Olympic medals in 2000, one of which was gold. In 2012 they grabbed 8 gold, 2 silver and 2 bronze medals. A quick glance at their website shows that it is not just Olympic medals they are winning – medals won across all world championship events have increased since 2000.

To me, this is one of the best big data stories, as it directly shows how to be successful using big data strategies in a completely analogue world. I think it is more insightful than the mere fact that we are producing ever-increasing volumes of data. The real value of big data is in understanding what portion of all available data will contribute to you achieving your goals, and then embracing the results of analysis to make constructive changes in daily activities.

But then again, I may just like the story because it involves geeky facts, statistics and fast bicycles.

Posted in Big Data, Business Impact / Benefits, Data Integration, Data Quality, Data Security, Data Services | Tagged , , , | Leave a comment