
How organizations can prepare for 2015 data privacy legislation

The original article can be found at scmagazine.com.

On Jan. 13 the White House announced President Barack Obama’s proposal for new data privacy legislation, the Personal Data Notification and Protection Act. Many states already have laws that require corporations and government agencies to notify consumers in the event of a breach, but that is not enough. The new proposal aims to improve cybersecurity standards nationwide with the following tactics:

Enable cyber-security information sharing between private and public sectors. 

Government agencies and corporations with a vested interest in protecting our information assets need a streamlined way to communicate and share threat information. This component of the proposed legislation offers organizations that participate in knowledge-sharing targeted liability protection, as long as they are responsible in how they share, manage and retain private data.

Modernize the tools law enforcement has to combat cybercrime.
Existing laws, such as the Computer Fraud and Abuse Act, need to be updated to incorporate the latest cybercrime classifications while giving prosecutors the ability to target insiders with privileged access to sensitive and personal data. The proposal also specifically calls for prosecuting the sale of personal data both nationally and internationally.

Standardize breach notification policies nationwide.
Many states have some sort of policy that requires notifying customers when their data has been compromised. Three leading examples include California’s SB 1386, Florida’s Information Protection Act (FIPA) and Massachusetts’ Standards for the Protection of Personal Information of Residents of the Commonwealth. New Mexico, Alabama and South Dakota have no data breach notification legislation. Standardizing and simplifying the requirement for companies to notify customers and employees when a breach occurs will ensure consistent protection no matter where you live or transact.

Invest in increasing cyber-security skill sets.
For a number of years, security professionals have reported an ever-increasing skills gap in the cybersecurity profession. In fact, in a recent Ponemon Institute report, 57 percent of respondents said a data breach incident could have been avoided if the organization had more skilled personnel with data security responsibilities. Increasingly, colleges and universities are adding cybersecurity curricula and degrees to meet the demand. In support of this need, the proposed legislation mentions that the Department of Energy will provide $25 million in educational grants to Historically Black Colleges and Universities (HBCUs) and two national labs to support a cybersecurity education consortium.

This proposal is clearly comprehensive, but it also raises the critical question: How can organizations prepare themselves for this privacy legislation?

The International Association of Privacy Professionals conducted a study of Federal Trade Commission (FTC) enforcement actions. From the report, organizations can infer the best practices implied by FTC enforcement and ensure these are covered by their security architecture, policies and practices:

  • Perform assessments to identify reasonably foreseeable risks to the security, integrity and confidentiality of personal information collected and stored on the network, online or in paper files.
  • Adopt limited-access policies to curb unnecessary security risks and minimize the number and type of network access points that an information security team must monitor for potential violations.
  • Limit employee access to (and copying of) personal information, based on each employee’s role.
  • Implement and monitor compliance with policies and procedures for rendering information unreadable or otherwise secure in the course of disposal. Securely disposed information must not be practicably readable or reconstructable.
  • Restrict third-party access to personal information based on business need, for example by restricting access based on IP address, granting temporary access privileges, or similar procedures (see the sketch below).
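
To make the role- and network-based restrictions above concrete, here is a minimal, hypothetical sketch in Python; the roles, data classes and partner network are invented for illustration and are not drawn from the FTC report:

```python
import ipaddress

# Hypothetical mapping of roles to the classes of personal data they may touch,
# illustrating "limit employee access based on role".
ROLE_PERMISSIONS = {
    "hr_admin": {"employee_pii"},
    "support_rep": {"customer_contact"},
    "analyst": set(),  # analysts see only de-identified data
}

# Hypothetical allow-list of partner networks, illustrating IP-based
# restriction of third-party access.
PARTNER_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]

def may_access(role: str, data_class: str, source_ip: str) -> bool:
    """Grant access only when both the role and the source network allow it."""
    ip = ipaddress.ip_address(source_ip)
    ip_ok = any(ip in net for net in PARTNER_NETWORKS)
    role_ok = data_class in ROLE_PERMISSIONS.get(role, set())
    return ip_ok and role_ok

print(may_access("hr_admin", "employee_pii", "203.0.113.7"))  # True
print(may_access("analyst", "employee_pii", "203.0.113.7"))   # False
```

A real deployment would enforce such checks centrally, for example in an identity and access management layer, rather than scattering them through application code.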

The Personal Data Notification and Protection Act fills a void at the national level; most states have privacy laws, with California pioneering the movement with SB 1386. However, enforcement at the state attorney general level has been uneven at best and absent at worst.

In preparing for this national legislation, organizations need to heed the policies derived from the FTC’s enforcement practices. They can also track the progress of the legislation and look to agencies such as the National Institute of Standards and Technology to issue guidance. Furthermore, organizations can encourage employees to take advantage of cybersecurity internship programs at nearby colleges and universities to avoid critical skills shortages.

With online security a clear priority for President Obama’s administration, it’s essential for organizations and consumers to understand upcoming legislation and learn the benefits and risks of sharing data. We look forward to celebrating the safeguarding of data and the enabling of trust on Data Privacy Day, held annually on January 28, and hope that these tips will make 2015 your safest year yet.


Informatica and Pivotal Delivering Great Data to Customers


As we head into Strata + Hadoop World San Jose, Pivotal has made some interesting announcements that are sure to be the talk of the show. Pivotal’s moves to open-source some of its advanced products and to form a new organization fostering Hadoop community cooperation are signs of the dynamism and momentum of the Big Data market.

Informatica applauds these initiatives by Pivotal and we hope that they will contribute to the accelerating maturity of Hadoop and its expansion beyond early adopters into mainstream industry adoption. By contributing HAWQ, GemFire and the Greenplum Database to the open source community, Pivotal creates further open options in the evolving Hadoop data infrastructure technology. We expect this to be well received by the open source community.

Informatica has long served as the industry’s neutral data connector for more than 5,500 customers and has developed a rich set of capabilities for Hadoop, so we are also excited to see efforts to reduce fragmentation in the Hadoop community.

Even before Pivotal was formed as a new company, Informatica had a long history of working with the Greenplum team to ensure that joint customers could confidently use Informatica tools to include the Greenplum Database in their enterprise data pipelines. Informatica has mature, high-performance native connectivity to load data in and out of Greenplum reliably using Informatica’s codeless, visual data pipelining tools. In 2014, Informatica expanded our Hadoop support to include Pivotal HD Hadoop, and we have joint customers using Informatica to do data profiling, transformation, parsing and cleansing with Informatica Big Data Edition running on Pivotal HD.

We expect these innovative developments driven by Pivotal in the Big Data technology landscape to help move the industry forward and contribute to Pivotal’s market progress. We look forward to continuing to support Pivotal technology and to an ever-increasing number of successful joint customers. Please reach out to us if you have any questions about how Informatica and Pivotal can help your organization put Big Data into production. We want to ensure that we can help you answer the question … Are you Big Data Ready?


Can You Find True Love Using A Wearable Device?


How do you know if you have found ‘true love’?

Biologists and psychologists tell us that when we are struck by Cupid’s arrow, our body is reacting to a set of chemicals released in the brain that evoke emotions and feelings of lust, attraction and attachment.[1] When those chemicals are released, our bodies respond. Our hearts race, blood pumps through our veins, faces flush, body temperatures rise. Some say it feels like electricity conducting all over the skin. The flood of emotions may cloud our judgment and may even cause us to make choices that others would consider unreasonable. Sound familiar?

But what causes our brains to react to one person and not another?  Are we predisposed to how certain people look or smell?  Do our genes play a role in determining an affinity toward a body type or shape?

Pheromone research has shown how sensors in our nose can detect whether or not someone’s immune system complements our own, based on the scent of urine and sweat. Meaning, if someone has a similar immune deficiency, that individual won’t smell good to us. We are more likely to prefer the smell of someone whose immune system is different. Is our genetic code programming our instincts to preselect who we should mate with so our offspring have a higher chance of surviving?

It is probably not surprising that most men are attracted to women with symmetrical faces and hourglass figures. Genetic research hints that men’s predispositions are also based on a genetic code. There is a correlation between asymmetric facial characteristics and genetic disorders, as well as between waist-to-hip ratios and fertility. Depending on your stage in life, these characteristics could be a weighting factor in how your brain responds to the smell of the perfect pheromone and how someone appears. And some argue it is all influenced by body language, voice tone and the actual words used in dialogue.[2]

Psychologists report it takes only two to four minutes to decide if you are falling in love with someone. Even if you dismiss some or accept all of the possibilities I am presenting, experiencing love is shaped by a variety and intensity of senses, interpretations and emotions combined in a short period of time. If you are a data nerd like me, the variety, volume and velocity of ‘signals’ begin to sound like a Big Data marketing pitch. This really is an application of predictive analytics using different data types, large volumes of data and real-time decision-making algorithms. But I’m actually more interested in how affective computing, wearable devices and analytics could help determine whether what you feel is actually ‘true love’ or just a bad case of indigestion.

Affective computing, according to researcher Rosalind Picard,[3] gives a computer the ability to recognize and express emotions, develop that ability and enable it to regulate and utilize emotions. When applied to wearable devices that can listen to how you talk, measure blood pressure, detect changes in heart and respiration rate and even measure electro-dermal responses, is it possible that technology could sense when your body is responding to the chemicals of love?

What about mood rings, you may ask? Mood rings, the original affective wearable device that grew popular in the 1970s, changed color based on your mood. Unfortunately, mood rings respond only to body temperature. Through data collection and research, researchers[4] have shown that physiological patterns cannot be determined by body temperature alone. To truly differentiate an emotion such as ‘true love,’ you need to collect multiple physiological signals and detect a pattern using multivariate pattern recognition algorithms. And if you only have two to four minutes, the system pretty much needs to calculate the chances of ‘true love’ in real time to prevent a life-altering mistake.
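
As a toy illustration of multivariate pattern recognition over physiological signals (all numbers below are invented, and real affective-computing models are far richer), a nearest-centroid classifier shows the basic idea of matching a new multi-signal reading against learned patterns:

```python
import numpy as np

# Toy feature vectors: [heart-rate delta, skin conductance, respiration delta, skin-temp delta].
# The labeled "attracted" vs. "baseline" readings are entirely invented.
attracted = np.array([[25, 0.8, 4, 0.6], [30, 0.9, 5, 0.7], [22, 0.7, 3, 0.5]])
baseline = np.array([[3, 0.1, 0, 0.0], [5, 0.2, 1, 0.1], [2, 0.1, 0, 0.0]])

centroids = {
    "attracted": attracted.mean(axis=0),
    "baseline": baseline.mean(axis=0),
}

def classify(sample):
    """Nearest-centroid classification: the simplest multivariate pattern matcher."""
    return min(centroids, key=lambda label: np.linalg.norm(sample - centroids[label]))

reading = np.array([27, 0.85, 4, 0.6])  # one two-to-four-minute window of sensor data
print(classify(reading))  # -> "attracted"
```

Note how no single feature decides the outcome; it is the combined distance across all signals that matters, which is exactly why a temperature-only mood ring falls short.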

The evolution of wearable technology has reached medical grade, allowing parents to detect when their children are about to have an epileptic seizure or are experiencing acute levels of stress. When tuned to love-seekers’ cues, is it possible that this same technology could send an audio or visual signal to your smartphone alerting you as to whether this person is a ‘true love’ candidate? Or glow red when you are in the proximity of someone experiencing similar physiological changes? Maybe this is the next application for matchmaking companies such as eHarmony or Match.com.

The reality is this: assuming that the data is clean and accurate, safe from violating any data privacy concerns and truly connected to your physiological signals, wearable technology that could detect the proximity of ‘true love’ is probably five years out. It is more likely to show up in a popular science fiction film than at an Apple store in the near term. But when it does, picture it: your smartphone signals the proximity of a potential candidate and the nearest flower shop, integrates facial recognition with Facebook photos and relationship status (assuming it is true), and offers an iTunes recommendation of ‘Love Is In The Air’ by John Paul Young. ‘True love’ is only two to four minutes away.

[1] http://www.bbc.co.uk/science/hottopics/love/

[2] http://www.youramazingbrain.org/lovesex/sciencelove.htm

[3] R. Picard, Affective Computing, MIT Press, 2000, pp. 227-239

[4] Cacioppo and Tassinary (1990)


Finding Love with Big Data Analytics


Getting Ready for Love by Getting Big Data Ready for Analytics

You might think this year’s Valentine’s Day is no different than any other.  But make no mistake – Valentine’s Day has never been more powered by technology or more fueled by data.

It doesn’t get a lot of press coverage, but did you know the online dating industry is now a billion dollar industry globally? As technology empowers us all to live and work in nearly any place, we can no longer rely on geographic colocation to find our friends and soul mates. So, it’s no surprise that online dating and social networking grew in popularity as the smartphone revolution happened over the last eight years. Dating and networking are no longer just about running into people at the local bar on a Friday night. People are connecting with one another on every consumer device they have available, from their computers to their tablets to their phones. This mass consumerization of devices and the online social applications we run on them have fundamentally changed how we all connect with one another in the offline world.

There’s a lot of talk about big data in the tech industry, but there isn’t a lot of understanding of how big data actually affects the real world. Online dating serves as a fantastic example. Did you know that about 1 in 6 people in the United States is single, and that about 1 in 3 of them is a member of an online dating site?[1] With tens of millions of members in the United States alone, the opportunity to meet new people online has never been more compelling. But the challenge is helping people sift through a “big data” store of millions of profiles. The solution has been predictive recommendation engines. We are all familiar with recommendation engines on e-commerce sites that suggest new items to buy. The exact same analytics are now being applied to people and their preferences to help them find new friends and companions. Big data analytics is not just some fancy technology used by retailers and Wall Street. The proof is in the math: 1 in 18 people in the United States (one-sixth times one-third) is using big data analytics today to fulfill the most human need of all, finding companionship.
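
For readers curious about the mechanics, here is a minimal, hypothetical sketch of one way such a recommendation engine might score matches; the members and preference vectors are invented, and production systems use far more signals:

```python
import numpy as np

# Invented preference vectors (e.g., weights over interests: travel, music, sports, books, food).
profiles = {
    "alex":  np.array([5, 3, 0, 1, 4]),
    "blake": np.array([4, 3, 1, 0, 5]),
    "casey": np.array([0, 1, 5, 4, 0]),
}

def cosine(u, v):
    """Cosine similarity: 1.0 means identical tastes, 0.0 means nothing in common."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def best_matches(member, k=2):
    """Rank every other member by similarity of stated preferences."""
    others = {m: cosine(profiles[member], vec) for m, vec in profiles.items() if m != member}
    return sorted(others.items(), key=lambda kv: -kv[1])[:k]

print(best_matches("alex"))  # blake should rank well above casey
```

The same scoring idea that suggests a book you might buy can suggest a person you might like; only the feature vectors change.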

However, this everyday revolution of big data analytics is not limited to online dating. US News and World Report[2] estimates that a whopping $19 billion will be spent around Valentine’s Day this year. This spending includes gifts, flowers, candy, jewelry and other forms of celebration. The online sites that sell these products are no strangers to big data either. Organizations with e-commerce sites, many of whom are Informatica customers, collect real-time weblog and clickstream information to dynamically offer the best possible customer experiences. Big data is not only helping to build relationships between consumers, but is also building them between enterprises and consumers.

In an increasingly data-driven world, you cannot connect people unless you can connect data. The billion-dollar online dating industry and the $19 billion Valentine’s Day industry would not exist if they were not fueled by the ability to quickly derive meaning from data assets and turn it into compelling analytical outcomes. Informatica is powering this data-driven world of connected data and connected people by making all types of data ready for analytics. Our leading technologies have helped customers collect data quickly, re-direct workloads easily, and perfect datasets completely. So on this Valentine’s Day, I invite you to connect with your customers and help them connect with one another by first connecting with your data. There is no better way to get ready for love than by getting big data ready for analytics!

[1] http://www.statisticbrain.com/online-dating-statistics/

[2] http://www.usnews.com/news/blogs/data-mine/2015/02/11/valentines-day-spending-to-approach-19-billion


Lovenomics: The Price of Love This Valentine’s Day, Cash In on the Hopeless Romantics


This blog post was originally featured on Business.com here: Lovenomics: The Price of Love This Valentine’s Day.

After the Blue Cross sales that dominate January, Valentine’s Day offers welcome relief to the high street. It marks the end of the Christmas sales and the first of the year’s seasonal hooks, providing retailers with an opportunity to upsell. According to the National Retail Federation’s Valentine’s Day Consumer Spending Survey, American consumers plan to spend a total of $4.8 billion on jewelry and a survey-high of nearly $2 billion on clothing this year. However, to successfully capture customers, retailers need to develop an omni-channel strategy designed to sell the right product.

Target the indecisive

The majority of Valentine’s Day shoppers will be undecided when they begin their purchasing journey. Based on this assumption, a targeted sales approach at the point of interest (POI) and point of sale (POS) will be increasingly important. Not only do retailers need to track and understand the purchasing decisions of every customer as they move between channels, but they also need a real-time view of the product lines, pricing and content that the competition is using. Once armed with this information, retailers can concentrate on delivering personalized ads or timely product placements that drive consumers to the checkout as they move across channels.


Start with search

Consumers will start their shopping journey with a search engine and will rarely scroll past the first page, so brands need to be prepared by turning Valentine’s Day product lines into searchable content. To capture a greater share of online traffic, retailers should concentrate on making relevant products easy to find by managing meta-information, optimizing media assets with the keywords that consumers are using, deploying rich text and automatically submitting product data to search engines.

Next generation loyalty

Retailers and restaurants can now integrate loyalty schemes into specialized smartphone apps, or integrate customer communications to automatically deliver personalized ads (e.g., offers for last-minute gifts for those who forget). However, to ensure success, brands need to know as much about their customers as consumers know about their products. By monitoring customers’ behavior, the information they are looking at and the channels they use to interact with brands, loyalty programs can deliver timely special offers or information at the right moment.

Digital signage

Valentine’s Day represents an opportunity to reinvent the in-store experience. By introducing digital signage for special product promotions, retailers can showcase a wide range of eclectic merchandise to showrooming consumers. This could be done by targeting smartphone-carrying consumers (who have allowed geo-located ads on their phones) with a personalized text message when they enter the store. Use this message to direct them to the most relevant areas for Valentine’s Day gifts or present them with a customized offer based on their previous buying history.


Quick fulfillment

Supermarkets have become established as the one-stop shop for lovers in a rush. Last year Tesco, the British multinational grocery and general merchandise retailer, revealed that 85 percent of all Valentine’s Day bouquets were bought on the day itself, with three-quarters of all Valentine’s Day chocolates sold on February 14.

To tap into the last-minute attitude of panicked couples searching for a gift, retailers should have a dedicated Valentine’s Day section online and provide timely offers that come with the promise of delivery in time for Valentine’s Day. For example, BCBGMAXAZRIA is using data quality services to ensure its email list is clean and up to date, keeping its sender reputation high so that when it needs to reach customers during critical times like Valentine’s Day, it has confidence in its data.

Alternatively, retailers can help customers by closely managing local inventory levels to offer same-day click-and-collect initiatives, or by showing consumers the number of items currently in stock and in-store across all channels.

Valentine’s Day may seem like a minor holiday after Christmas, but for retailers it generates billions of dollars in annual spending and presents a tremendous opportunity to boost their customer base. With these tips, retailers will hopefully be able to sweeten their sales by effectively targeting customers looking for the perfect gift for their special someone.


Data Governance, Transparency and Lineage with Informatica and Hortonworks

Informatica users leveraging HDP are now able to see a complete end-to-end visual data lineage map of everything done through the Informatica platform. In this blog post, Scott Hedrick, director of Big Data Partnerships at Informatica, tells us more about end-to-end visual data lineage.

Hadoop adoption continues to accelerate within mainstream enterprise IT and, as always, organizations need the ability to govern their end-to-end data pipelines for compliance and visibility purposes. Working with Hortonworks, Informatica has extended the metadata management capabilities in Informatica Big Data Governance Edition to include data lineage visibility of data movement, transformation and cleansing beyond traditional systems to cover Apache Hadoop.

Informatica users are now able to see a complete end-to-end visual data lineage map of everything done through Informatica, which includes sources outside Hortonworks Data Platform (HDP) being loaded into HDP, all data integration, parsing and data quality transformation running on Hortonworks and then loading of curated data sets onto data warehouses, analytics tools and operational systems outside Hadoop.

Regulated industries such as banking, insurance and healthcare are required to have detailed histories of data management for audit purposes. Without tools to provide data lineage, compliance with regulations and gathering the required information for audits can prove challenging.

With Informatica, data scientists and analysts can now visualize data lineage and the detailed history of data transformations, providing unprecedented transparency into their data analysis. They can be more confident in their findings based on this visibility into the origins and quality of the data they are working with to create valuable insights for their organizations. Web-based access to visual data lineage also facilitates team collaboration on challenging and evolving data analytics and operational system projects.

The Informatica and Hortonworks partnership brings together leading enterprise data governance tools with open source Hadoop leadership to extend governance to this new platform. Deploying Informatica for data integration, parsing, data quality and data lineage on Hortonworks reduces risk to deployment schedules.

A demo of Informatica’s end-to-end metadata management capabilities on Hadoop and beyond is available here.

Learn More

  • A free trial of Informatica Big Data Edition in the Hortonworks Sandbox is available here.

Announcing the New Formation of the Informatica Data Security Group


The technology world has changed, and continues to change, rapidly before our eyes. One of the areas where this change is most obvious is security, in particular the approach to security. Network- and perimeter-based security controls alone are insufficient as data proliferates outside the firewall to social networks, outsourced and offshore resources and mobile devices. Organizations are more focused on understanding and protecting their data, which is their most prized asset, versus all the infrastructure around it. Informatica is poised to lead this transformation of the security market toward a data-centric security approach.

The Ponemon Institute stated that the biggest concern for security professionals is that they do not know where sensitive data resides.  Informatica’s Intelligent Data Platform provides data security professionals with the technology required to discover, profile, classify and assess the risk of confidential and sensitive data.
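
As a simplified illustration of what discovery and classification mean in practice (a toy regex-based sketch, not Informatica’s implementation; production classifiers use many more rules plus context), a profiler can scan column samples for sensitive-data patterns:

```python
import re

# Illustrative patterns only; real classifiers combine patterns, dictionaries and context.
PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_column(values):
    """Profile a column sample and report which sensitive-data classes appear."""
    found = set()
    for v in values:
        for label, pattern in PATTERNS.items():
            if pattern.search(str(v)):
                found.add(label)
    return found

sample = ["123-45-6789", "jane@example.com", "n/a"]
print(classify_column(sample))  # {'ssn', 'email'}
```

Knowing which columns hold which classes of sensitive data is the prerequisite for everything that follows: masking, archiving and risk assessment.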

Last year, we began significant investments in data security R&D to support this initiative. This year, we continue that commitment by organizing around the vision. I am thrilled to be leading the Informatica Data Security Group, a newly formed business unit comprising a team dedicated to data security innovation. The business unit includes the former Application ILM business unit, which consists of data masking, test data management and data archive technologies from previous acquisitions, including Applimation, ActiveBase and TierData.

By having a dedicated business unit and engineering resources applying Informatica’s Intelligent Data Platform technology to a security problem, we believe we can make a significant difference addressing a serious challenge for enterprises across the globe.  The newly formed Data Security Group will focus on new innovations in the data security intelligence market, while continuing to invest and enhance our existing data-centric security solutions such as data masking, data archiving and information lifecycle management solutions.

The world of data is transforming around us, and we are committed to transforming the data security industry to keep our customers’ data clean, safe and connected.

For more details on how these changes will be reflected in our products, messaging and support, please refer to the FAQs below:

Q: What is the Data Security Group (DSG)?

A: Informatica has created a new business unit, the Informatica Data Security Group, as a dedicated team focusing on data security innovation to meet the needs of our customers while leveraging the Informatica Intelligent Data Platform.

Q: Why did Informatica create a dedicated Data Security Group business unit?

A: Reducing risk is among the top three business initiatives for our customers in 2015. Data security is a top IT and business initiative for just about every industry and organization that stores sensitive, private, regulated or confidential data, and it is a boardroom topic. By building upon our success with the Application ILM product portfolio and the Intelligent Data Platform, we can address the pressing, mission-critical challenges that matter most to our customers.

Q: Is this the same as the Application ILM Business Unit?

A: The Informatica Data Security Group is a business unit that includes the former Application ILM business unit products (data masking, data archive and test data management, from previous acquisitions including Applimation, ActiveBase and TierData), plus additional resources developing, supporting and taking to market Informatica’s data security products, such as Secure@Source.

Q: How big is the Data Security market opportunity?

A: The data security software market is estimated to be a $3B market in 2015, according to Gartner. Total information security spending will grow a further 8.2 percent in 2015 to reach $76.9 billion.[1]

Q: Who would be most interested in this announcement and why?

A: All leaders are impacted when a data breach occurs, and understanding the risk to sensitive data is a boardroom topic. Informatica is investing in and committed to securing and safeguarding sensitive, private and confidential data. If you are an existing customer, you will be able to leverage your existing skills on the Informatica platform to address a challenge facing every team that manages or handles sensitive or confidential data.

Q: How does this announcement impact the Application ILM products – Data Masking, Data Archive and Test Data Management?

A: The existing Application ILM products are foundational to the Data Security Group product portfolio.  These products will continue to be invested in, supported and updated.  We are building upon our success with the Data Masking, Data Archive and Test Data Management products.

Q: How will this change impact my customer experience?

A: The Informatica product website will reflect this new organization by listing the Data Masking, Data Archive, and Test Data Management products under the Data Security product category.  The customer support portal will reference Data Security as the top level product category.  Older versions of the product and corresponding documentation will not be updated and will continue to reflect Application ILM nomenclature and messaging.

[1] http://www.gartner.com/newsroom/id/2828722


Patient Experience: The Quality of Your Data is Important!


Patient experience is key to growth and success for all health delivery organizations.  Gartner has stated that the patient experience needs to be one of the highest priorities for organizations. The quality of your data is critical to achieving that goal.  My recent experience with my physician’s office demonstrates how easy it is for the quality of data to influence the patient experience and undermine a patient’s trust in their physician and the organization with which they are interacting.

I have a great relationship with my doctor and have always been impressed by the efficiency of the office.  I never wait beyond my appointment time, the care is excellent and the staff is friendly and professional.  There is an online tool that allows me to see my records, send messages to my doctor, request an appointment and get test results. The organization enjoys the highest reputation for clinical quality.  Pretty much perfect from my perspective – until now.

I needed to change a scheduled appointment due to a business conflict. Since I expected some negotiation, I decided to make the phone call rather than request it online… there are still transactions for which human-to-human is optimal! I had all my information at hand and made the call. The phone was pleasantly answered and the request given. The receptionist requested my name and date of birth, but then stated that I did not have a future appointment. I was looking at the online tool, which clearly stated that I was scheduled for February 17 at 8:30 AM. The pleasant young woman confirmed my name, date of birth and address, and then told me that I did not have an appointment scheduled. I am reasonably savvy about these things and figured out the core problem: my last name is hyphenated. Armed with that information, my other record was found and a new appointment was scheduled. The transaction was completed.

But now I am worried. My name has been like this for many years and none of my other key data has changed. Are there parts of my clinical history missing from the record that my doctor is using? Will that have a negative impact on the quality of my care? If I were unable to respond clearly, might that older record be accessed, without my current medications and history available? The receptionist did not address the duplicate issue by telling me that she would attend to merging the records, so I have no reason to believe that she will. My confidence is shaken and I am less trustful of the system and how well it will serve me going forward. I have resolved my issue, but not everyone would be able to push back to ensure that their records are accurate.
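
The underlying fix is a classic record-matching problem. Here is a minimal, hypothetical sketch of how a system might flag such hyphenated-name duplicates for a merge review; the names, dates and similarity threshold are invented for illustration:

```python
import difflib

def normalize(name: str) -> str:
    """Fold case and treat hyphens like spaces so 'Smith-Jones' can match 'Smith Jones'."""
    return name.lower().replace("-", " ").strip()

def likely_same_patient(rec_a: dict, rec_b: dict, threshold: float = 0.7) -> bool:
    """Flag probable duplicates: exact date-of-birth match plus fuzzy name similarity.
    The 0.7 threshold is an arbitrary illustrative choice."""
    if rec_a["dob"] != rec_b["dob"]:
        return False
    score = difflib.SequenceMatcher(
        None, normalize(rec_a["name"]), normalize(rec_b["name"])
    ).ratio()
    return score >= threshold

old = {"name": "Jane Smith", "dob": "1970-04-02"}
new = {"name": "Jane Smith-Jones", "dob": "1970-04-02"}
print(likely_same_patient(old, new))  # True -> candidates for a merge review
```

Real master data management systems match on many more attributes (address, phone, identifiers) and route candidates to a steward rather than merging automatically, but the principle is the same: find the likely duplicates before a clinician has to.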

Many millions of dollars are being spent on electronic health records. Many more millions are being spent to redesign workflows to accommodate the new EHRs. Physicians and other clinicians are learning new ways to access data and treat their patients. The foundation for all of this is accurate data. Nicely displayed but inaccurate data will not result in improved care or an enhanced member experience. As healthcare organizations move forward with the razzle-dazzle of new systems, they need to remember the basics of good-quality data and ensure that it is available to these new applications.


Cloud & BigData: Days of Future Past

A lot of the trends we are seeing in enterprise integration today are being driven by the adoption of cloud-based technologies, from IaaS and PaaS to SaaS. I was just reading this story about a recent survey on cloud adoption and thought that a lot of it sounds very similar to things we have seen before in enterprise IT.

Why discuss this? What can we learn? A couple of competing quotes come to mind.

Those who forget the past are bound to repeat it. – Edmund Burke

We are doomed to repeat the past no matter what. – Kurt Vonnegut

While every enterprise has to deal with its own complexities, there are several past technology adoption patterns that can be used to frame discussion and compare today’s issues, informing how a company designs and deploys its current enterprise cloud architecture. Flexibility in design should be a key goal, in addition to satisfying current business and technical requirements. So, what are the big patterns of the last 25 years that have shaped the cloud integration discussion?

1. 90s: Migration and replacement at the solution or application level. A big trend of the 90s was replacing older home-grown systems or mainframe-based solutions with new packaged software solutions. SAP really started a lot of this with ERP, and then we saw the rise of additional solutions for CRM, SCM, HRM, etc.

This kept a lot of people who do data integration very busy. From my point of view, this era was focused on replacing technologies, which drove a lot of work in data migration. While there were some data integration scenarios in which solutions were left in place, these tended to be systems that required transactional integrity and heavy messaging, or back-office solutions. For the classic front-office solutions, enterprises in large numbers did rip-and-replace migrations to new solutions.

2. 00s: Embrace and extend existing solutions with web applications. The rise of the Internet browser, combined with a popular and powerful standard programming language in Java, shaped and drove enterprise integration in this period. In addition, due to many of the mistakes and issues that IT groups had in the 90s, there was a very strong drive to extend existing investments rather than rip and replace. IT and businesses were trying to figure out how to add new solutions to what they had in place. A lot of enterprise integration, service bus and what we consider classic application development and deployment solutions came to market and were put in place.

3. 00s: Adoption of new web-application-based packaged solutions. A big part of this trend was driven by .Net and Java becoming more or less the de facto desired languages of enterprise IT. Software vendors not on these platforms were, for the most part, forced to re-platform or lose customers. New software vendors in many ways had an advantage, because enterprises were already looking at large data migrations to upgrade the solutions they had in place. In either case, IT shops were looking to be either a .Net or a Java shop, and it caused a lot of churn.

4. 00s: First-generation cloud applications and platforms. The first adoption of cloud applications and platforms was driven by projects and specific company needs: from Salesforce.com being used just for sales management before it became a platform, to Amazon being used as just a run-time to develop and deploy applications before it became a full-scale platform, and an ever-growing list of examples as every vendor wants to be the cloud platform of choice. The integration needs were originally often on the light side, because so many enterprises treated the cloud as an experiment at first, or as a one-off for a specific set of users. This has changed a lot in the last 10 years, as many companies repeated their on-premise silos-of-data problems in the cloud as their usage went from one cloud app to 2, 5, 10+, etc. In fact, if you strip away where a solution happens to be deployed (on-premise or cloud), the reality is that if an enterprise previously had a poorly planned on-premise architecture and solution portfolio, it probably has a just-as-poorly-planned cloud architecture and portfolio. Adding them together just leads to disjointed solutions that are hard to integrate, hard to maintain and hard to evolve. In other words, the opposite of the flexibility goal.

5. 10s: Consolidation of technology and the battle of the cloud platforms. It appears we are just getting started on the next great market consolidation, and every enterprise IT group is going to need to decide its own criteria for how it balances current and future investments. Today we have Salesforce, Amazon, Google, Apple, SAP and a few others. In 10 years, some of these will either not exist as they do today or will be marginalized. No one can say which ones for sure, and this is why flexibility should be a priority in any architecture for cloud adoption.

For me, the main takeaways from the past 25 years of technology adoption trends, for anyone who thinks about enterprise and data integration, would be the following.

a) It all starts and ends with data. Yes, applications, processes and people are important, but it’s about the data.

b) Coarse-grained and loosely coupled approaches to integration are the most flexible (e.g., avoid point-to-point at all costs); see the sketch after this list.

c) Design with the knowledge of what data is critical and what data might or should be accessible or movable.

d) Identify data and applications that might have to stay where they are no matter what (e.g., the mainframe is never dying).

e) Make sure your integration and application groups have access to, or include, someone who understands security. While a lot of integration developers think they understand security, it’s usually after the fact that you find out they really do not.
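
As promised in takeaway (b), here is a toy in-process sketch of the loosely coupled, publish/subscribe style of integration; a real deployment would use a message broker or an integration platform rather than an in-memory bus, but the decoupling principle is the same:

```python
from collections import defaultdict

class Bus:
    """A tiny in-process publish/subscribe bus: producers and consumers share only
    a topic name and a message shape, never direct references to each other."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers[topic]:
            handler(message)

bus = Bus()
# Two independent consumers; adding a third requires no change to the producer.
bus.subscribe("customer.updated", lambda m: print("warehouse sync:", m))
bus.subscribe("customer.updated", lambda m: print("cloud CRM sync:", m))
bus.publish("customer.updated", {"id": 42, "email": "new@example.com"})
```

Contrast this with point-to-point: with N producers and M consumers wired directly, you maintain N×M connections; with a shared topic, you maintain N+M.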

So, it’s possible to shape your cloud adoption and architecture future by at least understanding how past technology and solution adoption has shaped the present. For me, it is important to remember that it is all about the data, and to prioritize flexibility as a technology requirement at least at the same level as features and functions. Good luck.


British Cycling: A Big Data Champion?


I think I may have gone to too many conferences in 2014 in which the potential of big data was discussed.  After a while all the stories blurred into two main themes:

  1. Companies have gone bankrupt at a time when demand for their core products increased.
  2. Data from mobile phones, cars and other machines house a gold mine of value – we should all be using it.

My main takeaway from the 2014 conferences was that no amount of data can make up for a poor strategy, or for a lack of organisational agility to adapt business processes in times of disruption. However, I still feel that, as an industry, our stories are stuck in the ‘Big Data Hype’ phase, while most organisations are beyond the hype and need practicalities, guidance and inspiration to turn their big data projects into a success. This is possibly due to a limited number of big data projects in production, or perhaps it is too early to measure the long-term results of existing projects. Another possibility is that the projects are delivering significant competitive advantage, so the stories will remain under wraps for the time being.

However, towards the end of 2014 I stumbled across a big data success story in an unexpected place. It did (literally) provide competitive advantage, and since it has been running for a number of years, the results are plain to see. It started with a book recommendation from a friend. ‘Faster’ by Michael Hutchinson is written as a self-propelled investigation into the difference between world-champion and world-class athletes. It promised to satisfy my slightly geeky tendency to enjoy facts, numerical details and statistics. It did this, but it also struck me as a ‘how-to’ guide for big data projects.

Mr Hutchinson’s book is an excellent read and an insight into professional cycling by a professional cyclist. It is stacked with interesting facts and well-written anecdotes, and I highly recommend reading it. Since the big data aspect was a sub-plot, I will pull out the highlights without distracting from the main story.

Here are the five steps I extracted for big data project success:

1. Have a clear vision and goal for your project

The Sydney Olympics in 2000 produced only 4 medals across all cycling disciplines for British cyclists. With a home Olympics set for 2012, British Cycling desperately wanted to improve this performance. Specific targets were clearly set across all disciplines, stated as the times an athlete needed to achieve in order to win a race.

2. Determine the data required to support these goals

Unlike many big data projects, which start with a data set and then wonder what to do with it, British Cycling did this the other way around. They worked out what they needed to measure in order to establish the influences on their goal (track time) and set about gathering this information. In their case this involved gathering wind tunnel data to compare and contrast equipment, as well as physiological data from athletes and all information from cycling activities.

3. Experiment in order to establish causality

Most big data projects involve experimentation: changing the environment whilst gathering a sub-set of data points. The number of variables to adjust in cycling is large, but all were embraced. Data (including video) was gathered on the effects of small changes in each component: bike, clothing and athlete (training and nutrition).
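
As an illustration of the kind of analysis such experiments feed (the lap times below are invented, not British Cycling’s data), a simple permutation test can indicate whether an equipment change produced a real gain or just noise:

```python
import random

# Invented lap times (seconds) for a baseline setup vs. a modified skinsuit.
baseline = [62.1, 61.8, 62.4, 62.0, 61.9, 62.3]
modified = [61.2, 61.5, 61.0, 61.4, 61.3, 61.6]

# Observed improvement: mean baseline time minus mean modified time.
observed = sum(baseline) / len(baseline) - sum(modified) / len(modified)

def permutation_p_value(a, b, trials=10_000):
    """How often does random relabeling produce a gap as large as the observed one?"""
    pooled, n, extreme = a + b, len(a), 0
    for _ in range(trials):
        random.shuffle(pooled)
        diff = sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n)
        if diff >= observed:
            extreme += 1
    return extreme / trials

print(f"mean gain: {observed:.2f}s, p ~= {permutation_p_value(baseline, modified):.4f}")
```

A small p-value suggests the change itself, not chance, explains the faster laps, which is exactly the kind of evidence needed before asking an athlete to alter equipment or routine.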

4. Guide your employees on how to use the results of the data

Like many employees, cyclists and coaches were convinced of the ‘best way’ to achieve results based on their own personal experience. Analysis of the data in some cases showed that the perceived best way was in fact not the best way. Coaching staff trusted the data and convinced the athletes to change aspects of both training and nutrition. This was not necessarily easy to do, as it could mean fundamental changes in an athlete’s lifestyle.

5. Embrace innovation

Cycling is a very conservative sport by nature, with many of the key innovations coming from adjacent sports such as triathlon. Data, however, is not steeped in tradition and has no preconceived ideas as to what equipment should look like or what constitutes an excellent recovery drink. What made British Cycling’s big data initiatives successful is that they allowed themselves to be guided by the data and put the recommendations into practice. Plastic-finished skinsuits are probably not the most obvious choice of clothing, but they proved to be the biggest advantage a cyclist could get, far more than tinkering with the bike. (In fact, they produced so much advantage that they were banned shortly after the 2008 Olympics.)

The results: British Cycling won 4 Olympic medals in 2000, one of which was gold. In 2012 they grabbed 8 gold, 2 silver and 2 bronze medals. A quick glance at their website shows that it is not just Olympic medals they are winning: the number of medals won across all world championship events has increased since 2000.

To me, this is one of the best big data stories, as it directly shows how to succeed with big data strategies in a completely analogue world. I think it is more insightful than the mere fact that we are producing ever-increasing volumes of data. The real value of big data is in understanding what portion of all available data will contribute to achieving your goals, and then embracing the results of analysis to make constructive changes in daily activities.

But then again, I may just like the story because it involves geeky facts, statistics and fast bicycles.
