Category Archives: Data Security
Original article can be found at scmagazine.com.
On Jan. 13 the White House announced President Barack Obama’s proposal for new data privacy legislation, the Personal Data Notification and Protection Act. Many states have laws today that require corporations and government agencies to notify consumers in the event of a breach – but it is not enough. This new proposal aims to improve cybersecurity standards nationwide with the following tactics:
Enable cyber-security information sharing between private and public sectors.
Government agencies and corporations with a vested interest in protecting our information assets need a streamlined way to communicate and share threat information. This component of the proposed legislation incentivizes organizations that participate in knowledge-sharing with targeted liability protection, as long as they are responsible in how they share, manage and retain private data.
Modernize the tools law enforcement has to combat cybercrime.
Existing laws, such as the Computer Fraud and Abuse Act, need to be updated to incorporate the latest cyber-crime classifications while giving prosecutors the ability to target insiders with privileged access to sensitive and private data. The proposal also specifically calls for prosecuting the sale of private data both nationally and internationally.
Standardize breach notification policies nationwide.
Many states have some sort of policy that requires notifying customers that their data has been compromised. Three leading examples include California's breach notification law (SB 1386), Florida's Information Protection Act (FIPA) and Massachusetts' Standards for the Protection of Personal Information of Residents of the Commonwealth. New Mexico, Alabama and South Dakota have no data breach notification legislation. Enforcing standardization and simplifying the requirement for companies to notify customers and employees when a breach occurs will ensure consistent protection no matter where you live or transact.
Invest in increasing cyber-security skill sets.
For a number of years, security professionals have reported an ever-increasing skills gap in the cybersecurity profession. In fact, in a recent Ponemon Institute report, 57 percent of respondents said a data breach incident could have been avoided if the organization had more skilled personnel with data security responsibilities. Increasingly, colleges and universities are adding cybersecurity curriculum and degrees to meet the demand. In support of this need, the proposed legislation mentions that the Department of Energy will provide $25 million in educational grants to Historically Black Colleges and Universities (HBCU) and two national labs to support a cybersecurity education consortium.
This proposal is clearly comprehensive, but it also raises the critical question: How can organizations prepare themselves for this privacy legislation?
The International Association of Privacy Professionals conducted a study of Federal Trade Commission (FTC) enforcement actions. From the report, organizations can infer best practices implied by FTC enforcement and ensure these are covered by their organization’s security architecture, policies and practices:
- Perform assessments to identify reasonably foreseeable risks to the security, integrity, and confidentiality of personal information collected and stored on the network, online or in paper files.
- Adopt limited-access policies that curb unnecessary security risks and minimize the number and type of network access points an information security team must monitor for potential violations.
- Limit employee access to (and copying of) personal information, based on employee’s role.
- Implement and monitor compliance with policies and procedures for rendering information unreadable or otherwise secure in the course of disposal. Securely disposed information must not be practicably readable or reconstructable.
- Restrict third party access to personal information based on business need, for example, by restricting access based on IP address, granting temporary access privileges, or similar procedures.
The Personal Data Notification and Protection Act fills a void at the national level; most states have privacy laws, with California pioneering the movement with SB 1386. However, enforcement at the state attorney general level has been uneven at best and absent at worst.
In preparing for this national legislation, organizations need to heed the policies derived from the FTC's enforcement practices. They can also track the progress of this legislation and look to agencies such as the National Institute of Standards and Technology to issue guidance. Furthermore, organizations can encourage employees to take advantage of cybersecurity internship programs at nearby colleges and universities to avoid critical skills shortages.
With online security a clear priority for President Obama’s administration, it’s essential for organizations and consumers to understand upcoming legislation and learn the benefits/risks of sharing data. We’re looking forward to celebrating safeguarding data and enabling trust on Data Privacy Day, held annually on January 28, and hope that these tips will make 2015 your safest year yet.
After a careful review by Informatica, the recent Ghost buffer overflow vulnerability (CVE-2015-0235) does not require any Informatica patches for our on-premise products. All Informatica cloud-hosted services were patched by Jan 30.
What you need to know
Ghost is a buffer overflow vulnerability in the gethostbyname() functions of glibc (the GNU C Library), most commonly found on Linux systems. All distributions of Linux are potentially affected. The most common attack vectors involve Linux servers hosting web apps, email servers and other services that accept requests over the open Internet; hackers can embed malicious code in those requests. Fixed versions of glibc are now available from Linux vendors, including:
- Red Hat: https://access.redhat.com/articles/1332213
What you need to do
Because many of our products link against glibc, we recommend customers apply the appropriate OS patch from their Linux vendor. After applying the patch, customers should restart Informatica services running on that machine to ensure our software links to the updated glibc library. To ensure all other resources on a system are patched, a full system reboot may also be necessary.
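As a first-pass triage, administrators can compare the glibc version a host reports against the version in which the upstream fix landed. The sketch below assumes the commonly cited affected range for CVE-2015-0235 (glibc 2.2 through 2.17, fixed upstream in 2.18); note that version numbers alone cannot detect a distro backport, so the vendor advisory remains authoritative.

```python
# Sketch: flag glibc versions in the range commonly cited as affected
# by Ghost (CVE-2015-0235). The upstream fix landed in glibc 2.18;
# distros may have backported it to older version numbers, which this
# check cannot see -- treat it as a first pass, not a verdict.
import platform

def parse_version(v):
    """Turn a dotted version string like '2.17' into a comparable tuple."""
    return tuple(int(part) for part in v.split(".") if part.isdigit())

def potentially_vulnerable(version_string):
    """True when the version falls in the cited affected range [2.2, 2.18)."""
    return (2, 2) <= parse_version(version_string) < (2, 18)

if __name__ == "__main__":
    libc_name, libc_version = platform.libc_ver()  # ('glibc', '2.17') on many Linux hosts
    if libc_name == "glibc" and potentially_vulnerable(libc_version):
        print("glibc %s may be affected -- apply your vendor's patch" % libc_version)
    else:
        print("glibc not detected, or version is outside the affected range")
```

Even when this check passes, restarting services (or rebooting) after patching is what ensures running processes stop using the old, still-mapped library.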
Bill Burns, VP & Chief Information Security Officer
Valentine’s Day is such a strange holiday. It always seems to bring up more questions than answers. And the internet always seems to have a quiz to find out the answer! There’s the “Does he have a crush on you too – 10 simple ways to find out” quiz. There’s the “What special gift should I get her this Valentine’s Day?” quiz. And the ever popular “Why am I still single on Valentine’s Day?” quiz.
Well Marketers, it’s your lucky Valentine’s Day! We have a quiz for you too! It’s about your relationship with data. Where do you stand? Are you ready to take the next step?
Question 1: Do you connect – I mean, really connect – with your data?
□ (A) Not really. We just can’t seem to get it together and really connect.
□ (B) Sometimes. We connect on some levels, but there are big gaps.
□ (C) Most of the time. We usually connect, but we miss out on some things.
□ (D) We are a perfect match! We connect about everything, no matter where, no matter when.
Translation: Data ready marketers have access to the best possible data, no matter what form or what system it is in. They are able to make decisions based on everything the entire organization “knows” about their customer/partner/product – with a complete 360 degree view. They are also able to connect to and integrate with data outside the bounds of their organization to achieve the sought-after 720 degree view. They can integrate and react to social media comments, trends, and feedback – in real time – and match them with existing records whenever possible. And they can quickly and easily bring together any third party data sources they may need.
Question 2: How good looking & clean is your data?
□ (A) Yikes, not very. But it’s what’s on the inside that counts right?
□ (B) It’s ok. We’ve both let ourselves go a bit.
□ (C) It’s pretty cute. Not supermodel hot, but definitely girl or boy next door cute.
□ (D) My data is HOT! It’s perfect in every way!
Translation: Marketers need data that is reliable and clean. According to a recent Experian study, American companies believe that 25% of their data is inaccurate, and the rest of the world isn’t much more confident: 90% of respondents said they suffer from common data errors, and 78% have problems with the quality of the data they gather from disparate channels. Making marketing decisions based on inaccurate data leads to poor decisions. What’s worse, many marketers have no idea how good or bad their data is, so they have no idea what impact it is having on their marketing programs and analysis. The data ready marketer understands this and has a top tier data quality solution in place to make sure their data is in the best shape possible.
Question 3: Do you feel safe when you’re with your data?
□ (A) No, my data is pretty scary. 911 is on speed dial.
□ (B) I’m not sure actually. I think so?
□ (C) My data is mostly safe, but it’s got a little “bad boy” or “bad girl” streak.
□ (D) I protect my data, and it protects me back. We keep each other safe and secure.
Translation: Marketers need to be able to trust the quality of their data, but they also need to trust the security of their data. Is it protected, or is it susceptible to theft and nefarious attacks like the ones that have been all over the news lately? Nothing keeps a CMO and their PR team up at night like worrying they are going to be the next brand on the cover of a magazine for losing millions of personal customer records. But beyond a high profile data breach, marketers need to be concerned about data privacy. Are you treating customer data in the way that is expected and demanded? Are you using protected data in your marketing practices that you really shouldn’t be? Are you marketing to people on excluded lists?
Question 4: Is your data adventurous and well-traveled, or is it more of a “home-body”?
□ (A) My data is all over the place and it’s impossible to find.
□ (B) My data is all in one place. I know we’re missing out on fun and exciting options, but it’s just easier this way.
□ (C) My data is in a few places and I keep fairly good tabs on it. We can find each other when we need to, but it takes some effort.
□ (D) My data is everywhere, but I have complete faith that I can get ahold of any source I might need, when and where I need it.
Translation: Marketing data is everywhere. Your marketing data warehouse, your CRM system, your marketing automation system. It’s throughout your organization in finance, customer support, and sale systems. It’s in third party systems like social media and data aggregators. That means it’s in the cloud, it’s on premise, and everywhere in between. Marketers need to be able to get to and integrate data no matter where it “lives”.
Question 5: Does your data take forever to get ready when it’s time to go do something together?
□ (A) It takes forever to prepare my data for each new outing. It’s definitely not “ready to go”.
□ (B) My data takes its time to get ready, but it’s worth the wait… usually!
□ (C) My data is fairly quick to get ready, but it does take a little time and effort.
□ (D) My data is always ready to go, whenever we need to go somewhere or do something.
Translation: One of the reasons many marketers end up in marketing is because it is fast paced and every day is different. Nothing is the same from day-to-day, so you need to be ready to act at a moment’s notice, and change course on a dime. Data ready marketers have a foundation of great data that they can point at any given problem, at any given time, without a lot of work to prepare it. If it is taking you weeks or even days to pull data together to analyze something new or test out a new hunch, it’s too late – your competitors have already done it!
Question 6: Can you believe the stories your data is telling you?
□ (A) My data is wrong a lot. It stretches the truth a lot, and I cannot rely on it.
□ (B) I really don’t know. I question these stories – dare I say excuses – but haven’t been able to prove it one way or the other.
□ (C) I believe what my data says most of the time. It rarely lets me down.
□ (D) My data is very trustworthy. I believe it implicitly because we’ve earned each other’s trust.
Translation: If your data is dirty, inaccurate, and/or incomplete, it is essentially “lying” to you. And if you cannot get to all of the data sources you need, your data is telling you “white lies”! All of the work you’re putting into analysis and optimization is based on questionable data, and is giving you questionable results. Data ready marketers understand this and ensure their data is clean, safe, and connected at all times.
Question 7: Does your data help you around the house with your daily chores?
□ (A) My data just sits around on the couch watching TV.
□ (B) When I nag my data will help out occasionally.
□ (C) My data is pretty good about helping out. It doesn’t take initiative, but it helps out whenever I ask.
□ (D) My data is amazing. It helps out whenever it can, however it can, even without being asked.
Translation: Your marketing data can do so much. It should enable you to be “customer ready” – helping you understand everything there is to know about your customers so you can design amazing personalized campaigns that speak directly to them. It should enable you to be “decision ready” – powering your analytics capabilities with great data so you can make great decisions and optimize your processes. But it should also enable you to be “showcase ready” – giving you the proof points to demonstrate marketing’s actual impact on the bottom line.
Now for the fun part… It’s time to rate your data relationship status
If you answered mostly (A): You have a rocky relationship with your data. You may need some data counseling!
If you answered mostly (B): It’s time to decide if you want this data relationship to work. There’s hope, but you’ve got some work to do.
If you answered mostly (C): You and your data are at the beginning of a beautiful love affair. Keep working at it because you’re getting close!
If you answered mostly (D): Congratulations, you have a strong data marriage that is based on clean, safe, and connected data. You are making great business decisions because you are a data ready marketer!
Do You Love Your Data?
No matter what your data relationship status, we’d love to hear from you. Please take our survey about your use of data and technology. The results are coming out soon so don’t miss your chance to be a part. https://www.surveymonkey.com/s/DataMktg
Also, follow me on Twitter – The Data Ready Marketer – for some of the latest & greatest news and insights on the world of data ready marketing. And stay tuned, because we have several new Data Ready Marketing pieces coming out soon – infographics, eBooks, SlideShares, and more!
Informatica users leveraging HDP are now able to see a complete end-to-end visual data lineage map of everything done through the Informatica platform. In this blog post, Scott Hedrick, director Big Data Partnerships at Informatica, tells us more about end-to-end visual data lineage.
Hadoop adoption continues to accelerate within mainstream enterprise IT and, as always, organizations need the ability to govern their end-to-end data pipelines for compliance and visibility purposes. Working with Hortonworks, Informatica has extended the metadata management capabilities in Informatica Big Data Governance Edition to include data lineage visibility of data movement, transformation and cleansing beyond traditional systems to cover Apache Hadoop.
Informatica users are now able to see a complete end-to-end visual data lineage map of everything done through Informatica, which includes sources outside Hortonworks Data Platform (HDP) being loaded into HDP, all data integration, parsing and data quality transformation running on Hortonworks and then loading of curated data sets onto data warehouses, analytics tools and operational systems outside Hadoop.
Regulated industries such as banking, insurance and healthcare are required to have detailed histories of data management for audit purposes. Without tools to provide data lineage, compliance with regulations and gathering the required information for audits can prove challenging.
With Informatica, the data scientist and analyst can now visualize data lineage and a detailed history of data transformations, providing unprecedented transparency into their data analysis. They can be more confident in their findings based on this visibility into the origins and quality of the data they are working with to create valuable insights for their organizations. Web-based access to visual data lineage for analysts also facilitates team collaboration on challenging and evolving data analytics and operational system projects.
The Informatica and Hortonworks partnership brings together leading enterprise data governance tools with open source Hadoop leadership to extend governance to this new platform. Deploying Informatica for data integration, parsing, data quality and data lineage on Hortonworks reduces risk to deployment schedules.
A demo of Informatica’s end-to-end metadata management capabilities on Hadoop and beyond is available here:
- A free trial of Informatica Big Data Edition in the Hortonworks Sandbox is available here.
Data proliferation has traditionally been measured by the number of copies of data residing on different media. For example, if data residing on an enterprise storage device was backed up to tape, proliferation was measured by the number of tapes on which the same piece of data would reside. Now that backups are no longer restricted to the data center and data is no longer constrained by the originating application, this definition is due for an update.
I would argue that data proliferation should instead be measured by the number of users who have access to or can view the data, and that data proliferation is a primary factor in measuring the risk of a data breach. As sensitive, confidential or private data proliferates beyond the original copy, it increases its surface area and proportionally increases its risk of a data breach.
Using the original definition of data proliferation and an example of data storage shown below, data proliferation would include production, production copies used for disaster recovery purposes and all physical backup copies. But as you can see, data is also copied to test environments for development purposes. When factoring in the number of privileged users with access to those copies, you have a different view of proliferation and potential risk.
In the example, there are potentially thousands of copies of sensitive data but only a small number of users who are authorized to access the data.
In the case of test and development, this image highlights a potentially high area of risk because the number of users who could see the sensitive data is high.
Similarly, in online advertising, the measure of how many people see an online ad is called an impression. If an ad was seen by 100 online users, it would have 100 impressions.
When you apply that same principle to data security, you could say that data proliferation is a calculation of the number of copies of a data element multiplied by the potential number of users who could physically view the data – in other words, ‘impressions’. In this second image below, rather than considering the total number of copies, what if we measured risk based on the total number of impressions?
In this case, the measure of risk is independent of the physical media the data resides on. You could take this a few steps further and add a factor based on the security controls in place to prevent unauthorized access.
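The proliferation-as-impressions idea above can be sketched in a few lines: risk is driven not by raw copy counts but by copies multiplied by the users who can view them, optionally discounted by a control factor. All numbers below are made-up illustrations, not benchmarks.

```python
# Sketch of the "impressions" risk metric from this post: copies of a
# data element times the users who could view them, optionally scaled
# by a 0..1 factor for security controls in place. Example figures are
# invented for illustration only.

def impressions(copies, users_with_access):
    """Copies of a data element times the users who could view each copy."""
    return copies * users_with_access

def risk_score(copies, users_with_access, control_factor=1.0):
    """Impressions scaled by a control factor (1.0 = no mitigating
    controls, 0.0 = fully locked down)."""
    return impressions(copies, users_with_access) * control_factor

# Production: thousands of copies, but very few authorized users.
prod = risk_score(copies=2000, users_with_access=5)
# Test/dev: far fewer copies, but many privileged users can see them.
test_dev = risk_score(copies=20, users_with_access=400)

print(prod, test_dev)
```

Under these invented figures the much smaller test/dev environment carries nearly the same risk as production, which is exactly the point the post makes about privileged access to data copies.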
The Ponemon Institute stated that the biggest concern for security professionals is that they do not know where sensitive data resides. Informatica’s Intelligent Data Platform provides data security professionals with the technology required to discover, profile, classify and assess the risk of confidential and sensitive data.
Last year, we began significant investments in data security R&D to support this initiative. This year, we continue the commitment by organizing around the vision. I am thrilled to be leading the Informatica Data Security Group, a newly-formed business unit comprised of a team dedicated to data security innovation. The business unit includes the former Application ILM business unit, which consists of data masking, test data management and data archive technologies from previous acquisitions, including Applimation, ActiveBase, and TierData.
By having a dedicated business unit and engineering resources applying Informatica’s Intelligent Data Platform technology to a security problem, we believe we can make a significant difference addressing a serious challenge for enterprises across the globe. The newly formed Data Security Group will focus on new innovations in the data security intelligence market, while continuing to invest and enhance our existing data-centric security solutions such as data masking, data archiving and information lifecycle management solutions.
The world of data is transforming around us and we are committed to transforming the data security industry to keep our customer’s data clean, safe and connected.
For more details regarding how these changes will be reflected in our products, message and support, please refer to the FAQs listed below:
Q: What is the Data Security Group (DSG)?
A: Informatica has created a newly formed business unit, the Informatica Data Security Group, as a dedicated team focused on data security innovation to meet the needs of our customers while leveraging the Informatica Intelligent Data Platform.
Q: Why did Informatica create a dedicated Data Security Group business unit?
A: Reducing risk is among the top three business initiatives for our customers in 2015. Data security is a top IT and business initiative for just about every industry and organization that stores sensitive, private, regulated or confidential data. Data security is a boardroom topic. By building upon our success with the Application ILM product portfolio and the Intelligent Data Platform, we can address more pressing issues while solving mission-critical challenges that matter to most of our customers.
Q: Is this the same as the Application ILM Business Unit?
A: The Informatica Data Security Group is a business unit that includes the former Application ILM business unit products – data masking, data archive and test data management, from previous acquisitions including Applimation, ActiveBase, and TierData – plus additional resources developing and supporting Informatica’s data security products and go-to-market, such as Secure@Source.
Q: How big is the Data Security market opportunity?
A: The data security software market is estimated to be a $3 billion market in 2015, according to Gartner. Total information security spending will grow a further 8.2 percent in 2015 to reach $76.9 billion.
Q: Who would be most interested in this announcement and why?
A: All leaders are impacted when a data breach occurs. Understanding the risk of sensitive data is a board room topic. Informatica is investing and committing to securing and safeguarding sensitive, private and confidential data. If you are an existing customer, you will be able to leverage your existing skills on the Informatica platform to address a challenge facing every team who manages or handles sensitive or confidential data.
Q: How does this announcement impact the Application ILM products – Data Masking, Data Archive and Test Data Management?
A: The existing Application ILM products are foundational to the Data Security Group product portfolio. These products will continue to be invested in, supported and updated. We are building upon our success with the Data Masking, Data Archive and Test Data Management products.
Q: How will this change impact my customer experience?
A: The Informatica product website will reflect this new organization by listing the Data Masking, Data Archive, and Test Data Management products under the Data Security product category. The customer support portal will reference Data Security as the top level product category. Older versions of the product and corresponding documentation will not be updated and will continue to reflect Application ILM nomenclature and messaging.
This week, another reputable organization, Anthem Inc., reported it was ‘the target of a very sophisticated external cyber attack’. But rather than be upset at Anthem, I respect their responsible data breach reporting.
In this post, Joseph R. Swedish, president and CEO of Anthem, Inc., does something that I believe all CEOs should do in this situation. He is straight up about what happened, what information was breached, the actions they took to plug the security hole, and the services available to those impacted.
When it comes to a data breach, the worst thing you can do is ignore it or hope it will go away. This was not the case with Anthem. Mr Swedish did the right thing and I appreciate it.
You only have one corporate reputation – and it is typically aligned with the CEO’s reputation. When the CEO talks about the details of a data breach and empathizes with those impacted, he establishes a dialogue based on transparency and accountability.
Research tells us that 44% of healthcare and pharmaceutical organizations experienced a breach in 2014. And we know that personal information combined with health information is worth more on the black market because the data can be used for insurance fraud. I expect more healthcare providers will be on the defensive this year, and I only hope they follow Mr. Swedish’s example when facing the music.
I hate to break the news, but data breaches have become an unfortunate fact of life. These unwanted events happen so frequently that each new one feels like the daily weather report. The scary thing is that these events will only continue to grow as criminals become more desperate to take advantage of the innocent, and as data about our personal records, financial account numbers and identities continues to proliferate across computer systems in every industry – from your local retailer and your local DMV to one of the nation’s largest health insurance providers.
According to the 2014 Cost of Data Breach study from the Ponemon Institute, data breaches cost companies $201 per stolen record. According to the NY Post, 80 million records were stolen from Anthem this week, which would cost employees, customers, and shareholders $16,080,000,000 from this single event. And the 80 million records accounted for only the data they knew about. What about all the data that has proliferated across systems – data about both current and past customers across decades, copied onto personal computers and loaded into shared network folders, sitting there while security experts pray that their network security solutions will prevent the bad guys from finding it and causing even more carnage in this ever-growing era of Big Data? If you are as worried as I am about what these criminals will do with our personal information, make it a priority to protect your data assets, both personal and in business. Learn more about Informatica’s perspectives and video on this matter:
- Data Security – A Major Concern in 2015
- How organizations can prepare for 2015 data privacy legislation
- How Protected is your PHI?
- The CISO Challenge: Articulating Data Worth and Security Economics
- IDC Life Sciences and Ponemon Research Highlights Need for New Security Measures
- Video: Secure@Source – A Data-Centric Approach to Security
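The back-of-the-envelope breach cost quoted above is simply the Ponemon per-record figure multiplied by the reported record count; a two-line sketch makes the arithmetic explicit and easy to rerun with other figures.

```python
# Back-of-the-envelope breach cost from the figures quoted above:
# $201 per stolen record (2014 Ponemon Cost of Data Breach study)
# times the 80 million records reported stolen from Anthem.
COST_PER_RECORD = 201          # USD, Ponemon 2014 average
RECORDS_STOLEN = 80_000_000    # NY Post figure for the Anthem breach

total_cost = COST_PER_RECORD * RECORDS_STOLEN
print(f"${total_cost:,}")  # $16,080,000,000
```

The real exposure is arguably larger, since (as the post notes) the record count covers only the data the company knew about.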
Follow me! @DataisGR8
I think I may have gone to too many conferences in 2014 in which the potential of big data was discussed. After a while all the stories blurred into two main themes:
- Companies have gone bankrupt at a time when demand for their core products increased.
- Data from mobile phones, cars and other machines house a gold mine of value – we should all be using it.
My main takeaway from the 2014 conferences was that no amount of data can compensate for a poor strategy, or for a lack of organisational agility to adapt business processes in times of disruption. However, I still feel that as an industry our stories are stuck in the ‘Big Data Hype’ phase, while most organisations are beyond the hype and need practicalities, guidance and inspiration to turn their big data projects into a success. This is possibly due to the limited number of big data projects in production, or perhaps it is too early to measure the long-term results of existing projects. Another possibility is that the projects are delivering significant competitive advantage, so the stories will remain under wraps for the time being.
However, towards the end of 2014 I stumbled across a big data success story in an unexpected place. It did (literally) provide competitive advantage, and since it has been running for a number of years the results are plain to see. It started with a book recommendation from a friend. ‘Faster’ by Michael Hutchinson is written as a self-propelled investigation into the difference between world champion and world class athletes. It promised to satisfy my slightly geeky tendency to enjoy facts, numerical details and statistics. It did this – but it also struck me as a ‘how-to’ guide for big data projects.
Mr Hutchinson’s book is an excellent insight into professional cycling by a professional cyclist. It is stacked with interesting facts and well-written anecdotes, and I highly recommend reading it. Since the big-data aspect is a sub-plot, I will pull out the highlights without distracting from the main story.
Here are the five steps I extracted for big data project success:
1. Have a clear vision and goal for your project
The Sydney Olympics in 2000 had only produced 4 medals across all cycling disciplines for British cyclists. With a home Olympics set for 2012, British Cycling desperately wanted to improve this performance. Specific targets were clearly set across all disciplines stated in times that an athlete needed to achieve in order to win a race.
2. Determine the data required to support these goals
Unlike many big data projects which start with a data set and then wonder what to do with it, British Cycling did this the other way around. They worked out what they needed to measure in order to establish the influencers on their goal (track time) and set about gathering this information. In their case this involved gathering wind tunnel data to compare & contrast equipment, as well as physiological data from athletes and all information from cycling activities.
3. Experiment in order to establish causality
Most big data projects involve experimentation by changing the environment whilst gathering a sub-set of data points. The number of variables to adjust in cycling is large, but all were embraced. Data (including video) was gathered on the effects of small changes in each component: Bike, Clothing, Athlete (training and nutrition).
4. Guide your employees on how to use the results of the data
Like many employees, cyclists and coaches were convinced of the ‘best way’ to achieve results based on their own personal experience. Analysis of data in some cases showed that the perceived best way, was in fact not the best way. Coaching staff trusted the data, and convinced the athletes to change aspects of both training and nutrition. This was not necessarily easy to do, as it could mean fundamental changes in the athlete’s lifestyle.
5. Embrace innovation
Cycling is a very conservative sport by nature, with many of its key innovations coming from adjacent sports such as triathlon. Data, however, is not steeped in tradition and has no preconceived ideas about what equipment should look like or what constitutes an excellent recovery drink. What made British Cycling’s big data initiatives successful is that they allowed themselves to be guided by the data and put its recommendations into practice. Plastic-finished skinsuits are probably not the most obvious choice of clothing, but they proved to be the biggest advantage a cyclist could get, far more than tinkering with the bike. (In fact they produced so much advantage that they were banned shortly after the 2008 Olympics.)
The results: British Cycling won 4 Olympic medals in 2000, one of which was gold. In 2012 they grabbed 8 gold, 2 silver and 2 bronze medals. A quick glance at their website shows that it is not just Olympic medals they are winning: the number of medals won across all world championship events has increased since 2000.
To me, this is one of the best big data stories, as it directly shows how to be successful with big data strategies in a completely analogue world. I find it more insightful than the mere fact that we are producing ever-increasing volumes of data. The real value of big data lies in understanding what portion of all available data will contribute to achieving your goals, and then embracing the results of analysis to make constructive changes in daily activities.
But then again, I may just like the story because it involves geeky facts, statistics and fast bicycles.
I live in a very small town in Maine. I don’t spend a lot of time thinking about my privacy. Some would say that by living in a small town, you give up your right to privacy because everyone knows what everyone else is doing. Living here is a choice – for me to improve my family’s quality of life. Sharing all of the details of my life – not so much.
When I go to my doctor (who also happens to be a parent from my daughter’s school), I fully expect that any information I share with him, or that he obtains through lab tests, interviews, or the care he provides, is not available for anyone to view. On the flip side, I want researchers to be able to combine my lab information with my health history in order to research the effectiveness of certain medications or treatment plans.
As a result of this dichotomy, Congress (in 1996) started to address governance regarding the transmission of this type of data. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is a Federal law that sets national standards for how health care plans, health care clearinghouses, and most health care providers protect the privacy of a patient’s health information. With certain exceptions, the Privacy Rule protects a subset of individually identifiable health information, known as protected health information or PHI, that is held or maintained by covered entities or their business associates acting for the covered entity. PHI is any information held by a covered entity which concerns health status, provision of health care, or payment for health care that can be linked to an individual.
Many payers have this type of data in their systems (perhaps in a Claims Administration system), and have the need to share data between organizational entities. Do you know if PHI data is being shared outside of the originating system? Do you know if PHI is available to resources that have no necessity to access this information? Do you know if PHI data is being shared outside your organization?
If you can answer yes to each of these questions – fantastic. You are well ahead of the curve. If not, you need to start considering solutions that can:
- Identify PHI in all of your data streams
- Monitor and track the flow of this data throughout your organization and
- Mask this data if it is being shared with resources that don’t need to be able to identify the individual.
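To make the first and third steps concrete, here is a minimal sketch of what PHI identification and masking might look like in Python. The patterns and placeholder format are illustrative assumptions: real PHI detection must cover all of HIPAA’s identifier categories and typically relies on context-aware tooling, not a handful of regular expressions.

```python
import re

# Illustrative patterns for a few common identifiers (SSN, phone, email).
# These are assumptions for the sketch, not a complete PHI rule set.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def find_phi(text):
    """Return the PHI categories detected in a free-text field."""
    return [name for name, pattern in PHI_PATTERNS.items()
            if pattern.search(text)]

def mask_phi(text):
    """Replace each detected identifier with a category placeholder."""
    for name, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{name.upper()} REDACTED]", text)
    return text

record = "Patient John Doe, SSN 123-45-6789, phone 207-555-0134."
print(find_phi(record))   # ['ssn', 'phone']
print(mask_phi(record))
```

In a real payer environment the same two operations would sit inline on the data streams between systems, so that a resource with no need to identify the individual only ever sees the masked form.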
I want researchers to have access to medically relevant data so they can find the cures to some horrific diseases. I want to feel comfortable sharing health information with my doctor. I want to feel comfortable that my health insurance company is respecting my privacy. Now to get my kids to stop oversharing.