Tag Archives: data security
A few years ago, eBay’s former CISO, Dave Cullinane, led a sobering coaching discussion on how to articulate and communicate the value and economics of a security solution to a CISO’s CxO peers.
Why would I blog about such old news? Because it was a great and timeless idea. And in this age of the ‘Great Data Breach’, where CISOs need all the help they can get, I thought I would share it with y’all.
Dave began by describing how to communicate the impact of an attack such as Aurora, spear phishing, Stuxnet, or hacktivism versus the investment required to prevent it. If you are an online retailer and your web server goes down because of a major denial-of-service attack, what does that cost the business? How much revenue is lost every minute that site is offline? Enough to put you out of business? See the figure below, which illustrates how to approach this conversation.
If the impact of a breach and the risk of losing business is high and the investment in implementing a solution is relatively low, the investment decision is an obvious one (represented by the yellow area in the upper left corner).
However, it isn’t always this easy, is it? When determining what your company’s brand and reputation are worth, how do you develop a compelling case?
Another dimension Dave described is communicating the economics of a solution that could prevent an attack based on the probability that the attack would occur (see next figure below).
For example, consider an attack that could influence stock prices. This is a complex scenario that is probably less likely to occur on a frequent basis, and it would require a sophisticated, multidimensional solution with integrated security analytics to correlate multiple events back to a single source. This might place the discussion in the middle blue box, or the ‘negotiation zone’. This is where the CISO needs to know what the CxO’s risk tolerances are and articulate value in terms of the ‘coin of the realm’.
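One way to put rough numbers behind this impact-versus-probability conversation is the classic annualized loss expectancy (ALE) calculation from security economics. The sketch below is illustrative only; all figures are hypothetical, not drawn from Dave’s discussion.

```python
# Sketch of ALE-style security economics (all figures hypothetical).
# ALE = annual rate of occurrence (ARO) x single loss expectancy (SLE).

def annualized_loss_expectancy(incidents_per_year: float, loss_per_incident: float) -> float:
    """Expected yearly loss from a given attack scenario."""
    return incidents_per_year * loss_per_incident

def investment_is_justified(ale: float, annual_cost_of_control: float,
                            risk_reduction: float) -> bool:
    """A control pays for itself if the loss it avoids exceeds its cost."""
    avoided_loss = ale * risk_reduction
    return avoided_loss > annual_cost_of_control

# Hypothetical online retailer: DDoS takes the site down twice a year,
# costing $250,000 in lost revenue per incident.
ale = annualized_loss_expectancy(incidents_per_year=2, loss_per_incident=250_000)
print(ale)  # 500000

# A $100,000/year mitigation service estimated to cut the risk by 80%.
print(investment_is_justified(ale, annual_cost_of_control=100_000, risk_reduction=0.8))  # True
```

Scenarios landing in the upper-left ‘obvious’ zone are those where this inequality holds by a wide margin; the ‘negotiation zone’ is where the two sides are close and risk tolerance decides.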
Finally, stay on top of what the business is cooking up for new initiatives that could expose or introduce new risks. For example, is marketing looking to spin up a data warehouse on Amazon Redshift? Anyone on the analytics team tinkering with Hadoop in the cloud? Is development planning to outsource application test and development activities to offshore systems integrators? If you are participating in any of these activities, make sure your CISO isn’t the last to know when a ‘Breach Happens’!
To learn more about ways you can mitigate risk and maintain data privacy compliance, check out the latest Gartner Data Masking Magic Quadrant.
In the report, Gartner states: “Global-scale scandals around sensitive data losses have highlighted the need for effective data protection, especially from insider attacks. Data masking, which is focused on protecting data from insiders and outsiders, is a must-have technology in enterprises’ and governments’ security portfolios.”
Organizations realize that data protection must be hardened against the inevitable breach, whether it originates from internal or external threats. Data masking covers gaps in data protection in production and non-production environments that attackers can exploit.
Informatica customers are elevating the importance of data security initiatives in 2015, given the high exposure of recent breaches and the shift from simply stealing identities and intellectual property to politically charged motives. This raises the concern that existing security controls are insufficient and that a more data-centric security approach is necessary.
Recent enforcement by the Federal Trade Commission in the US and emerging legislation worldwide have clearly indicated that sensitive data access and sharing should be tightly controlled; this is the strength of data masking.
Data masking de-identifies and/or de-sensitizes private and confidential data by hiding it from those who are not authorized to access it. Other terms for data masking include data obfuscation, sanitization, scrambling, de-identification, and anonymization.
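To make the idea concrete, here is a minimal, hypothetical sketch of two common masking styles: deterministic pseudonymization (the same input always masks to the same token, so joins across masked tables still line up) and partial redaction. This is not Informatica’s actual algorithm; the function names and salt are illustrative.

```python
import hashlib

def mask_value(value: str, salt: str = "per-environment-secret") -> str:
    """Deterministically pseudonymize a value: same input -> same token,
    so referential integrity across masked tables is preserved while the
    original value stays hidden. Salt is a hypothetical per-environment key."""
    digest = hashlib.sha256((salt + value).encode()).hexdigest()
    return "CUST_" + digest[:8].upper()

def mask_ssn(ssn: str) -> str:
    """Partial redaction: keep only the last four digits."""
    return "***-**-" + ssn[-4:]

print(mask_value("jane.doe@example.com"))  # stable CUST_… token per salt
print(mask_ssn("123-45-6789"))             # ***-**-6789
```

Persistent masking applies transformations like these to stored copies (e.g., test databases), while dynamic masking applies them on the fly as unauthorized users query production.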
To learn more, download the Gartner Magic Quadrant for Data Masking report now, and visit the Informatica website for data masking product information.
About the Magic Quadrant
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
A couple of months ago, I reached out to a set of CIOs on the importance of good governance and security. All of them agreed that both were incredibly important. However, one CIO made a very pointed remark, saying that “the IT leadership at these breached companies wasn’t stupid.” He continued by saying that when selling to the rest of the C-Suite, the discussion needs to be about business outcomes and business benefits. For this reason, he said, CIOs have struggled to sell the value of investments in governance and security. Now, I have suggested previously that security pays because of its impact on “brand promise.” And I still believe this.
However, this week the ante was raised even higher. A district judge ruled that a group of banks can proceed to sue a retailer for negligence in their data governance and security. The decision could clearly lead to significant changes in the way the cost of fraud is distributed among parties within the credit card ecosystem. Where once banks and merchant acquirers would have shouldered the burden of fraud, this decision paves the way for more card-issuing banks to sue merchants for not adequately protecting their POS systems.
The judge’s ruling said that “although the third-party hackers’ activities caused harm, [the] merchant played a key role in allowing the harm to occur.” The judge also determined that the banks’ suit against merchants was valid because the plaintiffs adequately showed that the retailer failed “to disclose that its data security systems were deficient.” This is interesting because it says that security systems should be sufficient and, if not, retailers need to inform potentially affected stakeholders of their deficient systems. And while taking this step could avoid a lawsuit, it would likely increase the cost of interchange for riskier merchants. This would effectively create a risk premium for retailers that do not adequately govern and protect their IT environments.
There are broad implications for all companies that end up harming customers, partners, or other stakeholders by not keeping their security systems up to snuff. The question is: will this give good governance enough of a business outcome and benefit that businesses will actually want to pay it forward, i.e. invest in good governance and security? What do you think? I would love to hear from you.
Insurance companies serve as a fantastic example of big data technology use since data is such a pervasive asset in the business. From a cost savings and risk mitigation standpoint, insurance companies use data to assess the probable maximum loss of catastrophic events as well as detect the potential for fraudulent claims. From a revenue growth standpoint, insurance companies use data to intelligently price new insurance offerings and deploy cross-sell offers to customers to maximize their lifetime value.
New data sources are enabling insurance companies to mitigate risk and grow revenues even more effectively. Location-based data from mobile devices and sensors inside insured properties is being used to proactively detect exposure to catastrophic events and deploy preventive maintenance. For example, automobile insurance providers are increasingly offering usage-based driving programs, whereby insured individuals install a mobile sensor inside their car to relay the quality of their driving back to their insurance provider in exchange for lower premiums. Even healthcare insurance providers are starting to analyze the data collected by wearable fitness bands and smart watches to monitor insured individuals and inform them of personalized ways to be healthier. Devices can also be deployed in the environments that trigger adverse events, such as sensors that monitor earthquake and weather patterns, to help mitigate the costs of potential events. Claims are increasingly submitted with supporting information in a variety of formats, like text files, spreadsheets, and PDFs, that can be mined for insights as well. And with the growth of online insurance sales, web log and clickstream data is more important than ever to help drive online revenue.
Beyond the benefits of using new data sources to assess risk and grow revenues, big data technologies are enabling insurance companies to fundamentally rethink the basis of their analytical architecture. In the past, probable maximum loss modeling could only be performed on statistically aggregated datasets. But with big data technologies, insurance companies have the opportunity to analyze data at the level of an insured individual or a unique insurance claim. This increased depth of analysis has the potential to radically improve the quality and accuracy of risk models and market predictions.
Informatica is helping insurance companies accelerate the benefits of big data technologies. With multiple styles of ingestion available, Informatica enables insurance companies to leverage nearly any source of data. Informatica Big Data Edition provides comprehensive data transformations for ETL and data quality, so that insurance companies can profile, parse, integrate, cleanse, and refine data using a simple user-friendly visual development environment. With built-in data lineage tracking and support for data masking, Informatica helps insurance companies ensure regulatory compliance across all data.
To try out the Big Data Edition, download a free trial today in the Informatica Marketplace and get started with big data today!
According to the article, in Hamilton County, Ohio, it’s not unusual to see kids from the same neighborhoods coming to the hospital for asthma attacks. Researchers wanted to know whether it was fact or mistaken perception that an unusually high number of children in the same neighborhood were experiencing asthma attacks. The next step was to review existing data to determine the extent of the issue, and perhaps how to solve the problem altogether.
“The researchers studied 4,355 children between the ages of 1 and 16 who visited the emergency department or were hospitalized for asthma at Cincinnati Children’s between January 2009 and December 2012. They tracked those kids for 12 months to see if they returned to the ED or were readmitted for asthma.”
Not only were the researchers able to determine a sound correlation between the two data sets, but they were also able to advance the research to predict which kids were at high risk based on where they live. Thus, some of the causes and effects have been determined.
This came about when researchers began thinking outside the box about traditional and non-traditional medical data. In this case, they integrated housing and census data with data from the diagnosis and treatment of the patients. These are data sets unlikely to find their way to each other, but together they carry a meaning far more valuable than if they had stayed in their respective silos.
“Non-traditional medical data integration has begun to take place in some medical collaborative environments already. The New York-Presbyterian Regional Health Collaborative created a medical village, which ‘goes beyond the established patient-centered medical home mode.’ It not only connects an academic medical center with a large ambulatory network, medical homes, and other providers with each other, but community resources such as school-based clinics and specialty-care centers (the ones that are a part of NYP’s network).”
The fact of the matter is that data is the key to understanding what is going on when clusters of sick people begin to emerge. While researchers and doctors can treat the individual patients, there is not a good understanding of the larger issues that may be at play: in this case, poor air quality in poor neighborhoods. With that understanding, they know what problem needs to be corrected.
The universal sharing of data is really the larger solution here, but one that won’t happen without a common understanding of its value, and funding. As we pass laws around the administration of health care, as well as how data is to be handled, perhaps it’s time we look at what the data actually means. This requires a massive deployment of data integration technology, and a fundamental push to share data with a central data repository as well as with health care providers.
Or in other words: Did the agency model kill data quality? When you watch the TV series “Homeland”, you quickly realize the interdependence between field operatives and the command center. This is a classic agency model. One arm gathers, filters and synthesizes information and prepares a plan but the guys on the ground use this intel to guide their sometimes ad hoc next moves.
Over the last few months I have worked a lot – and I mean A LOT – with a variety of mid-sized life insurers (<$1B annual revenue) on fixing their legacy-inherited data quality problems. Their IT departments, functioning like Operations Command Centers (intel acquisition, filtering and synthesis), were inundated with requests to fix up and serve a coherent, true, enriched central view of a participant (the target) and all his policies and related entities from and to all relevant business lines (planning), so each could achieve its respective mission (service, retain, upsell, mitigate risk): employee benefits, broker/dealer, retirement services, etc.
The captive and often independent agents (execution), however, often run into an operation (sales cycle) with little useful data, as the Ops Center is short on timely and complete information. Imagine Carrie directing her strike team to just wing it based on their experience and dated intel from a raid a few months ago, without real-time drone video feeds. Would she say, “Guys, it’s always been your neck; you’ll figure it out”? I think not.
This became apparent when talking to actuaries, claims operations, marketing, sales, agency operations, audit, finance, strategic planning, underwriting and customer service; common denominators appeared quickly:
- Every insurer saw the need to become customer instead of policy centric. That’s the good news.
- Every insurer knew their data was majorly sub-standard in terms of quality and richness.
- Every insurer agreed that they are not using existing data capture tools (web portals for agents and policy holders, CRM applications, policy and claims mgmt systems) to their true potential.
- New books-of-business were generally managed as separate entities from a commercial as well as IT systems perspective, even if they were not net-new products, like trust products. Cross-selling as such, even if desired, becomes a major infrastructure hurdle.
- As in every industry, the knee-jerk reaction was to throw the IT folks at data quality problems and make it a back-office function. A Pyrrhic victory.
- Upsell scenarios, if at all strategically targeted, are squarely in the hands of the independent agents. The insurer will, at most, support customer insights around lapse avoidance or 401k roll-off indicators for retiring or terminated plan participants. This may be derived from a plan sponsor (employer) census file, which may have incorrect address information.
- Prospect and participant e-mail addresses are either not captured (due to process enforcement or system capability) or not validated (domain, e-mail verification), so the vast majority of customer correspondence, like scenarios, statements, privacy notices and policy changes, travels via snail mail (and typically per policy). Overall, only 15-50% of contacts have even an “unverified” e-mail address today, and of these, less than a third subscribed to exclusive electronic statement delivery.
- Postal address information is still not 99% correct, complete or current, resulting in high returned-mail costs ($120,000-$750,000 every quarter), priority mail upgrades, statement reprints, manual change capture and shredding costs, as well as the occasional USPS fine.
- Data quality, as unbelievable as it sounds, took a back seat to implementing a new customer data warehouse, a new claims system, a new risk data mart, etc. They all just get filled with the same old, bad data, as business users were – and I quote – “used to the quality problem already”.
- Premium underpricing runs at 2-4% annually, foregoing millions in additional revenue, due to lack of a full risk profile.
- Customer cost-of-acquisition (CAC) is unknown or incorrect, as there is no direct, realistic tracking of agency campaign/education dollars spent against new policies written.
- Agency historic production and projections are unclear, as dynamic enforcement of hierarchies is not available, resulting in orphaned policies generating excess tax burdens. Often this is the case when agents move to states where they are not licensed, pass away or retire.
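The e-mail capture gap in the list above is partly a validation problem at the point of entry. A minimal, stdlib-only sketch of syntactic validation follows; names are illustrative, and real verification would also do a DNS MX lookup and send a confirmation link, neither of which is shown here.

```python
import re

# Illustrative sketch: basic syntactic e-mail validation at capture time.
# A full pipeline would add a DNS MX check and a confirmation e-mail.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def looks_like_email(address: str) -> bool:
    """Reject obviously malformed addresses before they enter the CRM."""
    return bool(EMAIL_RE.match(address.strip()))

print(looks_like_email("participant@example.com"))  # True
print(looks_like_email("participant@invalid"))      # False (no domain TLD)
```

Even a check this simple, enforced in the agent and policyholder portals, keeps unverifiable addresses out of downstream statements and correspondence workflows.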
What does a cutting-edge insurer look like instead? Ask Carrie Mathison and Saul Berenson. They already have a risk and customer EDW as well as a modern (cloud-based?) CRM and claims mgmt system. They have considered, as part of their original implementation or upgrade, the capabilities required to fix the initial seed data in their analytics platforms. Now they are looking into pushing those fixes back into operational systems like CRM, avoiding bad source-system entries from the get-go.
They are also beyond using data merely to avoid throwing more bodies in every department at “flavor-of-the-month” clean-up projects, e.g. yet another state unclaimed-property matching exercise, or a review of total annual premium revenue written in state X for tax purposes by the state tax authority.
So what causes this drastic segmentation of leading versus laggard life insurers? In my humble opinion, it is the lack of a strategic refocusing on what the insurer can do for an agent by touching prospects and customers directly. Direct interaction (even limited) improves branding and shortens the sales cycle, based on improved insights through better data quality.
Agents (and insurers) need to understand that the wealth of data (demographic, interaction, transaction) the corporate office already possesses, both native and inherited (via M&A), can be a powerful competitive differentiator. Imagine if they started tapping into external sources beyond the standard credit bureaus and consumer databases; dare I say social media?
Competing on immediate instead of long-term needs (in insurance: lifetime earnings-potential replacement), price (fees) and commission cannot be the sole answer.
If you use production data in test and development environments or are looking for alternative approaches, register for the first webinar in a three part series on data security gaps and remediation. On December 9th, Adrian Lane, Security Analyst at Securosis, will join me to discuss security for test environments.
This first webinar will focus on how data-centric security can shore up vulnerabilities in one key focus area: test and development environments. It is common practice that non-production database environments are created by making copies of production data. This potentially exposes sensitive and confidential production data to developers, testers, and contractors alike. Commonly, 6-10 copies of production databases are created for each application environment, and they are regularly provisioned to support development, testing and training efforts. Since the security controls deployed for the source database are not replicated in the test environments, this is a glaring hole in data security and a target for external or internal exploits.
In this webinar, we will cover:
- Key trends in enterprise data security
- Vulnerabilities in non-production application environments (test and development)
- Alternatives to consider when protecting test and development environments
- Priorities for enterprises in reducing attack surface for their organization
- Compliance and internal audit cost reduction
- Data masking and synthetic data use cases
- Informatica Secure Testing capabilities
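One way to close the gap described above is to mask a production extract before it is ever provisioned to a test environment. The sketch below is a toy illustration with hypothetical column names; a product-grade tool would do this at database scale with consistency guarantees across tables.

```python
# Sketch: mask sensitive columns of a production extract before it is
# handed to developers or testers. Column names are hypothetical.
import csv
import hashlib
import io

SENSITIVE = {"name", "email", "ssn"}  # columns to mask in non-production copies

def mask(field: str, value: str, salt: str = "test-env-salt") -> str:
    """Replace sensitive values with stable tokens; pass others through."""
    if field not in SENSITIVE or not value:
        return value
    return hashlib.sha256((salt + value).encode()).hexdigest()[:10]

def mask_extract(src_csv: str) -> str:
    """Read a production CSV extract and emit a masked copy for test use."""
    reader = csv.DictReader(io.StringIO(src_csv))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        writer.writerow({k: mask(k, v) for k, v in row.items()})
    return out.getvalue()

prod = "id,name,email,balance\n1,Jane Doe,jane@example.com,100.00\n"
print(mask_extract(prod))  # id and balance survive; name and email become tokens
```

Because the masking is deterministic, the same customer still masks to the same token across every provisioned copy, so test joins and lookups keep working without exposing real identities.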
Register for the webinar today at http://infa.media/1pohKov. If you cannot attend the live event, be sure to watch the webinar on-demand.
The Healthcare and Life Sciences industry has demonstrated its ability to take advantage of data to fuel research, explore new ways to cure life threatening diseases, and save lives. With the adoption of technology innovation especially in the mobile technology segment, this industry will need to find a balance between investments and risk.
ModernMedicine.com published an article in May 2014 describing how analysts worry that a wide-scale security breach could occur in the healthcare and pharmaceuticals industry this year. The piece calls out that this industry category ranked lowest in an S&P 500 cyber health study because of its high volume of incidents and slow response rates.
In the Ponemon Institute’s research, The State of Data Centric Security, respondents from the Healthcare and Life Sciences stated the data they considered most at risk was customer, consumer and patient record data. Intellectual Property, Business Intelligence and Classified Data responses ranked a close second.
In an Informatica webinar with Alan Louie, Research Analyst from IDC Health Insights (@IDCPharmaGuru), we discussed his research on ‘Changing Times in the Life Sciences – Enabled and Empowered by Tech Innovation’. The megatrends of cloud, mobile, social networks and Big Data analytics are all moving in a positive direction, at various phases of adoption. Mobile technologies top the list of IT priorities – likely because of the productivity gains that mobile devices and applications can achieve. Security/risk management technologies ranked as the second-highest priority.
When we asked security professionals in Life Sciences in the Ponemon survey, ‘What keeps you up at night?’, the top answer was ‘migrating to new mobile platforms’. The reason I call this factoid out is that every other industry category ranked ‘not knowing where sensitive data resides’ as the biggest concern. Why is Life Sciences different from other industries?
One reason could be that the intense scrutiny over intellectual property protection and HIPAA compliance has already shone a light on where sensitive data resides. Mobile makes it difficult to track and contain a potential breach, given that cell phones are the number one item left behind in taxi cabs.
With the threat of a major breach on the horizon, and the push to leverage technology such as mobile and cloud, it is evident that the investments in security and risk management need to focus on the data itself – rather than tie it to a specific technology or platform.
Enter data-centric security. The call to action is to consider a new approach to the information security paradigm, one that emphasizes the security of the data itself rather than the security of networks or applications. Informatica recently published an eBook, ‘Data-Centric Security: New Imperatives for a New Age of Data’. Download it, read it. In an industry with so much at stake, it highlights the need for new security measures such as these. Do you agree?
I encourage your comments and welcome an open dialogue!
From this analysis in “What’s Reasonable Security? A Moving Target,” IAPP extrapolated the best practices from the FTC’s enforcement actions.
While the white paper and article indicate that “reasonable security” is a moving target, they do provide recommendations that will help organizations assess and baseline their current data security efforts. Of particular interest is the focus on data-centric security, from overall enterprise assessment to the careful control of access by employees and third parties. Here are some of the recommendations derived from the FTC’s enforcement actions that call for data-centric security:
- Perform assessments to identify reasonably foreseeable risks to the security, integrity, and confidentiality of personal information collected and stored on the network, online or in paper files.
- Limited access policies curb unnecessary security risks and minimize the number and type of network access points that an information security team must monitor for potential violations.
- Limit employee access to (and copying of) personal information, based on employee’s role.
- Implement and monitor compliance with policies and procedures for rendering information unreadable or otherwise secure in the course of disposal. Securely disposed information must not practicably be read or reconstructed.
- Restrict third party access to personal information based on business need, for example, by restricting access based on IP address, granting temporary access privileges, or similar procedures.
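The access-limitation recommendations above boil down to showing each role only the fields it needs. A minimal, hypothetical sketch (roles and field names are illustrative, not drawn from any FTC action) might look like this:

```python
# Hypothetical role-based column filter implementing "limit employee access
# to personal information based on the employee's role".
ROLE_VIEWS = {
    "support":   {"name", "email"},
    "billing":   {"name", "email", "card_last4"},
    "analytics": {"customer_id"},  # de-identified view only
}

def visible_record(record: dict, role: str) -> dict:
    """Return only the fields the given role is permitted to see."""
    allowed = ROLE_VIEWS.get(role, set())  # unknown roles see nothing
    return {k: v for k, v in record.items() if k in allowed}

rec = {"customer_id": 42, "name": "Jane", "email": "j@example.com", "card_last4": "6789"}
print(visible_record(rec, "support"))    # {'name': 'Jane', 'email': 'j@example.com'}
print(visible_record(rec, "analytics"))  # {'customer_id': 42}
```

Defaulting unknown roles to an empty view is the key design choice: access is granted explicitly per role rather than denied explicitly per field.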
How does Data Centric Security help organizations achieve this inferred baseline?
- Data Security Intelligence (Secure@Source, coming Q2 2015) provides the ability to “…identify reasonably foreseeable risks.”
- Data Masking (Dynamic and Persistent Data Masking) provides the controls to limit access of information to employees and 3rd parties.
- Data Archiving provides the means for the secure disposal of information.
Other data-centric security controls include encryption for data at rest and in motion, and tokenization for securing payment card data. All of these controls help organizations secure their data, whether a threat originates internally or externally. And based on the never-ending news of data breaches and attacks this year, it is a matter of when, not if, your organization will be significantly breached.
For 2015, “Reasonable Security” will require ongoing analysis of sensitive data and the deployment of reciprocal data centric security controls to ensure that the organizations keep pace with this “Moving Target.”
1. You already have data stewards.
Commonly, health systems think they can’t staff data governance as UPMC has because of a lack of funding. In reality, people are already doing data governance everywhere across your organization! You don’t have to secure headcount; you locate these people within the business, formalize data governance as part of their jobs, and provide them tools to improve and manage their efforts.
2. Multiple types of data stewards ensure all governance needs are being met.
Three types of data stewards were identified and tasked across the enterprise:
I. Data Steward. Create and maintain data/business definitions. Assist with defining data and mappings along with rule definition and data integrity improvement.
II. Application Steward. One steward is named per application sourcing enterprise analytics. Populate and maintain inventory, assist with data definition and prioritize data integrity issues.
III. Analytics Steward. Named for each team providing analytics. Populate and maintain inventory, reduce duplication and define rules and self-service guidelines.
3. Establish IT as an enabler.
IT, instead of taking action on data governance or being the data governor, has become an enabler of data governance by investing in and administering tools that support metadata definition and master data management.
4. Form a governance council.
UPMC formed a governance council of 29 executives—yes, that’s a big number, but UPMC is a big organization. The council is clinically led. It is co-chaired by two CMIOs and includes Marketing, Strategic Planning, Finance, Human Resources, the Health Plan, and Research. The council signs off on and prioritizes policies. Decision-making authority has to come from somewhere.
5. Avoid slowing progress with process.
In these still-early days, only 15 minutes of monthly council meetings are spent on policy and guidelines; discussion and direction take priority. For example, a recent agenda item was “Length of Stay.” The council agreed a single owner would coordinate across Finance, Quality and Care Management to define and document an enterprise definition for “Length of Stay.”
6. Use examples.
Struggling to get buy-in from the business about the importance of data governance? An example everyone can relate to is “Test Patient.” For years, in her business intelligence role, Terri worked with “Test Patient.” Investigation revealed that these fake patients end up in places they should not. There was no standard for creation or removal of test patients, which meant that test patients and their costs, outcomes, etc., were included in analysis and reporting that drove decisions inside and external to UPMC. The governance program created a policy for testing in production should the need arise.
7. Make governance personal through marketing.
Terri holds monthly round tables with business and clinical constituents. These have been a game changer: Once a month, for two hours, ten business invitees meet and talk about the program. Each attendee shares a data challenge, and Terri educates them on the program and illustrates how the program will address each challenge.
8. Deliver self-service.
Providing self-service empowers your users to gain access and control to the data they need to improve their processes. The only way to deliver self-service business intelligence is to make metadata, master data, and data quality transparent and accessible across the enterprise.
9. IT can’t do it alone.
Initially, IT was resistant to giving up control, but now the team understands that it doesn’t have the knowledge or the time to effectively do data governance alone.
10. Don’t quit!
Governance can be complicated, and it may seem like little progress is being made. Terri keeps spirits high by reminding folks that the only failure is quitting.
Getting started? Assess the data governance maturity of your organization here: http://governyourdata.com/