Tag Archives: data security

Data Security – A Major Concern in 2015

2014 ended with a ton of hype, expectations, and some drama if you are a data security professional or a business executive responsible for shareholder value.  The December attacks on Sony Pictures by North Korea caught everyone's attention, not so much over whether Sony would release "The Interview" but over how vulnerable we as a society are to these criminal acts.

I have to admit, I was one of those who saw the movie. I found the film humorous, to say the least, and can see why a desperate regime like North Korea would not want its leader shown admitting he loves margaritas and Katy Perry. What concerned me about the whole event was whether these unwanted security breaches are now simply a fact of life.  As a disclaimer, I have no particular stake in the downfall of the North Korean government; what transpired, however, was fascinating, and it is amazing that a company like Sony, one of the largest in the world, continues to struggle to protect sensitive data.

According to the Identity Theft Resource Center, there were 761 reported data security breaches in 2014, impacting over 83 million records across industries and geographies, with B2B and B2C retailers leading the pack at 79.2% of all breaches. Most of these breaches originated over the internet via malicious worms and viruses purposely designed to identify and relay back sensitive information, including credit card numbers, bank account numbers, and Social Security numbers, which criminals use to wreak havoc and inflict significant financial losses on merchants and financial institutions. According to the 2014 Ponemon Institute research study:

  • The average cost of cyber-crime per company in the US was $12.7 million this year, and US companies on average are hit with 122 successful attacks per year.
  • Globally, the average annualized cost for the surveyed organizations was $7.6 million per year, ranging from $0.5 million to $61 million per company. Interestingly, small organizations have a higher per-capita cost than large ones ($1,601 versus $437), the report found.
  • Some industries incur higher costs in a breach than others, too. Energy and utility organizations incur the priciest attacks ($13.18 million), followed closely by financial services ($12.97 million). Healthcare incurs the fewest expenses ($1.38 million), the report says.

Despite all the media attention around these awful events last year, 2015 does not look like it is going to be any better. According to CNBC just this morning, Morgan Stanley reported a data security breach: the firm fired an employee who it claims stole account data for hundreds of thousands of its wealth management clients. Stolen information for approximately 900 of those clients was posted online for a brief period of time.  With so much to gain from this rich data, businesses across industries have a tough battle ahead of them, as criminals are getting more creative and desperate in their efforts to steal sensitive information for financial gain. According to Forrester Research, the top three breach activities were:

  • Inadvertent misuse by insider (36%)
  • Loss/theft of corporate asset (32%)
  • Phishing (30%)

Given the growth in data volumes fueled by mobile, social, cloud, and electronic payments, the war against data breaches will continue to grow bigger and uglier for firms large and small.  As such, Gartner predicts that investments in information security solutions will grow a further 8.2 percent in 2015 over 2014, reaching more than $76.9 billion globally.  Furthermore, by 2018, more than half of organizations will use security services firms that specialize in data protection, security risk management, and security infrastructure management to enhance their security postures.

Like any war, you have to know your enemy and what you are defending. In the war against data breaches, this starts with knowing where your sensitive data is before you can effectively defend against any attack. According to the Ponemon Institute, only 18% of firms surveyed said they knew where their structured sensitive data was located, whereas the rest were not sure. 66% revealed that they would not be able to tell whether they had been attacked.  Even worse, 47% were not confident they had visibility into users accessing sensitive or confidential information, and 48% of those surveyed admitted to a data breach of some kind in the last 12 months.

In closing, the responsibilities of today's information security professionals, from Chief Information Security Officers to security analysts, are challenging and growing each day as criminals become more sophisticated and desperate to get their hands on one of your most important assets: your data.  As your organization looks to invest in new information security solutions, make sure you start with solutions that let you identify where your sensitive data is, so you can plan an effective data security strategy that defends both your perimeter and your sensitive data at the source.  How prepared are you?

For more information about Informatica Data Security Solutions:

  • Download the Gartner Data Masking Magic Quadrant Report
  • Click here to learn more about Informatica’s Data Masking Solutions
  • Click here to access Informatica Dynamic Data Masking: Preventing Data Breaches with Benchmark-Proven Performance whitepaper

The CISO Challenge: Articulating Data Worth and Security Economics

A few years ago, eBay's former CISO, Dave Cullinane, led a sobering coaching discussion on how to articulate and communicate the value of a security solution and its economics to a CISO's CxO peers.

Why would I blog about such old news? Because it was a great and timeless idea. And in this age of the ‘Great Data Breach’, where CISOs need all the help they can get, I thought I would share it with y’all.

Dave began by describing how to communicate the impact of an attack, whether from malware such as Aurora or Stuxnet, spear phishing, hacktivism, and so on, versus the investment required to prevent the attack.  If you are an online retailer and your web server goes down because of a major denial-of-service attack, what does that cost the business?  How much revenue is lost every minute that site is offline? Enough to put you out of business? See the figure below, which illustrates how to approach this conversation.

If the impact of a breach and the risk of losing business are high and the investment in implementing a solution is relatively low, the investment decision is an obvious one (represented by the yellow area in the upper left corner).

CISO Challenge

However, it isn’t always this easy, is it?  When determining what your company’s brand and reputation are worth, how do you develop a compelling case?

Another dimension Dave described is communicating the economics of a solution that could prevent an attack based on the probability that the attack would occur (see next figure below).

CISO Challenge

For example, consider an attack that could influence stock prices.  This is a complex scenario that is probably less likely to occur on a frequent basis, and it would require a sophisticated, multidimensional defense with integrated security analytics to correlate multiple events back to a single source.  This might place the discussion in the middle blue box, or the ‘negotiation zone’. This is where the CISO needs to know what the CxO’s risk tolerances are and articulate value in terms of the ‘coin of the realm’.
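
Dave's framework, as I recall it, is a judgment exercise rather than a formula, but for readers who want a starting point, here is a rough, back-of-the-envelope sketch of the impact-versus-investment math in Python. The figures, thresholds, and function names are illustrative assumptions, not anything from Dave's talk or a standard model.

```python
# A back-of-the-envelope sketch of the impact-vs-investment framing: estimate
# annualized loss expectancy (ALE) and compare it to the annualized cost of the
# control. All figures below are made-up examples.

def annualized_loss_expectancy(single_loss_usd: float, attacks_per_year: float) -> float:
    """ALE = expected cost of one incident x expected incidents per year."""
    return single_loss_usd * attacks_per_year

def investment_zone(ale_usd: float, control_cost_usd: float) -> str:
    """Rough classification mirroring the quadrant chart described above."""
    ratio = ale_usd / max(control_cost_usd, 1.0)
    if ratio >= 5:
        return "obvious investment (high impact, low relative cost)"
    if ratio >= 1:
        return "negotiation zone (know your CxO's risk tolerance)"
    return "hard to justify on economics alone"

# Example: a DDoS outage costing ~$50k per minute for a 60-minute outage, twice a
# year, weighed against a $500k/year mitigation service.
ale = annualized_loss_expectancy(single_loss_usd=50_000 * 60, attacks_per_year=2)
print(ale, investment_zone(ale, control_cost_usd=500_000))
```

In practice, of course, the boundaries of the 'negotiation zone' are set by your CxOs' risk tolerance and the value of brand and reputation, not by a ratio in code.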

Finally, stay on top of what the business is cooking up for new initiatives that could expose or introduce new risks.  For example, is marketing looking to spin up a data warehouse on Amazon Redshift? Anyone on the analytics team tinkering with Hadoop in the cloud? Is development planning to outsource application test and development activities to offshore systems integrators? If you are participating in any of these activities, make sure your CISO isn’t the last to know when a ‘Breach Happens’!

To learn more about ways you can mitigate risk and maintain data privacy compliance, check out the latest Gartner Data Masking Magic Quadrant.


Informatica is a Leader in the Gartner 2014 Data Masking Magic Quadrant Three Years in a Row

Informatica a Leader in Data Masking

Informatica announced this week its leadership position in the Gartner 2014 Magic Quadrant for Data Masking Technology for the third year in a row. For the first time, Informatica was positioned furthest to the right for Completeness of Vision.

In the report, Gartner states: “Global-scale scandals around sensitive data losses have highlighted the need for effective data protection, especially from insider attacks. Data masking, which is focused on protecting data from insiders and outsiders, is a must-have technology in enterprises’ and governments’ security portfolios.”

Organizations realize that data protection must be hardened against the inevitable breach, whether it originates from internal or external threats.  Data masking covers gaps in data protection, in both production and non-production environments, that can be exploited by attackers.

Informatica customers are elevating the importance of data security initiatives in 2015, given the high exposure of recent breaches and the shift from merely stealing identities and intellectual property to politically charged attacks.  This raises the concern that existing security controls are insufficient and that a more data-centric security approach is necessary.

Recent enforcement by the Federal Trade Commission in the US and emerging legislation worldwide have clearly indicated that sensitive data access and sharing should be tightly controlled; this is the strength of data masking.

Data Masking de-identifies and/or de-sensitizes private and confidential data by hiding it from those who are unauthorized to access it. Other terms for data masking include data obfuscation, sanitization, scrambling, de-identification, and anonymization.
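
To make the concept concrete, here is a toy Python sketch of what de-identification can look like for a couple of common fields. It is purely illustrative; it is not how Informatica's (or any vendor's) masking products work, and the helper names and salt are invented for this example.

```python
# A toy illustration of what data masking means in practice; not a product implementation.
import hashlib

def mask_card_number(pan: str) -> str:
    """Hide all but the last four digits of a card number."""
    digits = [c for c in pan if c.isdigit()]
    return "*" * (len(digits) - 4) + "".join(digits[-4:])

def pseudonymize_name(name: str, salt: str = "demo-salt") -> str:
    """Replace a name with a repeatable but non-reversible token."""
    return "cust_" + hashlib.sha256((salt + name).encode()).hexdigest()[:10]

print(mask_card_number("4111 1111 1111 1234"))  # -> ************1234
print(pseudonymize_name("Jane Doe"))            # -> cust_<token>
```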

To learn more, download the Gartner Magic Quadrant Data Masking Report now, and visit the Informatica website for data masking product information.

About the Magic Quadrant

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


Are The Banks Going to Make Retailers Pay for Their Poor Governance?

Retail and Data Governance

A couple of months ago, I reached out to a set of CIOs on the importance of good governance and security. All of them agreed that both were incredibly important. However, one CIO made a very pointed remark, saying that “the IT leadership at these breached companies wasn’t stupid.” He continued by saying that when selling the rest of the C-suite, the discussion needs to be about business outcomes and business benefits.  For this reason, he said, CIOs have struggled to sell the value of investments in governance and security. Now, I have suggested previously that security pays because of its impact on “brand promise,” and I still believe this.

However, this week the ante was raised even higher. A district judge ruled that a group of banks can proceed to sue a retailer for negligence in their data governance and security. The decision could clearly lead to significant changes in the way the cost of fraud is distributed among parties within the credit card ecosystem. Where once banks and merchant acquirers would have shouldered the burden of fraud, this decision paves the way for more card-issuing banks to sue merchants for not adequately protecting their POS systems.

Accidents waste priceless time

The judge’s ruling said that “although the third-party hackers’ activities caused harm, [the] merchant played a key role in allowing the harm to occur.” The judge also determined that the banks’ suit against the merchant was valid because the plaintiffs adequately showed that the retailer failed “to disclose that its data security systems were deficient.” This is interesting because it says that security systems should be sufficient and, if they are not, retailers need to inform potentially affected stakeholders of the deficiency. And while taking this step could avoid a lawsuit, it would likely increase the cost of interchange for riskier merchants. This would effectively create a risk premium for retailers that do not adequately govern and protect their IT environments.

There are broad implications for all companies that end up harming customers, partners, or other stakeholders by not keeping their security systems up to snuff. The question is, will this give good governance enough of a business outcome and benefit that businesses will actually want to pay it forward, i.e. invest in good governance and security? What do you think? I would love to hear from you.

Related links

Solutions:

Enterprise Level Data Security

Hacking: How Ready Is Your Enterprise?

Gambling With Your Customer’s Financial Data

The State of Data Centric Security

Twitter: @MylesSuer

 


Big Data Helps Insurance Companies Create A More Protected World

One of the most data-driven industries in the world is the insurance industry.  If you think about it, the entire business model of insurance companies is based on pooling the perceived risk of catastrophic events in a way that is viable and profitable for the insurer.  So it is no surprise that the insurance industry has been one of the earliest adopters of big data technologies like Hadoop.

Insurance companies serve as a fantastic example of big data technology use since data is such a pervasive asset in the business.  From a cost savings and risk mitigation standpoint, insurance companies use data to assess the probable maximum loss of catastrophic events as well as detect the potential for fraudulent claims.  From a revenue growth standpoint, insurance companies use data to intelligently price new insurance offerings and deploy cross-sell offers to customers to maximize their lifetime value.

New data sources are enabling insurance companies to mitigate risk and grow revenues even more effectively.  Location-based data from mobile devices and sensors inside insured properties is being used to proactively detect exposure to catastrophic events and deploy preventive maintenance.  For example, automobile insurance providers are increasingly offering usage-based driving programs, whereby insured individuals install a sensor inside their car that relays the quality of their driving back to their insurance provider in exchange for lower premiums.  Even healthcare insurance providers are starting to analyze the data collected by wearable fitness bands and smart watches to monitor insured individuals and inform them of personalized ways to be healthier.  Devices can also be deployed in the environments that trigger adverse events, such as sensors that monitor earthquake and weather patterns, to help mitigate the costs of potential events.  Claims are increasingly submitted with supporting information in a variety of formats, like text files, spreadsheets, and PDFs, that can be mined for insights as well.  And with the growth of online insurance sales, web log and clickstream data is more important than ever in helping to drive online revenue.

Beyond the benefits of using new data sources to assess risk and grow revenues, big data technologies are enabling insurance companies to fundamentally rethink the basis of their analytical architecture.  In the past, probable maximum loss modeling could only be performed on statistically aggregated datasets.  But with big data technologies, insurance companies have the opportunity to analyze data at the level of an insured individual or a unique insurance claim.  This increased depth of analysis has the potential to radically improve the quality and accuracy of risk models and market predictions.

Informatica is helping insurance companies accelerate the benefits of big data technologies.  With multiple styles of ingestion available, Informatica enables insurance companies to leverage nearly any source of data.  Informatica Big Data Edition provides comprehensive data transformations for ETL and data quality, so that insurance companies can profile, parse, integrate, cleanse, and refine data using a simple user-friendly visual development environment.  With built-in data lineage tracking and support for data masking, Informatica helps insurance companies ensure regulatory compliance across all data.
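
The work described above happens in a visual development environment rather than in hand-written code, but for readers who think in code, here is a rough plain-Python analogy of a "profile, then cleanse" step on a handful of claim records. The field names and rules are invented for illustration and are not the Big Data Edition itself.

```python
# A rough, illustrative "profile then cleanse" pass over toy claim records.
claims = [
    {"claim_id": "C-001", "amount": "1200.50", "state": "oh"},
    {"claim_id": "C-002", "amount": "",        "state": "OH"},
    {"claim_id": "C-003", "amount": "930",     "state": "Ohio"},
]

# Profile: how many records are missing an amount?
missing_amount = sum(1 for c in claims if not c["amount"])
print(f"{missing_amount} of {len(claims)} claims are missing an amount")

# Cleanse: standardize state codes and cast amounts, leaving bad values as None.
STATE_MAP = {"oh": "OH", "ohio": "OH"}
for c in claims:
    c["state"] = STATE_MAP.get(c["state"].lower(), c["state"].upper())
    c["amount"] = float(c["amount"]) if c["amount"] else None
print(claims)
```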

To try out the Big Data Edition, download a free trial today in the Informatica Marketplace and get started with big data today!


When Data Integration Saves Lives

In an article published in Health Informatics, author Gabriel Perna argues that data integration could save lives as we learn more about illnesses and their causal relationships.

According to the article, in Hamilton County, Ohio, it is not unusual to see kids from the same neighborhoods coming to the hospital for asthma attacks.  Researchers wanted to know whether it was fact or mistaken perception that an unusually high number of children in the same neighborhood were experiencing asthma attacks.  The next step was to review existing data to determine the extent of the issue, and perhaps how to solve the problem altogether.

“The researchers studied 4,355 children between the ages of 1 and 16 who visited the emergency department or were hospitalized for asthma at Cincinnati Children’s between January 2009 and December 2012. They tracked those kids for 12 months to see if they returned to the ED or were readmitted for asthma.”

Not only were the researchers able to determine a sound correlation between the two data sets, but they were able to advance the research to predict which kids were at high risk based upon where they live.  Thus, some of the causes and effects have been determined.

This came about when researchers began thinking outside the box about traditional and non-traditional medical data.  They integrated housing and census data, in this case, with the data from the diagnosis and treatment of the patients.  These are data sets unlikely to find their way to each other, but together they have a meaning that is much more valuable than if they had just stayed in their respective silos.
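
As a simplified illustration of that kind of integration, the sketch below joins a few asthma-visit records to housing and census attributes on a shared neighborhood (census tract) key. All of the field names and figures are invented; the point is only that two silos linked on a common key say more together than apart.

```python
# Illustrative only: link clinical visits to neighborhood attributes by census tract.
visits = [
    {"patient_id": 1, "tract": "39061-0042", "readmitted": True},
    {"patient_id": 2, "tract": "39061-0042", "readmitted": True},
    {"patient_id": 3, "tract": "39061-0101", "readmitted": False},
]
housing = {
    "39061-0042": {"median_income": 24_000, "pct_substandard_housing": 0.31},
    "39061-0101": {"median_income": 61_000, "pct_substandard_housing": 0.04},
}

# Join the two silos on the tract key and compare readmission rates by neighborhood.
by_tract = {}
for v in visits:
    stats = by_tract.setdefault(
        v["tract"], {"visits": 0, "readmits": 0, **housing[v["tract"]]}
    )
    stats["visits"] += 1
    stats["readmits"] += int(v["readmitted"])

for tract, s in by_tract.items():
    print(tract, s["readmits"] / s["visits"], s["pct_substandard_housing"])
```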

“Non-traditional medical data integration has begun to take place in some medical collaborative environments already. The New York-Presbyterian Regional Health Collaborative created a medical village, which ‘goes beyond the established patient-centered medical home mode.’ It not only connects an academic medical center with a large ambulatory network, medical homes, and other providers with each other, but community resources such as school-based clinics and specialty-care centers (the ones that are a part of NYP’s network).”

The fact of the matter is that data is the key to understanding what is going on when clusters of sick people begin to emerge.  While researchers and doctors can treat the individual patients, there is not a good understanding of the larger issues that may be at play, in this case, poor air quality in poor neighborhoods.  With the integrated data, they understand what problem needs to be corrected.

The universal sharing of data is really the larger solution here, but one that won't happen without a common understanding of its value, and without funding.  As we pass laws around the administration of health care, as well as around how data is to be handled, perhaps it is time we look at what the data actually means.  This requires a massive deployment of data integration technology, and a fundamental push to share data with a central data repository, as well as with health care providers.


Homeland = Your Average Life Insurance Company?

Or in other words: Did the agency model kill data quality?  When you watch the TV series “Homeland”, you quickly realize the interdependence between field operatives and the command center.  This is a classic agency model.  One arm gathers, filters and synthesizes information and prepares a plan, but the guys on the ground use this intel to guide their sometimes ad hoc next moves.

Over the last few months I worked a lot – and I mean A LOT – with a variety of mid-sized life insurers (<$1B annual revenue) on fixing their legacy-inherited data quality problems.  Their IT departments, functioning like operations command centers (intel acquisition, filtering and synthesis), were inundated with requests to fix up and serve a coherent, true, enriched central view of a participant (the target) and all of his policies and related entities, from and to all relevant business lines (planning), so those lines could achieve their respective missions (service, retain, upsell, mitigate risk): employee benefits, broker/dealer, retirement services, etc.

The captive and often independent agents (execution), however, often run into an operation (the sales cycle) with little useful data, as the Ops Center is short on timely and complete information.  Imagine Carrie directing her strike team to just wing it based on their experience and dated intel from a raid a few months ago, without real-time drone video feeds.  Would she be saying, “Guys, it’s always been your neck, you’ll figure it out”?  I think not.

This becomes apparent when talking to the actuaries, claims operations, marketing, sales, agency operations, audit, finance, strategic planning, underwriting and customer service; common denominators appeared quickly:

  • Every insurer saw the need to become customer instead of policy centric. That’s the good news.
  • Every insurer knew their data was majorly sub-standard in terms of quality and richness.
  • Every insurer agreed that they are not using existing data capture tools (web portals for agents and policy holders, CRM applications, policy and claims mgmt systems) to their true potential.
  • New books-of-business were generally managed as separate entities from a commercial as well as IT systems perspective, even if they were not net-new products, like trust products.  Cross-selling as such, even if desired, becomes a major infrastructure hurdle.
  • As in every industry, the knee-jerk reaction was to throw the IT folks at data quality problems and make it a back-office function.  A Pyrrhic victory.
  • Upsell scenarios, if at all strategically targeted, are squarely in the hands of the independent agents.  The insurer will, at most, support customer insights around lapse avoidance or 401k roll-off indicators for retiring or terminated plan participants.  This may be derived from a plan sponsor (employer) census file, which may have incorrect address information.
  • Prospect and participant e-mail addresses are either not captured (due to process enforcement or system capability) or not validated (domain, e-mail verification), so the vast majority of customer correspondence, like scenarios, statements, privacy notices and policy changes, travels via snail mail (and typically per policy). Overall, only 15-50% of contacts have even an unverified e-mail address today, and of these, less than a third subscribed to exclusive electronic statement delivery (a minimal validation sketch follows this list).
  • Postal address information is still not 99% correct, complete or current, resulting in high returned mail ($120,000-$750,000 every quarter), priority mail upgrades, statement reprints, manual change capture and shredding cost as well as the occasional USPS fine.
  • Data quality, as unbelievable as it sounds, took a back seat to implementing a new customer data warehouse, a new claims system, a new risk data mart, etc.  They all just get filled with the same old, bad data, as business users were – and I quote – “used to the quality problem already”.
  • Premium underpricing runs at 2-4% annually, foregoing millions in additional revenue, due to lack of a full risk profile.
  • Customer cost-of-acquisition (CAC) is unknown or incorrect, as there is no direct, realistic tracking of agency campaign/education dollars spent against new policies written.
  • Agency historic production and projections are unclear, as dynamic enforcement of hierarchies is not available, resulting in orphaned policies that generate excess tax burdens.  Often this is the case when agents move to states where they are not licensed, pass away, or retire.
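
As referenced in the e-mail bullet above, here is a minimal Python sketch of the kind of syntax and domain check an insurer could run at capture time. Real verification services go much further (mailbox pings, MX-record checks, suppression lists); this is only an assumption-laden illustration, not a production validator.

```python
# Minimal illustration of a capture-time e-mail check: syntax test plus a DNS lookup
# to confirm the domain resolves at all.
import re
import socket

EMAIL_RE = re.compile(r"^[^@\s]+@([^@\s]+\.[^@\s]+)$")

def basic_email_check(address: str) -> bool:
    match = EMAIL_RE.match(address)
    if not match:
        return False  # fails the simple syntax check
    domain = match.group(1)
    try:
        socket.getaddrinfo(domain, None)  # does the domain resolve?
        return True
    except socket.gaierror:
        return False

print(basic_email_check("policyholder@example.com"))
print(basic_email_check("not-an-address"))
```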

What does a cutting-edge insurer look like instead?  Ask Carrie Mathison and Saul Berenson.  They already have a risk and customer EDW as well as a modern (cloud-based?) CRM and claims mgmt system.  They have considered, as part of their original implementation or upgrade, the capabilities required to fix the initial seed data loaded into their analytics platforms.  Now they are looking into pushing those fixes back into operational systems like CRM and avoiding bad source-system entries from the get-go.

They are also beyond using data merely to avoid throwing more bodies in every department at “flavor-of-the-month” clean-up projects, e.g. yet another state unclaimed-property matching exercise, or a tally of total annual premium revenue written in state X for review by the state tax authority.

So what causes this drastic segmentation of leading versus laggard life insurers?  In my humble opinion, it is the lack of a strategic refocusing on what the insurer can do for an agent by touching prospects and customers directly.  Direct interaction (even limited) improves branding, shortens the sales cycle, and improves rates, based on better insights gained through better data quality.

Agents (and insurers) need to understand that the wealth of data (demographics, interactions, transactions) the corporation already possesses, both natively and inherited via M&A, can be a powerful competitive differentiator.  Imagine if they start tapping into external sources beyond the standard credit bureaus and consumer databases; dare I say social media?

Competing based on immediate instead of long-term needs (in insurance: lifetime earnings potential replacement), price (fees) and commission cannot be the sole answer.


Securing Sensitive Data in Test and Development Environments

Securing Test and Development Environments

Do you use copies of production data in test and development environments? This is common practice in IT organizations. For this reason, test environments have become the number one target for outside intruders. That being said, most data breaches occur when non-malicious insiders accidentally expose sensitive data in the course of their daily work. Insider data breaches can be more serious and harder to detect than intruder events.

If you use production data in test and development environments, or are looking for alternative approaches, register for the first webinar in our three-part series on data security gaps and remediation.  On December 9th, Adrian Lane, Security Analyst at Securosis, will join me to discuss security for test environments.

This webinar will focus on how data-centric security can be used to shore up vulnerabilities in one of the key focus areas: test and development environments. It is common practice for non-production database environments to be created by making copies of production data, which potentially exposes sensitive and confidential production data to developers, testers, and contractors alike. Commonly, 6-10 copies of production databases are created for each application environment, and they are regularly provisioned to support development, testing and training efforts. Since the security controls deployed for the source database are not replicated in the test environments, this is a glaring hole in data security and a target for external or internal exploits.
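
As a taste of what the webinar covers, here is a simplified Python sketch of the core idea: mask a production record before it is provisioned into a test copy, so testers see realistic but de-identified data. The field names and masking rules are invented for illustration and are not Informatica's Secure Testing implementation.

```python
# Illustrative only: mask a production row before copying it into a test environment.
import hashlib
import random

def mask_row(row: dict) -> dict:
    """Return a test-safe copy of a production record."""
    masked = dict(row)
    masked["name"] = "Customer_" + hashlib.sha256(row["name"].encode()).hexdigest()[:8]
    masked["ssn"] = "XXX-XX-" + row["ssn"][-4:]
    # Keep balances realistic-looking without exposing the real figure.
    masked["balance"] = round(row["balance"] * random.uniform(0.8, 1.2), 2)
    return masked

production = [{"name": "Jane Doe", "ssn": "123-45-6789", "balance": 10432.17}]
test_copy = [mask_row(r) for r in production]
print(test_copy)
```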

In this webinar, we will cover:

  • Key trends in enterprise data security
  • Vulnerabilities in non-production application environments (test and development)
  • Alternatives to consider when protecting test and development environments
  • Priorities for enterprises in reducing attack surface for their organization
  • Compliance and internal audit cost reduction
  • Data masking and synthetic data use cases
  • Informatica Secure Testing capabilities

Register for the webinar today at http://infa.media/1pohKov. If you cannot attend the live event, be sure to watch the webinar on-demand.


IDC Life Sciences and Ponemon Research Highlights Need for New Security Measures

The Healthcare and Life Sciences industry has demonstrated its ability to take advantage of data to fuel research, explore new ways to cure life-threatening diseases, and save lives.  With the adoption of technology innovation, especially in the mobile segment, this industry will need to find a balance between investment and risk.

ModernMedicine.com published an article in May 2014 stating that analysts worry a wide-scale security breach could occur in the healthcare and pharmaceutical industry this year.  The piece calls out that this industry category ranked the lowest in an S&P 500 cyber health study because of its high volume of incidents and slow response rates.

In the Ponemon Institute’s research, The State of Data Centric Security, respondents from Healthcare and Life Sciences organizations stated that the data they considered most at risk was customer, consumer and patient record data.  Intellectual property, business intelligence and classified data ranked a close second.

In an Informatica webinar with Alan Louie, Research Analyst from IDC Health Insights (@IDCPharmaGuru), we discussed his research on ‘Changing Times in the Life Sciences – Enabled and Empowered by Tech Innovation’.  The megatrends of cloud, mobile, social networks and Big Data analytics are all moving in a positive direction, with various phases of adoption.  Mobile technologies top the list of IT priorities, likely because of the productivity gains that mobile devices and applications can deliver. Security/risk management technologies are the second-highest priority.

When we asked Security Professionals in Life Sciences in the Ponemon Survey, ‘What keeps you up at night?’, the top answer was ‘migrating to new mobile platforms’.  The reason I call this factoid out is that all other industry categories ranked ‘not knowing where sensitive data resides’ as the biggest concern. Why is Life Sciences different from other industries?

One reason could be that the intense scrutiny over intellectual property protection and HIPAA compliance has already shone a light on where sensitive data resides. Mobile makes it difficult to track and contain a potential breach, given that cell phones are the number one item left behind in taxi cabs.

With the threat of a major breach on the horizon, and the push to leverage technology such as mobile and cloud, it is evident that the investments in security and risk management need to focus on the data itself – rather than tie it to a specific technology or platform.

Enter data-centric security.  The call to action is to consider a new approach to the information security paradigm, one that emphasizes the security of the data itself rather than the security of networks or applications.   Informatica recently published an eBook, ‘Data-Centric Security: New Imperatives for a New Age of Data’.  Download it and read it. In an industry with so much at stake, we highlight the need for new security measures such as these. Do you agree?

I encourage your comments and open the dialogue!


Just In Time For the Holidays: How The FTC Defines Reasonable Security

Recently, the International Association of Privacy Professionals (IAPP, www.privacyassociation.org) published a white paper that analyzed the Federal Trade Commission’s (FTC) data security/breach enforcement. These enforcement actions include organizations from the finance, retail, technology and healthcare industries within the United States.

From this analysis in “What’s Reasonable Security? A Moving Target,” IAPP extrapolated the best practices from the FTC’s enforcement actions.

While the white paper and article indicate that “reasonable security” is a moving target, they do provide recommendations that will help organizations assess and baseline their current data security efforts.  Interesting is the focus on data-centric security, from overall enterprise assessment to careful control of access by employees and third parties.  Here are some of the recommendations derived from the FTC’s enforcement actions that call for data-centric security:

  • Perform assessments to identify reasonably foreseeable risks to the security, integrity, and confidentiality of personal information collected and stored on the network, online or in paper files.
  • Limited access policies curb unnecessary security risks and minimize the number and type of network access points that an information security team must monitor for potential violations.
  • Limit employee access to (and copying of) personal information, based on the employee’s role.
  • Implement and monitor compliance with policies and procedures for rendering information unreadable or otherwise secure in the course of disposal. Securely disposed information must not practicably be read or reconstructed.
  • Restrict third party access to personal information based on business need, for example, by restricting access based on IP address, granting temporary access privileges, or similar procedures.
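
As one concrete reading of that last recommendation, here is a minimal Python sketch of restricting third-party access by IP range combined with a time-boxed access grant. The structure, names, and values are assumptions for illustration, not an FTC-endorsed control.

```python
# Illustrative third-party access check: IP allow-list plus temporary, expiring grants.
from datetime import datetime, timedelta
from ipaddress import ip_address, ip_network

ALLOWED_NETWORKS = [ip_network("203.0.113.0/24")]                    # contractor VPN range
TEMP_GRANTS = {"vendor_a": datetime.utcnow() + timedelta(hours=8)}   # expires tonight

def third_party_allowed(vendor: str, source_ip: str) -> bool:
    in_range = any(ip_address(source_ip) in net for net in ALLOWED_NETWORKS)
    grant_valid = TEMP_GRANTS.get(vendor, datetime.min) > datetime.utcnow()
    return in_range and grant_valid

print(third_party_allowed("vendor_a", "203.0.113.45"))  # True while the grant is live
print(third_party_allowed("vendor_b", "198.51.100.7"))  # False: no grant, wrong range
```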

How does data-centric security help organizations achieve this inferred baseline?

  1. Data Security Intelligence (Secure@Source, coming Q2 2015) provides the ability to “…identify reasonably foreseeable risks.”
  2. Data Masking (Dynamic and Persistent Data Masking) provides the controls to limit employee and third-party access to information.
  3. Data Archiving provides the means for the secure disposal of information.

Other data-centric security controls include encryption for data at rest and in motion, and tokenization for securing payment card data.  All of these controls help organizations secure their data, whether a threat originates internally or externally.   And based on the never-ending news of data breaches and attacks this year, it is a matter of when, not if, your organization will be significantly breached.
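
Since tokenization comes up less often than masking, here is a toy Python sketch of the concept: the application keeps only a surrogate token while the real card number lives in a tightly controlled vault. Real deployments use hardened vault services or format-preserving encryption; this sketch only shows the shape of the idea.

```python
# Toy tokenization example: the real PAN lives only in a secure vault; the
# application stores an unrelated surrogate token.
import secrets

vault = {}  # token -> real PAN, kept in a tightly controlled store

def tokenize(pan: str) -> str:
    token = "tok_" + secrets.token_hex(8)
    vault[token] = pan
    return token

def detokenize(token: str) -> str:
    return vault[token]  # only callable from the secured vault service

token = tokenize("4111111111111234")
print(token)              # safe to store in the application database
print(detokenize(token))  # retrievable only where the vault is reachable
```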

For 2015, “reasonable security” will require ongoing analysis of sensitive data and the deployment of corresponding data-centric security controls to ensure that organizations keep pace with this “moving target.”
