Category Archives: Data masking

Which Method of Controls Should You Use to Protect Sensitive Data in Databases and Enterprise Applications? Part II

Protecting Sensitive Data

To determine the appropriate method for protecting sensitive data, you should first answer the following questions about your requirements:

  • Do you need to protect data at rest (in storage), during transmission, and/or when it is accessed?
  • Do some privileged users still need the ability to view the original sensitive data, or must sensitive data be obfuscated at all levels?
  • What granularity of control do you need?
    • Datafile level
    • Table level
    • Row level
    • Field / column level
    • Cell level
  • Do you need to be able to control viewing vs. modification of sensitive data?
  • Do you need to maintain the original characteristics / format of the data (e.g. for testing, demo, or development purposes)?
  • Is response-time latency / performance of high importance for the application? This can be the case for mission-critical production applications that must maintain response times on the order of seconds or sub-seconds.

To help you determine which method of control is appropriate for your requirements, the following table compares the different methods and their characteristics.

[Comparison table: sensitive data protection methods and their characteristics]
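As a rough illustration of how such a comparison can drive a decision, the sketch below encodes the characteristics described in Part I (further down this page) as a lookup; the attribute names and values are a simplified summary, not a reproduction of the original table.

```python
# A simplified method-comparison lookup, summarizing characteristics
# described in Part I below. Attribute names and values are illustrative.
METHODS = {
    "encryption": {
        "granularity": "file/table/partition",
        "reversible": True,            # anyone with the key can decrypt
        "preserves_format": False,
        "app_changes_needed": False,
    },
    "persistent_masking": {
        "granularity": "field/column",
        "reversible": False,           # data is permanently masked
        "preserves_format": True,      # keeps test data realistic
        "app_changes_needed": False,
    },
    "dynamic_masking": {
        "granularity": "field/column",
        "reversible": True,            # the original stays in the database
        "preserves_format": True,
        "app_changes_needed": False,   # a proxy rewrites requests
    },
    "tokenization": {
        "granularity": "field/column",
        "reversible": True,            # via the token server
        "preserves_format": True,
        "app_changes_needed": True,    # tokenize/detokenize calls
    },
}

def methods_matching(**requirements):
    """Return the methods whose characteristics satisfy every requirement."""
    return [name for name, traits in METHODS.items()
            if all(traits.get(key) == value for key, value in requirements.items())]

# e.g. non-production data that must keep its format and never be recoverable:
print(methods_matching(reversible=False, preserves_format=True))
# ['persistent_masking']
```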

A combination of protection methods may be appropriate, based on your requirements. For example, to protect data in non-production environments, you may want to use persistent data masking to ensure that no one has access to the original production data, since no one there needs it. This is especially true if your development and testing are outsourced to third parties. In addition, persistent data masking maintains the original characteristics of the data, which helps ensure test data quality.

In production environments, you may want to use a combination of encryption and dynamic data masking. This is the case if you want to ensure that all data at rest is protected against unauthorized users, yet need to mask sensitive fields only from certain sets of users, even authorized or privileged ones, while the rest of your users can view the data in the clear.

The best method, or combination of methods, will depend on the scenario and requirements of your environment and organization. As with any technology and solution, there is no one-size-fits-all answer.


Which Method of Controls Should You Use to Protect Sensitive Data in Databases and Enterprise Applications? Part I

Protecting Sensitive Data

I’m often asked to share my thoughts about protecting sensitive data. The questions that typically come up include:

  • Which types of data should be protected?
  • Which data should be classified as “sensitive”?
  • Where is this sensitive data located?
  • Which groups of users should have access to this data?

Because these questions come up frequently, it seems worthwhile to share a few guidelines on the topic.

When protecting the confidentiality and integrity of data, the first line of defense is authentication and access control. However, data with higher levels of sensitivity or confidentiality may require additional levels of protection, beyond regular authentication and authorization methods.

There are a number of control methods for securing sensitive data available in the market today, including:

  • Encryption
  • Persistent (Static) Data Masking
  • Dynamic Data Masking
  • Tokenization
  • Retention management and purging

Encryption is a cryptographic method of encoding data. There are generally two methods of encryption: symmetric (using a single secret key) and asymmetric (using public and private keys). Although there are methods of deciphering encrypted information without possessing the key, a good encryption algorithm makes it very difficult to decode the encrypted data without knowledge of the key. Key management is usually a central concern with this method of control. Encryption is ideal for mass protection of data (e.g. an entire data file, table, or partition) against unauthorized users.
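As a minimal sketch of the symmetric case, the snippet below uses the Fernet recipe from the open-source Python cryptography package; the sample value is illustrative, and in a real deployment the key would come from a key management system rather than being generated inline.

```python
# A minimal sketch of symmetric (single secret key) encryption using the
# "cryptography" package (pip install cryptography). The sample value is
# illustrative; real keys belong in a key management system, not in code.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the single secret key
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"4111-1111-1111-1111")  # what is stored at rest
plaintext = cipher.decrypt(ciphertext)               # only key holders can do this
assert plaintext == b"4111-1111-1111-1111"
```

Conceptually, an asymmetric scheme differs only in that the encrypting key (public) and the decrypting key (private) are not the same.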

Persistent or static data masking obfuscates data at rest, in storage. There is usually no way to retrieve the original data; it is permanently masked. There are multiple techniques for masking data, including shuffling, substitution, aging, encryption, domain-specific masking (e.g. email addresses, IP addresses, credit card numbers), dictionary lookup, and randomization. Depending on the technique, there may be ways to perform reverse masking; this should be used sparingly. Persistent masking is ideal for cases where no user should see the original sensitive data (e.g. in test / development environments) and field-level data protection is required.
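To make two of these techniques concrete, here is a minimal sketch of deterministic substitution and domain-specific masking in Python; the record, substitution dictionary, and column names are hypothetical.

```python
# A minimal sketch of two persistent-masking techniques: deterministic
# substitution for names and domain-specific masking for email addresses.
# The record and substitution dictionary are hypothetical.
import hashlib
import random

SUBSTITUTE_NAMES = ["Alex Smith", "Sam Jones", "Jordan Lee", "Taylor Brown"]

def mask_name(name: str) -> str:
    # Deterministic substitution: the same input always yields the same
    # replacement, preserving referential integrity across tables.
    digest = int(hashlib.sha256(name.encode()).hexdigest(), 16)
    return SUBSTITUTE_NAMES[digest % len(SUBSTITUTE_NAMES)]

def mask_email(email: str) -> str:
    # Domain-specific masking: keep the user@domain format so the masked
    # value still passes application validation in test environments.
    _, _, domain = email.partition("@")
    return f"user{random.randint(1000, 9999)}@{domain}"

row = {"name": "Maria Garcia", "email": "maria.garcia@example.com"}
masked = {"name": mask_name(row["name"]), "email": mask_email(row["email"])}
print(masked)  # the original values are gone for good in the masked copy
```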

Dynamic data masking de-identifies data when it is accessed; the original data is still stored in the database. Dynamic data masking (DDM) acts as a proxy between the application and the database and rewrites the user’s or application’s request depending on whether the user has the privilege to view the data. If the requested data is not sensitive, or the user is a privileged user with permission to access the sensitive data, the DDM proxy passes the request to the database without modification, and the result set is returned to the user in the clear. If the data is sensitive and the user does not have the privilege to view it, the DDM proxy rewrites the request to include a masking function and passes it to the database to execute. The result is returned to the user with the sensitive data masked. Dynamic data masking is ideal for protecting sensitive fields in production systems where application changes are difficult or disruptive to implement and performance / response time is of high importance.
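The rewrite step can be illustrated with a toy proxy function; the privilege set, sensitive-column list, and string-based rewriting below are simplifications, since a real DDM product parses SQL properly rather than substituting strings.

```python
# A toy illustration of a DDM proxy rewriting a query. The privilege set,
# column list, and masking expression are hypothetical, and real products
# use a SQL parser instead of string substitution.
PRIVILEGED_USERS = {"dba_audit"}
SENSITIVE_COLUMNS = {"ssn"}

def rewrite(user: str, sql: str) -> str:
    if user in PRIVILEGED_USERS:
        return sql  # privileged: pass through, results come back in the clear
    for col in SENSITIVE_COLUMNS:
        # Unprivileged: wrap the sensitive column in a masking expression
        # that reveals only the last four characters.
        sql = sql.replace(col, f"CONCAT('***-**-', RIGHT({col}, 4)) AS {col}")
    return sql

print(rewrite("analyst", "SELECT name, ssn FROM customers"))
# SELECT name, CONCAT('***-**-', RIGHT(ssn, 4)) AS ssn FROM customers
```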

Tokenization substitutes a sensitive data element with a non-sensitive data element, or token. First-generation tokenization systems require a token server and a database to store the original sensitive data. The mapping from clear text to token makes it very difficult to reverse the token back to the original data without the token system. However, the very existence of a server and database storing the original sensitive data makes them a potential security vulnerability, a scalability bottleneck, and a single point of failure. Next-generation tokenization systems have addressed these weaknesses. Tokenization does, however, require changes to the application layer to tokenize and detokenize data when it is accessed. Tokenization can be used in production systems to protect sensitive data at rest in the database store, when changes to the application layer can be made relatively easily to perform the tokenization / detokenization operations.
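Here is a minimal sketch of that first-generation, vaulted flow; the in-memory dict stands in for the token server and mapping database whose weaknesses the paragraph describes.

```python
# A minimal sketch of first-generation (vaulted) tokenization. The dict
# stands in for the token server and mapping database; centralizing the
# clear text there is exactly the weakness described above.
import secrets

vault: dict[str, str] = {}  # token -> original sensitive value

def tokenize(value: str) -> str:
    token = "tok_" + secrets.token_hex(8)  # no mathematical link to the value
    vault[token] = value
    return token

def detokenize(token: str) -> str:
    return vault[token]  # only possible with access to the vault

t = tokenize("4111-1111-1111-1111")
print(t)              # safe to store in the application database
print(detokenize(t))  # the application layer calls this when clear text is needed
```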

Retention management and purging is more of a data management method, one that ensures data is retained only as long as necessary. The best way to reduce data privacy risk is to eliminate the sensitive data altogether. Therefore, appropriate retention, archiving, and purging policies should be applied to reduce the privacy and legal risks of holding on to sensitive data for too long. Retention management and purging is a data management best practice that should always be put to use.
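In practice, a purge policy often reduces to a scheduled deletion of records past their retention date, as in this sketch; the table, column, and seven-year period are hypothetical.

```python
# A minimal sketch of retention-based purging: delete records older than
# the retention period. The table, column, and seven-year period are
# hypothetical; real purges run on approved schedules, usually after
# archiving.
import sqlite3
from datetime import datetime, timedelta

RETENTION_DAYS = 7 * 365

conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS transactions (id INTEGER, created_at TEXT)")
cutoff = (datetime.now() - timedelta(days=RETENTION_DAYS)).isoformat()
conn.execute("DELETE FROM transactions WHERE created_at < ?", (cutoff,))
conn.commit()
conn.close()
```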


Hacking: How Ready Is Your Enterprise?

 
Recent corporate data security challenges require companies to ask hard questions about enterprise readiness:

1) How do you know if your firm is next in line?
2) How well will your Information Technology team respond to an attempted breach?

Is your firm ready?

Over the last year, a number of high-profile data security breaches have taken place at major US corporations. However, as a business person, how do you know the answers to the above questions? Do you know what is at risk? And with big data gathering so much attention these days, isn’t it a bit like putting all the eggs into one basket? According to the management scholar Theodore Levitt, part of being a manager is the ability to ask questions. My goal today is to arm business managers with the questions to ask so they can determine the answers to both of the questions above.

Is your Big Data secure?

Big Data is all the buzz today. How safe are your Big Data spaces? Do you know what is going into each of them? Judith Hurwitz, the President and CEO of Hurwitz & Associates, says that she worries about big data security. Judith even suggests that big data “introduces security risks into the company, unintended consequences can endanger the company”. According to Judith, these risks come in two forms:

1) Big data sources can contain viruses as well as other forms of business risk
2) Big data lakes, if unprotected, represent a major business risk from hacking

Clearly, protecting your big data comprehensively requires diligence, including data encryption. Big data may seem like a science project in the back room, but it puts in one place a significant volume of data that could damage your enterprise if exposed to the outside world.

Do you need better tools or better business processes?

While many of the discussions about recent hacks have focused on the importance of having the right, up-to-date tools in place, it is just as important to have the right business processes in place if you want to minimize the possibility of a breach and minimize losses when a breach occurs.

From an accessibility and security perspective, security processes look at the extent to which access to information is appropriately restricted to authorized parties. Next, from an information management perspective, they should consider the entire information life cycle. Information should be protected during all phases of its life cycle. Security should start at the information planning phase, and for many, this implies different protection mechanisms for the storage, sharing, and disposition of information.

To determine what questions a business person should be asking their security professionals, I went to COBIT 5. For those who do not know, COBIT is the standard your auditors use to evaluate your company’s technology per Sarbanes-Oxley. Understanding what it recommends matters because CFOs we have talked to say that, after the recent hacks, they believe they are about to get increased scrutiny from their auditors. If you want to understand what auditors will look for, you should study COBIT 5. COBIT 5 even links its security policy guidance to the standard your IT security management team should be running against: ISO/IEC 27000. Want to impress your security management professionals? Ask them whether they are in compliance with ISO/IEC 27000.

Good information security requires policies and procedures

Now, let’s explore what COBIT 5 recommends for information governance and security. The first thing it recommends is that good information security requires that policies and procedures be created and put in place. This sounds pretty reasonable. However, COBIT next insists, as something we all know is true as managers, that enterprise culture and ethics are critical to making “security policies and procedures effective”.

What metrics, then, should business people use to judge whether their firm is managing information security appropriately? COBIT 5 suggests that you look for two things right off the top.

1) How recently did your IT organization conduct a risk assessment for the services that it provides?
2) Does your IT organization have a current security plan which is accepted and communicated throughout the enterprise?

For the first, it is important that you then ask what percentage of IT services and programs are covered by a risk assessment, and what percentage of security incidents that took place were not identified in the risk assessment. The first question tells you how actively your IT organization is managing security; the second tells you whether there are gaps and risks. Your goal here should be to ensure that “IT-related enterprise risk does not exceed your risk appetite and your risk tolerance”.

With regards to the security plan, you should be asking your IT leadership (your CIO or CISO) about the number of key security roles that have been clearly defined and about the number of security-related incidents over time. As important, find out how many security solutions currently deviate from the plan. A timely review of these could clearly reduce the probability of your systems getting hacked.

As a manager, you know that teams need policies and procedures to limit errors and to manage errors when they occur. So ask: what are the procedures for managing through a security event? As important, ask what percentage of services are confirmed to be aligned with the security plan. At the same time, you want to know the number of security incidents caused by non-adherence to the security plan. For the future, you want to make sure as well that all new solutions being developed confirm their alignment to the security plan from launch.

Other critical things to consider include the number of security incidents that have caused financial loss, business disruption, or public embarrassment. This, of course, is a big one that should be small in number. Then ask about the number of IT services with outstanding security requirements. Next, ask about the time required to grant, change, and remove access privileges, and the frequency of security assessments against the latest standards and guidelines.

Concluding Remarks

Security is one area where you really need IT-business alignment. It is important, as a business professional, that you do your best to ensure that IT builds policies and procedures that conform to your corporate risk appetite. As well, you need to ensure that the governance, policies, and procedures your IT organization runs against are kept current and up to date. This includes ensuring that data is governed from end to end in the IT environment.

Related links

Solutions: Enterprise Level Data Security
The State of Data Centric Security
Gambling With Your Customer’s Financial Data
Twitter: @MylesSuer


Gambling With Your Customer’s Financial Data

CIOs and CFOs both dig data security

In my discussions with CIOs over the last couple of months, I asked them about the importance of a series of topics. All of them placed data security at the top of their IT priority list. Even their CFO counterparts, with whom they do not always see eye to eye, said they were very concerned about the business risk to corporate data. These CFOs said that, as part of owning business risk, they touch security, especially protection from hacking. One CFO said that he worried as well about the impact of data security on compliance issues, including HIPAA and SOX. Another said this: “The security of data is becoming more and more important. The auditors are going after this. CFOs, for this reason, are really worried about getting hacked. This is a whole new direction, but some of the highly publicized recent hacks have scared a lot of folks and they combined represent to many of us a watershed event.”

Editor of CFO Magazine

According to David W. Owens, the editor of CFO Magazine, even if you are using “secure” storage, such as internal drives and private clouds, access to these areas can be anything but secure: “Practically any employee can be carrying around sensitive financial and performance data in his or her pocket, at any time.” Obviously, new forms of data access have created new forms of data risk.

Are some retailers really leaving the keys in the ignition?

Given this like-mindedness among CIOs and CFOs, I was shocked to learn that some of the recently hacked retailers had been using outdated security software, which may have given hackers easier access to company payment data systems. Most amazingly, some retailers had not even encrypted their customer payment data. Because of this, hackers were able to hide on the network for months and steal payment data as customers continued to use their credit cards at the company’s point-of-sale locations.

Why weren’t these transactions encrypted or masked? In my 1998 financial information start-up, we encrypted our databases to protect against hacks of our customers’ personal financial data. One answer came from a discussion with a Fortune 100 insurance CIO. This CIO said, “CIO’s/CTO’s/CISO’s struggle with selling the value of these investment because the C Suite is only interested in hearing about investments with a direct impact on business outcomes and benefits”.

Enterprise security drives enterprise brand today

So how should leaders better argue the business case for security investments? I want to suggest that the value of IT is its “brand promise”. For retailers in particular, if a past purchase decision creates a perceived personal data security risk, IT becomes a liability to the corporation’s brand equity and potentially creates a negative impact on future sales. Increasingly, how these factors are managed either supports or undermines the value of a company’s brand.

My message is this: spend whatever it takes to protect your brand equity; otherwise a security issue will become a revenue issue.

In sum, this means organizations that want to differentiate themselves and avoid becoming a brand liability need to invest further in their data-centric security strategy and, of course, encryption. The game is no longer just about securing particular applications. IT organizations need to take a data-centric approach to securing customer data and other types of enterprise data. Enterprise-level data governance rules need to be a requirement. A data-centric approach can mitigate business risk by helping organizations understand where sensitive data is and protect it in motion and at rest.

Related links

Solutions: Enterprise Level Data Security
The State of Data Centric Security
How Is The CIO Role Starting To Change?
The CFO viewpoint on data
CFOs discuss their technology priorities
Twitter: @MylesSuer



Time to Change Passwords…Again


Has everyone just forgotten about data masking?

The information security industry is reporting that more than 1.5 billion (yes, that’s with a “B”) emails and passwords have been hacked. It’s hard to tell from the article, but this could be the big one. (And just when we thought that James Bond had taken care of the Russian mafia.) Companies both large and small were hit; nobody is safe. According to the experts, the sites ranged from small e-commerce sites to Fortune 500 companies. At this time, the experts aren’t telling us who the big targets were. We could be very unpleasantly surprised.

Most security experts admit that the bulk of the post-breach activity will be email spamming.  Insidious to be sure.  But imagine if the hackers were to get a little more intelligent about what they have.  How many individuals reuse passwords?  Experts say over 90% of consumers reuse passwords between popular sites.  And since email addresses are the most universally used “user name” on those sites, the chance of that 1.5 billion identities translating into millions of pirated activities is fairly high.

According to a recently published Ponemon study, 24% of respondents don’t know where their sensitive data is stored. That is a staggering number. Further complicating the issue, the same study notes that 65% of respondents have no comprehensive data forensics capability. That means consumers are more than likely never to hear from their provider that their data has been breached, until it is too late.

So now I guess we all get to go change our passwords again. And we don’t know why; we just have to. This is annoying. But having consumers constantly looking over their virtual shoulders is not a permanent fix. Let’s talk about enterprise-sized firms first. Ponemon indicates that 57% of respondents would like more trained data security personnel to protect data. The enterprise firm should have the resources to task IT personnel with protecting data. It also has the ability to license best-in-class technology to protect data. There is no excuse not to implement an enterprise data masking technology. This should be used hand in hand with network intrusion defenses to protect data from end to end.

Smaller enterprises have similar options. The same data masking technology can be leveraged on a smaller scale by a smaller IT organization, including the personnel to optimize the infrastructure. Additionally, most small enterprises leverage cloud-based systems that should have the same defenses in place. The small enterprise should bias its buying criteria toward data systems that implement data masking technology.

Let me add a little fuel to the fire and talk about a different kind of cost. Insurers cover cyber risk either as part of a Commercial General Liability policy or as a separate policy. In 2013, insurers paid an average approaching $3.5M for each cyber breach claim. The average per-record cost of claims was over $6,000. Now, imagine your enterprise’s slice of those 1.5 billion records. Obviously these are claims, not premiums. Premiums can range up to $40,000 per year for each $1M in coverage. Insurers will typically give discounts to companies that have demonstrated security practices and infrastructure. I won’t belabor the point; it’s pure math at this point.
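To make that math concrete, here is a back-of-the-envelope sketch using the figures quoted above; the breached-record count is purely hypothetical.

```python
# Back-of-the-envelope math using the figures quoted above; the number of
# breached records is a hypothetical "slice" of the 1.5 billion.
records_breached = 10_000
cost_per_record = 6_000        # average per-record claim cost (from the text)
premium_per_million = 40_000   # upper-end annual premium per $1M in coverage

exposure = records_breached * cost_per_record
annual_premium = exposure / 1_000_000 * premium_per_million

print(f"Potential claim exposure: ${exposure:,}")             # $60,000,000
print(f"Annual premium to cover it: ${annual_premium:,.0f}")  # $2,400,000
```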

There is plenty of risk and cost to go around, to be sure.  But there is a way to stay protected with Informatica.  And now, let’s all take a few minutes to go change our passwords.  I’ll wait right here.  There, do you feel better?

For more information on Informatica’s data masking technology, click here, where you can drill into our industry-leading dynamic and persistent data masking technology. So you should still change your passwords…but check out the industry’s leading data security technology after you do.


Data Obfuscation and Data Value – Can They Coexist?

Data Obfuscation and Data Value

Data is growing exponentially. New technologies are at the root of the growth. With the advent of big data and machine data, enterprises have amassed amounts of data never before seen. Consider the example of telecommunications companies. Telcos have always collected large volumes of call data and customer data. However, the advent of 4G services, combined with the explosion of the mobile internet, has created data volumes they have never seen before.

In response to this growth, organizations seek new ways to unlock the value of their data. Traditionally, data has been analyzed for a few key reasons. First, data was analyzed to identify ways to improve operational efficiency. Second, data was analyzed to identify opportunities to increase revenue.

As data expands, companies have found new uses for these growing data sets. Of late, organizations have started providing data to partners, who then sell the ‘intelligence’ they glean from within the data. Consider a coffee shop owner whose store doesn’t open until 8 AM. This owner would be interested in learning how many target customers (perhaps people aged 25 to 45) walk past the closed shop between 6 AM and 8 AM. If this number is high enough, it may make sense to open the store earlier.

As much as organizations prioritize the value of data, customers prioritize the privacy of data. If an organization loses a customer’s data, it incurs several costs. These costs include:

  • Damage to the company’s reputation
  • A reduction of customer trust
  • Financial costs associated with the investigation of the loss
  • Possible governmental fines
  • Possible restitution costs

To guard against these risks, data that organizations provide to their partners must be obfuscated. This protects customer privacy. However, data that has been obfuscated is often of a lower value to the partner. For example, if the date of birth of those passing the coffee shop has been obfuscated, the store owner may not be able to determine if those passing by are potential customers. When data is obfuscated without consideration of the analysis that needs to be done, analysis results may not be correct.

There is a way to provide data privacy for the customer while simultaneously monetizing enterprise data. To do so, organizations must allow trusted partners to define masking generalizations. With sufficient data masking governance, it is indeed possible for data obfuscation and data value to coexist.

Currently, there is a great deal of research into ensuring that obfuscated data is both protected and useful. Techniques and algorithms like ‘k-anonymity’ and ‘l-diversity’ ensure that sensitive data is safe and secure. However, these techniques have not yet become mainstream. Once they do, the value of big data will be unlocked.
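As a flavor of what these techniques do, the sketch below applies k-anonymity-style generalization to an age column, so that each released record is indistinguishable from at least k-1 others on its quasi-identifiers; the data and k value are illustrative.

```python
# A minimal sketch of k-anonymity-style generalization: ages are coarsened
# into ten-year bands so every released record shares its quasi-identifiers
# (age band, ZIP) with at least k-1 others. Data and k are illustrative.
from collections import Counter

K = 2

def generalize_age(age: int) -> str:
    low = age // 10 * 10
    return f"{low}-{low + 9}"   # e.g. 27 -> "20-29"

people = [{"age": 27, "zip": "94105"}, {"age": 29, "zip": "94105"},
          {"age": 41, "zip": "94107"}, {"age": 44, "zip": "94107"}]

released = [{"age_band": generalize_age(p["age"]), "zip": p["zip"]}
            for p in people]

# Verify the k-anonymity property on the quasi-identifier columns.
groups = Counter((r["age_band"], r["zip"]) for r in released)
assert all(count >= K for count in groups.values())
print(released)
```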


Intelligent Data for a Smarter World

I’ve been thinking a lot lately about next-generation products — products that will have the smarts to respond on their own to rapidly changing conditions.

We can all imagine self-driving cars that distinguish between a life-threatening situation (like a swerving car ahead) or a thing-threatening occurrence (like a scurrying raccoon) and brake and steer accordingly. And we expect automated picking systems will soon know — by a SKU’s size, shape, weight and temperature — which assembly line or packing area gets which products. And it won’t be long before enterprise systems will see and plug security holes across hundreds of systems, no matter whether the data is hosted internally or held by partners and suppliers.

The underpinning for such smarts is data that’s clean, safe and connected — the hallmarks of everything we do and believe in at Informatica. But we also recognize that next-generation products need something more. They also need to know when and where data changes, along with how to get the right data to the right person, place or thing, in the right way. That’s why Informatica is unveiling our vision for an Intelligent Data Platform, fueled by new technology innovations in data intelligence.

Data intelligence is built on two new capabilities – live data map and inference engine. Live data map continuously updates all the metadata — structural, semantic, usage and otherwise — on all of the data flowing through an enterprise, while the inference engine can deduce user intentions, help humans search for what they need in their own natural language, and provide recommendations on the best way to consume data depending on the use case. The combination ensures that clean, safe and connected data gets to whomever or whatever needs it, as it’s needed — fast.

We at Informatica believe these capabilities are so incredibly vital for the enterprise that the Intelligent Data Platform now serves as the foundation of many of our future products — beginning with Project Springbok and Project Secure@Source™. These two new offerings simplify some of the toughest challenges facing people in the enterprise: letting business users find and use the data they need, and seeing where their most-sensitive data is hiding amidst all the nooks and crannies.

Project Springbok’s Excel-like interface lets everyday business folks and mere mortals find the data sets they’re interested in, fix formatting and quality issues, and do tasks that are a pain today to perform — such as combining data sets or publishing the results for colleagues to reuse and enhance. Project Springbok is also a guide, with its recommendations derived by the inference engine. It tells users the sources they could or should have access to, and then provisions only what they should have. It lets users see which data sets colleagues are most frequently accessing and finding the most valuable. It also alerts users to inconsistent or incomplete data, suggests ways to sort new combinations of data sets and recommends the best data for the task.

While we designed Project Springbok for the average business user, Project Secure@Source is intended for people responsible for protecting the enterprise, including chief risk officers, chief information security officers (CISOs) and even board members of public companies. That’s because Project Secure@Source’s graphical interface displays all the systems holding sensitive data, such as social security numbers, medical records or payment card information.

But it’s not enough just to know where that data is. To safeguard all the sensitive information about their products, their customers, and their employees, users also need to understand how that data got into these systems, how it moves around, and who is using it. Project Secure@Source does that, too — showing, for example, that an engineer used payment card data to test a Hadoop cluster, and left it there. With Project Secure@Source, users can selectively remove or mask that data from any system in the enterprise.

You’ll hear us talk about and showcase the Intelligent Data Platform, Project Springbok and Project Secure@Source at Informatica World on May 13 and 14. I hope you’ll join us to learn how our vision and our product roadmap will enable a smarter world for all of us, today.


Data Archiving and Data Security Are Table Stakes

I recently met with a longtime colleague from the Oracle E-Business Suite implementation eco-system, now VP of IT for a global technology provider. This individual has successfully implemented data archiving and data masking technologies to eliminate duplicate applications and control the costs of data growth – saving tens of millions of dollars. He has freed up resources that were re-deployed within new innovative projects such as Big Data – giving him a reputation as a thought leader. In addition, he has avoided exposing sensitive data in application development activities by securing it with data masking technology – thus securing his reputation.

When I asked him about those projects and the impact on his career, he responded, ‘Data archiving and data security are table stakes in the Oracle Applications IT game. However, if I want to be a part of anything important, it has to involve Cloud and Big Data.’ He further explained how the savings achieved with Informatica Data Archive enabled him to increase employee retention rates, because he was able to fund an exciting Hadoop project that key resources wanted to work on. Not to mention that, by retiring legacy applications and transitioning from physical infrastructure to virtual servers, he accomplished the first step of his ‘journey to the cloud’. This would not have been possible if his data required technology that was not supported in the cloud. And if he hadn’t secured sensitive data and had experienced a breach, he would be looking for a new job in a new industry.

Not long after, I attended a CIO summit where the theme of the conference was ‘Breakthrough Innovation’. Of course, Cloud and Big Data were main stage topics – not just about the technology, but about how it was used to solve business challenges and provide services to the new generation of ‘entitled’ consumers. This is the description of those who expect to have everything at their fingertips. They want to be empowered to share or not share their information. They expect that if you are going to save their personal information, it will not be abused. Lastly, they may even expect to try a product or service for free before committing to buy.

In order to meet these expectations, Application Owners, like my longtime colleague, need to incorporate Data Archive and Data Masking into their standard SDLC processes. Without Data Archive, IT budgets may be consumed by supporting old applications and mountains of data, leaving nothing for new, innovative projects. Without Data Masking, a public breach will drive many consumers elsewhere.

To learn more about what Informatica has to offer and why data archiving and data masking are table stakes, join us at the ILM Day at Informatica World 2014 on May 12th in Las Vegas, NV.


That’s not ‘Big Data’, that’s MY DATA!

For the past few years, the press has been buzzing about the potential value of Big Data. However, there is little coverage focusing on the data itself – how do you get it, is it accurate, and who can be trusted with it?

We are the source of data that is often spoken about – our children, friends and relatives and especially those people we know on Facebook or LinkedIn.  Over 40% of Big Data projects are in the sales and marketing arena – relying on personal data as a driving force. While machines have no choice but to provide data when requested, people do have a choice. We can choose not to provide data, or to purposely obscure our data, or to make it up entirely.

So, how can you ensure that your organization is receiving real information? Active participation is needed to ensure a constant flow of accurate data to feed your data-hungry algorithms and processes. While click-stream analysis does not require individual identification, follow-up sales & marketing campaigns will have limited value if the public at large is using false names and pretend information.

BCG has identified a link between trust and data sharing: 

“We estimate that those that manage this issue well [creating trust] should be able to increase the amount of consumer data they can access by at least five to ten times in most countries.”[i]

With that in mind, how do you create the trust that will entice people to share data? The principles behind common data privacy laws provide guidelines. These include:  accountability, purpose identification and disclosure, collection with knowledge and consent, data accuracy, individual access and correction, as well as the right to be forgotten.

But there are challenges in personal data stewardship – in part because the current world of Big Data analysis is far from stable.  In the ongoing search for the value of Big Data, new technologies, tools and approaches are being piloted. Experimentation is still required which means moving data around between data storage technologies and analytical tools, and giving unprecedented access to data in terms of quantity, detail and variety to ever growing teams of analysts. This experimentation should not be discouraged, but it must not degrade the accuracy or security of your customers’ personal data.

How do you measure up? If I made contact and asked for the sum total of what you knew about me, and how my data was being used – how long would it take to provide this information? Would I be able to correct my information? How many of your analysts can view my personal data and how many copies have you distributed in your IT landscape? Are these copies even accurate?

Through our data quality, data mastering and data masking tools, Informatica can deliver a coordinated approach to managing your customer’s personal data and build trust by ensuring the safety and accuracy of that data. With Informatica managing your customer’s data, your internal team can focus their attention on analytics. Analytics from accurate data can help develop the customer loyalty and engagement that is vital to both the future security of your business and continued collection of accurate data to feed your Big Data analysis.


[i] The Trust Advantage: How to Win with Big Data, bcg.perspectives, November 2013


Data Privacy and Security at RSA and IAPP

It is an important time for data security. This past month, two crucial data privacy events have taken place. Informatica was on hand for both:

  1. The RSA conference took place in San Francisco from February 24-28, 2014
  2. The IAPP Global Privacy Summit took place in Washington, DC from March 5-7, 2014

Data Privacy at the 2014 RSA Conference

The RSA conference was busy as expected, with over 30,000 attendees. Informatica co-sponsored an after-hours event with one of our partners, Imperva, at the Dark Circus. The event was standing room only and provided a great escape from the torrential rain. One highlight of RSA for Informatica was that we were honored with two of the 2014 Security Products Guide Awards:

  1. Informatica Dynamic Data Masking won the Gold Award for Database Security, Data Leakage Prevention/Extrusion Prevention
  2. Informatica Cloud Test Data Management and Security won the Bronze Award for New Products

Of particular interest to us was the growing recognition of data-centric security and privacy at RSA. I briefly met Bob Rudis, co-author of “Data Driven Security”, which was featured at the onsite bookstore. In the book, Rudis presents a great case for focusing on data as the centerpoint of security, through data analysis and visualization. From Informatica’s perspective, we also believe that a deep understanding of data and its relationships will escalate as a key driver of security policies and measures.

Data Privacy at the IAPP Global Privacy Summit

The IAPP Global Privacy Summit was an amazing event: small (2,500 attendees), but completely sold out and overflowing its current venue. We exhibited and had the opportunity to meet CPOs and privacy, risk/compliance, and security professionals from around the world, and had hundreds of conversations about the role of data discovery and masking in privacy. From the privacy perspective, it is all about finding, de-identifying, and protecting PII, PCI, and PHI. These privacy professionals have extensive legal and/or data security backgrounds and understand the need to safeguard privacy by using data masking. Many notable themes were present at IAPP:

  • De-identification is a key topic area
  • Concerns about outsourcing and contractors in application development and testing have driven test data management adoption
  • No national US privacy regulations expected in the short-term
  • Europe has active but uneven privacy enforcement (France: “name and shame”; UK: heavy fines; Spain: most active)

If you want to learn more about data privacy and security, you will find no better place than Informatica World 2014. There, you’ll learn about the latest data security trends, see updates to Informatica’s data privacy and security offerings, and find out how Informatica protects sensitive information in real time without requiring costly, time-consuming changes to applications and databases. Register TODAY!
