
Your Biggest IT Security Nightmare – Non-Production Data

Everyone is worried about data security and privacy, as they should be. For data to be trusted, users and management need confidence not only that the data is correct, but also that it is secure and that access is permitted only in controlled situations. There is no shortage of security disaster stories, but I’m not worried about production data: it is at the heart of application management disciplines which, while still not perfect, have had 50 years to mature. Ronald Reagan put this perspective succinctly when, speaking about the economy, he said, “I am not worried about the deficit. It is big enough to take care of itself.”

Production systems, including production data warehouses, generally have solid controls: they provide a role-based secure user interface, maintain audit trails, require formal validation and approval before changes, and contain extensive monitoring and management controls. The biggest data security gap for many organizations, however, is control of non-production data, such as copies of real data used for development, testing, or training.

A solution to this gap starts with the perspective that information has a lifecycle and needs to be controlled at the enterprise level. The first step in dealing with potential non-production data issues is to perform a risk assessment. Assessing risk is a matter of degree and combines the likelihood of exposure with the severity of impact if a breach occurs. At a high level, the steps to perform a risk assessment include:

  1. Identify the data elements that are sensitive and require particular control either from a regulatory compliance perspective, a customer data privacy perspective or a business data security perspective.
  2. Assess the sensitivity of each element according to a defined severity scale.
  3. Assess each application system in the enterprise in terms of its likelihood of enabling an exposure.
  4. Multiply the severity and likelihood ratings for each attribute and sum the results for each system to compute a numerical risk ranking of each application (a minimal scoring sketch follows this list). Use the risk ranking to drive implementation priority, including deciding which systems may not require an investment because their risk level is below an acceptable threshold.
  5. For the systems that do require mitigation, implement a data masking solution with controls that can prove no new copies of data are used for non-production purposes without being masked.
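
To make step 4 concrete, here is a minimal sketch of the scoring arithmetic in Python. The severity scale, likelihood scale, attribute names, systems, and threshold are all illustrative assumptions, not a prescribed model.

```python
# Minimal sketch of the step-4 risk scoring: illustrative data, not a prescribed model.
# Severity and likelihood are assumed to be rated on a 1-5 scale.

# Hypothetical sensitivity ratings for data elements (step 2)
severity = {"ssn": 5, "credit_card": 5, "email": 3, "salary": 4}

# Hypothetical likelihood that each system exposes each element (step 3)
likelihood = {
    "CRM":       {"ssn": 2, "credit_card": 1, "email": 5, "salary": 1},
    "Payroll":   {"ssn": 5, "credit_card": 1, "email": 3, "salary": 5},
    "Warehouse": {"ssn": 4, "credit_card": 4, "email": 4, "salary": 4},
}

def risk_score(system_likelihood):
    """Step 4: multiply severity by likelihood per attribute and sum."""
    return sum(severity[attr] * rating for attr, rating in system_likelihood.items())

# Rank systems by risk to drive implementation priority (step 4, continued)
THRESHOLD = 40  # assumed acceptable-risk cutoff
for system, ratings in sorted(likelihood.items(), key=lambda kv: -risk_score(kv[1])):
    score = risk_score(ratings)
    action = "mask" if score >= THRESHOLD else "accept risk"
    print(f"{system}: {score} -> {action}")
```

With these made-up numbers, the data warehouse and payroll systems land above the threshold and would be prioritized for masking, while the CRM system's risk could be accepted.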

A risk assessment can be a lot of work, but it plays a critical role in articulating the business value of investing in non-production data controls. You could do this yourself, but to avoid reinventing the wheel, consider asking Informatica’s Information Lifecycle Management Professional Services to help.

Once the solution is in operation, an Integration Competency Center or similar group is often responsible for monitoring the masking operations, providing audit reports to the data governance committee, and driving a periodic review and refresh of the risk assessment.
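
As a rough illustration of the kind of masking such a group would monitor, here is a minimal sketch of deterministic masking of sensitive fields before data is copied to a non-production environment. The field names and masking rule are assumptions for illustration; a real deployment would use a dedicated masking product with auditable controls rather than this hand-rolled approach.

```python
import hashlib

def mask_value(value: str, salt: str = "non-prod") -> str:
    """Deterministically replace a sensitive value with an opaque token.
    Deterministic so referential integrity across tables is preserved."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

# Hypothetical production record being copied to a test environment
record = {"customer_id": "C1001", "name": "Jane Doe", "ssn": "123-45-6789", "city": "Austin"}

SENSITIVE_FIELDS = {"name", "ssn"}  # which fields to mask is driven by the risk assessment

masked = {k: (mask_value(v) if k in SENSITIVE_FIELDS else v) for k, v in record.items()}
print(masked)  # name and ssn are replaced with opaque tokens; non-sensitive fields pass through
```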
