Julie Lockner
Julie Lockner is a recognized industry expert in the structured data management market. Julie joins Informatica from the industry analyst firm ESG, where she served in a dual role as senior analyst and vice president, covering data management solutions and managing the end-user consulting practice. Prior to joining ESG, Julie was President and Founder of CentricInfo, a consulting firm specializing in implementing Data Governance and Data Center Optimization programs that was later acquired by ESG. Julie was also a VP at Solix Technologies and a Senior Portfolio Architect at EMC, where she defined product and sales strategies in the data management software market, and she has held various engineering, product management and marketing positions with companies such as Oracle, Cognex and Raytheon. She graduated with a BSEE from WPI.

Customers Reduce Risk Of A Data Breach While Improving Testing Efficiencies

Testing in any domain involves dealing with an enormous amount of test data. As application environments grow bigger and more complex, testing teams become susceptible to significant inefficiencies and project delays. Incomplete, missing, or irrelevant test data sets can wreak havoc on any QA cycle. And if production data is copied for testing purposes, testers also need to ensure sensitive data is protected – adding time to test data creation tasks.

Informatica recently polled its Test Data Management customers through a third-party validation service, TechValidate. Here are three case studies and testimonials about how Informatica customers are addressing these challenges.



Creating Secure Test Data Subsets for Salesforce.com Sandboxes

Many Salesforce developers who use sandbox environments for test and development suffer from the following challenges:

  • Lack of relevant data for proper testing and development (empty sandboxes)
  • Manually copying data from production to fill that gap
  • Exposing sensitive data to unauthorized users as a result
  • Potentially consuming more storage than allocated for sandbox environments (resulting in unexpected costs)

To address these challenges, Informatica just released Cloud Test Data Management for Salesforce. This solution is designed to give Salesforce admins and developers the ability to provision secure test data subsets through an easy-to-use, wizard-driven approach. The application is delivered as a service through a subscription-based pricing model.
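Conceptually, provisioning a secure sandbox subset comes down to three steps: select a slice of production records, mask the sensitive fields, and load the result into the sandbox. The Python sketch below illustrates only that general flow; the field names and the fetch/load calls are hypothetical placeholders, not the actual Informatica or Salesforce APIs.

```python
import hashlib

SENSITIVE_FIELDS = {"Email", "Phone", "SSN__c"}  # hypothetical field names

def mask_value(field, value):
    """Replace a sensitive value with a repeatable, non-reversible surrogate."""
    digest = hashlib.sha256(f"{field}:{value}".encode()).hexdigest()[:12]
    return f"masked_{digest}"

def build_sandbox_subset(records, sample_size=500):
    """Take a small slice of production records and mask sensitive fields."""
    subset = records[:sample_size]          # naive subset: first N records
    masked = []
    for rec in subset:
        masked.append({
            field: mask_value(field, value) if field in SENSITIVE_FIELDS else value
            for field, value in rec.items()
        })
    return masked

# Hypothetical usage: fetch_production_records() and load_into_sandbox()
# stand in for whatever extract/load mechanism is actually available.
# sandbox_data = build_sandbox_subset(fetch_production_records("Contact"))
# load_into_sandbox("Contact", sandbox_data)
```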

The Informatica IT team, which uses Salesforce internally, validated an ROI based on three factors: less developer time spent manually scripting copies of production data into a sandbox, less time spent fixing defects caused by not having the right test data, and a reduced risk of a data breach thanks to masking of sensitive data.

To learn more about this new offering, watch a demonstration that shows how to create secure test data subsets for Salesforce. Also available now: the free Cloud Data Masking app and a 30-day Cloud Test Data Management trial.


Streamlining and Securing Application Test and Development Processes

Informatica recently hosted a webinar with Cognizant, who shared how they streamline test data management processes internally with Informatica Test Data Management and pass the benefits on to their customers. Proclaimed the world’s largest Quality Engineering and Assurance (QE&A) service provider, Cognizant has over 400 customers and thousands of testers and is considered a thought leader in the testing practice.

We polled over 100 attendees on their top test data management challenges, given the data and system complexities involved and the need to protect their clients’ sensitive data. Here are the results from that poll:

It was not surprising to see that generating test data sets and securing sensitive data in non-production environments tied as the two biggest challenges. Data integrity/synchronization was a very close third.

Together with Informatica, Cognizant has been evolving its test data management offering to focus not only on securing sensitive data but also on improving testing efficiency by identifying, provisioning and resetting test data – tasks that consume as much as 40% of testing cycle times. Key components of this next-generation test data management platform include:

Sensitive Data Discovery – an integrated and automated process that scans data sets for exposed sensitive data. Sensitive data often resides in test copies unbeknownst to auditors. Once it has been located, the data can be masked in non-production copies.
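At its simplest, discovery is a pattern scan across sampled column values. The snippet below is only a rough illustration of the idea, using two invented regular-expression patterns; a real discovery engine also weighs column names, checksums and value distributions.

```python
import re

# Very rough patterns for illustration only; real discovery uses many more
# signals (column names, checksums such as Luhn, value distributions, ...).
PATTERNS = {
    "ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "credit_card": re.compile(r"^\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}$"),
}

def discover_sensitive_columns(table, sample_threshold=0.8):
    """Flag columns whose sampled values mostly match a sensitive pattern.

    `table` is a mapping of column name -> list of sampled string values.
    """
    findings = {}
    for column, values in table.items():
        values = [v for v in values if v]
        if not values:
            continue
        for label, pattern in PATTERNS.items():
            hits = sum(1 for v in values if pattern.match(str(v)))
            if hits / len(values) >= sample_threshold:
                findings[column] = label
    return findings

sample = {
    "customer_ssn": ["987-65-4329", "123-45-6789"],
    "notes": ["call back Tuesday", "prefers email"],
}
print(discover_sensitive_columns(sample))   # {'customer_ssn': 'ssn'}
```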

Persistent Data Masking – masks sensitive data in flight while cloning data from production, or in place on a gold copy. Data formats are preserved while the original values are completely protected.
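One common way to approximate format preservation – shown here as a simplified sketch, not Informatica’s actual algorithm – is to substitute each digit deterministically from a keyed hash, so a masked SSN still looks like an SSN while the original value is unrecoverable without the key.

```python
import hmac, hashlib

def mask_digits(value, key=b"demo-secret"):
    """Replace every digit with a keyed-hash-derived digit, keeping the layout.

    Deterministic (same input -> same output), so referential integrity
    across masked copies is preserved; non-digit characters pass through.
    """
    out = []
    for i, ch in enumerate(value):
        if ch.isdigit():
            mac = hmac.new(key, f"{i}:{value}".encode(), hashlib.sha256).digest()
            out.append(str(mac[0] % 10))
        else:
            out.append(ch)
    return "".join(out)

print(mask_digits("987-65-4329"))   # same shape as an SSN; digits depend on the key
```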

Data Privacy Compliance Validation – because auditors want to know that data has in fact been protected, the ability to validate and report on data privacy compliance becomes critical.

Test Data Management – in addition to creating test data subsets, clients require the ability to synthetically generate test data sets so that each test case has the data it needs, eliminating defects caused by missing or misaligned data. Also, in many cases multiple testers work in the same environment and may clobber each other’s test data sets, so the ability to reset test data becomes a key requirement for improving efficiency.
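The reset requirement is easy to picture: snapshot the relevant test data before a run and restore it afterwards so testers don’t clobber each other. Below is a minimal, in-memory illustration of that idea, assuming nothing about how Informatica implements it.

```python
import copy

class TestDataSet:
    """Holds working test data plus a named snapshot that can be restored."""

    def __init__(self, tables):
        self.tables = tables            # dict: table name -> list of row dicts
        self._snapshots = {}

    def snapshot(self, name="baseline"):
        self._snapshots[name] = copy.deepcopy(self.tables)

    def reset(self, name="baseline"):
        self.tables = copy.deepcopy(self._snapshots[name])

tds = TestDataSet({"orders": [{"id": 1, "status": "NEW"}]})
tds.snapshot()
tds.tables["orders"][0]["status"] = "SHIPPED"   # a test mutates the data
tds.reset()                                     # back to the known-good state
assert tds.tables["orders"][0]["status"] == "NEW"
```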

Figure 2: Next Generation Test Data Management

When asked what tools or services have been deployed, 78% said in-house developed scripts and utilities – an incredibly time-consuming approach with limited repeatability. Data masking had been deployed by almost half of the respondents.


Informatica and Cognizant are leading the way in establishing a new standard for test data management by combining test data generation, data masking, and the ability to refresh or reset test data sets. For more information, check out Cognizant’s offering based on Informatica, TDMaxim, and the white paper Transforming Test Data Management for Increased Business Value.

 


Are You Getting an EPIC ROI? Retire Legacy Healthcare Applications!

Healthcare organizations are currently engaged in major transformative initiatives. The American Recovery and Reinvestment Act of 2009 (ARRA) provided the healthcare industry with incentives for the adoption and modernization of point-of-care computing solutions, including electronic medical and health records (EMRs/EHRs). Funds have been allocated, and these projects are well on their way. In fact, the majority of hospitals in the US are engaged in implementing EPIC, a software platform that is essentially the ERP for healthcare.

These Cadillac systems are being deployed from scratch, with very little data ported from the old systems into the new. The result is a glut of legacy applications running in aging hospital data centers, consuming every last penny of HIS budgets. Because the data still resides on those systems, hospital staff continue to use them, making them difficult to shut down or retire.

Most of these legacy systems do not run on modern technology platforms – they run on the likes of HP TurboIMAGE, InterSystems Caché (MUMPS), and embedded proprietary databases. Finding people who know how to manage and maintain these systems is costly and risky – risky in that data residing in those applications may be subject to data retention requirements (patient records, etc.) yet become inaccessible.

A different challenge for the CFOs of these hospitals is the ROI on their EPIC implementations. Because these projects are multi-phase and multi-year, boards of directors are asking about the value realized from the investment. Many are coming up short because they are maintaining the old and new applications in parallel. Relief will come when the legacy systems can be retired – but getting hospital staff and regulators to approve a retirement project requires evidence that the data will remain accessible while adhering to compliance needs.

Many providers have overcome these hurdles by successfully implementing an application retirement strategy based on the Informatica Data Archive platform. Several of the largest children’s hospitals in the US are either already saving or expecting to save $2 million or more annually from retiring legacy applications. The savings come from:

  • Eliminating software maintenance and license costs
  • Eliminating hardware dependencies and costs
  • Reducing storage requirements by 95% (archived data is stored in a highly compressed, accessible format)
  • Improving IT efficiency by eliminating specialized processes and skills associated with legacy systems
  • Freeing IT resources – teams can spend more of their time on innovation and new projects

Informatica Application Retirement Solutions for Healthcare give hospitals the ability to completely retire legacy applications while maintaining hospital staff access to the archived data. And with built-in security and retention management, records managers and legal teams can satisfy compliance requirements. Contact your Informatica Healthcare team for more information on how you can get the EPIC ROI the board of directors is asking for.


Comparing Encryption and Tokenization with Dynamic Data Masking

In recent conversations about solutions for data privacy, our Dynamic Data Masking team put together the following comparison to highlight the differences between encryption/tokenization and Dynamic Data Masking (DDM). Best practices dictate that both be implemented in an enterprise for the most comprehensive and complete data security strategy. For the purposes of this blog, here are a few definitions:

Dynamic Data Masking (DDM) protects sensitive data when it is retrieved, based on policy, without requiring the data to be altered when it is stored persistently. Authorized users see true data; unauthorized users see masked values in the application. No coding is required in the source application.

Encryption/tokenization protects sensitive data by altering its values when stored persistently, decrypting and presenting the original values only when requested by authorized users. The user is validated by a separate service, which then provides a decryption key. Unauthorized users see only the encrypted values. In many cases, applications need to be altered, requiring development work.

Use case: Business users access PII
  • Tokenization: Business users must work with actual SSNs and personal values in the clear (not tokenized values). Because the data is tokenized in the database, it must be de-tokenized every time a user accesses it, which requires changing the application source code (imposing cost and risk) and incurs a performance penalty. For example, if a user needs to retrieve information on a client with SSN = ‘987-65-4329’, the application must de-tokenize the entire tokenized SSN column to identify the correct client record – a costly operation. This is why the implementation scope is limited.
  • DDM: Because DDM does not change the data in the database but only masks it when accessed by unauthorized users, authorized users experience no performance hit and no application source-code changes are required. For example, if an authorized user retrieves information on a client with SSN = ‘987-65-4329’, the request passes through DDM untouched; since the SSN stored in the database is unchanged, there is no performance penalty. If an unauthorized user retrieves the same SSN, DDM masks the SQL request so that the sensitive result (e.g., name, address, credit card and age) is masked, hidden or completely blocked.

Use case: Privileged infrastructure DBAs have access to the database server files
  • Tokenization: Personally Identifiable Information (PII) stored in the database files is tokenized, ensuring that the few administrators with uncontrolled access to the database servers cannot see it.
  • DDM: PII stored in the database files remains in the clear. The few administrators with uncontrolled access to the database servers can potentially access it.

Use case: Production support, application developers, DBAs, consultants, outsourced and offshore teams
  • Tokenization: These users have application super-user privileges, are seen by the tokenization solution as authorized, and as such can access PII in the clear.
  • DDM: These users are identified by DDM as unauthorized, so the data they see is masked, hidden or blocked, protecting the PII.

Use case: Data warehouse protection
  • Tokenization: Implementing tokenization on data warehouses requires tedious database changes and causes a performance penalty: (1) loading or reporting on millions of PII records requires tokenizing/de-tokenizing each record; (2) running a report with a condition on a tokenized value (e.g., SSN LIKE ‘%333’) forces de-tokenization of the entire column. Massive database configuration changes are required to use the tokenization API, along with creating and maintaining hundreds of views.
  • DDM: No performance penalty, and no need to change reports or databases or to create views.

Combining both DDM and encryption/tokenization presents an opportunity to deliver complete data privacy without the need to alter the application or write any code.
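To make the DDM half of that combination concrete, here is a toy illustration of the idea: the stored value is never changed, and the result is masked on its way back to an unauthorized caller. The role names and masking rule below are invented for illustration, and a real DDM product intercepts and rewrites SQL at the database connection rather than living in application code.

```python
AUTHORIZED_ROLES = {"fraud_analyst", "compliance_officer"}   # invented roles

def mask_ssn(ssn):
    """Show only the last four digits, e.g. 'XXX-XX-4329'."""
    return "XXX-XX-" + ssn[-4:]

def fetch_customer(row, role):
    """Return the stored row untouched for authorized roles, masked otherwise.

    The persisted data (`row`) is never altered -- masking happens only on
    the result handed back to the caller.
    """
    if role in AUTHORIZED_ROLES:
        return row
    masked = dict(row)
    masked["ssn"] = mask_ssn(row["ssn"])
    return masked

stored = {"name": "Pat Doe", "ssn": "987-65-4329"}
print(fetch_customer(stored, "fraud_analyst"))   # true values, no change
print(fetch_customer(stored, "support_temp"))    # {'name': 'Pat Doe', 'ssn': 'XXX-XX-4329'}
```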

Informatica works with its encryption and tokenization partners to deliver comprehensive data privacy protection in packaged applications, data warehouses and Big Data platforms such as Hadoop.


Next Generation Database Archiving for Oracle Applications at Collaborate13

The OAUG hosted its annual convention, Collaborate13, this week in Denver, Colorado. The week started out with beautiful spring weather and quickly turned to frigid temperatures with a bonus snow flurry. The rapid change in weather didn’t stop 4,000 attendees from elevating their application knowledge in the Mile High City. One topic that was very well attended, from our perspective, was the evolution of database archiving. (more…)


Why Backups Are Terrible Archives

Businesses retain information in an enterprise data archive either for compliance – to adhere to data retention regulations – or because business users are afraid to let go of data they are used to having access to. Many IT organizations have told us they retain data in archives because they are looking to cut infrastructure costs and do not have retention requirements clearly articulated by the business. As a result, enterprise data archiving has morphed into serving multiple purposes for IT: it eliminates the costs of maintaining aging data in production applications and lets business users access the information on demand, all while adhering to whatever retention policies are known or defined. (more…)


Is The Data Explosion Impacting You? How Do You Compare To Your Peers?

The digitization of everything is creating a data explosion near you. Whether data is accumulating in the data center, in the cloud, or on your laptop or mobile device, too much of something isn’t always a good thing. In a recent webinar co-hosted by Informatica and Symantec, we polled our listeners to find out how the data explosion was impacting them. We also asked what type of unstructured and structured data is growing the fastest. Check out what they said. (more…)


Improve Efficiencies in Test Data Management

According to analysts[1], users spend the majority of the application development lifecycle in development and testing and the least amount of time in quality management and documentation. This is probably not very shocking to anyone in QA or on a testing team. But how much time is actually spent on test data management? In a recent webinar, more than half of the listeners polled said they spend between 30 and 40% of their effort on ‘data-related tasks.’ (more…)


Enterprise Data Archiving Is White Hot

Informatica recently hosted a webinar on Enterprise Data Archiving Best Practices with guest speakers Tony Baer from Ovum and Murali Rathnam from Symantec IT. With over 600 registrations, I would say that enterprise data archiving is not just hot – it is white hot. At least for Informatica. With Big Data entering the data center, organizations are looking for ways to make room, either in the budget or in the data center itself. Archiving is a proven approach that achieves both. Given the complexities and interconnections of enterprise applications, enterprise data archive solutions based on market-leading technologies such as Informatica Data Archive can deliver on the value proposition while meeting tough requirements. (more…)
