- A loss of customer trust
- Revenue shortfalls
- A plummeting stock price
- C-level executives losing their jobs
As a result, data security and privacy have become key topics of discussion, not just in IT meetings, but in the media and the boardroom.
Preventing access to sensitive data has become more complex than ever before. There are new potential entry points that IT never previously considered, and they go beyond typical BYOD user devices like smartphones and tablets. Today's entry points can be much smaller: things like HVAC controllers, office conference phones, and temperature control systems.
So what can organizations do to combat this increasing complexity? Traditional data security practices focus on securing both the perimeter and the endpoints. However, these practices are clearly no longer working and no longer manageable. Not only are the number and types of devices expanding, but the perimeter itself is disappearing. As companies increasingly outsource, offshore, and move operations to the cloud, it is no longer possible to fence off the perimeter and keep intruders out. Because third parties often require some form of access, even trusted user credentials may fall into the hands of malicious intruders.
Data security requires a new approach: policies that follow the data and protect it, regardless of where it is located and where it moves. Informatica is responding to this need by leveraging its market leadership and domain expertise in data management and security to define a new data security offering and category. This week, at our Informatica World conference, we unveiled our entry into the data security market. Our new security offering, Secure@Source™, will allow enterprises to discover, detect, and protect sensitive data.
The first step towards protecting sensitive data is to locate and identify it. Secure@Source™ therefore first allows you to discover where all the sensitive data in the enterprise is located and to classify it. As part of the discovery, Secure@Source™ also analyzes where sensitive data is being proliferated, who has access to it, who is actually accessing it, and whether it is protected or unprotected when accessed. Secure@Source™ leverages Informatica's PowerCenter repository and lineage technology to perform a quick first-pass discovery, with more in-depth analysis and profiling over time. The solution allows you to determine the privacy risk index of your enterprise and to slice and dice the analysis by region, department, organizational hierarchy, and data classification.
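A risk index that can be sliced by department or region boils down to a weighted aggregation over discovered data stores. The sketch below illustrates the idea; the fields, weights, and scoring formula are invented for illustration and are not the actual Secure@Source™ algorithm.

```python
from collections import defaultdict

# Each record summarizes one sensitive data store found during discovery.
# The schema and weights here are hypothetical.
stores = [
    {"dept": "Finance", "region": "US",   "sensitive_rows": 500_000, "protected": False},
    {"dept": "Finance", "region": "EMEA", "sensitive_rows": 120_000, "protected": True},
    {"dept": "HR",      "region": "US",   "sensitive_rows": 80_000,  "protected": False},
]

def risk_score(store):
    # Unprotected sensitive data is weighted far more heavily than protected data.
    weight = 1.0 if not store["protected"] else 0.1
    return store["sensitive_rows"] * weight

def risk_by(stores, dimension):
    """Aggregate risk so the analysis can be 'sliced' by region, department, etc."""
    totals = defaultdict(float)
    for s in stores:
        totals[s[dimension]] += risk_score(s)
    return dict(totals)

print(risk_by(stores, "dept"))    # {'Finance': 512000.0, 'HR': 80000.0}
print(risk_by(stores, "region"))  # {'US': 580000.0, 'EMEA': 12000.0}
```

The same aggregation function serves every slicing dimension, which is what makes a dashboard-style drill-down by region, department, or classification cheap to provide.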
The longer-term vision for Secure@Source™ is to detect suspicious usage patterns and orchestrate the appropriate data protection method: alerting, blocking, archiving and purging, dynamic masking, persistent masking, encryption, or tokenization. The right method depends on whether the data store is a production or non-production system, and whether you would like to de-identify sensitive data for all users or only for some; all of this can be deployed based on policies. Secure@Source™ is intended to be an open framework for aggregating data security analytics, and will integrate with key partners to provide comprehensive visibility into, and assessment of, an enterprise's data privacy risk.
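Policy-driven protection of this kind amounts to a lookup from context (environment, audience) to an action. The sketch below is a minimal illustration of that dispatch pattern; the policy table, masking rule, and token format are all assumptions made for the example, not Secure@Source™ behavior.

```python
# Hypothetical policy table: (environment, audience) -> protection action.
POLICIES = {
    ("production", "all_users"):      "dynamic_mask",     # mask on the fly at query time
    ("production", "some_users"):     "tokenize",         # substitutable for privileged users
    ("non_production", "all_users"):  "persistent_mask",  # scrub test/dev copies in place
}

def protect(value, environment, audience):
    """Apply the policy-selected protection to a sensitive value."""
    action = POLICIES.get((environment, audience), "block")
    if action in ("dynamic_mask", "persistent_mask"):
        # Illustrative masking rule: hide all but the last four characters.
        return "*" * (len(value) - 4) + value[-4:]
    if action == "tokenize":
        # Stand-in token; a real system would use vaulted or cryptographic tokens.
        return f"tok_{abs(hash(value)) % 10_000}"
    return None  # no matching policy: block access entirely

print(protect("4111111111111111", "production", "all_users"))  # ************1111
print(protect("secret", "non_production", "all_users"))        # **cret
```

Keeping the policy table as data rather than code is what lets protection rules be edited and redeployed without touching the enforcement path.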
Secure@Source™ is targeted for beta at the end of 2014 and general availability in early 2015. Informatica is recruiting a select group of charter customers to drive and provide feedback for the first release. Customers interested in becoming a charter customer should register by emailing SecureCustomers@informatica.com.
What is In-Database Archiving in Oracle 12c and Why You Still Need a Database Archiving Solution to Complement It (Part 2)
In my last blog on this topic, I discussed several areas where a database archiving solution can complement, or help you better leverage, the Oracle In-Database Archiving feature. For an introduction to the new In-Database Archiving feature in Oracle 12c, refer to Part 1 of my blog on this topic.
Here, I will discuss additional areas where a database archiving solution can complement the new Oracle In-Database Archiving feature:
- Graphical UI for ease of administration – In-Database Archiving is currently a technical feature of the Oracle database, and is not easily visible or manageable to anyone outside the DBA persona. This is where a database archiving solution helps: it provides a more comprehensive set of graphical user interfaces (GUIs) that make the feature easier to monitor and manage.
- Enabling application of In-Database Archiving for packaged applications and complex data models – A database archiving solution introduces the concept of business entities: transactional records composed of related tables, so that data and referential integrity are maintained as you archive, move, purge, and retain data. Combined with business rules that determine when data has become inactive and can therefore be safely archived, this allows DBAs to apply the new Oracle feature to more complex data models. In addition, application accelerators (prebuilt metadata of business entities and business rules for packaged applications) enable the application of In-Database Archiving to packaged applications like Oracle E-Business Suite, PeopleSoft, Siebel, and JD Edwards.
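The business-entity idea above can be sketched with SQLite standing in for a production database. The order/order-line schema and the inactivity rule (orders closed before a cutoff date) are illustrative assumptions; the point is that the unit of archiving is the whole entity, never a bare row.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE orders      (id INTEGER PRIMARY KEY, status TEXT, closed_on TEXT);
CREATE TABLE order_lines (id INTEGER PRIMARY KEY,
                          order_id INTEGER REFERENCES orders(id), item TEXT);
-- Archive-side copies of the same structures.
CREATE TABLE arch_orders      (id INTEGER PRIMARY KEY, status TEXT, closed_on TEXT);
CREATE TABLE arch_order_lines (id INTEGER PRIMARY KEY, order_id INTEGER, item TEXT);
INSERT INTO orders VALUES (1, 'CLOSED', '2010-01-15'), (2, 'OPEN', NULL);
INSERT INTO order_lines VALUES (10, 1, 'widget'), (11, 2, 'gadget');
""")

def archive_closed_before(cutoff):
    # Business rule: an order is inactive once CLOSED before the cutoff.
    # Archive the order together with all of its lines, so referential
    # integrity holds on both the source and the archive side.
    ids = [r[0] for r in db.execute(
        "SELECT id FROM orders WHERE status = 'CLOSED' AND closed_on < ?", (cutoff,))]
    for oid in ids:
        db.execute("INSERT INTO arch_order_lines SELECT * FROM order_lines WHERE order_id = ?", (oid,))
        db.execute("INSERT INTO arch_orders SELECT * FROM orders WHERE id = ?", (oid,))
        db.execute("DELETE FROM order_lines WHERE order_id = ?", (oid,))
        db.execute("DELETE FROM orders WHERE id = ?", (oid,))
    db.commit()
    return ids

archive_closed_before('2012-01-01')
print(db.execute("SELECT COUNT(*) FROM orders").fetchone()[0])            # 1 active order left
print(db.execute("SELECT COUNT(*) FROM arch_order_lines").fetchone()[0])  # 1 archived line
```

Prebuilt metadata for a packaged application is essentially this entity definition (which tables belong together, and which rule marks them inactive) supplied out of the box.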
What is In-Database Archiving in Oracle 12c and Why You Still Need a Database Archiving Solution to Complement It (Part 1)
What is the new In-Database Archiving in the latest Oracle 12c release?
On June 25, 2013, Oracle introduced a new feature called In-Database Archiving with the release of Oracle Database 12c. From the documentation: “In-Database Archiving enables you to archive rows within a table by marking them as inactive. These inactive rows are in the database and can be optimized using compression, but are not visible to an application. The data in these rows is available for compliance purposes if needed by setting a session parameter. With In-Database Archiving you can store more data for a longer period of time within a single database, without compromising application performance. Archived data can be compressed to help improve backup performance, and updates to archived data can be deferred during application upgrades to improve the performance of upgrades.”
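The behavior described in the quote (rows marked inactive in place, hidden from normal queries, but still retrievable for compliance) can be simulated outside Oracle. The sketch below uses SQLite with an explicit state column and a filtered view as a rough analogue; in Oracle itself this is handled by the hidden `ORA_ARCHIVE_STATE` column on a `ROW ARCHIVAL` table and a session-level visibility setting, not by a view.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# archive_state plays the role of Oracle's hidden ORA_ARCHIVE_STATE column:
# '0' = active (visible), anything else = archived (hidden by default).
db.executescript("""
CREATE TABLE invoices (id INTEGER PRIMARY KEY, amount REAL,
                       archive_state TEXT DEFAULT '0');
CREATE VIEW active_invoices AS
    SELECT id, amount FROM invoices WHERE archive_state = '0';
INSERT INTO invoices (id, amount) VALUES (1, 99.0), (2, 250.0);
""")

# "Archive" a row in place: mark it inactive rather than moving it anywhere.
db.execute("UPDATE invoices SET archive_state = '1' WHERE id = 1")

# Normal application queries go through the view and see only active rows.
print(db.execute("SELECT COUNT(*) FROM active_invoices").fetchone()[0])  # 1

# For compliance access (Oracle's session-level visibility switch),
# query the base table to see archived rows as well.
print(db.execute("SELECT COUNT(*) FROM invoices").fetchone()[0])         # 2
```

Note what the analogue makes visible: the archived row never leaves the table, which is exactly why In-Database Archiving alone does not shrink the production database.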
This is an Oracle specific feature and does not apply to other databases.
Data warehouses tend to grow very quickly because they integrate data from multiple sources and maintain years of historical data for analytics. A number of our customers have data warehouses in the hundreds of terabytes to petabytes range. Managing such a large amount of data becomes a challenge. How do you curb runaway costs in such an environment? Completing maintenance tasks within the prescribed window and ensuring acceptable performance are also big challenges.
We have provided best practices to archive aged data from data warehouses. Archiving keeps the production data size at a nearly constant level, reducing infrastructure and maintenance costs while keeping performance up. At the same time, you can still access the archived data directly from any reporting tool if you need to. Yet many are loath to move data out of their production system. This year, at Informatica World, we're going to discuss another method of managing data growth without moving data out of the production data warehouse. I'm not going to tell you what this new method is, yet. You'll have to come and learn more about it at my breakout session at Informatica World: What's New from Informatica to Improve Data Warehouse Performance and Lower Costs.
I look forward to seeing all of you at Aria, Las Vegas next month. Also, I am especially excited to see our ILM customers at our second Product Advisory Council again this year.
I’ve been approached by a number of customers who are looking to archive data from their Salesforce application. There are two primary drivers I have heard cited:
- The need to manage the retention of Salesforce data and easily find and access it for legal eDiscovery
- Storage cost reduction for data that’s no longer active
Just like your on-premise database applications, such as E-Business Suite, PeopleSoft, Siebel, and custom applications, SaaS applications such as Salesforce, Oracle CRM On Demand, Microsoft Dynamics, NetSuite, Eloqua, and others will experience large data growth, causing performance issues and increasing costs.
As data grows in your SaaS applications, the performance of accessing transactions and reporting will degrade. Your SaaS vendors will also require more time, effort, and cost to maintain and manage this data. Backups, upgrades, and replication of these environments will take longer, and application availability will be impacted by longer maintenance windows. Your SaaS application vendors will require more storage to house the additional data, and this cost will be passed on to you.
SAP’s data warehouse solution (SAP BW) provides enterprises the ability to easily build a warehouse over their existing operational systems with pre-defined extraction and reporting objects and methods. Data that is loaded into SAP BW is stored in a layered architecture which encourages reusability of data throughout the system in a standardized way. SAP’s implementation also enables easy audits of data delivery mechanisms that are used to produce various reports within the system.
To allow enterprises to achieve this level of standardization and auditability, SAP BW must persistently store large amounts of data within the different layers of its architecture. Managing the size of the objects within these layers becomes increasingly important as the system grows, to ensure high levels of performance for end-user queries and data delivery.
Partitioning and archiving are alternative methods of improving database and application performance. Depending on a database administrator's comfort level with one technology or method over another, either could be implemented to address performance issues caused by data growth in production applications. But what are the best practices for using one or the other, and how can they be used better together?
Just like your house needs yearly spring cleaning and you need to regularly throw out old junk, your application portfolio needs periodic review and rationalization to identify legacy, redundant applications that can be decommissioned to reduce bloat and save costs. If you have a hard time letting go of old stuff, it's probably even harder for your application users to let go of access to their data. However, retiring applications doesn't have to mean that you also lose the data within them. If the data within those applications is still needed for periodic reporting or for regulatory compliance, there are still ways to retain the data without maintaining the application.
Data warehouses are applications, so why not manage them like one? In fact, data grows at a much faster rate in data warehouses, since they integrate data from multiple applications and cater to many different groups of users who need different types of analysis. Data warehouses also keep historical data for a long time, so data grows exponentially in these systems. Infrastructure costs in data warehouses also escalate quickly, since analytical processing on large amounts of data requires big, beefy boxes, not to mention the software license and maintenance costs for such a large volume of data. Imagine how much backup media is required to back up tens to hundreds of terabytes of data warehouse on a regular basis. But do you really need to keep all that historical data in production?
One of the challenges of managing data growth in data warehouses is that it's hard to determine which data is actually used, which data is no longer being used, or even whether the data was ever used at all. Unlike transactional systems, where the application logic determines when records are no longer being transacted upon, the usage of analytical data in data warehouses follows no definite business rules. Age or seasonality may determine data usage in data warehouses, but business users are usually loath to let go of the availability of all that data at their fingertips. The only clear-cut way to prove that some data is no longer being used in a data warehouse is to monitor its usage.
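Usage monitoring of this kind can start as something very simple: aggregate the query log by object and last-access date, then flag objects not touched since a cutoff. The sketch below assumes a hypothetical log format and table names; real warehouses would pull this from their own audit or query-history views.

```python
from datetime import date

# Hypothetical query-log entries: (table accessed, date of access).
query_log = [
    ("sales_2023", date(2024, 3, 1)),
    ("sales_2023", date(2024, 4, 10)),
    ("sales_2015", date(2021, 6, 2)),
]
all_tables = {"sales_2023", "sales_2015", "sales_2010"}

# Reduce the log to the most recent access per table.
last_access = {}
for table, when in query_log:
    last_access[table] = max(last_access.get(table, when), when)

cutoff = date(2023, 1, 1)
# Archiving candidates: tables never queried, or not queried since the cutoff.
candidates = sorted(t for t in all_tables
                    if last_access.get(t, date.min) < cutoff)
print(candidates)  # ['sales_2010', 'sales_2015']
```

Presenting business users with a concrete "not queried in N months" list tends to be far more persuasive than asking them in the abstract which data they are willing to give up.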