Thousands of Oracle OpenWorld 2012 attendees visited the Informatica booth to learn how to leverage their combined investments in Oracle and Informatica technology. Informatica delivered over 40 presentations on topics ranging from cloud to data security to smart partitioning. Key Informatica executives and experts from product engineering and product management spoke with hundreds of users and answered questions on how Informatica can help them improve Oracle application performance, lower risk and cost, and shorten project timelines. (more…)
Not a single day goes by without a healthy discussion or debate about the meaning or impact of the term ‘big data.’ It is quite comical in one sense, and quite inspiring in another. We are all intrigued by this idea of having access to so much information that when we get THE question, we will find THE answer. And hopefully the answer is more than “42!” (for those who may not be familiar with the reference, please take a pause and read the classic “The Hitchhiker’s Guide to the Galaxy” by Douglas Adams).
These past few weeks were no different. I recently had the opportunity to present at Informatica World and to attend EMC World. While Informatica is leading the charge for data integration, it is also taking a thought leadership position on how to think about an organization’s return on its data. By combining aspects of value, cost, and the impact of big data, it offers an elegant way to say: you can keep all the data you want in order to derive value, but there is a trade-off. There is ultimately a cost. That cost can be measured hundreds of different ways, but there will be consequences. So we have to prioritize and put controls in place to maximize ROI. (more…)
This week the EMC World 2012 conference is taking place in Las Vegas. Informatica is participating as a partner, continuing its commitment to the EMC Select Partnership for the Informatica ILM and MDM solutions. Informatica has also expanded the partnership to include support for EMC’s Greenplum Hadoop distribution, largely to meet organizations’ needs for big data integration while keeping big data manageable and secure. (more…)
Quite a bit has happened on the topic of big data since my last post on Informatica Perspectives almost a year and a half ago. I have spent a career helping organizations get control over uncontrolled data growth, and now industry visionaries are promoting this brave new world of big data. (more…)
As the final part of our series, Architecting a Database Archiving Solution, we will review a process I use to assess the existing total cost of ownership (TCO) of a client’s database application and to justify a database archiving solution. The key metrics I begin with are listed and explained below.
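Before getting into the individual metrics, here is a rough sketch of the kind of cost comparison they feed into. The cost categories and figures are hypothetical placeholders for illustration only, not the metrics from the assessment itself:

```python
# Hypothetical illustration of a before/after archiving cost comparison.
# All figures are made up for the example; substitute your own metrics.

def annual_db_cost(db_size_tb, cost_per_tb, admin_hours, hourly_rate):
    """Very simplified annual cost: storage plus administration."""
    return db_size_tb * cost_per_tb + admin_hours * hourly_rate

current = annual_db_cost(db_size_tb=10, cost_per_tb=25_000, admin_hours=500, hourly_rate=120)

# Assume archiving moves 6 TB to storage that costs a fifth as much per TB
# and roughly halves the administration effort on the remaining production data.
after_archiving = (
    annual_db_cost(db_size_tb=4, cost_per_tb=25_000, admin_hours=250, hourly_rate=120)
    + annual_db_cost(db_size_tb=6, cost_per_tb=5_000, admin_hours=0, hourly_rate=0)
)

print(f"Current annual cost : ${current:,.0f}")
print(f"With archiving      : ${after_archiving:,.0f}")
print(f"Estimated savings   : ${current - after_archiving:,.0f}")
```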
So far in the “Architecting a Database Archiving Solution” series, we have discussed the Anatomy of a Database Archiving Solution and End User Access Requirements. In this post we will review the archive repository options at a high level. Each option has its pros and cons and needs to be evaluated in more detail to determine which is the best fit for your situation.
Series: Architecting a Database Archiving Solution, Part 3: End User Access & Performance Expectations
In my previous post in the Architecting a Database Archiving Solution series, we discussed the major architecture components. In this post, we will focus on how end user access requirements and expected performance service levels drive the core of the architecture discussion.
End user access requirements can be determined by answering the following questions about what happens once data is archived from a source database:
- How long does the archived data need to be retained? The longer the retention period, the more the solution architecture needs to account for potentially significant data volumes and for technology upgrades or obsolescence. Retention also drives the cost trade-off between keeping data online in a database or an archive file and moving it nearline or offline to media such as tape; a rough cost sketch follows below. (more…)
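As a purely illustrative example of that trade-off, the sketch below compares the cost of retaining the same data on different storage tiers over the same retention period. The tier names and per-terabyte figures are assumptions made up for the example, not benchmarks:

```python
# Purely illustrative: how retention period and storage tier drive cost.
# The tiers and per-terabyte figures below are assumptions, not benchmarks.

TIER_COST_PER_TB_PER_YEAR = {
    "online database": 25_000,   # production-class database storage
    "archive file":     5_000,   # compressed, queryable archive files
    "tape (offline)":     500,   # nearline/offline media
}

def retention_cost(data_tb, years, tier):
    """Total storage cost of retaining data_tb terabytes for the given period."""
    return data_tb * years * TIER_COST_PER_TB_PER_YEAR[tier]

for tier in TIER_COST_PER_TB_PER_YEAR:
    cost = retention_cost(data_tb=5, years=7, tier=tier)
    print(f"{tier:>16}: ${cost:,.0f} to retain 5 TB for 7 years")
```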
Recently at Informatica World 2010 in Washington, DC, Richard Clarke was a featured speaker during one of the general sessions. The former counterterrorism czar served multiple presidents in the White House, worked for the Pentagon and the intelligence community, and is currently the Chairman of Good Harbor Consulting Services, LLC, a 360° security risk management firm. There was no one better suited to discuss corporate information security and risk management at an event whose theme was Beyond Boundaries.
Before we go into more detail on how to architect a database archiving solution, let’s review its major components at a high level. In general, a database archiving solution comprises four key pieces: application metadata, a policy engine, an archive repository, and an archive access layer.
Application Metadata – This component defines which tables participate in a database archiving activity. It stores the relationships between those tables, including database- or application-level constraints and any criteria that must be considered when selecting data to archive. For packaged applications such as Oracle E-Business Suite, PeopleSoft, or SAP, this metadata can usually be purchased as pre-populated repositories, such as Informatica’s Application Accelerators for Data Archive, to speed implementation.
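To make this concrete, here is a minimal sketch of the kind of information such metadata captures for a single business entity. The table names, joins, and selection criteria are invented for illustration; this is not the metadata format of any particular product:

```python
# Minimal sketch of what archiving metadata might capture for one business entity.
# Table names, joins, and criteria are invented for illustration only.

archive_entity = {
    "driving_table": "ORDER_HEADERS",
    "related_tables": [
        # child tables whose rows follow the parent when it is archived
        {"table": "ORDER_LINES", "join": "ORDER_LINES.HEADER_ID = ORDER_HEADERS.HEADER_ID"},
        {"table": "ORDER_HOLDS", "join": "ORDER_HOLDS.HEADER_ID = ORDER_HEADERS.HEADER_ID"},
    ],
    # business rules a row must satisfy before it is eligible for archiving
    "selection_criteria": [
        "ORDER_HEADERS.STATUS = 'CLOSED'",
        "ORDER_HEADERS.CLOSE_DATE < :cutoff_date",
    ],
}
```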
Policy Engine – This component is where business users define their retention policies in terms of time durations and possibly other related rules (e.g., keep all financial data for the current quarter plus seven years, and the general and sub-ledgers must have a status of “Closed”). The policy engine is also responsible for executing the policy within the database and moving data to a configured archive repository. This involves translating the policy and metadata into structured query language that the database understands (e.g., SELECT * FROM TABLE_A WHERE COLUMN_1 is older than two years AND COLUMN_2 = 'Closed'). Depending on the policy, users may want to move the data to the archive (meaning it is removed from the source application) or just create a copy in the archive. The policy engine takes care of all of those steps.
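As a rough sketch of that translation step, the example below turns a retention rule into a SQL statement. The table and column names are hypothetical, and a real policy engine would derive them from the application metadata rather than hard-code them:

```python
# Sketch of the translation step: a retention policy expressed as parameters
# becomes SQL the database understands. Table and column names are hypothetical.
from datetime import date, timedelta

def build_archive_select(table, date_column, status_column, retention_days):
    """Return a SELECT for rows older than the retention window and marked Closed."""
    cutoff = date.today() - timedelta(days=retention_days)
    return (
        f"SELECT * FROM {table} "
        f"WHERE {date_column} < DATE '{cutoff.isoformat()}' "
        f"AND {status_column} = 'Closed'"
    )

# Roughly "current quarter plus seven years" expressed as a day count
print(build_archive_select("GL_LEDGER", "PERIOD_END_DATE", "STATUS", retention_days=7 * 365 + 90))
```

When the policy is a move rather than a copy, the engine would follow the same selection with inserts into the archive repository and deletes from the source tables.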
Archive Repository – This stores the archived database records. The choices for the repository vary and are determined by a number of factors, typically driven by end user archive access requirements (we will discuss this in the next post). The options include another database, highly compressed queryable archive files, and XML files, to name a few.
Archive Access Layer – This is the mechanism that makes the database archive accessible to a native application, a standard business reporting tool, or a data discovery portal. Again, these options vary and are determined by end user access requirements and the technology standards in the organization’s data center.
In the next post in the series, we will discuss in further detail how end user access and performance requirements drive the selection of these components.
Julie Lockner, Founder, www.CentricInfo.com
Classifying database data for an ILM project requires a categorization and classification process that involves the business owners, Records Management, Security, IT, DBAs, and developers. In an ideal scenario, a company has documented every business process down to its data flows and database tables, and IT can map those tables to the underlying infrastructure. Since most of us work in realistic scenarios, here is one approach you can take to classify information without knowing all the interrelationships.