Category Archives: Application ILM

Exploring the Informatica ILM Nearline 6.1A

In today’s post, I want to write about Informatica ILM Nearline 6.1A. Although the Nearline concept is not new, it is still not widely known. It represents the logical evolution of business applications, data warehouses and information lifecycle approaches that have struggled to maintain acceptable performance levels in the face of the increasingly intense “data tsunami” looming over today’s business world. Whereas older archiving solutions based their viability on the declining prices of hardware and storage, ILM Nearline 6.1A embraces the dynamism of a software and services approach to fully leverage the potential of large enterprise data architectures.

Looking back, we can now see that the older data management solutions presented a paradox: in order to mitigate performance issues and meet Service Level Agreements (SLA) with users, they actually prevented or limited ad-hoc access to data. On the basis of system monitoring and usage statistics, this inaccessible data was then declared to be unused, and this was cited as an excuse for locking it away entirely. In effect, users were told: “Since you can’t get at it, you can’t use it, and therefore we’re not going to give it to you”!

ILM Nearline 6.1A, by contrast, allows historical data to be accessed with near-online speeds, empowering business analysts to measure and perfect key business initiatives through analysis of actual historical details. In other words, ILM Nearline 6.1A gives you all the data you want, when and how you want it (without impacting the performance of existing warehouse reporting systems!).

Aside from the obvious economic and environmental benefits of this software-centric approach and the associated best practices, the value of ILM Nearline 6.1A can be assessed in terms of the core proposition cited by Tim O’Reilly when he coined the term “Web 2.0”:

“The value of the software is proportional to the scale and dynamism of the data it helps to manage.”

In this regard, ILM Nearline 6.1A provides a number of important advantages over prior methodologies:

Keeps data accessible: ILM Nearline 6.1A enables optimal performance from the online database while keeping all data easily accessible. This massively reduces the work required to identify, access and restore archived data, while minimizing the performance hit involved in doing so in a production environment.

Keeps the online database “lean”: Because data archived to the ILM Nearline 6.1A can still be easily accessed by users at near-online speeds, it allows for much more recent data to be moved out of the online system than would be possible with archiving. This results in far better online system performance and greater flexibility to further support user requirements without performance trade-offs. It is also a big win for customers moving their systems to HANA.

Relieves data management stress: Data can be moved to ILM Nearline 6.1A without the substantial ongoing analysis of user access patterns that archiving products usually require. The process is typically based on a rule as simple as “move all data older than x months from the ten largest InfoProviders” (a minimal sketch of such a rule appears after this list).

Mitigates administrative risk: Unlike archived data, ILM Nearline 6.1A data requires little or no additional ongoing administration, and no additional administrative intervention is required to access it.

Lets analysts be analysts: With ILM Nearline 6.1A, far less time is taken up gaining access to key data and “cleansing” it, so much more time can be spent running “what if” scenarios before recommending a course of action for the company. This improves not only the productivity but also the quality of work of key business analysts and statistical gurus.

Copes with data structure changes: ILM Nearline 6.1A can easily deal with data model changes, making it possible to query data structured according to an older model alongside current data. With archive data, this would require considerable administrative work.

Leverages existing storage environments: Compared with older archiving products and strategies, the high degree of compression offered by ILM Nearline 6.1A greatly increases the amount of information that can be stored in your existing storage environment, as well as the speed at which it can be accessed.

Keeps data private and secure: ILM Nearline 6.1A includes privacy and security features that prevent sensitive information (for example names, social security numbers and credit card details) from being exposed to ad-hoc business analysts.
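To make the simplicity of that data-movement rule concrete, here is a minimal Python sketch of the “move all data older than x months from the ten largest InfoProviders” policy mentioned above. The InfoProvider names, sizes, retention threshold and the print-out standing in for the actual move are all hypothetical; a real implementation would hand the selection over to the ILM Nearline engine through its own administration tools.

```python
from datetime import date, timedelta

# Hypothetical catalog of InfoProviders and their current sizes in GB.
INFOPROVIDERS = {
    "ZSD_O01": 850, "ZFI_O02": 720, "ZCO_C03": 640, "ZMM_O04": 610,
    "ZSD_C05": 580, "ZPP_O06": 540, "ZFI_C07": 500, "ZHR_O08": 470,
    "ZSD_O09": 450, "ZCO_O10": 430, "ZQM_O11": 120,
}

RETENTION_MONTHS = 12  # "move all data older than x months"
TOP_N = 10             # "... from the ten largest InfoProviders"

def nearline_candidates():
    """Return (InfoProvider, cutoff_date) pairs selected by the simple rule."""
    cutoff = date.today() - timedelta(days=30 * RETENTION_MONTHS)
    largest = sorted(INFOPROVIDERS, key=INFOPROVIDERS.get, reverse=True)[:TOP_N]
    return [(name, cutoff) for name in largest]

for provider, cutoff in nearline_candidates():
    # In practice this request would be handed to the nearline engine;
    # printing it here just shows what the rule selects.
    print(f"Move {provider} records older than {cutoff} to nearline storage")
```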

In short, ILM Nearline 6.1A offers a significant advantage over other nearline and archiving technologies. When data needs to be removed from the online database in order to improve performance, but still needs to be readily accessible to users for long-term analysis, historical reporting, or rebuilding aggregates/KPIs/InfoCubes for period-over-period analysis, ILM Nearline 6.1A is currently the only workable solution available.

In my next post, I’ll discuss more specifically how implementing the ILM Nearline 6.1A solution can benefit your business apps, data warehouses and your business processes.



ROI via Application Retirement

ROI: every executive’s favorite acronym, and one that is often challenging to demonstrate.

In our interactions with provider clients and prospects, we are hearing that they’ve migrated to new EMRs but aren’t realizing the ROI they had budgeted or anticipated. In many cases, they are using the new EMR for documentation but are still paying to maintain the legacy EMR for access to historical data for billing and care delivery. If health systems can retire these legacy applications and still maintain operational access to the data, they will be able to realize the expected ROI and serve patients proactively.

My colleague Julie Lockner wrote a blog post about how Informatica Application Retirement for Healthcare is helping healthcare organizations retire legacy applications and realize ROI.

Read her blog post here or listen to a quick overview here.


What is In-Database Archiving in Oracle 12c and Why You Still Need a Database Archiving Solution to Complement It (Part 2)

In my last blog on this topic, I discussed several areas where a database archiving solution can complement, or help you better leverage, the Oracle In-Database Archiving feature. For an introduction to the new In-Database Archiving feature in Oracle 12c, refer to Part 1 of my blog on this topic.

Here, I will discuss additional areas where a database archiving solution can complement the new Oracle In-Database Archiving feature:

  • Graphical UI for ease of administration – In-Database Archiving is currently a technical feature of the Oracle database and is not easily visible or manageable outside of the DBA persona. This is where a database archiving solution helps, by providing a more comprehensive set of graphical user interfaces (GUIs) that make the feature easier to monitor and manage.
  • Enabling application of In-Database Archiving for packaged applications and complex data models – A database archiving solution adds the concept of business entities: transactional records composed of related tables, so that data and referential integrity are maintained as you archive, move, purge and retain data. It also adds business rules that determine when data has become inactive and can therefore be safely archived. Together, these allow DBAs to apply this new Oracle feature to more complex data models. In addition, application accelerators (prebuilt metadata of business entities and business rules for packaged applications) enable the application of In-Database Archiving to packaged applications such as Oracle E-Business Suite, PeopleSoft, Siebel and JD Edwards (see the sketch below).
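To make the business-entity idea more concrete, here is a hedged Python sketch that marks a complete transactional record (a hypothetical order header plus its line items) as archived in a single transaction, so parent and child rows stay consistent. It assumes In-Database Archiving (ROW ARCHIVAL) is already enabled on both tables and uses the python-oracledb driver; the table names, columns, connection details and the 24-month rule are illustrative only, not part of any product API.

```python
import oracledb  # python-oracledb driver; connection details below are placeholders

def archive_business_entity(conn, order_id):
    """Flag an order header and its line items as archived in one transaction,
    preserving the integrity of the whole business entity."""
    with conn.cursor() as cur:
        # Child rows first, then the parent; both use the hidden
        # ORA_ARCHIVE_STATE column added by ROW ARCHIVAL ('1' = archived).
        cur.execute(
            "UPDATE order_lines SET ora_archive_state = '1' WHERE order_id = :id",
            {"id": order_id},
        )
        cur.execute(
            "UPDATE orders SET ora_archive_state = '1' WHERE order_id = :id",
            {"id": order_id},
        )
    conn.commit()

if __name__ == "__main__":
    conn = oracledb.connect(user="app", password="***", dsn="dbhost/orclpdb1")
    with conn.cursor() as cur:
        # Hypothetical business rule: orders closed more than 24 months ago are inactive.
        cur.execute(
            "SELECT order_id FROM orders "
            "WHERE status = 'CLOSED' AND close_date < ADD_MONTHS(SYSDATE, -24)"
        )
        inactive_orders = [row[0] for row in cur.fetchall()]
    for order_id in inactive_orders:
        archive_business_entity(conn, order_id)
```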

(more…)


What is In-Database Archiving in Oracle 12c and Why You Still Need a Database Archiving Solution to Complement It (Part 1)

What is the new In-Database Archiving in the latest Oracle 12c release?

On June 25, 2013, Oracle introduced a new feature called In-Database Archiving with its Oracle Database 12c release. “In-Database Archiving enables you to archive rows within a table by marking them as inactive. These inactive rows are in the database and can be optimized using compression, but are not visible to an application. The data in these rows is available for compliance purposes if needed by setting a session parameter. With In-Database Archiving you can store more data for a longer period of time within a single database, without compromising application performance. Archived data can be compressed to help improve backup performance, and updates to archived data can be deferred during application upgrades to improve the performance of upgrades.”
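For readers who want to see what the feature looks like in practice, the sketch below runs the core statements through the python-oracledb driver: enabling ROW ARCHIVAL on a table, flagging old rows as archived via the hidden ORA_ARCHIVE_STATE column, and switching the session visibility setting so archived rows can still be queried for compliance. The orders table, the 36-month predicate and the connection details are placeholders, not part of the feature itself.

```python
import oracledb  # python-oracledb driver; table and predicate below are illustrative

STATEMENTS = [
    # Enable In-Database Archiving; Oracle adds the hidden ORA_ARCHIVE_STATE
    # column, which defaults to '0' (active) for every row.
    "ALTER TABLE orders ROW ARCHIVAL",
    # Mark old rows as archived: they remain in the table (and can be
    # compressed), but normal queries no longer see them.
    "UPDATE orders SET ora_archive_state = '1' "
    "WHERE order_date < ADD_MONTHS(SYSDATE, -36)",
    # Flip the session parameter when archived rows must be visible again,
    # for example for a compliance query.
    "ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL",
]

conn = oracledb.connect(user="app", password="***", dsn="dbhost/orclpdb1")
with conn.cursor() as cur:
    for sql in STATEMENTS:
        cur.execute(sql)
conn.commit()
```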

This is an Oracle-specific feature and does not apply to other databases.

(more…)


Get Your Data Butt Off The Couch and Move It

Data is everywhere. It’s in databases and applications spread across your enterprise. It’s in the hands of your customers and partners. It’s in cloud applications and on cloud servers. It’s in spreadsheets and documents on your employees’ laptops and tablets. It’s in smartphones, sensors and GPS devices. It’s in the blogosphere, the twittersphere and your friends’ Facebook timelines. (more…)


How to Improve Your Application Performance with the Hardware You Already Have

Most application owners know that as data volumes accumulate, application performance can take a major hit if the underlying infrastructure is not aligned to keep up with demand. The problem is that constantly adding hardware to manage data growth can get costly – stealing budgets away from needed innovation and modernization initiatives.

Join Julie Lockner as she reviews the Cox Communications case study on how they solved an application performance problem caused by too much data, using Informatica Data Archive with Smart Partitioning on the hardware they already had. Source: TechValidate. TVID: 3A9-97F-577


Keeping Your SAP HANA Costs Under Control

Adopting SAP HANA can offer significant new business value, but it can also be an expensive proposition. If you are contemplating or in the process of moving to HANA, it’s worth your time to understand your options for Nearlining your SAP data.  The latest version of Informatica ILM Nearline, released in February, has been certified by SAP and can run with SAP BW systems running on HANA or any relational database supported by SAP.

Nearlining your company’s production SAP BW before migrating to a HANA-based BW can yield huge savings. Even if your HANA project has already started, Nearlining the production data will help keep database growth flat. We have customers who have actually been able to shrink InfoProviders by enforcing strict data retention rules on the data stored in the live database.

Informatica World is around the corner, and I will be there with my peers to demo and talk about the latest version of Informatica ILM Nearline. Click here to learn more about Informatica World 2013 and make sure you sign up for one of my Hands-On Lab sessions on this topic. See you at the Aria in Las Vegas in June.


Column-oriented Part 2: Flexible Data Modeling for a Simplified End User Experience

In my previous blog, I explained how Column-oriented Database Management Systems (CDBMS), also known as columnar databases or CBAT, offer a distinct advantage over the traditional row-oriented RDBMS in terms of I/O workload, deriving primarily from basing the granularity of I/O operations on the column rather than the entire row. This technological advantage has a direct impact on the complexity of data modeling tasks and on the end-user’s experience of the data warehouse, and this is what I will discuss in today’s post. (more…)


Column-oriented Part 1: The I/O Advantage

Column-oriented Database Management Systems (CDBMS), also referred to as columnar databases and CBAT, have been getting a lot of attention recently in the data warehouse marketplace and trade press. Interestingly, some of the newer companies offering CDBMS-based products give the impression that this is an entirely new development in the RDBMS arena. The technology has actually been around for quite a while, but the market has only recently begun to appreciate its many benefits. So, why is CDBMS now coming to be recognized as the technology that offers the best support for very large, complex data warehouses intended to support ad hoc analytics? In my opinion, one of the fundamental reasons is the reduction in I/O workload that it enables. (more…)
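As a back-of-the-envelope illustration of that I/O argument (a toy model, not a description of any particular CDBMS), the Python sketch below estimates how many bytes must be scanned to aggregate a single column when rows are stored contiguously versus when each column is stored contiguously. The row count, column count and value width are assumptions chosen only to show the shape of the difference.

```python
NUM_ROWS = 100_000_000   # hypothetical fact rows in the warehouse table
NUM_COLS = 50            # columns per row
COL_WIDTH = 8            # assume 8 bytes per column value

# Row-oriented storage: a query that aggregates one column still has to read
# every block, and each block holds all 50 columns of its rows.
row_store_bytes = NUM_ROWS * NUM_COLS * COL_WIDTH

# Column-oriented storage: only the blocks of the referenced column are read
# (before any compression, which typically widens the gap further).
column_store_bytes = NUM_ROWS * COL_WIDTH

print(f"Row-oriented scan:    {row_store_bytes / 1e9:10.1f} GB")
print(f"Column-oriented scan: {column_store_bytes / 1e9:10.1f} GB")
print(f"I/O reduction:        {row_store_bytes / column_store_bytes:.0f}x")
```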
