Next-Generation Data Virtualization Series Part 2 – Characteristics

So, where have I been since my last blog? Well, I have been working on our new Architect to Architect webinar series on data virtualization, which is very exciting for me as I get to rub shoulders (virtually speaking) with hundreds of industry architects.

The interactive nature and record attendance at these webinars have made one thing very clear – data virtualization is indeed top of mind. In my last blog we discussed the concept and how data virtualization differs from, and is in many ways a superset of, traditional data federation, especially in how it overcomes many of the latter's limitations. Wayne Eckerson did a great job of tracking the evolution of data federation in a recent webinar and blog.

Here are the replays for part 1 and part 2 of this webinar series. Be sure to check them out for yourself and let me know what you think.

Yes, exciting times indeed, especially for data virtualization!

In a recent checklist on data virtualization, TDWI’s Philip Russom had the following to say – “As the term suggests, data virtualization must abstract the underlying complexity and provide a business-friendly view of trusted data on demand.” A simple yet profound statement, and one that invites a closer look at the characteristics of data virtualization that make such complexity manageable.

In this checklist, Russom provides the following list of recommendations for anyone considering real-time data integration:

  • Enable real-time data warehousing with real-time data integration
  • Know the available real-time data integration techniques
  • Virtualize and pool data resources for access in real time or other speeds and frequencies
  • Profile, certify, improve, and optimize data as you virtualize it
  • Abstract real-time data warehouse functionality as data services
  • Provision data services for any application, in multiple modes and protocols

  • Leverage a single platform for both physical and virtual data integration

Data virtualization can thus be described as an abstraction layer that hides and handles all the complexity of making many different data sources look like one. And when I say complexity, I mean all types of data, varying degrees of data quality, different modes of data processing, and a whole range of consuming applications that each want their data in a specific way, on demand. This doesn’t sound like simple data federation to me.
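To make the abstraction-layer idea concrete, here is a minimal sketch in Python. It is purely illustrative: the customers table, the REST endpoint, and the field names are all hypothetical, and this is not how any commercial data virtualization platform is implemented. The point is simply that consumers query one canonical view while per-source adapters hide where, and in what shape, the data actually lives.

```python
# A minimal, illustrative sketch of a virtual view that federates two
# hypothetical sources -- a relational table and a JSON web service --
# behind one uniform interface.
import json
import sqlite3
import urllib.request

class VirtualCustomerView:
    """Presents heterogeneous sources as a single logical dataset."""

    def __init__(self, db_path, service_url):
        self.db_path = db_path          # e.g. an operational SQLite database
        self.service_url = service_url  # e.g. a REST endpoint returning JSON

    def _from_database(self):
        # Adapter 1: read rows from a (hypothetical) relational table.
        with sqlite3.connect(self.db_path) as conn:
            rows = conn.execute("SELECT id, name, email FROM customers")
            for cust_id, name, email in rows:
                yield {"id": cust_id, "name": name, "email": email}

    def _from_service(self):
        # Adapter 2: fetch records from a (hypothetical) web service and
        # map its field names onto the same canonical schema.
        with urllib.request.urlopen(self.service_url) as resp:
            for record in json.load(resp):
                yield {"id": record["customerId"],
                       "name": record["fullName"],
                       "email": record["emailAddress"]}

    def query(self, predicate=lambda record: True):
        """On-demand access: the consumer sees one stream of canonical
        records and never learns which source each one came from."""
        for source in (self._from_database, self._from_service):
            for record in source():
                if predicate(record):
                    yield record
```

A consuming application simply iterates over query(); adding a third source means writing one more adapter, not changing every consumer. Production platforms layer much more on top of this pattern – pushdown optimization, caching, data quality, security – but the abstraction is the same.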

What do you think?

Informatica Data Services is the next-generation data virtualization technology that is built around these very tenets.

Next up – I will discuss next-generation data virtualization architecture and best practices for agile business intelligence (BI).

And before I sign off, please register here for part 3 of the interactive Architect to Architect webinar series on data virtualization. Come see for yourself why your fellow architects are calling it “refreshing.”


One Response to Next-Generation Data Virtualization Series Part 2 – Characteristics

  1. Tim says:

    Hi,
    May I share information about this webinar series among my colleagues on itevent.net?
    Thanks!
