Category Archives: SOA
Simplistic Approaches to Data Federation Solve (Only) Part of the Puzzle – We Need Data Virtualization (Part 2 of 3)
In my last post, I introduced the concept of data federation, which for now I would like to differentiate from data virtualization – a term that I’ll bring into focus in a bit. But first, we explored two issues: data accessibility and data latency. In recent years, data accessibility services have matured greatly, to the point where one can largely abstract those services from the downstream consumer (or “reuser”) of the data. (more…)
Apparently, everyone’s favorite words these days are “big data.” But just because some new tools and techniques promise the potential of absorbing and analyzing huge amounts of data from a variety of sources, it does not mean that installing Hadoop in your enterprise will automatically help you get new insights from existing and “big” data, faster. (more…)
Sometimes when you want to sell SOA, you need to sell the concept and not the buzzword. Case in point: when I speak at a conference, if I talk about SOA patterns as a way to drive toward a better architecture, I often see eyes begin to roll. However, if I say we’re looking to externalize services that will be meshed and re-meshed together to form business solutions, thus providing agility…the eyes light up. The funny thing is, I’m talking about the same thing. (more…)
Today, agility and timely visibility are critical to the business. No wonder CIO.com states that business intelligence (BI) will be the top technology priority for CIOs in 2012. However, is your data architecture agile enough to handle these exacting demands?
In his blog Top 10 Business Intelligence Predictions For 2012, Boris Evelson of Forrester Research, Inc., states that traditional BI approaches often fall short for the two following reasons (among many others):
- BI hasn’t fully empowered information workers, who still largely depend on IT
- BI platforms, tools and applications aren’t agile enough (more…)
If you haven’t already, I think you should read The Forrester Wave™: Data Virtualization, Q1 2012. For several reasons – one, to truly understand the space, and two, to understand the critical capabilities required to be a solution that solves real data integration problems.
At the very outset, let’s clearly define Data Virtualization. Simply put, Data Virtualization is foundational to Data Integration. It enables fast and direct access to the critical data and reports that the business needs and trusts. It is not to be confused with simple, traditional Data Federation. Instead, think of it as a superset that must complement existing data architectures to support BI agility, MDM and SOA. (more…)
For too long, many enterprises have been attempting to sort through increasingly complex spaghetti architectures with point-to-point data integration. “They get to the point where when they want to introduce a new product or make a change, they have to touch 30 different systems,” says John Akred, data and platforms lead at Accenture Technology Labs. “That has real consequences in the marketplace for enterprises.”
John continued that Hadoop – an open-source software framework that enables applications to run across large arrays of nodes, accessing petabytes’ worth of data – will help organizations manage and scale up to the huge volumes of unstructured and semi-structured data now surging into organizations. I recently had the opportunity to join John, along with Julianna DeLua, Enterprise Solution Evangelist for Big Data from Informatica, for a discussion of Hadoop’s role in the emerging data as a platform paradigm. The session was the second session of the Hadoop Tuesdays Webinar series, sponsored by Informatica and Cloudera. (more…)
Data services, data services, data services. Do I sound like a broken record? Forgive me if I seem obsessed with the topic, but I truly believe that technology can change your enterprise, and allow IT to finally get a handle on data in the shortest amount of time.
The real value lies in SOA data services. These services allow enterprises to place an easy-to-configure layer between the source physical databases and those that wish to consume the data, whether applications or humans. If this seems simple, why, you are right! It is. Why is it so simple? Because the complexity is hidden from you: the access mechanisms to the physical data, the transformation of schemas from physical to abstract, and even the management of data quality and integrity.
So where is the value? There are three core points to consider here: (more…)
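To make the layering concrete, here is a minimal sketch of the idea in Python. It is not any particular product’s API; the table, column names, and `CustomerDataService` class are all hypothetical, and SQLite stands in for a legacy source system. Consumers see only a clean abstract schema, while the access mechanism and the physical-to-abstract mapping stay hidden inside the service.

```python
import sqlite3

# Hypothetical "physical" source: a legacy table with cryptic column names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cust_mstr (c_id INTEGER, c_nm TEXT, c_st TEXT)")
conn.executemany("INSERT INTO cust_mstr VALUES (?, ?, ?)",
                 [(1, "Acme Corp", "CA"), (2, "Globex", "NY")])

class CustomerDataService:
    """Data-service layer: consumers work against a logical schema and
    never touch the physical table, its driver, or its column names."""

    # Mapping from abstract field names to physical columns.
    FIELD_MAP = {"customer_id": "c_id", "name": "c_nm", "state": "c_st"}

    def __init__(self, connection):
        self._conn = connection  # access mechanism hidden from consumers

    def get_customers(self, state=None):
        # Translate the abstract schema into the physical one on the fly.
        cols = ", ".join(f"{phys} AS {abstract}"
                         for abstract, phys in self.FIELD_MAP.items())
        sql = f"SELECT {cols} FROM cust_mstr"
        params = ()
        if state is not None:
            sql += " WHERE c_st = ?"
            params = (state,)
        rows = self._conn.execute(sql, params).fetchall()
        # Consumers get plain records keyed by the abstract field names.
        return [dict(zip(self.FIELD_MAP.keys(), row)) for row in rows]

service = CustomerDataService(conn)
print(service.get_customers(state="CA"))
```

If the physical schema changes, only `FIELD_MAP` and the query inside the service need to move; every downstream consumer keeps working against the same abstract fields.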
I was on the big data bandwagon before everyone began to jump on. The value is very clear. Simply put, it’s the ability to manage terabytes and terabytes of data as if it were just a small data set.
Big data is possible because we take a divide-and-conquer approach to processing queries and other data operations. The operations on data are divided up across many different servers, perhaps thousands of them, and the results are recombined once all of the operations are complete. This is the approach behind the most popular big data technology, Hadoop, which leverages MapReduce to manage data and the operations on it. The larger database vendors are picking up on this trend and moving in this direction as well. (more…)
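The divide-and-conquer pattern above can be sketched in a few lines of plain Python. This is not Hadoop itself; a thread pool stands in for the cluster of servers, and a toy list of numbers stands in for terabytes of data, but the shape is the same: partition, compute partial results in parallel, then recombine.

```python
from functools import reduce
from multiprocessing.dummy import Pool  # thread pool simulating worker nodes

# Toy data set standing in for a huge volume of data spread across servers.
numbers = list(range(1, 1001))

def split(data, n_chunks):
    """Divide: partition the data across the (simulated) servers."""
    size = (len(data) + n_chunks - 1) // n_chunks
    return [data[i:i + size] for i in range(0, len(data), size)]

def map_phase(chunk):
    """Each 'server' computes a partial result on its own slice."""
    return sum(chunk)

def reduce_phase(partials):
    """Recombine the partial results once every operation is complete."""
    return reduce(lambda a, b: a + b, partials, 0)

chunks = split(numbers, n_chunks=8)
with Pool(8) as pool:
    partial_sums = pool.map(map_phase, chunks)
total = reduce_phase(partial_sums)
print(total)  # 500500, the same answer as summing the data in one place
```

The point of the pattern is that `map_phase` runs independently on each slice, so adding more servers scales the work out; only the small partial results travel back for the reduce step.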
Late last year at a company all-hands meeting, the CEO of a large consumer electronics company issued a serious mandate. He needed big results, and fast.
“Competition is really heating up and customer churn is on the rise. I need visibility on-demand – a common view of all enterprise data or else we cannot continue to grow.” IT teams across the enterprise have been scurrying and working furiously to create a common view of CUSTOMER, PRODUCT, SALES, and INVENTORY, but the results have been incomplete, inaccurate and too slow. This is no easy task. (more…)