Category Archives: Data Services
At the Informatica World 2014 pre-conference, the “ILM Day” sessions were packed, with over 100 people in attendance. The turnout reflects strong interest in data archive, test data management, and data security. Customers were the focus of the panel sessions today, taking center stage to share their experiences, best practices, and lessons learned from successful deployments.
Both the test data management and data archive panels drew strong audience interest and interaction. For Test Data Management, the panel topic was “Agile Development by Streamlining Test Data Management”; for data archive, the session tackled “Managing Data Growth in the Era of Application Consolidation and Modernization”. The panels offered practical tactics and strategies for managing data growth and for provisioning test data efficiently and safely. Thank you to the customers, partners, and analysts who served on the panels; participants included EMC, Visteon, Comcast, Lowes, Tata Consultancy Services, and Neuralytix.
The day concluded with an excellent presentation from the ILM General Manager, Amit Walia, and the CTO of the International Association of Privacy Professionals, Jeff Northrop. Amit provided an executive preview of Tuesday’s Secure@Source(TM) announcement, while Jeff offered a thought-provoking market backdrop on the issues and challenges in data privacy and security, and on why the focus of information security needs to shift to a ‘data-centric’ approach.
A very successful event for all involved!
In the Information Age we live and work in, where it’s hard to go even one day without a Google search, where do you turn for insights that can help you solve work challenges and advance your career? It’s a tough question. How can we deal with the challenges of information overload, which some have called information pollution?
Data is everywhere. It’s in databases and applications spread across your enterprise. It’s in the hands of your customers and partners. It’s in cloud applications and cloud servers. It’s in spreadsheets and documents on your employees’ laptops and tablets. It’s in smartphones, sensors, and GPS devices. It’s in the blogosphere, the twittersphere, and your friends’ Facebook timelines.
On a recent visit to a client, three people asked me to autograph their copies of Integration Competency Center: An Implementation Guidebook. David Lyle and I published the book in 2005, but it was clear from the dog-eared corners and bookmark tabs that it is still relevant and actively used today. Much has changed in the last seven years, including the emergence of Big Data, Data Virtualization, Cloud Integration, Self-Service Business Intelligence, Lean and Agile practices, Data Privacy, Data Archiving (the “death” part of the information life cycle), and Data Governance. These areas were not the mainstream concerns in 2005 that they are today. The original ICC (Integration Competency Center) book’s concepts and advice are still valid in this new context, but the question I’d like readers to comment on is this: should we write a new book that explicitly provides guidance for these new capabilities in a shared services environment?
“I know it when I see it.” So wrote Potter Stewart, Associate Justice of the Supreme Court, in his Jacobellis v. Ohio opinion (1964). He was talking about pornography. The same holds true for data. For example, most business users have a hard time describing exactly what data they need for a new BI report, including which source system to get the data from, in terms precise enough to let designers, modelers, and developers build the report right the first time. But if you sit down with a user in front of an analyst tool and profile the potential source data, they will tell you in an instant whether it’s the right data or not.
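That profiling step can be sketched in a few lines. This is a minimal illustration only, not any particular analyst tool; the `profile_column` helper and the sample records are hypothetical:

```python
from collections import Counter

def profile_column(rows, column):
    """Summarize one candidate source column so a business user can
    eyeball it: null rate, distinct count, and most common values."""
    values = [row.get(column) for row in rows]
    non_null = [v for v in values if v is not None]
    return {
        "null_rate": 1 - len(non_null) / len(values),
        "distinct": len(set(non_null)),
        "top_values": Counter(non_null).most_common(3),
    }

# Hypothetical sample pulled from a candidate source table.
sample = [
    {"region": "West"}, {"region": "West"},
    {"region": "East"}, {"region": None},
]
print(profile_column(sample, "region"))
# → {'null_rate': 0.25, 'distinct': 2, 'top_values': [('West', 2), ('East', 1)]}
```

Shown a summary like this, a user can say "that's the right data" (or not) in seconds, long before anyone builds the report.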
The ability to create abstract schemas that are mapped to back-end physical databases provides a huge advantage for those enterprises looking to get their data under control. However, given the power of data virtualization, there are a few things that those in charge of data integration should know. Here are a few quick tips.
Tip 1: Start with a new schema that is decoupled from the data sources.
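The idea behind Tip 1 can be sketched as follows: consumers query an abstract schema whose fields are mapped to physical sources, so a source column (or the whole source) can be swapped without changing any consumer. All table, column, and function names here are hypothetical, not any specific virtualization product's API:

```python
# Physical layer: source tables as they actually exist (hypothetical).
PHYSICAL_SOURCES = {
    "crm.customers": [{"cust_nm": "Acme", "rgn_cd": "W"}],
}

# Abstract layer: business-friendly field -> (physical table, physical column).
CUSTOMER_VIEW = {
    "customer_name": ("crm.customers", "cust_nm"),
    "region_code":   ("crm.customers", "rgn_cd"),
}

def query(view):
    """Resolve an abstract view against whatever physical source each
    field is currently mapped to. For simplicity this sketch assumes
    every field in the view maps to the same physical table."""
    table = next(iter(view.values()))[0]
    return [
        {field: record[col] for field, (_, col) in view.items()}
        for record in PHYSICAL_SOURCES[table]
    ]

print(query(CUSTOMER_VIEW))
# → [{'customer_name': 'Acme', 'region_code': 'W'}]
```

Because consumers only ever see `customer_name` and `region_code`, remapping those fields to a new source is a change to the mapping table, not to every downstream report.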
A recent trip to a supermarket in Telluride, Colorado, turned up an analogy for data quality in an unlikely place. You see, supermarkets here require you to bring your own bags to cart your groceries home. Those brown disposable plastic bags are banned; the town has made a firm commitment to the philosophy of Reduce, Reuse, and Recycle. By adhering to the same philosophy, data integration teams can develop and deploy successful data quality strategies across the enterprise despite the constraints of today’s “do more with less” IT budgets.
In the decade that I’ve been in the Information Management space, I’ve noticed that success in data integration usually comes in small increments, typically on a project-by-project basis. However, by leveraging those small incremental successes and deploying them in a repeatable, consistent fashion, either as standardized rule sets or as data services in a SOA, development teams can maximize their impact at the enterprise level.
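The "Reuse" idea above can be sketched in a few lines: package validation rules once as a named, standardized rule set, then have every project apply the same set rather than rewriting the checks. The rule names and patterns below are illustrative assumptions, not any product's built-in rules:

```python
import re

# Standardized rule set, defined once and shared across projects.
STANDARD_RULES = {
    "us_zip": lambda v: bool(re.fullmatch(r"\d{5}(-\d{4})?", v or "")),
    "non_empty": lambda v: bool(v and v.strip()),
}

def validate(record, rule_map):
    """Apply the shared rule set to a record; return the failing fields."""
    return [field for field, rule in rule_map.items()
            if not STANDARD_RULES[rule](record.get(field))]

# Two different projects can reuse the same rules with their own mappings.
crm_rules = {"name": "non_empty", "zip": "us_zip"}
print(validate({"name": "Acme", "zip": "80435"}, crm_rules))  # → []
print(validate({"name": "", "zip": "1234"}, crm_rules))       # → ['name', 'zip']
```

The same rule set could just as easily sit behind a data service in a SOA, so every consuming project gets identical, centrally maintained validation.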
If data is an asset, why should you give it away? Open data is based on the notion that some data should be freely available to everyone to use, similar to other “Open” movements such as open source software. But open data doesn’t have to only be about exposing information publicly; the same concepts can be applied inside your firewall. Here are a few examples: