Tag Archives: data
With data integration technology so readily available, it’s amazing to me that manual coding of data integration flows is even a consideration. However, based on this article in SearchDataManagement, the practice is still out there.
Of course the gist of the article is that hand coding is no longer considered the most productive way to go, which is correct. Still, the fact that this remains a live question for anyone moving to data integration solutions perplexes me. Perhaps it’s the new generation of architects and data management professionals who need a quick lesson on the pitfalls of doing data integration by hand. (more…)
I’m sitting in the Taiwan airport on my way to Guangzhou. We just completed the Informatica World Tour in Hong Kong, Beijing and Taiwan, and I’ve had the opportunity to deliver the keynote presentation, Maximize Your Return on Big Data.
Attendance at every stop exceeded our expectations – 50% more attendees than planned. Why? Big data. It is a hot topic and everyone is trying to determine how to leverage big data in their enterprise to gain a competitive advantage. At the event, I made the point – if you’re not trying to understand how to leverage big data in your enterprise, your successor will. Kitty Fok, the IDC China Country Manager, spoke after me. Her consistent comment was – “if your company isn’t looking to leverage big data, you will be out of business.” (more…)
In contrast to addressing the management and process issues, we might say that the technical issues are actually quite straightforward to address. In my original enumeration from a few posts back, I ordered the data issue categories in the reverse order of the complexity of their solution. Model and information architecture problems are the most challenging, because of the depth to which business applications are inherently dependent on their underlying models. Even simple changes require significant review to make sure that no expected capability is inadvertently broken. (more…)
Just like your house needs yearly spring cleaning and you need to regularly throw out old junk, your application portfolio needs periodic review and rationalization to identify legacy, redundant applications that can be decommissioned to reduce bloat and save costs. If you have a hard time letting go of old stuff, it’s probably even harder for your application users to let go of access to their data. However, retiring applications doesn’t have to mean that you also lose the data within them. If the data within those applications are still needed for periodic reporting or for regulatory compliance, then there are still ways to retain the data without maintaining the application. (more…)
Coincidentally, my company is involved with a number of different customers who are reviewing the quality criteria associated with addresses. Each scenario has different motivations for assessing address data quality. One use case focuses on administrative management – ensuring that things that need to happen at a particular location have an accurate and valid address. A different use case considers one aspect of regulatory compliance regarding protection of private information (since mail delivered to the wrong address is a potential exposure of the private information contained within the envelope). Another compliance use case looks at timely delivery of hard copy notifications as part of a legal process, requiring the correct address. (more…)
In adapting the six-sigma technique of failure mode and effects analysis for data quality management, we are hoping to proactively identify the potential errors that lead to the most severe business impacts and then strengthen the processes and applications to prevent those errors from being introduced in the first place. In my last post, though, I noted that the approach to this analysis starts with the errors and then figures out the impacts. I think we should go the other way so as to optimize the effort and reduce the analysis time, focusing on the most significant potential failures. (more…)
Back in the good ol’ days, Santa Claus received letters and post cards from children all over the world. When telephones and faxes became commonplace, they were also used to contact Santa. In addition to those traditional methods, children today can also use email, Twitter, Facebook and even LinkedIn to notify Santa of their wish list. (more…)
Last time we looked at the failure mode and effects analysis technique from the six-sigma community and slightly adjusted it to be data-centric, so that it can be used to anticipate the different types of data errors that could occur and to adjust application design to prevent those errors in the first place. This approach is genuinely proactive: you consider the many different types of errors that could be introduced and then shore up the process in anticipation of their occurrence. (more…)
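As an illustration of the prioritization idea behind a data-centric FMEA, here is a minimal sketch in Python. It uses the classic six-sigma risk priority number (severity × occurrence × detectability, each scored 1–10); the specific failure modes and scores below are hypothetical examples, not data from the posts above.

```python
# Hypothetical data-centric FMEA: rank potential data errors by their
# risk priority number (RPN = severity * occurrence * detectability),
# with each factor scored 1-10 as in classic six-sigma FMEA.

failure_modes = [
    # (failure mode, severity, occurrence, detectability)
    ("missing customer address", 8, 4, 3),
    ("transposed digits in account number", 9, 2, 7),
    ("stale reference data", 5, 6, 4),
]

def rpn(severity, occurrence, detectability):
    """Risk priority number: higher means address this failure mode first."""
    return severity * occurrence * detectability

# Rank failure modes from highest to lowest risk.
ranked = sorted(
    ((name, rpn(s, o, d)) for name, s, o, d in failure_modes),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, score in ranked:
    print(f"{score:4d}  {name}")
```

Sorting by RPN is what lets the analysis start from impact rather than from an exhaustive error catalog: the highest-scoring modes get attention first.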
In the second of two videos, Peter Ku, Director of Financial Services Solutions Marketing, Informatica, talks about the latest trends regarding counterparty data and how the legal entity identifier (LEI) system will impact banks across the globe.
Specifically, he answers the following questions:
- What are the latest trends regarding counterparty information and how will the legal entity identifier system impact banks across the globe?
- How does Informatica help solve the challenges regarding counterparty information and help banks prepare for the new legal entity identifier system?
Also watch Peter’s first video (http://youtu.be/KvyDPzOTnUY) to learn about counterparty data and its challenges.