Tag Archives: Informatica Data Services
I recently met with a longtime colleague from the Oracle E-Business Suite implementation ecosystem, now VP of IT for a global technology provider. He has successfully implemented data archiving and data masking technologies to eliminate duplicate applications and control the costs of data growth, saving tens of millions of dollars. The resources he freed up were redeployed to innovative new projects such as Big Data, earning him a reputation as a thought leader. In addition, by securing sensitive data with data masking technology, he has avoided exposing it during application development activities.
When I asked him about those projects and their impact on his career, he responded, ‘Data archiving and data security are table stakes in the Oracle Applications IT game. However, if I want to be a part of anything important, it has to involve Cloud and Big Data.’ He explained that the savings achieved with Informatica Data Archive let him fund an exciting Hadoop project that key resources wanted to work on, which in turn improved employee retention. Moreover, by retiring legacy applications and moving from physical infrastructure to virtual servers, he completed the first step on his ‘journey to the cloud’. This would not have been possible if his data had required technology that was not supported in the cloud. And if he hadn’t secured sensitive data and had experienced a breach, he would be looking for a new job in a new industry.
Not long after, I attended a CIO summit whose theme was ‘Breakthrough Innovation’. Of course, Cloud and Big Data were main-stage topics – not just the technology itself, but how it is used to solve business challenges and provide services to a new generation of ‘entitled’ consumers: those who expect to have everything at their fingertips. They want to be empowered to choose whether or not to share their information. They expect that if you store their personal information, it will not be abused. They may even expect to try a product or service for free before committing to buy.
To measure up to these expectations, application owners like my longtime colleague need to incorporate Data Archive and Data Masking into their standard SDLC processes. Without Data Archive, IT budgets may be consumed by supporting old applications and mountains of data, leaving little for new, innovative projects. Without Data Masking, a public breach will drive many consumers elsewhere.
So, where have I been since my last blog? Well, I have been working on our new Architect to Architect webinar series on data virtualization, which is very exciting for me as I get to rub shoulders (virtually speaking) with hundreds of industry architects.
The interactive nature of, and record attendance at, these webinars have made one thing very clear: data virtualization is indeed top of mind. In my last blog we discussed the concept and how data virtualization differs from, and is in many ways a superset of, traditional data federation, overcoming many of the latter’s limitations. Wayne Eckerson did a great job of tracking the evolution of data federation in a recent webinar and blog.
Specific deadlines related to Dodd-Frank are coming due, and banking organizations are scrambling to meet the law’s growing demands for transparency and access to the right data. Here are a few examples:
Within 12 months:
- Derivative clearing and swap dealer regulation
- Compliance with the new Office of Financial Research (OFR)
Within the first 18 months:
- Volcker rule goes into effect
- Liabilities cap on large financial firms
- Heightened standards / minimum leverage and risk-based capital requirements
- Remittance error resolution standards
Within 24 months:
- Proposed simplified mortgage disclosures
- Contingent capital report and rule-making