Tag Archives: Enterprise Data Management

Building a Data Foundation for Execution


I have been re-reading Enterprise Architecture as Strategy from the MIT Center for Information Systems Research (CISR).* One concept that jumped out at me was the idea of a “Foundation for Execution.” Everybody is working to drive new business initiatives, to digitize their businesses, and to thrive in an era of increased technology disruption and competition. The ideas in the book around a Foundation for Execution are a highly practical and useful framework for dealing with these problems.

This got me thinking: What is the biggest bottleneck in the delivery of business value today? Granted, I look at things from a data perspective, but the answer is data. Consider this prediction from Gartner:

“Gartner predicts organizations will spend one-third more on app integration in 2016 than they did in 2013. What’s more, by 2018, more than half the cost of implementing new large systems will be spent on integration.”

When we talk about application integration, we’re talking about moving data, synchronizing data, cleansing data, transforming data, and testing data. The question for architects and senior management is this: Do you have the Data Foundation for Execution you need to drive the business results you require to compete? For most companies, the answer, unfortunately, is no.

All too often data management is an add-on to larger application-based projects.  The result is unconnected and non-interoperable islands of data across the organization.  That simply is not going to work in the coming competitive environment.  Here are a couple of quick examples:

  • Many companies are looking to compete on their use of analytics.  That requires collecting, managing, and analyzing data from multiple internal and external sources.
  • Many companies are focusing on a better customer experience to drive their business. This again requires data from many internal sources, plus social, mobile and location-based data to be effective.

When I talk to architects about the business risks of not having a shared data architecture, and common tools and practices for enterprise data management, they “get” the problem. So why aren’t they addressing it? The issue is that they are funded only for the project they are working on and are dealing with very demanding timeframes. They have no funding or mandate to solve the larger enterprise data management problem, which grows more complex and brittle with each new unconnected project or initiative added to the pile.

Studies such as “The Data Directive” by The Economist show that organizations that actively manage their data are more successful. But, if that is the desired future state, how do you get there?

Changing an organization to look at data as the fuel that drives strategy takes hard work and leadership. It also takes a strong enterprise data architecture vision and strategy.  For fresh thinking on the subject of building a data foundation for execution, see “Think Data-First to Drive Business Value” from Informatica.

* By the way, Informatica is proud to announce that we are now a sponsor of the MIT Center for Information Systems Research.


Enterprise Architects as Strategists


The conversation at the Gartner Enterprise Architecture Summit last week was very interesting. The central theme for years had been the idea of closely linking enterprise architecture with business goals and strategy. This year, Gartner added another layer to that conversation: they are now actively promoting the idea of enterprise architects as strategists.

The reason is simple. The next wave of change is coming, and it will significantly disrupt everybody. Even worse, your new competitors may come from other industries.

Enterprise architects are in a position to take a leading role within the strategy process. This is because they are the people who best understand both business strategy and technology trends.

Some of the key ideas discussed included:

  • The boundaries between physical and digital products will blur
  • Every organization will need a technology strategy to survive
  • Gartner predicts that by 2017, 60% of the Global 1,000 will execute on at least one revolutionary and currently unimaginable business transformation effort.
  • The change is being driven by trends such as mobile, social, the connectedness of everything, cloud/hybrid, software-defined everything, smart machines, and 3D printing.

Observations

I agree with all of this. My view is that it is time for enterprise architects to think very differently about architecture. Enterprise applications will come and go; they are rapidly being commoditized in any case. Architects need to think like strategists, in terms of market differentiation. And nothing will differentiate an organization more than its data. Example: Google’s autonomous cars. Google is jumping across industry boundaries to compete in a new market with data as its primary differentiator. There will be many others.

Thinking data-first

Years of thinking of architecture from an application-first or business process-first perspective have left us with silos of data and the classic “spaghetti diagram” of data architecture. This is slowing down business initiative delivery precisely at the time organizations need to accelerate and make data their strategic weapon. It is time to think data-first when it comes to enterprise architecture.

You will be seeing more from Informatica on this subject over the coming weeks and months.

Take a minute to comment on this article. Your thoughts on how we should go about changing to a data-first perspective, both pro and con, are welcome.

Also, remember that Informatica is running a contest to design the data architecture of the year 2020. Full details are here: http://www.informatica.com/us/architects-challenge/


And now for the rest of the data…

In the first two issues I looked at the need for states to pay attention to the digital health and safety of their citizens, followed by the oft-forgotten need to understand and protect non-production data – data that has often proliferated and been ignored or forgotten.

In many ways, non-production data is simpler to protect. Development and test systems can usually work effectively with realistic but not real PII data and realistic but not real volumes of data. Production systems, on the other hand, need the real production data, complete with the wealth of information that enables individuals to be identified – and that presents a huge risk. If and when that data is compromised, whether deliberately or accidentally, the consequences can be enormous, both in the impact on individual citizens and in the cost of remediation to the state. Many will remember the massive South Carolina data breach of late 2012, when over the course of two days a 74 GB database was downloaded and stolen: around 3.8 million taxpayers and 1.9 million dependents had their Social Security information stolen, and 3.3 million bank account details were “lost.” The citizens’ pain didn’t end there, as the company South Carolina picked to help its citizens seems to have tried to exploit the situation.

encryption protects against theft – unless the key is stolen too

The biggest problem with securing production data is that there are numerous legitimate users and uses of that data, and most often just a small number of potentially malicious or accidental attempts at inappropriate or dangerous access. So the question is: how does a state agency protect its citizens’ sensitive data while ensuring that legitimate uses and users continue – without performance impacts or any disruption of access? Obviously each state needs to make its own determination as to what approach works best for it.

This video does a good job of explaining the scope of the overall data privacy/security problem and also reviews a number of successful approaches to protecting sensitive data in both production and non-production environments. What you’ll find is that database encryption is just the start: it is fine if the database is “stolen” (unless, of course, the key is stolen along with the data). Encryption locks the data away in the same way that a safe protects physical assets – but the same problem exists: if the key is stolen with the safe, all bets are off. Legitimate users, by contrast, are usually able to deliberately breach and steal the sensitive contents, and it’s these occasions we need to understand and protect against. Given that the majority of data breaches are “inside jobs,” we need to ensure that authorized users (end-users, DBAs, system administrators and so on) have access only to the data they absolutely need, no more and no less.
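To make that last point concrete, here is a minimal Python sketch of the idea behind dynamic (on-read) masking: a role-based policy decides, column by column, what each legitimate user actually sees, while the stored data stays intact. The roles, columns and policy table are invented for illustration; this shows the concept, not how any particular product implements it.

```python
# Hypothetical visibility policy: the columns each role may see in clear text.
VISIBILITY = {
    "claims_clerk": {"name", "claim_id", "status"},
    "dba":          {"claim_id", "status"},  # administers the system, needs no PII
    "auditor":      {"name", "ssn", "claim_id", "status"},
}

def redact_row(row: dict, role: str) -> dict:
    """Replace every column the role may not see with a fixed token.

    This is the essence of dynamic (on-read) masking: the stored data is
    untouched; only the view presented to each user changes."""
    allowed = VISIBILITY.get(role, set())
    return {col: (val if col in allowed else "***")
            for col, val in row.items()}

record = {"name": "J. Smith", "ssn": "123-45-6789",
          "claim_id": "C-1001", "status": "open"}

print(redact_row(record, "claims_clerk"))  # sees the name, never the SSN
print(redact_row(record, "dba"))           # sees no PII at all
```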

So we have reached the end of this first series. In the first blog we looked at the need for states to place the same emphasis on the digital health and welfare of their citizens as they do on their physical and mental health. In the second we looked at the oft-forgotten area of non-production (development, testing, QA etc.) data. In this third and final piece we looked at the need for, and some options for providing, complete protection of production data.


What types of data need protecting?

In my first article on the topic of citizens’ digital health and safety we looked at the states’ desire to keep their citizens healthy and safe and also at the various laws and regulations they have in place around data breaches and losses. The size and scale of the problem together with some ideas for effective risk mitigation are in this whitepaper.

The cost and frequency of data breaches continue to rise

Let’s now start delving a little deeper into the situation states are faced with. It’s pretty obvious that citizen data that enables an individual to be identified (PII) needs to be protected. We immediately think of the production data: data that is used in integrated eligibility systems; in health insurance exchanges; in data warehouses and so on. In some ways the production data is the least of our problems; our research shows that the average state has around 10 to 12 full copies of data for non-production (development, test, user acceptance and so on) purposes. This data tends to be much more vulnerable because it is widespread and used by a wide variety of people – often subcontractors or outsourcers, and often the content of the data is not well understood.

Obviously production systems need access to real production data (I’ll cover how best to protect that in the next issue); non-production systems of every sort, on the other hand, do not. Non-production systems most often need realistic, but not real, data and realistic, but not real, data volumes (except perhaps for the performance/stress/throughput testing system). What needs to be done? A three-point risk remediation plan is a good place to start; the sketch after the list shows the three points in miniature.

1. Understand the non-production data: sophisticated data and schema profiling, combined with NLP (Natural Language Processing) techniques, helps to identify previously unrealized PII that needs protecting.
2. Permanently mask the PII so that it is no longer the real data but is realistic enough for non-production uses, and make sure that the same masking is applied to an attribute value wherever it appears across multiple tables/files.
3. Subset the data to reduce data volumes. This limits the size of the risk and also has positive effects on performance, run-times, backups etc.
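As a toy illustration of the three points, here is a minimal Python sketch using pandas, with a simple regex standing in for the profiling/NLP of point 1. Real masking tools generate realistic, format-preserving values rather than hashes; the hash here only demonstrates that masking can be deterministic (point 2: the same value masks identically wherever it appears) and that subsetting (point 3) follows naturally. All names and thresholds are made up.

```python
import hashlib
import pandas as pd

SALT = "store-me-in-a-vault"          # hypothetical secret salt
SSN_RE = r"^\d{3}-\d{2}-\d{4}$"       # crude stand-in for profiling/NLP

def find_ssn_columns(df: pd.DataFrame) -> list:
    """Point 1: flag columns whose values look like Social Security numbers."""
    return [col for col in df.columns
            if df[col].astype(str).str.match(SSN_RE).mean() > 0.8]

def mask_value(value: str) -> str:
    """Point 2: deterministic pseudonym. The same input always yields the
    same output, so the masked attribute still joins across tables/files."""
    return "XXX-" + hashlib.sha256((SALT + value).encode()).hexdigest()[:8]

def mask_and_subset(df: pd.DataFrame, sample_frac: float = 0.1) -> pd.DataFrame:
    """Points 2 and 3: mask detected PII, then keep only a small sample."""
    out = df.copy()
    for col in find_ssn_columns(out):
        out[col] = out[col].astype(str).map(mask_value)
    return out.sample(frac=sample_frac, random_state=7)  # point 3: subset

# Toy data: the same SSN masks to the same token wherever it appears.
payers = pd.DataFrame({"ssn": ["123-45-6789"] * 20, "state": ["SC"] * 20})
print(mask_and_subset(payers).head())
```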

Gartner has just published their 2013 Magic Quadrant for data masking; it covers both what they call static (i.e. permanent or persistent) masking and dynamic masking (more on this in the next issue). As usual, the MQ gives a good overview of the issues behind the technology as well as a review of the position, strengths and weaknesses of the leading vendors.

It is (or at least should be) an imperative that, from the top down, state governments realize the importance and vulnerability of their citizens’ data and put in place a non-partisan plan to prevent future breaches. As the reader might imagine, for any such plan to succeed it needs a combination of cultural and organizational change (getting people to care) and putting the right technology in place – together these will greatly reduce the risk. In the next and final issue on this topic we will look at the vulnerabilities of production data, and what can be done to dramatically increase its privacy and security.


Data Management Is No Longer A Back Office Function

A front office, as defined by Wikipedia, is “a business term that refers to a company’s departments that come in contact with clients, including the marketing, sales, and service departments,” while a back office handles “tasks dedicated to running the company…without being seen by customers.” Wikipedia goes on to say that “Back office functions can be outsourced to consultants and contractors, including ones in other countries.” Data management was once a back office activity, but in recent years it has moved to the front office. What changed?


Building A Better Data Warehouse

The devil, as they say, is in the detail. Your organization might have invested years of effort and millions of dollars in an enterprise data warehouse, but unless the data in it is accurate and free of contradiction, it can lead to misinformed business decisions and wasted IT resources.

We’re seeing an increasing number of organizations confront the issue of data quality in their data warehousing environments in efforts to sharpen business insights in a challenging economic climate. Many are turning to master data management (MDM) to address the devilish data details that can undermine the value of a data warehousing investment.
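As a toy example of the devilish detail MDM tackles, consider the same customer arriving from two source systems under slightly different names. This minimal Python sketch (records and threshold invented; real MDM matching is far more sophisticated) shows the basic idea of flagging likely duplicates before they contaminate the warehouse.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical customer records landing in the warehouse from two systems.
customers = [
    {"id": 1, "name": "Acme Corporation", "city": "Denver"},
    {"id": 2, "name": "ACME Corp.",       "city": "Denver"},
    {"id": 3, "name": "Globex Inc",       "city": "Boston"},
]

def name_similarity(a: str, b: str) -> float:
    """Case-insensitive string similarity between 0.0 and 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Flag pairs that are probably the same customer under two spellings.
for x, y in combinations(customers, 2):
    score = name_similarity(x["name"], y["name"])
    if score > 0.6 and x["city"] == y["city"]:
        print(f"likely duplicate: record {x['id']} <-> record {y['id']} "
              f"(name similarity {score:.2f})")
```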

Consider this: Just 24 percent of data warehouses deliver “high value” to their organizations, according to a survey by The Data Warehousing Institute (TDWI).[1] Twelve percent are low value and 64 percent are moderate value “but could deliver more,” TDWI’s report states. For many organizations, questionable data quality is the reason why data warehouses fall short of their potential.


Salesforce.com Integration – Inside Out or Outside In?

People have begun to realize that integration is a key requirement for a successful SaaS application deployment. Since a SaaS application like Salesforce CRM resides in the cloud (it is hosted in a data center outside your company’s firewall), customers need to move critical corporate data – such as customer, product or pricing information – into Salesforce before the application can provide maximum value to their end users.

They also need to keep the data in Salesforce synchronized with the rest of their on-premises applications in order to maintain operational efficiency and provide timely and accurate information flow throughout the enterprise.

So how difficult is it to integrate Salesforce with on-premises applications, and how should it be done? Well, for starters, salesforce.com provides a complete set of well-documented APIs that make it straightforward for a good software programmer to accomplish the task.
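As a small illustration of what using those APIs looks like, here is a hedged Python sketch built on the open-source simple-salesforce client for the Salesforce REST API. The credentials, the external-ID field ERP_Account_Id__c and the sync logic are placeholders; a real integration would use OAuth, batching and proper error handling.

```python
# pip install simple-salesforce
from simple_salesforce import Salesforce

# Placeholder credentials; a real integration would keep secrets out of code.
sf = Salesforce(username="user@example.com",
                password="password",
                security_token="token")

# Push: upsert a customer record keyed on a hypothetical external-ID field,
# so repeated syncs from the on-premises system don't create duplicates.
sf.Account.upsert(
    "ERP_Account_Id__c/A-10042",
    {"Name": "Acme Corp", "BillingCity": "Denver"},
)

# Pull: read back recently changed Accounts to synchronize the other way.
result = sf.query("SELECT Id, Name FROM Account WHERE LastModifiedDate = TODAY")
for rec in result["records"]:
    print(rec["Id"], rec["Name"])
```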

Those same APIs are also used by integration vendors to provide more flexible, powerful and manageable solutions that accomplish the same task without requiring the same level of programming skill.


IT Does Matter

Just about everyone has heard of Nicholas Carr’s Harvard Business Review article titled IT Doesn’t Matter. I’ve heard many reactions to the paper presented at conferences and in the blogosphere, so I figured it was time to go to the source, and purchased IT Doesn’t Matter (HBR OnPoint Enhanced Edition) from Amazon for $6.50 as a downloadable PDF.

The paper is excellent in that it presents not only the original article, first written in 2003, but also no fewer than 14 reactionary letters plus a final response from Carr to the letters. Interestingly, not a single one of the 14 letters agreed with Carr’s fundamental premise (although most of them were polite enough to acknowledge some value in the debate). I don’t know whether it was the editors’ choice to select only negative letters out of the hundreds that I’m sure were submitted, but I suspect these 14 are representative of the masses. So by way of this commentary, let me add the 15th letter.


Is IT Wagging the Organization?

You may need to reorganize your company to take advantage of modern IT architectures such as SOA or MDM. Some people would say this is heresy – that this is a case of the tail wagging the dog, and that the IT strategy should echo the business strategy and operating model. True enough – but IT is also “part” of the business and has an obligation to play a role in leading the enterprise into the future.

The fact is that much of the current IT “mess” exists because IT has directly mimicked the business. Any organization that builds systems will naturally build them in a way that supports the organizational structure and the lines of communication inherent in its functional silos. The legacy problem that most organizations face (high cost of maintenance, lack of agility to change, poor data quality, unpredictable operations) is a direct result of building systems piecemeal in support of silos.

There is no shortage of technology silver bullets to solve the problem. The current crop of solutions includes SOA, MDM and ERP to name just a few.

There may be another reason why the Gartner hype cycle exists – you know the one: peak of inflated expectations, trough of disillusionment, plateau of productivity. The reality is that most of these technologies are not technologies at all – they are enterprise strategies. And strategies require a C-level commitment that literally starts at the top of the organization – the CEO.

Let’s take a look at an SOA example. A common business service for a bank is a “Credit Decision” service. Credit decisions should be made consistently across the bank in a way that reflects the bank’s policies, market strategy, risk management controls and regulatory constraints in all the different legal jurisdictions in which it operates. But rather than implement one credit decisioning engine, the Credit Card business implements one, as does the Mortgage business, the Deposit business, the Line of Credit business, the Personal Loan business, and so on. And if that isn’t enough, since the internet bank is run by a different executive from the retail branch bank, we also have separate decisioning engines for different customer channels. As a result, each of the business groups needs to hire staff to constantly maintain and update the business rules in each of the redundant systems to comply with corporate directives.
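To make the contrast concrete, here is a minimal Python sketch of the alternative: a single enterprise-wide credit decision service that every line of business and every channel calls. The request fields and rules are invented for illustration; the point is only that a policy change lands in one place instead of ten.

```python
from dataclasses import dataclass

@dataclass
class CreditRequest:
    customer_id: str
    amount: float
    product: str   # "card", "mortgage", "line_of_credit", ...
    channel: str   # "branch", "internet", ...

def decide_credit(req: CreditRequest) -> str:
    """The single enterprise credit-decision service. Every line of
    business and every channel calls this one service, so a policy or
    regulatory change is made exactly once."""
    # Purely illustrative rules -- real policy comes from risk/compliance.
    if req.product != "mortgage" and req.amount > 500_000:
        return "refer"   # large unsecured exposure goes to manual review
    return "approve"

# The card business and the internet channel share the same service call:
print(decide_credit(CreditRequest("C-42", 20_000, "card", "internet")))
print(decide_credit(CreditRequest("C-42", 600_000, "line_of_credit", "branch")))
```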

If you go and ask each of the LOB managers whether they need their own decisioning engine, along with all the overhead that goes with maintaining it, they will likely say no. It doesn’t make sense. But if you try to implement an SOA strategy in a large bank with 10 different decisioning engines, you will have an uphill battle. The first challenge is one of timing. While all 10 groups may in fact be totally on board with the concept, they all have different priorities in any given year. You may be waiting forever for the stars to align such that a critical mass of them is prepared to focus on implementing a generic solution at the same time.

It’s clear from this example that reorganizing the business to create a “credit decisioning function” that serves the enterprise would dramatically simplify the IT solution. The typical pattern, however, is to avoid the hard work of addressing the organizational, people and political issues and instead try to solve the problem “bottom up,” working incrementally and iterating towards an enterprise solution. The bottom-up approach can in fact work – but don’t make the mistake of thinking it’s necessarily easier or more successful. The reason so many organizations are in the SOA trough of disillusionment is exactly because they are building bottom-up and have not addressed the organizational issues.

Let’s take another example: Master Data Management. MDM promises to deliver quality information and one version of the truth for business data. But in order to achieve this, the organization needs an information owner (or steward) who is accountable for data quality, consistency, currency, security, and so on. For most companies that are organized around LOBs or products – who owns customer information? In the absence of a business owner, a common pattern is for IT to step in and presume a stewardship role. Unfortunately (as is evidenced by a long list of less-than-ideal CRM implementations) the IT group often does so without an official sanction from the business executives, so the results should not be a big surprise.

This is not to say that any of the technologies inherent in SOA, MDM or ERP are bad or flawed. In fact, they are the result of the smartest people in the industry creating the technologies, and of market forces testing all the innovative ideas and letting the best ones bubble to the surface.

At the risk of repeating myself, the point is that successful implementation is much more than a technology. You can’t “buy” SOA or MDM – these are strategies that demand organizational top-down commitment. While a bottom-up strategy can make meaningful progress and deliver significant results, IT organizations should not be afraid to tackle the “big” issues of organizational design and politics. IT is part of the business. And as IT has become a larger and larger player in most organizations due to the natural shift to an information economy, it would be wrong to think of this as the tail wagging the dog.
