Category Archives: Data Governance
As I indicated in Competing on Analytics, if you ask CIOs today about the importance of data to their enterprises, they will likely tell you about their business’ need to “compete on analytics”, to deliver better business insights, and to drive faster business decision making. These have a high place on the business and CIO agendas, according to Thomas H. Davenport, because “at a time when firms in many industries offer similar products and use comparable technologies, business processes are among the last remaining points of differentiation.” For this reason, Davenport claims timely analytics enables companies to “wring every last drop of value from their processes”.
So is anyone showing the way on how to compete on analytics?
UMass Memorial Health Care is a great example of an enterprise that is using analytics to “wring every last drop of value from their processes”. However, before UMass could compete on data, it needed to create data that could be trusted by its leadership team.
Competing on analytics requires trustworthy data
At UMass, they found that they could not accurately measure the size of their patient care population. This is a critical metric for growing market share. Think about how hard it would be to operate any business without an accurate count of how many customers are being served. Lacking this information hindered UMass’ ability to make strategic market decisions and drive key business and clinical imperatives.
A key need at UMass was to determine a number of critical success factors for its business. These obviously included the size of the patient population, but also the composition of that population and the number of unique patients served by primary care physicians across each of its business locations. Without this knowledge, UMass found itself struggling to make effective decisions regarding its strategic direction, its clinical policies, and even its financial management. All of these factors really matter in an era of healthcare reform.
Things proved particularly complex at UMass since it operates as what is called a "complex integrated delivery network". This means that portions of its business effectively operate under different business models, which creates a data challenge in healthcare. Unlike other diversified enterprises, UMass needs an operating model ("the necessary level of business process integration and standardization for delivering its services to customers") that can support the different elements of its business yet remain unified for integrative analysis. This matters because in UMass's case there is a single common denominator: the patient. And to be clear, while each of UMass's organizations could depend on its own data to meet its own needs, UMass lacked an integrated view of patients.
Departmental data may be good for a department but not for the enterprise
UMass had adequate data for each organization's own purposes, such as delivering patient care or billing for a specific department or hospital, but the data was inadequate for system-wide measures. Aggregation and analytics, which needed to combine data across systems and organizations, were stymied by data inconsistencies, incompletely populated fields, and other data quality problems between systems. These issues made it impossible to provide the analytics UMass's senior managers needed. For example, UMass's aggregated data contained duplicate patients: people who had been treated at different sites under different medical record numbers, but who were in fact the same patients.
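The post doesn't describe UMass's actual matching technology, but the duplicate-patient problem can be sketched in a few lines of Python. The records, field names, and exact-match key below are all invented for illustration; real patient matching uses probabilistic or fuzzy scoring rather than an exact key:

```python
from collections import defaultdict

def match_key(record):
    """Build a simple match key from normalized name and date of birth.
    Real patient matching uses probabilistic/fuzzy scoring; this
    exact-key approach only illustrates the idea."""
    name = " ".join(record["name"].lower().split())
    return (name, record["dob"])

def find_duplicates(records):
    """Group records that share a match key but carry different
    medical record numbers (MRNs)."""
    groups = defaultdict(list)
    for rec in records:
        groups[match_key(rec)].append(rec["mrn"])
    return {key: mrns for key, mrns in groups.items() if len(mrns) > 1}

# Hypothetical records from two sites of the same health system
records = [
    {"mrn": "A-1001", "name": "Jane  Doe", "dob": "1975-02-14"},  # site A
    {"mrn": "B-2002", "name": "jane doe",  "dob": "1975-02-14"},  # site B
    {"mrn": "A-1002", "name": "John Roe",  "dob": "1980-06-01"},
]

dups = find_duplicates(records)
# One person, treated at two sites under two medical record numbers
```

Until duplicates like these are resolved, any system-wide patient count is inflated by every cross-site visitor.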
A key need for UMass in creating the ability to compete on analytics was to measure and report on the number of primary care patients being treated across its entire healthcare system. UMass leadership saw this as a key planning and strategy metric because primary care patients today are the focus of investments in wellness and prevention programs, as well as a key source of specialty visits and inpatients. According to George Brenckle, Senior Vice President and CIO, they "had an urgent need for improved clinical and business intelligence across all our operations, we needed an integrated view of patient information, encounters, providers, and UMass Memorial locations to support improved decision making, advance the quality of patient care, and increase patient loyalty. To put the problem into perspective, we have more than 100 applications—some critical, some not so critical—and our ultimate ambition was to integrate all of these areas of business, leverage analytics, and drive clinical and operational excellence."
The UMass Solution
UMass solved these issues by creating an integrated view of patient information, encounters, providers, and UMass Memorial locations. This allowed UMass to compute the number of patients cared for by primary care physicians. To make this work, the solution merged data from the core hospital information applications and corrected the data quality issues that had prevented UMass from deriving the primary care patient count. Armed with this, data integration helped UMass Memorial improve its clinical outcomes, grow its patient population, increase process efficiency, and ultimately maximize its return on data. As well as gaining a reliable measure of its primary care patient population, UMass was now able to determine accurate counts for unique patients served by its hospitals (3.2 million), active patients (i.e., those treated within the last three years; approximately 1.7 million), and unique providers (approximately 24,000).
According to Brenckle, data integration transformed their analytical capabilities and decision making. “We know who our primary care patients are and how many there are of them, whether the volume of patients is rising or decreasing, how many we are treating in an ambulatory or acute care setting, and what happens to those patients as they move through the healthcare system. We are able to examine which providers they saw and at which location. This data is vital to improving clinical outcomes, growing the patient population, and increasing efficiency.”
Thomas Davenport Book “Competing On Analytics”
Competing on Analytics
The Business Case for Better Data Connectivity
CIO explains the importance of Big Data to Healthcare
The CFO Viewpoint upon Data
What should an enlightened healthcare CEO tell their CIO?
Is this how you think about IT? Or do you instead think of IT in terms of the technology it deploys? I was recently interviewing a CIO at a Fortune 50 company about the changing role of CIOs. When I asked him which technology issues were most important, this CIO said something that surprised me.
He said, "IT is all about data. Think about it. What we do in IT is all about the intake of data, the processing of data, the storing of data, and the analyzing of data. And from data, we increasingly need to provide the intelligence to make better decisions".
How many view the function of the IT organization with such clarity?
This was the question I had after hearing this CIO. And how many IT organizations view IT as really a data system? It cannot be very many. Jeanne Ross of MIT CISR contends in her book that company data, "one of its most important assets, is patchy, error-prone, and not up-to-date" (Enterprise Architecture as Strategy, Jeanne Ross, page 7). She also contends that companies with a data-centric view "have higher profitability, experience faster time to market, and get more value from their IT investments" (Enterprise Architecture as Strategy, Jeanne Ross, page 2).
What then do you need to do to get your data house in order?
What then should IT organizations do to move their data from something that is “patchy, error-prone, and not up-to-date” to something that is trustworthy and timely? I would contend our CIO friend had it right. We need to manage all four elements of our data process better.
1. Input Data Correctly
You need to start by making sure that data is entered consistently and correctly. I liken the need here to a problem I had with my electronic bill pay a few years ago. When my bank changed bill payment service providers, it started sending my payments to a set of out-of-date payee addresses. This caused me to receive late fees and actually lowered my credit score. The same kind of thing can happen to a business when there are duplicate customers or customer addresses are entered incorrectly. So much of marketing today is about increasing customer intimacy. It is hard to improve customer intimacy when you bug the same customer too often, or never connect with a customer because you had a bad address for them.
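One common guard against the bad-address problem described above is normalizing addresses before matching customer records. A minimal sketch, with a few invented abbreviation rules, shows how trivially different entries that would otherwise create duplicate customers can be made to compare equal:

```python
import re

# Illustrative abbreviation rules; a real system would use a postal
# standard (e.g. USPS) or an address-validation service.
ABBREVIATIONS = {"street": "st", "avenue": "ave", "road": "rd", "suite": "ste"}

def normalize_address(addr):
    """Canonicalize an address string so entries that differ only in
    case, punctuation, or common abbreviations compare equal."""
    addr = re.sub(r"[^\w\s]", "", addr.lower())  # drop punctuation
    words = [ABBREVIATIONS.get(w, w) for w in addr.split()]
    return " ".join(words)

a = normalize_address("123 Main Street, Suite 4")
b = normalize_address("123 main st ste 4")
# Both normalize to "123 main st ste 4", so they match as one customer
```

The same normalization key can then feed a deduplication pass, so the marketing team contacts each customer exactly once at a deliverable address.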
2. Process Data to Produce Meaningful Results
You need to collect and manipulate data to derive meaningful information. This is largely about processing data so its results support meaningful analysis. To do this well, you need to remove the data quality issues from the data that is produced. We want, in this step, to make data "trustworthy" to business users.
With this, data can be consolidated into a single view of customer, financial account, etc. A CFO explained the importance of this step by saying the following:
"We often have redundancies in each system, and within the chart of accounts the names and numbers can differ from system to system. And as you establish a bigger and bigger set of systems, you need to, in accounting parlance, roll up the chart of accounts".
Once data is consistently put together, then you need to consolidate it so that it can be used by business users. This means that aggregates need to be created for business analysis. These should support dimensional analysis so that business users can truly answer why something happened. For finance organizations, timely aggregated data with supporting dimensional analysis enables them to establish themselves as “a business person versus a bean counting historically oriented CPA”. Having this data answers questions like the following:
- Why are sales targets not being achieved? Which regions or products are failing to deliver?
- Why is the projected income statement not in conformance with plan? Which expense categories should we cut to bring the income statement in line with business expectations?
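The kind of dimensional roll-up these questions depend on can be sketched with plain Python. The sales figures and dimension names below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical transaction-level sales facts with two dimensions
facts = [
    {"region": "East", "product": "Widget", "actual": 90,  "plan": 100},
    {"region": "East", "product": "Gadget", "actual": 120, "plan": 110},
    {"region": "West", "product": "Widget", "actual": 70,  "plan": 100},
]

def roll_up(facts, dimension):
    """Aggregate actual-vs-plan variance along one dimension, so an
    analyst can ask *why* sales missed plan, not just that they did."""
    totals = defaultdict(lambda: {"actual": 0, "plan": 0})
    for f in facts:
        totals[f[dimension]]["actual"] += f["actual"]
        totals[f[dimension]]["plan"] += f["plan"]
    return {k: v["actual"] - v["plan"] for k, v in totals.items()}

by_region = roll_up(facts, "region")    # {'East': 0, 'West': -30}
by_product = roll_up(facts, "product")  # {'Widget': -40, 'Gadget': 10}
```

Slicing the same facts by region and then by product reveals that the shortfall is concentrated in Widget sales in the West, which is precisely the "why" a dimensional view is meant to answer.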
3. Store Data Where it is Most Appropriate
Today, data can be stored in many ways: in applications, in a data warehouse, or even in a Hadoop cluster. You need an overriding data architecture that considers the entire lifecycle of data. A key element of doing this well involves archiving data as it becomes inactive and protecting data across its entire lifecycle. The former can also involve disposing of information; the latter requires the ability to audit, block, and dynamically mask sensitive production data to prevent unauthorized access.
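The dynamic masking mentioned above can be illustrated with a small sketch. The roles, field names, and masking rules below are assumptions for illustration, not any particular product's behavior:

```python
# Hypothetical sensitive fields and authorized roles
SENSITIVE_FIELDS = {"ssn", "dob"}
AUTHORIZED_ROLES = {"billing_admin"}

def _mask_value(field, value):
    if field == "ssn":
        return "***-**-" + value[-4:]  # keep last four digits visible
    return "****"

def mask_record(record, role):
    """Return a copy of the record with sensitive fields dynamically
    masked unless the caller's role is on the allow-list."""
    if role in AUTHORIZED_ROLES:
        return dict(record)
    return {f: _mask_value(f, v) if f in SENSITIVE_FIELDS else v
            for f, v in record.items()}

patient = {"name": "Jane Doe", "ssn": "123-45-6789", "dob": "1975-02-14"}
masked = mask_record(patient, role="analyst")
# {'name': 'Jane Doe', 'ssn': '***-**-6789', 'dob': '****'}
```

Because masking happens at read time based on who is asking, the production data itself is never altered, which is the point of "dynamic" masking.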
4. Enable Analysis: Discovering, Testing, and Assembling Data
Analysis today is not just about the analysis tools. It is about enabling users to discover, test, and put data together. CFOs that we have talked to say they want analysis to expose potential business problems earlier. They want, for example, to know about metrics like average selling price and gross margin by account or by product. They also want to see when they have seasonality effects.
Increasingly, CFOs need to use this information to help predict what the future of their business will look like. At the same time, CFOs say they want to help their businesses make better decisions from data. What limits them today is an enterprise hodgepodge of disparate systems that cannot talk to one another. And yet to report, CFOs need to traverse from front-office to back-office systems.
One CIO said to us that the end goal of any analysis layer should be the ability to trust data and make dependable business decisions. And once dependable data exists, business users say that they want "access to data when and where they need it". One CIO likened what is needed here to orchestration when he said:
"Users want to be able to self-serve. They want to be able to assemble data and put it together, from different sources at different times. I want them to have no preconceived process. I want them to be able to discover data across all sources".
So as we said at the beginning of this post, IT is all about the data. And with mobile systems of engagement, IT's customers increasingly want their data at their fingertips. This means that business users need to be able to trust that the data they use for business analysis is timely and accurate. That demands that IT organizations get better at managing their core function: data.
Competing on Analytics
The CFO Viewpoint of Data
Is Big Data Destined To Become Small And Vertical?
Big Data Why?
The Business Case for Better Data Connectivity
What is big data and why should your business care?
A growing number of Data Scientists believe so.
Recall the cholera outbreak in Haiti in 2010, after the tragic earthquake: a joint research team from the Karolinska Institute in Sweden and Columbia University in the US analyzed calling data from two million mobile phones on the Digicel Haiti network. This enabled the United Nations and other humanitarian agencies to understand population movements during the relief operations and the subsequent cholera outbreak. They could allocate resources more efficiently and identify areas at increased risk of new cholera outbreaks.
Mobile phones are widely owned even in the poorest countries in Africa, and they are a rich source of data in regions where other reliable sources are sorely lacking. Senegal's Orange Telecom provided Flowminder, a Swedish non-profit organization, with anonymized voice and text data from 150,000 mobile phones. Using this data, Flowminder drew up detailed maps of typical population movements in the region.
Today, authorities use this information to evaluate the best places to set up treatment centers, check-posts, and issue travel advisories in an attempt to contain the spread of the disease.
The first drawback is that this data is historical. Authorities really need to be able to map movements in real time, especially since people's movements tend to change during an epidemic.
The second drawback is that the scope of the data provided by Orange Telecom is limited to a small region of West Africa.
Here is my recommendation to the Centers for Disease Control and Prevention (CDC):
- Increase the area of data collection to the entire region of West Africa, which covers over 2.1 million cell-phone subscribers.
- Collect mobile phone mast activity data to pinpoint where calls to helplines are mostly coming from, draw population heat maps, and track population movement. A sharp increase in calls to a helpline is usually an early indicator of an outbreak.
- Overlay this data on census data to build up a richer picture.
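The helpline signal in the second recommendation can be sketched as a simple spike detector: flag any day whose call volume far exceeds a trailing baseline. The call counts, window, and threshold below are invented for illustration:

```python
def detect_spikes(daily_calls, window=7, factor=2.0):
    """Flag days where call volume exceeds `factor` times the mean of
    the preceding `window` days -- a crude early-warning signal."""
    spikes = []
    for day in range(window, len(daily_calls)):
        baseline = sum(daily_calls[day - window:day]) / window
        if daily_calls[day] > factor * baseline:
            spikes.append(day)
    return spikes

# Hypothetical daily helpline call counts for one mast's catchment area
calls = [20, 22, 19, 21, 20, 23, 21, 22, 70, 95]
spikes = detect_spikes(calls)  # days 8 and 9 stand out against the weekly baseline
```

A real deployment would run this per mast and cross-reference flagged areas with the population heat maps, so responders see not just that calls spiked but where the affected people are moving.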
The most positive impact we can have is to help emergency relief organizations and governments anticipate how a disease is likely to spread. Until now, they had to rely on anecdotal information, on-the-ground surveys, police, and hospital reports.
I participated in an EDM Council panel on BCBS 239 earlier this month in London and New York. The panel consisted of Chief Risk Officers, Chief Data Officers, and information management experts from the financial industry. BCBS 239 sets out 14 key principles requiring banks to aggregate their risk data so that banking regulators can help avoid another 2008 crisis, with a deadline of January 1, 2016. Earlier this year, the Basel Committee on Banking Supervision released the findings of a self-assessment by the Globally Systemically Important Banks (G-SIBs) on their readiness against 11 of the 14 BCBS 239 principles.
Given all of the investments the banking industry has made in data management and governance practices to improve ongoing risk measurement and management, I was expecting to hear signs of significant progress. Unfortunately, judging from what I heard, there is still much work to be done to satisfy BCBS 239. Here is what we discussed in London and New York.
- It was clear that the "data agenda" has shifted considerably from IT to the business, as evidenced by the number of risk, compliance, and data governance executives in the room. Though it is a good sign that the business is taking more ownership of data requirements, there was limited discussion of the importance of capable data management technology, infrastructure, and architecture to support a successful data governance practice: specifically, capable data integration, data quality and validation, master and reference data management, metadata to support data lineage and transparency, and business glossary and data ontology solutions to govern the terms and definitions of required data across the enterprise.
- Accessing, aggregating, and streamlining the delivery of risk data from disparate systems across the enterprise remains complex. Today's point-to-point integrations access the same data from the same systems over and over again, creating points of failure and increasing the cost of maintaining the current state. The idea of replacing those point-to-point integrations with a centralized, scalable, and flexible data hub was clearly recognized as a need; however, it is difficult to envision given the enormous work required to modernize the current state.
- Data accuracy and integrity continue to be a concern when generating accurate and reliable risk data to meet normal and stress/crisis reporting accuracy requirements. Many in the room acknowledged a heavy reliance on manual methods implemented over the years. Automating the integration and onboarding of risk data from disparate systems across the enterprise is important as part of Principle 3; however, much of what is in place today was built as one-off projects against the same systems, accessing the same data and delivering it to hundreds if not thousands of downstream applications in an inconsistent and costly way.
- Data transparency and auditability were popular conversation points in the room. The need to provide comprehensive data lineage reports that explain how data is captured, from where, how it is transformed, and how it is used remains a concern, because despite advancements in technical metadata solutions, those solutions are not integrated with existing risk management data infrastructure.
- Lastly, there were big concerns about the ability to capture and aggregate all material risk data across the banking group and deliver it by business line, legal entity, asset type, industry, region, and other groupings, to support the identification and reporting of risk exposures, concentrations, and emerging risks. Unfortunately, this master and reference data challenge cannot be solved by external data utility providers alone, because banks have legal entity, client, counterparty, and securities instrument data residing in existing systems, and they need the ability to cross-reference any external identifier for consistent reporting and risk measurement.
To sum it up, most banks admit they have a lot of work to do. Specifically, they must work to address gaps across their data governance and technology infrastructure. BCBS 239 is the latest and biggest data challenge facing the banking industry, and not just for the G-SIBs: mid-size firms one level down will also be required to provide similar transparency to regional regulators who are adopting BCBS 239 as a framework for their local markets. BCBS 239 is not just a deadline; the principles set forth are a key requirement for banks to ensure they have the right data to manage risk and to give industry regulators the transparency to monitor systemic risk across the global markets. How ready are you?
Did you know Harrods introduces more than 1.7 million new products every year? This includes their own labels, as well as other brands. Recently, Peter Rush, the Harrods Solution Architect responsible for product information, spoke at Informatica’s MDM Day EMEA in London. At the event, he said there are:
“so many things we want to do: Product Information is at the heart of most of them.”
As part of the customer experience program, Harrods identified product information quality as a key asset, next to customer information management.
The Product Information Challenge Harrods was facing included the following:
- Lack of a single product data store
- Inappropriate product data objectives
- Massive scale and volume of products and brands (1.7 million new products per year)
- Concessions and own bought
- Localized enrichment
- Media assets all over the estate
While discussing his product information management project, Peter gave a great and simple example. He showed the product descriptions below and asked, “Who knows which two products these are?”:
- XX 6621/74 BLK VN SS TOP 969B S
- XX37066 L/BLU PRK FLAN SH 440B MED
Then, he solved the mystery. The answer was this:
- Black V-neck sleeveless top
- Light blue parker print flannel shirt
Turning vision into reality needs a joint business and IT project
Peter said it is important to build a "flexible team to meet needs of each project stage, with representation from key business areas". The team should include representatives from groups like Merchandise Data, the Buying Team, the Web Team, IT, CRM, and the Shopfloor Team. In addition to the Core Project Team, Harrods defined a Steering Committee and a group of selected Super Users.
Benefit summary: a combination of people, technology and process
At the end of the session, I was impressed by a graphic Peter showed that sums up the essentials of product information management success. It is about the people, who are able to do the right things; the technology, which enables automation; and the process, which turns information into value.
Finally, it is important to mention that our partner Javelin Group is leading the PIM implementation at Harrods. Andy Hayler, an analyst from The Information Difference, also wrote an article for CIO Magazine.
CIOs and CFOs both dig data security
In my discussions with CIOs over the last couple of months, I asked them about the importance of a series of topics. All of them placed data security at the top of their IT priority list. Even their CFO counterparts, with whom they do not always see eye to eye, said they were very concerned about the business risk to corporate data. These CFOs said that, as part of owning business risk, they touch security, especially the risk of hacking. One CFO said that he also worried about the impact of data security on compliance issues, including HIPAA and SOX. Another said this: "The security of data is becoming more and more important. The auditors are going after this. CFOs, for this reason, are really worried about getting hacked. This is a whole new direction, but some of the highly publicized recent hacks have scared a lot of folks and, combined, they represent to many of us a watershed event."
According to David W. Owens, the editor of CFO Magazine, even if you are using "secure" storage, such as internal drives and private clouds, "the access to these areas can be anything but secure. Practically any employee can be carrying around sensitive financial and performance data in his or her pocket, at any time." Obviously, new forms of data access have created new forms of data risk.
Are some retailers really leaving the keys in the ignition?
Given this like-mindedness among CIOs and CFOs, I was shocked to learn that some of the recently hacked retailers had been using outdated security software, which may have given hackers easier access to company payment data systems. Most amazingly, some retailers had not even encrypted their customer payment data. Because of this, hackers were able to hide on the network for months and steal payment data as customers continued to use their credit cards at the company's point-of-sale locations.
Why weren't these transactions encrypted or masked? In my 1998 financial information start-up, we encrypted our databases to protect against hacks of our customers' personal financial data. One answer came from a discussion with a Fortune 100 insurance CIO, who said, "CIOs/CTOs/CISOs struggle with selling the value of these investments because the C-suite is only interested in hearing about investments with a direct impact on business outcomes and benefits".
Enterprise security drives enterprise brand today
So how should leaders better argue the business case for security investments? I want to suggest that the value of IT lies in its "brand promise". For retailers in particular, if a past purchase decision creates a perceived personal data security risk, IT becomes a liability to the corporation's brand equity and potentially a drag on future sales. Increasingly, how these factors are managed either supports or undermines the value of a company's brand.
My message is this: spend whatever it takes to protect your brand equity; otherwise a security issue will become a revenue issue.
In sum, this means organizations that want to differentiate themselves and avoid becoming a brand liability need to invest further in a data-centric security strategy and, of course, encryption. The game is no longer just about securing particular applications. IT organizations need to take a data-centric approach to securing customer data and other types of enterprise data. Enterprise-level data governance rules need to be a requirement. A data-centric approach can mitigate business risk by helping organizations understand where sensitive data is and protect it in motion and at rest.
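One data-centric technique that complements encryption is deterministic tokenization, sketched below with Python's standard library. The key and token format are illustrative assumptions only; a real deployment would use managed keys from a vault and a vetted tokenization product:

```python
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-managed-key"  # in practice, from a key vault

def tokenize(value):
    """Replace a sensitive value with a deterministic keyed token.
    The same input always yields the same token, so records remain
    joinable across systems without exposing the raw value."""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256)
    return "tok_" + digest.hexdigest()[:16]

card = "4111-1111-1111-1111"
t1 = tokenize(card)
t2 = tokenize(card)
# t1 == t2: analytics joins still work, yet the raw card number
# never travels to downstream systems
```

Because the token is what flows through reporting and analytics systems, a breach of those systems exposes only opaque tokens, which is the essence of protecting data itself rather than just the applications around it.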
Solutions: Enterprise Level Data Security
The State of Data Centric Security
How Is The CIO Role Starting To Change?
The CFO viewpoint on data
CFOs discuss their technology priorities
A few months ago, while addressing a room full of IT and business professionals at an Information Governance conference, a CFO said, "...if we designed our systems today from scratch, they would look nothing like the environment we own." He went on to elaborate that they arrived there by layering thousands of good and valid decisions on top of one another.
Similarly, Information Governance has evolved out of the good work of those who preceded us, into something only a few could envision at the time. Along the way, technology evolved and changed the way we interact with data to manage our daily tasks. What started as good engineering practices for mainframes gave way to data management.
Then, with technological advances, we encountered new problems, introduced new tasks and disciplines, and created Information Governance in the process. We were standing on the shoulders of data management, armed with new solutions to new problems. Now we face the four Vs of big data and each of those new data system characteristics have introduced a new set of challenges driving the need for Big Data Information Governance as a response to changing velocity, volume, veracity, and variety.
Before I answer this question, I must ask you “How comprehensive is the framework you are using today and how well does it scale to address the new challenges?”
There are several frameworks in the marketplace to choose from. In this blog, I will tell you what questions you need to ask yourself before replacing your old framework with a new one:
Q. Is it nimble?
The focus of data governance practices must allow for nimble responses to changes in technology, customer needs, and internal processes. The organization must be able to respond to emergent technology.
Q. Will it enable you to apply policies and regulations to data brought into the organization by a person or process?
- Public company: Meet the obligation to protect the investment of the shareholders and manage risk while creating value.
- Private company: Meet privacy laws even if financial regulations are not applicable.
- Fulfill the obligations of external regulations from international, national, regional, and local governments.
Q. How does it manage quality?
For big data, the data must be fit for purpose; context might need to be hypothesized for evaluation. Quality does not imply cleansing activities, which might mask the results.
Q. Does it understand your complete business and information flow?
Attribution and lineage are very important in big data. Knowing the source and the destination is crucial in validating analytics results as fit for purpose.
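Lineage can be pictured as a directed graph from source systems to derived datasets. A minimal sketch, with invented dataset names, shows how a "where did this number come from" question is answered by walking the graph upstream:

```python
# Hypothetical lineage graph: each dataset maps to the inputs it was
# derived from; an empty list means it is an original source system.
LINEAGE = {
    "risk_report":    ["exposure_agg"],
    "exposure_agg":   ["trades_clean", "positions"],
    "trades_clean":   ["trading_system"],
    "positions":      ["back_office"],
    "trading_system": [],
    "back_office":    [],
}

def upstream_sources(dataset):
    """Walk the lineage graph to find the original sources feeding a
    dataset -- the basis of an attribution report for an analytic result."""
    inputs = LINEAGE.get(dataset, [])
    if not inputs:
        return {dataset}
    sources = set()
    for parent in inputs:
        sources |= upstream_sources(parent)
    return sources

upstream_sources("risk_report")  # the two original source systems
```

Real lineage tooling captures the transformations along each edge as well, but even this bare graph answers the fit-for-purpose question: which systems must be trusted for this report to be trusted.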
Q. Does it understand the language that you use, and can the framework actively manage it to reduce ambiguity, redundancy, and inconsistency?
Big data might not have a logical data model, so any structured data should be mapped to the enterprise model. Big data still has context and thus modeling becomes increasingly important to creating knowledge and understanding. The definitions evolve over time and the enterprise must plan to manage the shifting meaning.
Q. Does it manage classification?
It is critical for the business steward to classify each source, and the content within it, as soon as it is brought in by its owner, in support of information lifecycle management, access control, and regulatory compliance.
Q. How does it protect data quality and access?
Your information protection must not be compromised for the sake of expediency, convenience, or deadlines. Protect not just what you bring in, but what you join/link it to, and what you derive. Your customers will fault you for failing to protect them from malicious links. The enterprise must formulate the strategy to deal with more data, longer retention periods, more data subject to experimentation, and less process around it, all while trying to derive more value over longer periods.
Q. Does it foster stewardship?
Ensuring the appropriate use and reuse of data requires the action of an employee; this role cannot be automated. It requires the active involvement of a member of the business organization to serve as the steward over the data element or source.
Q. Does it manage long-term requirements?
Policies and standards are the mechanism by which management communicates their long-range business requirements. They are essential to an effective governance program.
Q. How does it manage feedback?
As a companion to policies and standards, an escalation and exception process enables communication throughout the organization when policies and standards conflict with new business requirements. It forms the core process to drive improvements to the policy and standard documents.
Q. Does it foster innovation?
Governance must not squelch innovation. Governance can and should make accommodations for new ideas and growth, managed through the infrastructure environments as part of the architecture.
Q. How does it control third-party content?
Third-party data plays an expanding role in big data. There are three types of third-party content, and governance controls must be adequate for the circumstances. They must account for the applicable regulations in the geographic regions where you operate; therefore, you must understand and manage those obligations.
Do We Really Need Another Information Framework?
The EIM Consortium is a group of nine companies that formed this year with the mission to:
“Promote the adoption of Enterprise Information Management as a business function by establishing an open industry reference architecture in order to protect and optimize the business value derived from data assets.”
That sounds nice, but do we really need another framework for EIM or Data Governance? Yes we do, and here's why.
Gartner’s official definition of Information Governance is “…the specification of decision rights and an accountability framework to encourage desirable behavior in the valuation, creation, storage, use, archival and deletion of information. It includes the processes, roles, standards, and metrics that ensure the effective and efficient use of information in enabling a business to achieve its goals.” It therefore looks to address important considerations that key stakeholders within an enterprise face.
A CIO of a large European bank once asked me – “How long do we need to keep information?”
Keeping Information Governance relevant
This bank had to govern, index, search, and provide content to auditors to show it was managing data appropriately under Dodd-Frank regulations. In the past, this information was retrieved from a database or email. Now, however, the bank was required to produce voice recordings of phone conversations with customers, the relevant incoming Reuters feeds, and documentation of all appropriate IMs and social media interactions between employees.
All of these were systems the business had never considered before. These environments continued to capture and create data, and with it complex challenges. They are islands of information that seemingly have nothing to do with each other, yet they affect how the bank governs itself and how it retains any of the records associated with trading or financial information.
Coping with the sheer growth is one issue; what to keep and what to delete is another. There is also the issue of what to do with all the data once you have it. The data is potentially a gold mine for the business, but most businesses just store it and forget about it.
Legislation, in tandem, is becoming more rigorous and there are potentially thousands of pieces of regulation relevant to multinational companies. Businesses operating in the EU, in particular, are affected by increasing regulation. There are a number of different regulations, including Solvency II, Dodd-Frank, HIPAA, Gramm-Leach-Bliley Act (GLBA), Basel III and new tax laws. In addition, companies face the expansion of state-regulated privacy initiatives and new rules relating to disaster recovery, transportation security, value chain transparency, consumer privacy, money laundering, and information security.
Regardless, an enterprise should consider the following three core elements before developing and implementing a policy framework.
Whatever your size or type of business, there are several key processes you must undertake in order to create an effective information governance program. As a Business Transformation Architect, I see three foundation stones of an effective Information Governance Program:
Assess Your Business Maturity
Understanding the full scope of requirements on your business is a heavy task. Assess whether your business is mature enough to embrace information governance. Many businesses in EMEA do not yet have an information governance team in place; instead, key stakeholders with responsibility for information assets are spread across their legal, security, and IT teams.
Undertake a Regulatory Compliance Review
Understanding the legal obligations of your business is critical to shaping an information governance program. Every business is subject to numerous compliance regimes managed by multiple regulatory agencies, which can differ across markets. Many compliance requirements apply only once the number of employees and/or turnover reaches certain thresholds. For example, certain records may need to be stored for six years in Poland, yet the same records may need to be stored for only three years in France.
Establish an Information Governance Team
It is important that a core team be assigned responsibility for the implementation and success of the information governance program. This steering group and a nominated information governance lead can then drive forward operational and practical issues, including: agreeing and developing a work program, developing policy and strategy, and planning communication and awareness.