Category Archives: Data Governance

Keeping Information Governance Relevant

Gartner’s official definition of Information Governance is “…the specification of decision rights and an accountability framework to encourage desirable behavior in the valuation, creation, storage, use, archival and deletion of information. It includes the processes, roles, standards, and metrics that ensure the effective and efficient use of information in enabling a business to achieve its goals.” It therefore looks to address important considerations that key stakeholders within an enterprise face.

A CIO of a large European bank once asked me – “How long do we need to keep information?”


This bank had to govern, index, search, and provide content to auditors to show it was managing data appropriately to meet Dodd-Frank regulation. In the past, this information was retrieved from a database or email. Now, however, the bank was required to produce voice recordings from phone conversations with customers, show the relevant Reuters feeds coming in, and document all appropriate IMs and social media interactions between employees.

All of these were systems the business had never considered before. These environments continued to capture and create data, and with it complex challenges. They are islands of information that seemingly have nothing to do with each other, yet they impact how the bank governs itself and how it retains the records associated with trading and financial information.

Coping with the sheer growth is one issue; what to keep and what to delete is another. There is also the issue of what to do with all the data once you have it. The data is potentially a gold mine for the business, but most businesses just store it and forget about it.

Legislation, in tandem, is becoming more rigorous and there are potentially thousands of pieces of regulation relevant to multinational companies. Businesses operating in the EU, in particular, are affected by increasing regulation. There are a number of different regulations, including Solvency II, Dodd-Frank, HIPAA, Gramm-Leach-Bliley Act (GLBA), Basel III and new tax laws. In addition, companies face the expansion of state-regulated privacy initiatives and new rules relating to disaster recovery, transportation security, value chain transparency, consumer privacy, money laundering, and information security.

Regardless, an enterprise should consider the following 3 core elements before developing and implementing a policy framework.

Whatever your size or type of business, there are several key processes you must undertake in order to create an effective information governance program. As a Business Transformation Architect, I can see 3 foundation stones of an effective Information Governance Program:

Assess Your Business Maturity

Understanding the full scope of requirements on your business is a heavy task. Assess whether your business is mature enough to embrace information governance. Many businesses in EMEA do not yet have an information governance team in place; instead, key stakeholders with responsibility for information assets are spread across their legal, security, and IT teams.

Undertake a Regulatory Compliance Review

Understanding the legal obligations on your business is critical in shaping an information governance program. Every business is subject to numerous compliance regimes managed by multiple regulatory agencies, which can differ across markets. Many compliance requirements depend on the number of employees and/or turnover reaching certain thresholds. For example, certain records may need to be stored for 6 years in Poland, yet the same records may need to be stored for only 3 years in France.
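To make the jurisdiction problem concrete, here is a minimal sketch of how a retention schedule could be encoded so that policy can be looked up per market. The record type and the mapping below are purely illustrative assumptions; real retention periods must come out of your regulatory compliance review.

```python
# Minimal sketch of a jurisdiction-aware retention schedule.
# Record types and retention periods are illustrative only; real
# schedules must come from a regulatory compliance review.
RETENTION_YEARS = {
    ("customer_correspondence", "PL"): 6,   # e.g. Poland: 6 years
    ("customer_correspondence", "FR"): 3,   # e.g. France: 3 years
}

def retention_expiry(record_type: str, country: str, created_year: int) -> int:
    """Return the year after which a record may be deleted, or raise if no rule exists."""
    years = RETENTION_YEARS.get((record_type, country))
    if years is None:
        raise KeyError(f"No retention rule for {record_type!r} in {country!r}")
    return created_year + years

print(retention_expiry("customer_correspondence", "PL", 2014))  # -> 2020
```

The point of the sketch is simply that "how long do we need to keep information?" has no single answer; it is a lookup against an agreed, governed policy.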

Establish an Information Governance Team

It is important that a core team be assigned responsibility for the implementation and success of the information governance program. This steering group and a nominated information governance lead can then drive forward operational and practical issues, including agreeing and developing a work program, developing policy and strategy, and planning communication and awareness.

Posted in Data Governance, Financial Services, Governance, Risk and Compliance

CFO Move to Chief Profitability Officer

30% or more of each company’s business is unprofitable

According to Jonathan Byrnes at the MIT Sloan School, “the most important issue facing most managers …is making more money from their existing businesses without costly new initiatives”. In Byrnes’ cross-industry research, he found that 30% or more of each company’s business is unprofitable. Byrnes claims these losses are offset by “islands of high profitability”. The root cause of this issue is asserted to be the inability of current financial and management control systems to surface profitability problems and opportunities. Why is this the case? Byrnes believes that management budgetary guidance by its very nature assumes the continuation of the status quo. For this reason, the response to management asking for a revenue increase is to increase revenues across profitable and unprofitable businesses alike. Given this, “the areas of embedded unprofitability remain embedded and largely invisible”. To be completely fair, it should also be recognized that it takes significant labor to put together an accurate and complete picture of direct and indirect costs.

The CFO needs to become the point person on profitability issues


Byrnes believes, nevertheless, that CFOs need to become the corporate point person for surfacing profitability issues. They should, in fact, take on a new and important role: that of chief profitability officer. This may seem like an odd suggestion, since virtually every CFO, if asked, would view profitability as a core element of their job. But Byrnes believes that CFOs need to move beyond broad, departmental performance measures and build profitability management processes into their companies’ core management activities. This task requires the CFO to determine two things.

  1. Which product lines, customers, segments, and channels are unprofitable so investments can be reduced or even eliminated?
  2. Which product lines, customers, segments, and channels are the most profitable so management can determine whether to expand investments and supporting operations?

Why didn’t portfolio management solve this problem?

Now, as a strategy MBA, Byrnes’ suggestion leaves me wondering why the analysis proposed by strategy consultants like Boston Consulting Group didn’t solve this problem a long time ago. After all, portfolio analysis has at its core the notion that relative market share and growth rate determine profitability, and therefore which businesses a firm should build, hold, harvest, or divest—i.e. where to reduce, eliminate, or expand investment. The truth is that getting at these figures, especially profitability, is a time-consuming effort.

KPMG finds 91% of CFOs are held back by financial and performance systems


As financial and business systems have become more complex, it has become harder and harder to holistically analyze customer and product profitability, because the relevant data is spread over a myriad of systems, technologies, and locations. For this reason, 91% of CFO respondents in a recent KPMG survey said that they want to improve the quality of their financial and performance insight from the data they produce. An amazing 51% of these CFOs also admitted that the “collection, storage, and retrieval of financial and performance data at their company is primarily a manual and/or spreadsheet-based exercise”. Think about it — a majority of these CFO teams’ time is spent collecting financial data rather than actively managing corporate profitability.

How do we fix things?

What is needed is a solution that allows financial teams to proactively produce trustworthy financial data from each and every financial system and then reliably combine and aggregate the data coming from multiple financial systems. Having accomplished this, the solution needs to allow financial organizations to slice and dice net profitability for product lines and customers.
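As a rough illustration of what “combine, aggregate, then slice and dice” means in practice (a toy sketch, not a description of any particular product), imagine two financial extracts already mapped to a common schema:

```python
import pandas as pd

# Hypothetical extracts from two financial systems, already cleansed
# to a common schema (system names and figures are illustrative).
erp = pd.DataFrame({
    "customer": ["Acme", "Acme", "Globex"],
    "product_line": ["Loans", "Cards", "Loans"],
    "revenue": [120.0, 45.0, 80.0],
    "cost": [100.0, 50.0, 60.0],
})
billing = pd.DataFrame({
    "customer": ["Globex"],
    "product_line": ["Cards"],
    "revenue": [30.0],
    "cost": [18.0],
})

combined = pd.concat([erp, billing], ignore_index=True)
combined["net_profit"] = combined["revenue"] - combined["cost"]

# Slice and dice: net profitability by product line and by customer.
print(combined.groupby("product_line")["net_profit"].sum())  # surfaces unprofitable lines
print(combined.groupby("customer")["net_profit"].sum())
```

In a real environment the hard part is, of course, producing those trustworthy, commonly keyed extracts in the first place, which is exactly the data quality and integration work described above.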

This approach would not only allow financial organizations to cut their financial operational costs but more importantly drive better business profitability by surfacing profitability gaps. At the same time, it would enable financial organizations to assist business units in making more informed customer and product line investment decisions. If a product line or business is narrowly profitable and lacks a broader strategic context or ability to increase profitability by growing market share, it is a candidate for investment reduction or elimination.

Strategic CFOs need to start asking questions of their business counterparts, starting with the justification for their investment strategy. Key to doing this is consolidating reliable profitability data across customers, products, channel partners, and suppliers. This would eliminate the time spent searching for and manually reconciling data in different formats across multiple systems. It should deliver ready analysis across locations, applications, channels, and departments.

Some parting thoughts

Strategic CFOs tell us they are trying to seize the opportunity “to be a business person versus a bean counting historically oriented CPA”. I believe a key element of this is seizing the opportunity to become the firm’s chief profitability officer. To do this well, CFOs need dependable data that can be sliced and diced by business dimensions. Armed with this information, CFOs can determine the most and least profitable businesses, product lines, and customers. As well, they can come to the business table with the perspective to help guide their company’s success.

Related links
Solution Brief: The Intelligent Data Platform
Related Blogs
CFOs Discuss Their Technology Priorities
The CFO Viewpoint upon Data
How CFOs can change the conversation with their CIO?
New type of CFO represents a potent CIO ally
Competing on Analytics
The Business Case for Better Data Connectivity

Twitter: @MylesSuer

Posted in Business Impact / Benefits, Business/IT Collaboration, CIO, Data Governance, Data Quality

8 Information Management Challenges for UDI Compliance

“My team spends far too much time pulling together medical device data that’s scattered across different systems and reconciling it in spreadsheets to create compliance reports.” This quotation from a regulatory affairs leader at a medical device manufacturer highlights the impact of poorly managed medical device data on compliance reporting, such as the reports needed for the FDA’s Unique Device Identification (UDI) regulation. In fact, an overreliance on manual, time-consuming processes brings an increased risk of human error in UDI compliance reports.


Is your compliance team manually reconciling data for UDI compliance reports?

If you are an information management leader working for a medical device manufacturer, and your compliance team needs quick and easy access to medical device data for UDI compliance reporting, I have five questions for you:

1) How many Class III and Class II devices do you have?
2) How many systems or reporting data stores contain data about these medical devices?
3) How much time do employees spend manually fixing data errors before the data can be used for reporting?
4) How do you plan to manage medical device data so the compliance team can quickly and easily produce accurate reports for UDI Compliance?
5) How do you plan to help the compliance team manage the multi-step submission process?

Watch this on-demand webinar “3 EIM Best Practices for UDI Compliance”

For some helpful advice from data management experts, watch this on-demand webinar “3 Enterprise Information Management (EIM) Best Practices for UDI Compliance.”

The deadline to submit the first UDI compliance report to the FDA for Class III devices is September 24, 2014. But the medical device data needed to produce the report is typically scattered among different internal systems, such as Enterprise Resource Planning (ERP) systems (e.g. SAP and JD Edwards), Product Lifecycle Management (PLM), and Manufacturing Execution Systems (MES), as well as external third-party device identifier sources.

The traditional approach to dealing with poorly managed data is for the compliance team to burn the midnight oil, pulling together and then manually reconciling all the medical device data in a spreadsheet. And they have to do this each and every time a compliance report is due. The good news is that your compliance team doesn’t have to.

Many medical device manufacturers are leveraging their existing data governance programs, supported by a combination of data integration, data quality, and master data management (MDM) technology, to eliminate the need for manual data reconciliation. They are centralizing their medical device data management so they have a single source of trusted medical device data for UDI compliance reporting as well as other compliance and revenue-generating initiatives.
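To make the idea of a single source of trusted device data concrete, here is a minimal sketch of consolidating hypothetical ERP, PLM, and MES extracts on a shared device identifier. The field names and records are assumptions for illustration, not the actual Informatica UDI Compliance Solution.

```python
import pandas as pd

# Hypothetical extracts; field names and values are illustrative.
erp = pd.DataFrame({"device_id": ["DI-001", "DI-002"],
                    "trade_name": ["CardioStent X", "NeuroPump"],
                    "company": ["Acme Medical", "Acme Medical"]})
plm = pd.DataFrame({"device_id": ["DI-001", "DI-002"],
                    "model_number": ["CSX-100", "NP-200"],
                    "device_class": ["III", "II"]})
mes = pd.DataFrame({"device_id": ["DI-001"],
                    "lot_controlled": [True]})

# Merge on the shared device identifier to build one consolidated record per device.
golden = erp.merge(plm, on="device_id", how="outer").merge(mes, on="device_id", how="outer")

# Flag gaps that would otherwise be chased down in a spreadsheet the night before a submission.
missing = golden[golden.isna().any(axis=1)]
print(golden)
print("Records needing remediation:", missing["device_id"].tolist())
```

The value of centralizing is not the merge itself but that the consolidation, matching, and remediation happen once, in a governed place, rather than in ad hoc spreadsheets for every report.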

Get UDI data management advice from data experts Kelle O’Neal, Managing Partner at First San Francisco Partners, and Bryan Balding, MDM Specialist at Informatica

During this on-demand webinar, Kelle O’Neal, Managing Partner at First San Francisco Partners, covers the eight information management challenges for UDI compliance as well as best practices for medical device data management.

Bryan Balding, MDM Solution Specialist at Informatica, shows you how to apply these best practices with the Informatica UDI Compliance Solution.

You’ll learn how to automate the process of capturing, managing and sharing medical device data to make it quicker and easier to create the reports needed for UDI compliance on an ongoing basis.

 

 

20 Questions & Answers about Complying with the FDA Requirement for Unique Device Identification (UDI)


Also, we just published a joint whitepaper with First San Francisco Partners, Information Management FAQ for UDI: 20 Questions & Answers about Complying with the FDA Requirement for Unique Device Identification (UDI). Get answers to questions such as:

What is needed to support an EIM strategy for UDI compliance?
What role does data governance play in UDI compliance?
What are the components of a successful data governance program?
Why should I centralize my business-critical medical device data?
What does the architecture of a UDI compliance solution look like?

I invite you to download the UDI compliance FAQ now and share your feedback in the comments section below.

Posted in Data Governance, Data Integration, Data Quality, Enterprise Data Management, Life Sciences, Manufacturing, Master Data Management, Vertical

Building a Data Foundation for Execution


I have been re-reading Enterprise Architecture as Strategy from the MIT Center for Information Systems Research (CISR).*  One concept that they talk about that jumped out at me was the idea of a “Foundation for Execution.”  Everybody is working to drive new business initiatives, to digitize their businesses, and to thrive in an era of increased technology disruption and competition.  The ideas around a Foundation for Execution in the book are a highly practical and useful framework to deal with these problems.

This got me thinking: What is the biggest bottleneck in the delivery of business value today? I know I look at things from a data perspective, but I believe data is the biggest bottleneck. Consider this prediction from Gartner:

“Gartner predicts organizations will spend one-third more on app integration in 2016 than they did in 2013. What’s more, by 2018, more than half the cost of implementing new large systems will be spent on integration.”

When we talk about application integration, we’re talking about moving data, synchronizing data, cleansing data, transforming data, and testing data. The question for architects and senior management is this: Do you have the Data Foundation for Execution you need to drive the business results you require to compete? The answer, unfortunately, for most companies is: No.

All too often data management is an add-on to larger application-based projects.  The result is unconnected and non-interoperable islands of data across the organization.  That simply is not going to work in the coming competitive environment.  Here are a couple of quick examples:

  • Many companies are looking to compete on their use of analytics.  That requires collecting, managing, and analyzing data from multiple internal and external sources.
  • Many companies are focusing on a better customer experience to drive their business. This again requires data from many internal sources, plus social, mobile and location-based data to be effective.

When I talk to architects about the business risks of not having a shared data architecture, and common tools and practices for enterprise data management, they “get” the problem. So why aren’t they addressing it? The issue is that they are only funded to do the project they are working on, and they are dealing with very demanding timeframe requirements. They have no funding or mandate to solve the larger enterprise data management problem, which is getting more complex and brittle with each new unconnected project or initiative that is added to the pile.

Studies such as “The Data Directive” by The Economist show that organizations that actively manage their data are more successful. But, if that is the desired future state, how do you get there?

Changing an organization to look at data as the fuel that drives strategy takes hard work and leadership. It also takes a strong enterprise data architecture vision and strategy.  For fresh thinking on the subject of building a data foundation for execution, see “Think Data-First to Drive Business Value” from Informatica.

* By the way, Informatica is proud to announce that we are now a sponsor of the MIT Center for Information Systems Research.

Posted in Architects, Business Impact / Benefits, CIO, Data Governance, Data Integration, Data Synchronization, Enterprise Data Management

Reflections Of A Former Data Analyst (Part 2) – Changing The Game For Data Plumbing

 

Elephant cleansing

Cleaning. Sometimes it is challenging!

In my last blog I promised I would report back on my experience using Informatica Data Quality, a software tool that helps automate the hectic, tedious data plumbing task – a task that routinely consumes more than 80% of an analyst’s time. Today, I am happy to share what I’ve learned in the past couple of months.

But first, let me confess something. The reason it took me so long to get here was that I dreaded trying the software. Never a savvy computer programmer, I was convinced that I would not be technical enough to master the tool and that it would turn into a lengthy learning experience. The mental barrier dragged me down for a couple of months until I finally bit the bullet and got my hands on the software. I am happy to report that my fear was truly unnecessary – it took me half a day to get a good handle on most features in the Analyst Tool, a component of Data Quality designed for analysts and business users. I then spent 3 days trying to figure out how to maneuver the Developer Tool, another key piece of the Data Quality offering mostly used by – you guessed it – developers and technical users. I have to admit that I am no master of the Developer Tool after 3 days of wrestling with it, but I got the basics, and more importantly, my hands-on interaction with the software helped me understand the logic behind the overall design and see for myself how analysts and business users can easily collaborate with their IT counterparts within our Data Quality environment.

To break it all down, first comes Profiling. As analysts, we understand all too well the importance of profiling, as it provides an anatomy of the raw data we collect. In many cases, it is a must-have first step in data preparation (especially when our raw data comes from different places and in different formats). A heavy user of Excel, I used to rely on all the tricks available in the spreadsheet to gain visibility of my data. I would filter, sort, build pivot tables, and make charts to learn what’s in my raw data. Depending on how many columns were in my data set, it could take hours, sometimes days, just to figure out whether the data I received was any good at all, and how good it was.
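For readers curious what basic profiling amounts to outside of Excel or a dedicated tool, here is a minimal Python sketch (with made-up data) that produces the kind of per-column statistics I am describing:

```python
import pandas as pd

# Illustrative raw data with the usual problems: nulls, inconsistent codes, text numbers.
raw = pd.DataFrame({
    "customer_id": [101, 102, 102, None],
    "country": ["US", "us", "DE", ""],
    "amount": ["100.5", "n/a", "82", "17.25"],
})

# A simple per-column profile: completeness, cardinality, and sample values.
profile = pd.DataFrame({
    "non_null": raw.notna().sum(),
    "nulls": raw.isna().sum(),
    "distinct": raw.nunique(dropna=True),
    "sample_values": [raw[c].dropna().unique()[:3].tolist() for c in raw.columns],
})
print(profile)
print(raw.describe(include="all"))  # quick statistical summary of every column
```

The Analyst Tool produces this kind of view (and much more) without any code at all, which is exactly the point of the comparison below.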

Which one do you like better?

Switching to the Analyst Tool in Data Quality, getting to know my raw data becomes a task of a few clicks – a maximum of 6 if I am picky about how I want it done. Basically I load my data, click on a couple of options, and let the software do the rest. A few seconds later I am able to visualize the statistics of the data fields I choose to examine, and I can also measure the quality of the raw data by using the Scorecard feature in the software. No more fiddling with a spreadsheet and staring at busy rows and columns. Take a look at the above screenshots and let me know your preference.

Once I decide after profiling that my raw data is adequate to use, I still need to clean up the nonsense in it before performing any analysis work, otherwise bad things can happen — we call it garbage in, garbage out. Again, to clean and standardize my data, Excel came to the rescue in the past. I would play with different functions and learn new ones, write macros, or simply do it by hand. It was tedious, but it worked as long as I was working on a static data set. The problem, however, came when I needed to incorporate new data sources in a different format: many of the previously built formulas would break and become inapplicable. I would have to start all over again. Spreadsheet tricks simply don’t scale in those situations.

Rule Builder in the Analyst Tool

With the Data Quality Analyst Tool, I can use the Rule Builder to create a set of logical rules in a hierarchical manner based on my objectives, and test those rules to see immediate results. The nice thing is, those rules are not tied to data format, location, or size, so I can reuse them when new data comes in. Profiling can be done at any time, so I can re-examine my data after applying the rules, as many times as I like. Once I am satisfied with the rules, they are passed on to my peers in IT so they can create executable rules based on the logic I created and run them automatically in production. No more worrying about differences in format, volume, or other discrepancies in the data sets: all the complexity is taken care of by the software, and all I need to do is build meaningful rules that transform the data into the appropriate condition so I have good quality data to work with for my analysis. Best part? I can do all of the above without hassling IT – feeling empowered is awesome!
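The Rule Builder itself is a graphical tool, but the underlying idea – rules defined once, independent of any particular file, and reapplied whenever new data arrives – can be sketched in a few lines of Python. The rule logic below is purely illustrative:

```python
import pandas as pd

# Each rule is a small, named transformation that does not care where the data came from.
def standardize_country(df):
    df["country"] = df["country"].str.strip().str.upper().replace({"": None})
    return df

def amounts_to_numbers(df):
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")  # "n/a" becomes NaN
    return df

RULES = [standardize_country, amounts_to_numbers]

def apply_rules(df):
    """Run every registered rule, in order, on a freshly loaded data set."""
    for rule in RULES:
        df = rule(df)
    return df

clean = apply_rules(pd.DataFrame({"country": [" us", "DE", ""],
                                  "amount": ["100.5", "n/a", "82"]}))
print(clean)
```

The difference from spreadsheet formulas is exactly the reuse: when next month’s file arrives in a slightly different shape, the rules are re-applied rather than rebuilt.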

Changing The Game For Data Plumbing

Use the Right Tool for the Job

Using the right tool for the right job improves our results, saves us time, and makes our jobs much more enjoyable. For me, there will be no more Excel for data cleansing after trying our Data Quality software, because now I can get more done in less time, and I am no longer stressed out by the lengthy process.

I encourage my analyst friends to try Informatica Data Quality, or at least the Analyst Tool in it. If you are like me, feeling wary of a steep learning curve, then fear no more. Besides, if Data Quality can cut your data cleansing time in half (mind you, our customers have reported higher numbers), imagine how many more predictive models you could build, how much more you would learn, and how much faster you could build your reports in Tableau, with more confidence.

Posted in Data Governance, Data Quality

In a Data First World, Knowledge Really Is Power!

Knowledge Really IS Power!

I have two quick questions for you. First, can you name the top three factors that will increase your sales or boost your profit? And second, are you sure about that?

That second question is a killer because most people — no matter if they’re in marketing, sales or manufacturing — rely on incomplete, inaccurate or just plain wrong information. Regardless of industry, we’ve been fixated on historic transactions because that’s what our systems are designed to provide us.

“Moneyball: The Art of Winning an Unfair Game” gives a great example of what I mean. The book (not the movie) describes Billy Beane hiring MBAs to map out the factors that would win a baseball game. They discovered something completely unexpected: That getting more batters on base would tire out pitchers. It didn’t matter if batters had multi-base hits, and it didn’t even matter if they walked. What mattered was forcing pitchers to throw ball after ball as they faced an unrelenting string of batters. Beane stopped looking at RBIs, ERAs and even home runs, and started hiring batters who consistently reached first base. To me, the book illustrates that the most useful knowledge isn’t always what we’ve been programmed to depend on or what is delivered to us via one app or another.

For years, people across industries have turned to ERP, CRM and web analytics systems to forecast sales and acquire new customers. By their nature, such systems are transactional, forcing us to rely on history as the best predictor of the future. Sure it might be helpful for retailers to identify last year’s biggest customers, but that doesn’t tell them whose blogs, posts or Tweets influenced additional sales. Isn’t it time for all businesses, regardless of industry, to adopt a different point of view — one that we at Informatica call “Data-First”? Instead of relying solely on transactions, a data-first POV shines a light on interactions. It’s like having a high knowledge IQ about relationships and connections that matter.

A data-first POV changes everything. With it, companies can unleash the killer app, the killer sales organization and the killer marketing campaign. Imagine, for example, if a sales person meeting a new customer knew that person’s concerns, interests and business connections ahead of time? Couldn’t that knowledge — gleaned from Tweets, blogs, LinkedIn connections, online posts and transactional data — provide a window into the problems the prospect wants to solve?

That’s the premise of two startups I know about, and it illustrates how a data-first POV can fuel innovation for developers and their customers. Today, we’re awash in data-fueled things that are somehow attached to the Internet. Our cars, phones, thermostats and even our wristbands are generating and gleaning data in new and exciting ways. That’s knowledge begging to be put to good use. The winners will be the ones who figure out that knowledge truly is power, and wield that power to their advantage.

Posted in Data Aggregation, Data Governance, Data Integration, Data Quality, Data Transformation

The Five C’s of Data Management


A few days ago, I came across a post, 5 C’s of MDM (Case, Content, Connecting, Cleansing, and Controlling), by Peter Krensky, Sr. Research Associate, Aberdeen Group and this response by Alan Duncan with his 5 C’s (Communicate, Co-operate, Collaborate, Cajole and Coerce). I like Alan’s list much better. Even though I work for a product company specializing in information management technology, the secret to successful enterprise information management (EIM) is in tackling the business and organizational issues, not the technology challenges. Fundamentally, data management at the enterprise level is an agreement problem, not a technology problem.

So, here I go with my 5 C’s: (more…)

Posted in Application ILM, Big Data, Data Governance, Data Integration, Enterprise Data Management, Integration Competency Centers, Master Data Management

Scary Times For Data Security


These are scary times we live in when it comes to data security. And the times are even scarier for today’s retailers, government agencies, financial institutions, and healthcare organizations. The internet has become a battlefield. Criminals are looking to steal trade secrets and personal data for financial gain. Terrorists seek to steal data for political gain. Both are after your Personally Identifiable Information, like your name, account numbers, social security number, date of birth, IDs and passwords.

How are they accomplishing this? A new generation of hackers has learned to reverse engineer popular software programs (e.g. Windows, Outlook, Java, etc.) in order to find so-called “holes”. Once those holes are found, the hackers develop “bugs” that exploit them, infiltrate computer systems, search for sensitive data, and return it to the bad guys. These bugs are then sold on the black market to the highest bidder. When successful, these hackers can wreak havoc across the globe.

I recently read a Time Magazine article titled “World War Zero: How Hackers Fight to Steal Your Secrets.” The article discussed a new generation of software companies made up of former hackers. These firms help other software companies by identifying potential security holes, before they can be used in malicious exploits.

This constant battle between good (data and software security firms) and bad (smart, young programmers looking to make a quick/big buck) is happening every day. Unfortunately, average consumers (you and I) are the innocent victims of this crazy and costly war. As a consumer in today’s digital and data-centric age, I worry when I see headlines of ongoing data breaches, from the Targets of the world to my local bank down the street. I wonder not “if” but “when” I will become the next victim. According to the Ponemon Institute, the average cost of a data breach to a company was $3.5 million (in US dollars), 15 percent more than what it cost last year.

As a 20-year software industry veteran, I’ve worked with many firms across the global financial services industry. As a result, my concerns about data security exceed those of the average consumer. Here are the reasons for this:

  1. Everything is Digital: I remember the days when ATM machines were introduced, eliminating the need to wait in long teller lines. Nowadays, most of what we do with our financial institutions is digital and online, whether on our mobile devices or desktop browsers. As such, every interaction and transaction creates sensitive data that gets dispersed across tens, hundreds, sometimes thousands of databases and systems in these firms.
  2. The Big Data Phenomenon: I’m not talking about sexy next generation analytic applications that promise to provide the best answer to run your business. What I am talking about is the volume of data that is being generated and collected from the countless number of computer systems (on-premise and in the cloud) that run today’s global financial services industry.
  3. Increased Use of Off-Shore and On-Shore Development: Outsourcing technology projects to offshore development firms has become the norm, as companies leverage offshore development partners to offset the operational and technology costs of new technology initiatives.

Now here is the hard part. Given these trends and heightened threats, do the companies I do business with know where the data resides that they need to protect? How do they actually protect sensitive data when using it to support new IT projects, both in-house and with off-shore development partners? You’d be amazed what the truth is.

According to the recent Ponemon Institute study “State of Data Centric Security” that surveyed 1,587 Global IT and IT security practitioners in 16 countries:

  • Only 16 percent of the respondents believe they know where all sensitive structured data is located and a very small percentage (7 percent) know where unstructured data resides.
  • Fifty-seven percent of respondents say not knowing where the organization’s sensitive or confidential data is located keeps them up at night.
  • Only 19 percent say their organizations use centralized access control management and entitlements and 14 percent use file system and access audits.

Even worse, there is a gap between the percentage of respondents who see not knowing where sensitive and confidential information resides as a serious threat and the percentage who treat it as a high priority in their organizations. Seventy-nine percent of respondents agree it is a significant security risk facing their organizations, but a much smaller percentage (51 percent) believes that securing and/or protecting data is a high priority in their organizations.

I don’t know about you, but this is alarming and worrisome to me. I think I am ready to reach out to my banker and my local retailer, let them know about my concerns, and make sure they communicate those concerns to the top of their organizations. In today’s globally and socially connected world, news travels fast, and given how hard it is to build trustful customer relationships, one would think every business from the local mall to Wall St should be asking whether they are doing what they need to do to identify and protect their number one digital asset – their data.
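As a purely illustrative aside on the “identify” half of that ask, here is a toy sketch of a pattern-based scan for sensitive values. The patterns and data are hypothetical, and real data discovery and data-centric security tools go far beyond this:

```python
import re

# Illustrative patterns for two kinds of Personally Identifiable Information.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b\d{16}\b"),
}

def scan_table(name, rows):
    """Report which columns of a table appear to contain sensitive values."""
    hits = set()
    for row in rows:
        for column, value in row.items():
            for label, pattern in PATTERNS.items():
                if isinstance(value, str) and pattern.search(value):
                    hits.add((column, label))
    return [(name, column, label) for column, label in sorted(hits)]

rows = [{"note": "SSN 123-45-6789 on file", "account": "4111111111111111"}]
print(scan_table("crm_notes", rows))
# [('crm_notes', 'account', 'card_number'), ('crm_notes', 'note', 'ssn')]
```

The serious point behind the toy: you cannot protect, mask, or govern sensitive data you have not first located, and the survey numbers above suggest most organizations have not.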

Posted in Data Governance, Data Integration, Data Privacy, Data Quality, Data Services, Data Warehousing

The Information Difference Pegs Informatica as a Top MDM Vendor

This blog post feels a little bit like bragging… and OK, I guess it is pretty self-congratulatory to announce that this year, Informatica was again chosen as a leader in MDM and PIM by The Information Difference. As you may know, The Information Difference is an independent research firm that specializes in the MDM industry and each year surveys, analyzes and ranks MDM and PIM providers and customers around the world. This year, like last year, The Information Difference named Informatica tops in the space.

Why do I feel especially chuffed about this?  Because of our customers.

(more…)

Posted in Data Governance, Data Quality, Enterprise Data Management, Master Data Management