Tag Archives: data security

CFOs Discuss Their Technology Priorities

Recently, I had the opportunity to talk with a number of CFOs about their technology priorities. These discussions represent an opportunity for CIOs to hear what their most critical stakeholder considers important. The CFOs did not hesitate or need to think much about this question. Three things make their priority list: better financial system reliability, better application integration, and better data security and governance. The top two align well with a recent KPMG study, which found that the biggest improvement finance executives want to see, cited by 91% of survey respondents, is in "the quality of financial and performance insight obtained from the data they produce, followed closely by the finance and accounting organization's ability to proactively analyze that information before it is stale or out of date."

Better Financial System Reliability

CFOs want to know that their systems work and are reliable, and they want the data collected from their systems to be analyzed in a timely fashion. Importantly, CFOs say their worries extend beyond the timeliness of accounting and financial data, because they increasingly need to manage upward with information. For this reason, they want timely, accurate information produced for financial and business decision makers. Their goal is to drive better enterprise decision making.

In manufacturing, for example, CFOs say they want data to span from the manufacturing systems to the distribution system. They want to be able to push a button and get a report. These CFOs complain today about the need to manually massage and integrate data from system after system before they get what they and their business decision makers want and need.

Better Application Integration

CFOs really feel the pain of systems not talking to each other. They know firsthand that they have "disparate systems" and that too much manual integration is going on. They see firsthand the difficulties in connecting data from frontend to backend systems, and they personally feel the large number of manual steps required to pull data together. They want the consolidation of account information to be less manual and more timely. One CFO said he wants the right systems integrated to provide the right information, so that decision makers have what they need to manage and decide at the right time.

Data Security and Governance

CFOs, at the same time, say they have become more worried about data security and governance. Even though CFOs believe that security is the job of the CIO and the CISO, they have an important role to play in data governance. CFOs say they are genuinely worried about getting hacked. One CFO told me that he needs to know that systems are always working properly. Data security matters to CFOs today for two reasons. First, a breach has a clear material impact: just look at the out-of-pocket and revenue losses stemming from the breach at Target. Second, CFOs, who were already being audited for technology and system compliance, expect their audit firms to extend their security and governance reviews and make them part of regular compliance audits. One CFO put it this way: "This is a whole new direction for us. Target scared a lot of folks and will in many respects be a watershed event for CFOs."

Takeaways

So the message here is that CFOs prioritize three technology objectives for their CIOs: better IT reliability, better application integration, and improved data security and governance. Each represents an opportunity to make the CFO's life easier and, more importantly, to enable CFOs to take on a more strategic role. The CFOs we talked to want to become one of the top three decision makers in the enterprise. Fixing these things will enable CIOs to build closer CFO and business relationships.

Related links

Solution Brief: The Intelligent Data Platform

Solution Brief: Secure at Source

Related Blogs

The CFO Viewpoint upon Data

How CFOs can change the conversation with their CIO?

New type of CFO represents a potent CIO ally

Competing on Analytics

The Business Case for Better Data Connectivity

Twitter: @MylesSuer


In a Data First World, IT must Empower Business Change!

You probably know this already, but I'm going to say it anyway: It's time you changed your infrastructure. I say this because most companies are still running infrastructure optimized for ERP, CRM and other transactional systems. That's all well and good for running IT-intensive, back-office tasks. Unfortunately, this sort of infrastructure isn't great for today's business imperatives of mobility, cloud computing and Big Data analytics.

Virtually all of these imperatives are fueled by information gleaned from potentially dozens of sources to reveal our users’ and customers’ activities, relationships and likes. Forward-thinking companies are using such data to find new customers, retain existing ones and increase their market share. The trick lies in translating all this disparate data into useful meaning. And to do that, IT needs to move beyond focusing solely on transactions, and instead shine a light on the interactions that matter to their customers, their products and their business processes.

They need what we at Informatica call a "Data First" perspective. You can check out my first blog about being Data First here.

A Data First POV changes everything from product development, to business processes, to how IT organizes itself and, most especially, the impact IT has on your company's business. That's because cloud computing, Big Data and mobile app development shift IT's responsibilities away from running and administering equipment, onto aggregating, organizing and improving myriad data types pulled in from internal and external databases, online posts and public sources. And that shift makes IT a more-empowering force for business change. Think about it: The ability to connect and relate the dots across data from multiple sources finally gives you real power to improve entire business processes, departments and organizations.

I like to say that the role of IT is now “big I, little t,” with that lowercase “t” representing both technology and transactions. But that role requires a new set of priorities. They are:

  1. Think about information infrastructure first and application infrastructure second.
  2. Create great data by design. Architect for connectivity, cleanliness and security. Check out the eBook Data Integration for Dummies.
  3. Optimize for speed and ease of use – SaaS and mobile applications change often. Click here to try Informatica Cloud for free for 30 days.
  4. Make data a team sport. Get tools into your users’ hands so they can prepare and interact with it.

I never said this would be easy, and there’s no blueprint for how to go about doing it. Still, I recognize that a little guidance will be helpful. In a few weeks, Informatica’s CIO Eric Johnson and I will talk about how we at Informatica practice what we preach.


Malcolm Gladwell, Big Data and What’s to be Done About Too Much Information

Malcolm Gladwell wrote an article in The New Yorker magazine in January 2007 entitled "Open Secrets." In the article, he pointed out that a national-security expert had famously made a distinction between puzzles and mysteries.

New Yorker writer Malcolm Gladwell

Osama bin Laden's whereabouts were, for many years, a puzzle. We couldn't find him because we didn't have enough information. The key to the puzzle, it was assumed, would eventually come from someone close to bin Laden, and until we could find that source, bin Laden would remain at large. In fact, that's precisely what happened. Al-Qaida's No. 3 leader, Khalid Sheikh Mohammed, gave authorities the nickname of one of bin Laden's couriers, who then became the linchpin in the CIA's efforts to locate bin Laden.

By contrast, the problem of what would happen in Iraq after the toppling of Saddam Hussein was a mystery. It wasn’t a question that had a simple, factual answer. Mysteries require judgments and the assessment of uncertainty, and the hard part is not that we have too little information but that we have too much.

This was written before "Big Data" was a household word, and it raises the very interesting question of whether organizations and corporations that are, by anyone's standards, totally deluged with data are facing puzzles or mysteries. Consider the amount of data that a company like Western Union deals with.

Western Union is a 160-year-old company. Having built scale in the money transfer business, the company is in the process of evolving its business model by enabling the expansion of digital products, growth of web and mobile channels, and a more personalized online customer experience. Sounds good – but get this: the company processes more than 29 transactions per second on average. That's 242 million consumer-to-consumer transactions and 459 million business payments in a year. Nearly a billion transactions – a billion! As my six-year-old might say, that number is big enough "to go to the moon and back." Layer on top of that the fact that the company operates in 200+ countries and territories, and conducts business in 120+ currencies. Senior Director and Head of Engineering Abhishek Banerjee has said, "The data is speaking to us. We just need to react to it." That implies a puzzle, not a mystery – but only if data scientists are able to conduct statistical modeling and predictive analysis, systematically noting trends in sending and receiving behaviors. Check out what Banerjee and Western Union CTO Sanjay Saraf have to say about it here.
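Those figures are easy to sanity-check. Here's a quick back-of-the-envelope calculation (a sketch, assuming a 365-day year; note that the per-second average evidently covers more than the two line items cited):

```python
# Back-of-the-envelope check of the Western Union figures cited above.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60                # 31,536,000

avg_tx_per_second = 29                               # figure cited in the text
print(f"{avg_tx_per_second * SECONDS_PER_YEAR:,}")   # 914,544,000 -> "nearly a billion"

# The two reported line items sum to less; the per-second average
# presumably includes activity beyond these two categories.
c2c_transactions = 242_000_000
business_payments = 459_000_000
print(f"{c2c_transactions + business_payments:,}")   # 701,000,000
```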

Or consider General Electric’s aggressive and pioneering move into what’s dubbed as the industrial internet. In a white paper entitled “The Case for an Industrial Big Data Platform: Laying the Groundwork for the New Industrial Age,” GE reveals some of the staggering statistics related to the industrial equipment that it manufactures and supports (services comprise 75% of GE’s bottom line):

  • A modern wind turbine contains approximately 50 sensors and control loops which collect data every 40 milliseconds.
  • A farm controller then receives more than 30 signals from each turbine at 160-millisecond intervals.
  • Every second, the farm monitoring software processes 200 raw sensor data points, with various associated properties, for each turbine. (A rough tally of these rates follows below.)
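Here is that tally, for scale (a sketch: the per-turbine figures come from the bullets above, while the 100-turbine farm size is an assumption for illustration):

```python
# Rough data-rate arithmetic for the GE figures above.
SENSORS = 50
SAMPLE_INTERVAL_S = 0.040                        # data collected every 40 ms
readings_per_turbine_per_s = SENSORS / SAMPLE_INTERVAL_S   # 1,250

SECONDS_PER_DAY = 24 * 60 * 60
per_turbine_daily = readings_per_turbine_per_s * SECONDS_PER_DAY
print(f"{per_turbine_daily:,.0f} readings/turbine/day")    # 108,000,000

FARM_TURBINES = 100                              # assumed farm size, for illustration
print(f"{per_turbine_daily * FARM_TURBINES:,.0f} readings/farm/day")   # 10,800,000,000
```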

Phew! I'm no electricity operations expert, and you probably aren't either. And most of us will get no further than wrapping our heads around the simple fact that GE turbines are collecting a LOT of data. But what the paper goes on to say should grab your attention in a big way: "The key to success for this wind farm lies in the ability to collect and deliver the right data, at the right velocity, and in the right quantities to a wide set of well-orchestrated analytics." And the paper goes on to recommend that anyone involved in the Industrial Internet revolution strongly consider its talent requirements, with the suggestion that Chief Data Officers and/or Data Scientists may be the next critical hires.

Which brings us back to Malcolm Gladwell. In the aforementioned article, Gladwell goes on to pull apart the Enron debacle, and argues that it was a prime example of the perils of too much information. "If you sat through the trial of (former CEO) Jeffrey Skilling, you'd think that the Enron scandal was a puzzle. The company, the prosecution said, conducted shady side deals that no one quite understood. Senior executives withheld critical information from investors…We were not told enough—the classic puzzle premise—was the central assumption of the Enron prosecution." But in fact, that was not true. Enron employed complicated, but perfectly legal, accounting techniques used by companies that engage in complicated financial trading. Many journalists and professors have gone back and looked at the firm's regulatory filings, and have come to the conclusion that, while complex and difficult to identify, all of the company's shenanigans were right there in plain view. Enron cannot be blamed for covering up the existence of its side deals. It didn't; it disclosed them. As Gladwell summarizes:

“Puzzles are ‘transmitter-dependent’; they turn on what we are told. Mysteries are ‘receiver dependent’; they turn on the skills of the listener.”

Wind turbines, jet engines and other machinery sensors generate unprecedented amounts of data

I would argue that this extremely complex, fast-moving and seismic shift that we call Big Data will favor those who have developed the ability to attune, to listen and to make sense of the data. Winners in this new world will recognize what looks like an overwhelming and intractable mystery, break it down into small and manageable chunks, and demystify the landscape to uncover the important nuggets of truth and significance.


Time to Change Passwords…Again

Has everyone just forgotten about data masking?

The information security industry is reporting that more than 1.5 billion (yes, that's with a “B”) emails and passwords have been hacked. It's hard to tell from the article, but this could be the big one. (And just when we thought that James Bond had taken care of the Russian mafia.) Nobody is safe, large companies or small. According to the experts, the affected sites ranged from small e-commerce sites to Fortune 500 companies. At this time the experts aren't telling us who the big targets were. We could be very unpleasantly surprised.

Most security experts admit that the bulk of the post-breach activity will be email spamming.  Insidious to be sure.  But imagine if the hackers were to get a little more intelligent about what they have.  How many individuals reuse passwords?  Experts say over 90% of consumers reuse passwords between popular sites.  And since email addresses are the most universally used “user name” on those sites, the chance of that 1.5 billion identities translating into millions of pirated activities is fairly high.

According to a recently published Ponemon study, 24% of respondents don't know where their sensitive data is stored. That is a staggering number. Further complicating the issue, the same study notes that 65% of respondents have no comprehensive data forensics capability. That means consumers will more than likely never hear from their provider that their data has been breached, until it is too late.

So now I guess we all get to go change our passwords again. And we don't know why; we just have to. This is annoying. But having consumers constantly looking over their virtual shoulders is not a permanent fix. Let's talk about the enterprise-sized firms first. Ponemon indicates that 57% of respondents would like more trained data security personnel to protect data. The enterprise firm should have the resources to task IT personnel with protecting data, and it has the ability to license best-in-class technology to protect data. There is no excuse not to implement enterprise data masking technology, used hand in hand with network intrusion defenses to protect data from end to end.

Smaller enterprises have similar options. The same data masking technology can be leveraged on a smaller scale by a smaller IT organization, including the personnel to optimize the infrastructure. Additionally, most small enterprises leverage cloud-based systems that should have the same defenses in place. The small enterprise should bias its buying criteria toward data systems that implement data masking technology.

Let me add a little fuel to the fire and talk about a different kind of cost. Insurers cover cyber risk either as part of a Commercial General Liability policy or as a separate policy. In 2013, insurers paid an average approaching $3.5M for each cyber breach claim, and the average per-record cost of claims was over $6,000. Now, imagine your enterprise's slice of those 1.5 billion records. Obviously these are claims, not premiums. Premiums can range up to $40,000 per year for each $1M in coverage, and insurers will typically give discounts to companies that have demonstrated security practices and infrastructure. I won't belabor the point; it's pure math, as the quick example below shows.
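Here is that math made explicit (a sketch: the unit costs are the ones cited above, while the coverage level and breach probability are hypothetical assumptions):

```python
# Worked example of the cyber-insurance arithmetic above.
avg_claim_paid = 3_500_000        # average insurer payout per claim (cited above)
cost_per_record = 6_000           # average claim cost per record (cited above)
print(f"records behind an average claim: {avg_claim_paid / cost_per_record:,.0f}")  # ~583

premium_per_million = 40_000      # upper-end premium per $1M of coverage (cited above)
coverage_millions = 5             # hypothetical coverage level
annual_premium = premium_per_million * coverage_millions          # $200,000

breach_probability = 0.05         # hypothetical annual likelihood of a claim
expected_annual_loss = breach_probability * avg_claim_paid        # $175,000
print(f"premium ${annual_premium:,} vs. expected loss ${expected_annual_loss:,.0f}")
```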

There is plenty of risk and cost to go around, to be sure.  But there is a way to stay protected with Informatica.  And now, let’s all take a few minutes to go change our passwords.  I’ll wait right here.  There, do you feel better?

For more information on Informatica's data masking technology, click here, where you can drill into industry-leading dynamic and persistent data masking technology. So you should still change your passwords…but check out the industry's leading data security technology after you do.


Takeaways from the Gartner Security and Risk Management Summit (2014)

Last week I had the opportunity to attend the Gartner Security and Risk Management Summit, where Gartner analysts and security industry experts met to discuss the latest trends, advances, best practices and research in the space. At the event, I had the privilege of connecting with customers, peers and partners, and I was excited to learn about changes that are shaping the data security landscape.

Here are some of the things I learned at the event:

  • Security continues to be a top CIO priority in 2014. Security is well-aligned with other trends such as big data, IoT, mobile, cloud, and collaboration. According to Gartner, the top CIO priority area is BI/analytics. Given our growing appetite for all things data and our increasing ability to mine data to increase top-line growth, this top billing makes perfect sense. The challenge is to protect the data assets that drive value for the company and ensure appropriate privacy controls.
  • Mobile and data security are the top focus for 2014 spending in North America according to Gartner’s pre-conference survey. Cloud rounds out the list when considering worldwide spending results.
  • Rise of the DRO (Digital Risk Officer). Fortunately, those same market trends are leading to an evolution of the CISO role to a Digital Security Officer and, longer term, a Digital Risk Officer. The DRO role will include determination of the risks and security of digital connectivity. Digital/Information Security risk is increasingly being reported as a business impact to the board.
  • Information management and information security are blending. Gartner assumes that 40% of global enterprises will have aligned governance of the two programs by 2017. This is not surprising given the overlap of common objectives such as inventories, classification, usage policies, and accountability/protection.
  • Security methodology is moving from a reactive approach to compliance-driven and proactive (risk-based) methodologies. There is simply too much data and too many events for analysts to monitor manually. Organizations need to understand their assets and their criticality; big data analytics and context-aware security are then needed to reduce the noise and false-positive rates to a manageable level (a toy illustration of context-aware scoring follows this list). According to Gartner analyst Avivah Litan, "By 2018, of all breaches that are detected within an enterprise, 70% will be found because they used context-aware security, up from 10% today."
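To make "context-aware" concrete, here is a toy scoring sketch. Every field name and weight is invented for illustration; real products weigh far richer context, but the principle is the same: the identical raw event scores differently depending on what data it touches and how anomalous the circumstances are.

```python
# Toy sketch of context-aware alert scoring (illustrative fields and weights only).
def risk_score(event: dict) -> float:
    score = 1.0
    if event.get("data_classification") == "sensitive":
        score *= 3.0                  # touching sensitive data matters more
    if event.get("geo") not in event.get("usual_geos", []):
        score *= 2.0                  # unusual location for this user
    if event.get("hour", 12) < 6:
        score *= 1.5                  # off-hours access
    return score

event = {"data_classification": "sensitive", "geo": "RU",
         "usual_geos": ["US"], "hour": 3}
print(risk_score(event))              # 9.0 -> escalate; routine events stay near 1.0
```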

I want to close by sharing the Top Digital Security Trends for 2014 identified at the summit:

  • Software-defined security
  • Big data security analytics
  • Intelligent/Context-aware security controls
  • Application isolation
  • Endpoint threat detection and response
  • Website protection
  • Adaptive access
  • Securing the Internet of Things

Data Obfuscation and Data Value – Can They Coexist?

Data is growing exponentially, and new technologies are at the root of the growth. With the advent of big data and machine data, enterprises have amassed amounts of data never before seen. Consider the example of telecommunications companies. Telcos have always collected large volumes of call data and customer data, but the advent of 4G services, combined with the explosion of the mobile internet, has created data volumes they have never seen before.

In response to the growth, organizations seek new ways to unlock the value of their data. Traditionally, data has been analyzed for two key reasons: first, to identify ways to improve operational efficiency; and second, to identify opportunities to increase revenue.

As data expands, companies have found new uses for these growing data sets. Of late, organizations have started providing data to partners, who then sell the ‘intelligence’ they glean from within the data. Consider a coffee shop owner whose store doesn’t open until 8 AM. This owner would be interested in learning how many target customers (perhaps people aged 25 to 45) walk past the closed shop between 6 AM and 8 AM. If this number is high enough, it may make sense to open the store earlier.

As much as organizations prioritize the value of data, customers prioritize the privacy of data. If an organization loses a customer's data, it incurs several costs, including:

  • Damage to the company’s reputation
  • A reduction of customer trust
  • Financial costs associated with the investigation of the loss
  • Possible governmental fines
  • Possible restitution costs

To guard against these risks, data that organizations provide to their partners must be obfuscated. This protects customer privacy. However, data that has been obfuscated is often of a lower value to the partner. For example, if the date of birth of those passing the coffee shop has been obfuscated, the store owner may not be able to determine if those passing by are potential customers. When data is obfuscated without consideration of the analysis that needs to be done, analysis results may not be correct.

There is a way to provide data privacy for the customer while simultaneously monetizing enterprise data: organizations must allow trusted partners to define masking generalizations. With sufficient data masking governance, it is indeed possible for data obfuscation and data value to coexist.

Currently, there is a great deal of research into ensuring that obfuscated data is both protected and useful. Techniques and algorithms like ‘k-Anonymity’ and ‘l-Diversity’ ensure that sensitive data is safe and secure. However, these techniques have not yet become mainstream. Once they do, the value of big data will be unlocked.
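To make the generalization idea concrete, here is a minimal sketch in the spirit of k-Anonymity, reusing the coffee-shop example from above: dates of birth are coarsened to age bands that still answer the owner's question ("how many passers-by are 25 to 45?") without exposing exact ages. The band edges, sample data and k threshold are illustrative assumptions.

```python
# Minimal k-anonymity-style generalization sketch (illustrative only).
from collections import Counter

def age_band(age: int) -> str:
    if age < 25:
        return "under 25"
    if age <= 45:
        return "25-45"                # the shop owner's target segment
    return "over 45"

ages = [23, 24, 22, 27, 31, 38, 44, 29, 35, 41, 52, 61, 58]
counts = Counter(age_band(a) for a in ages)
print(counts)                          # Counter({'25-45': 7, 'under 25': 3, 'over 45': 3})

k = 3                                  # each released group must hide >= k individuals
print(all(n >= k for n in counts.values()))   # True -> this release is k-anonymous
```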


The Power and Security of Exponential Data

I recently heard a couple of different analogies for data. The first is that data is the "new oil": a valuable resource that powers global business and is consequently targeted for theft by hackers. The thinking is this: people are not after your servers, they're after your data.

The other comparison is that data is like solar power. Like solar power, data is abundant. In addition, it’s getting cheaper and more efficient to harness. The juxtaposition of these images captures the current sentiment around data’s potential to improve our lives in many ways. For this to happen, however, corporations and data custodians must effectively balance the power of data with security and privacy concerns.

Many people have a preconception of security as an obstacle to productivity. Actually, good security practitioners understand that the purpose of security is to support the goals of the company by allowing the business to innovate and operate more quickly and effectively. Think back to the early days of online transactions; many people were not comfortable banking online or making web purchases for fear of fraud and theft. Similar fears slowed early adoption of mobile phone banking and purchasing applications. But security ecosystems evolved, concerns were addressed, and Gartner estimates that worldwide mobile payment transaction values surpassed $235B in 2013. An astute security executive once pointed out why cars have brakes: not to slow us down, but to allow us to drive faster, safely.

The pace of digital change and the current proliferation of data is not a simple linear function – it’s growing exponentially – and it’s not going to slow down. I believe this is generally a good thing. Our ability to harness data is how we will better understand our world. It’s how we will address challenges with critical resources such as energy and water. And it’s how we will innovate in research areas such as medicine and healthcare. And so, as a relatively new Informatica employee coming from a security background, I’m now at a crossroads of sorts. While Informatica’s goal of “Putting potential to work” resonates with my views and helps customers deliver on the promise of this data growth, I know we need to have proper controls in place. I’m proud to be part of a team building a new intelligent, context-aware approach to data security (Secure@Source™).

We recently announced Secure@Source™ during InformaticaWorld 2014. One thing that impressed me was how quickly attendees (many of whom had little security background) understood how they could leverage data context to improve security controls, privacy, and data governance for their organizations. You can find a great introductory summary of Secure@Source™ here.

I will be sharing more on Secure@Source™ and data security in general, and would love to get your feedback. If you are an Informatica customer and would like to help shape the product direction, we are recruiting a select group of charter customers to drive and provide feedback for the first release. Customers who are interested in being a charter customer should register and send email to SecureCustomers@informatica.com.


Data Security and Privacy: What’s Next?

Data security breaches continue to escalate. Privacy legislation and enforcement are tightening, and analysts have begun making dire predictions about cyber security's effectiveness. But there is more: trusted insiders continue to be the major threat, and most executives cannot identify the information they are trying to protect.

Data security is a senior management concern, not exclusive to IT. With this in mind, what is the next step CxOs must take to counter these breaches?

A new approach to Data Security

It is clear that a new approach is needed, one that focuses on answering fundamental, but difficult and precise, questions about your data (question 2 is illustrated with a short sketch after the list):

  1. What data should I be concerned about?
  2. Can I create re-usable rules for identifying and locating sensitive data in my organization?
  3. Can I do so both logically and physically?
  4. What is the source of the sensitive data and where is it consumed?
  5. What are the sensitive data relationships and proliferation?
  6. How is it protected? How should it be protected?
  7. How can I integrate data protection with my existing cyber security infrastructure?
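As a minimal illustration of question 2, here is a sketch of what re-usable identification rules might look like: each rule pairs a label with a pattern and can be run against any column, file or feed. The patterns are deliberately simplified assumptions, not production-grade classifiers or any particular product's API.

```python
# Sketch of re-usable rules for locating sensitive data (simplified patterns).
import re

RULES = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(values) -> set:
    """Return the sensitive-data labels detected in an iterable of strings."""
    return {label for v in values for label, rx in RULES.items() if rx.search(v)}

print(classify(["jane@example.com", "123-45-6789"]))   # {'email', 'ssn'}
```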

The answers to these questions will help guide precise data security measures in order to protect the most valuable data. The answers need to be presented in an intuitive fashion, leveraging simple, yet revealing graphics and visualizations of your sensitive data risks and vulnerabilities.

At Informatica World 2014, Informatica will unveil its vision to help organizations address these concerns. This vision will assist in the development of precise security measures designed to counter the growing sophistication and frequency of cyber-attacks, and the ever-present danger of rogue insiders.

Stay tuned, more to come from Informatica World 2014.


What types of data need protecting?

In my first article on the topic of citizens' digital health and safety, we looked at the states' desire to keep their citizens healthy and safe, and at the various laws and regulations they have in place around data breaches and losses. The size and scale of the problem, together with some ideas for effective risk mitigation, are covered in this whitepaper.

The cost and frequency of data breaches continue to rise

Let's now start delving a little deeper into the situation states are faced with. It's pretty obvious that citizen data that enables an individual to be identified (PII) needs to be protected. We immediately think of production data: data used in integrated eligibility systems, in health insurance exchanges, in data warehouses and so on. In some ways the production data is the least of our problems; our research shows that the average state has around 10 to 12 full copies of its data for non-production purposes (development, test, user acceptance and so on). This data tends to be much more vulnerable because it is widespread and used by a wide variety of people, often subcontractors or outsourcers, and often the content of the data is not well understood.

Obviously production systems need access to real production data (I'll cover how best to protect that in the next issue); non-production systems of every sort do not. Non-production systems most often need realistic, but not real, data and realistic, but not real, data volumes (except perhaps the performance/stress/throughput testing system). What needs to be done? A three-point risk remediation plan is a good place to start:

1. Understand the non-production data: sophisticated data and schema profiling, combined with NLP (Natural Language Processing) techniques, helps to identify previously unrealized PII that needs protecting.
2. Permanently mask the PII so that it is no longer the real data but is realistic enough for non-production uses, and make sure that the same masking is applied to the attribute values wherever they appear across multiple tables/files (a small sketch of this follows the list).
3. Subset the data to reduce data volumes; this limits the size of the risk and also has positive effects on performance, run-times, backups and so on.
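To make point 2 concrete, here is a minimal sketch of deterministic masking: a keyed hash guarantees that the same real value always maps to the same realistic substitute, so joins across tables and files survive masking. The key handling and substitute pool are illustrative assumptions, not a description of any specific masking product.

```python
# Sketch of deterministic (consistent) masking via a keyed hash.
import hmac, hashlib

KEY = b"rotate-me-and-keep-out-of-source-control"   # hypothetical masking key
SUBSTITUTES = ["Alice", "Brian", "Carmen", "Dmitri", "Elena", "Farid"]

def mask_name(real_name: str) -> str:
    digest = hmac.new(KEY, real_name.lower().encode(), hashlib.sha256).digest()
    return SUBSTITUTES[int.from_bytes(digest[:4], "big") % len(SUBSTITUTES)]

# The same input masks identically wherever it appears, preserving joins:
print(mask_name("Margaret") == mask_name("Margaret"))   # True
print(mask_name("Margaret"), mask_name("Henry"))        # stable, distinct substitutes
```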

Gartner has just published its 2013 Magic Quadrant for data masking. It covers both what Gartner calls static (i.e., permanent or persistent) masking and dynamic masking (more on this in the next issue). As usual, the MQ gives a good overview of the issues behind the technology as well as a review of the position, strengths and weaknesses of the leading vendors.

It is (or at least should be) an imperative that, from the top down, state governments realize the importance and vulnerability of their citizens' data and put in place a non-partisan plan to prevent future breaches. As the reader might imagine, for any such plan to succeed it needs a combination of cultural and organizational change (getting people to care) and the right technology; together these will greatly reduce the risk. In the next and final issue on this topic we will look at the vulnerabilities of production data, and what can be done to dramatically increase its privacy and security.


No, Cloud Won’t Chase the Data Analytics Gremlins Away

Hosting Big Data applications in the cloud has compelling advantages. Scale doesn’t become as overwhelming an issue as it is within on-premise systems. IT will no longer feel compelled to throw more disks at burgeoning storage requirements, and performance becomes the contractual obligation of someone else outside the organization.

Cloud may help clear up some of the costlier and thornier problems of attempting to manage Big Data environments, but it also creates some new issues. As Ron Exler of Saugatuck Technology recently pointed out in a new report, cloud-based solutions “can be quickly configured to address some big data business needs, enabling outsourcing and potentially faster implementations.” However, he adds, employing the cloud also brings some risks as well.

Data security is one major risk area, and I could write many posts on this. But management issues also present other challenges. Too many organizations see cloud as a cure-all for their application and data management ills, but broken processes are never fixed when new technology is applied to them. There are also plenty of risks with the misappropriation of big data, and the cloud won't make these risks go away. Exler lists some of the risks that stem from over-reliance on cloud technology, from the late delivery of business reports to the delivery of incorrect business information, resulting in decisions based on incorrect source data. Sound familiar? The gremlins that have haunted data analytics and management for years simply won't disappear behind a cloud.

Exler makes three recommendations for moving big data into cloud environments – note that the solutions he proposes have nothing to do with technology, and everything to do with management:

1) Analyze the growth trajectory of your data and your business. Typically, organizations will have a lot of different moving parts and interfaces. And, as the business grows and changes, it will be constantly adding new data sources.  As Exler notes, “processing integration or hand off points in such piecemeal approaches represent high risk to data in the chain of possession – from collection points to raw data to data edits to data combination to data warehouse to analytics engine to viewing applications on multiple platforms.” Business growth and future requirements should be analyzed and modeled to make sure cloud engagements will be able “to provide adequate system performance, availability, and scalability to account for the projected business expansion,” he states.

2) Address data quality issues as close to the source as possible.  Because both cloud and big data environments have so many moving parts,  “finding the source of a data problem can be a significant challenge,” Exler warns. “Finding problems upstream in the data flow prevent time-consuming and expensive reprocessing that could be needed should errors be discovered downstream.” Such quality issues have a substantial business cost as well. When data errors are found, it becomes “an expensive company-wide fire drill to correct the data,” he says.
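As a minimal sketch of what catching problems upstream can look like (the field names and rules are hypothetical assumptions), validate each record at the ingestion point and quarantine failures, rather than discovering bad data downstream after expensive reprocessing:

```python
# Sketch of validate-at-the-source ingestion (hypothetical schema and rules).
def validate(record: dict) -> list:
    errors = []
    if not record.get("customer_id"):
        errors.append("missing customer_id")
    if record.get("amount", 0) < 0:
        errors.append("negative amount")
    if "@" not in record.get("email", "@"):
        errors.append("malformed email")
    return errors

clean, quarantined = [], []
for rec in [{"customer_id": "C1", "amount": 42, "email": "a@b.com"},
            {"customer_id": "",   "amount": -5, "email": "bad"}]:
    (quarantined if validate(rec) else clean).append(rec)

print(len(clean), len(quarantined))    # 1 1 -- errors caught before they propagate
```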

3) Build your project management, teamwork and communication skills. Big data and cloud projects involve so many people and components from across the enterprise that they require coordination and interaction between various specialists, subject matter experts, vendors, and outsourcing partners. “This coordination is not simple,” Exler warns. “Each group involved likely has different sets of terminology, work habits, communications methods, and documentation standards. Each group also has different priorities; oftentimes such new projects are delegated to lower priority for supporting groups.” Project managers must be leaders and understand the value of open and regular communications.
