What is a Data Ready Enterprise?
One that is able to treat data as a strategic asset throughout the organization. One that invests in its enterprise architecture to build a foundation of data ‘readiness’. One that incorporates data into its culture and uses it to drive high performance teams.
Data Drives Profit
In a recent research study, ‘Data drives profit in a data-ready enterprise’, more than half of respondents agreed that ‘An effective data strategy can be a competitive advantage for companies’. Yet less than 25% of those same respondents stated ‘Our data management is good enough to satisfy our current needs’.
So we know that data is important – yet are we challenged in creating a data ready culture? Are we really only mediocre at best when it comes to managing data? With digital transformation initiatives in full swing (or behind us for those movers and shakers) and our key business processes automated, how do we pivot from focusing on the application to focusing on the data? It starts with defining business strategies at the top and building, throughout the organization, the capabilities necessary to become data ready.
What are the unique business strategies and capabilities required to be data-ready?
In most enterprises, there is a function that focuses specifically on corporate strategy, typically as part of the C-Suite. In order to transform into the data ready enterprise, this function will need to define policies, corporate goals and initiatives that focus the rest of the organization on adopting data management best practices and driving necessary change. This may even include creating a dedicated organization for data management capabilities with its own leader, such as a Chief Data Officer, who may not necessarily report to the CIO.
In order to make key business decisions based on a data-ready framework, organizations should leverage their enterprise architecture teams to identify data assets that are required to support each business function’s needs. Once those assets have been identified, key capabilities – such as data quality, data integration, mastering data, and data governance – need to be developed and matured. Rather than boiling the ocean, start with a key business initiative that would benefit the most from a focus on the data itself.
How do you get started?
If your organization is just starting on its transformational journey to becoming a data ready enterprise, focus on one or two key business initiatives that will significantly benefit from a data ready framework. For example, most Informatica customers start with one business initiative, such as ‘Total Customer Experience’ or ‘Next Generation Analytics’, to build their data ready organizational capabilities. By leveraging the enterprise architecture team, identify common requirements and technologies that can be leveraged across multiple initiatives, and focus on quick wins. Here is where an Intelligent Data Platform approach can offer significant value over point solutions or projects (deploy once, leverage everywhere).
If you are at the very beginning just trying to get support for why this transformation is critical for your business, download the ‘Data drives profit in the data ready enterprise’ ebook. If your executive management team agrees and is looking for how to get started, download the ‘How to organize the data ready enterprise’ ebook.
It’s time to get ready – data-ready!
The Chief Information Security Officer (CISO) and the Chief Risk Officer (CRO) generally speak in different languages. One speaks about how to secure an organization and its assets. The other speaks about the potential of losing something of value. One area where they find common ground is in the shared conversation of the Cost versus Risk of a data breach.
A data breach costs a US organization on average $201 per stolen record. The risk of a data breach is typically expressed as a score, say on a scale of 1 to 10, indicating how exposed your organization is. The cost of implementing security measures and controls varies with the level of risk an organization is willing to accept.
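The cost-versus-risk conversation above can be framed as a simple expected-loss calculation. This is a minimal sketch; the record count and breach likelihood are invented inputs, and only the $201 per-record figure comes from the article.

```python
# Hypothetical expected-loss framing of the Cost vs Risk conversation.
# Only the per-record cost is from the article; the other inputs are
# illustrative assumptions.
COST_PER_RECORD = 201  # USD, average cost per stolen record (US)

def expected_loss(records_at_risk: int, annual_breach_probability: float) -> float:
    """Expected annual loss = records x cost per record x likelihood."""
    return records_at_risk * COST_PER_RECORD * annual_breach_probability

# e.g. 500,000 customer records with an assumed 5% annual breach likelihood
print(f"${expected_loss(500_000, 0.05):,.0f}")  # $5,025,000
```

Numbers like this give the CISO and CRO a shared vocabulary when asking a CFO to fund controls whose cost is below the expected loss they mitigate.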
This is the conversation that needs to be mastered in order to communicate the need for more resources to Chief Financial Officers and the rest of the C-Suite.
As organizations conduct vulnerability assessments of their IT landscape, they get a sense for how at risk their environments and systems are of being breached. Yet, in many cases, these vulnerability tools have significant blind spots when users replicate data to applications and systems that are not within reach of their assessment tools. This requires the addition of a data-centric approach to classifying, categorizing and measuring the value of data and its potential risk.
In the Informatica Secure@Source launch event, Larry Ponemon of the Ponemon Institute describes during a panel session how valuable it would be if there were a tool that could tell you ‘here is the risk of the data’ and ‘here is the cost of that risk to the organization’. That is exactly what Secure@Source was designed to accomplish.
Not surprisingly, security teams are consistently under-resourced. They are constantly responding to alerts and intelligence feeds, which fuels the cry for more resources. Yet if these teams could see where data is most at risk, and could focus their energy on prioritized assets that, if secured at the source, would eliminate downstream risk, they might find their world less overwhelming.
Throughout the RSA conference this week, there was a steady drumbeat calling out the need for building a security mindset in an organization. Many breaches are caused by people making mistakes in our workplaces. How can you stop breaches caused by the human factor? It is all about increasing awareness and actively making an effort to build security mindedness into everything we do.
During one RSA breakout session entitled How One Smart Phone Picture Can Take Down Your Company, Dr. Larry Ponemon, Founder of the Ponemon Institute, described how a hacker really only needs one piece of valuable information to unlock a large-scale data breach, which can be obtained by snapping a picture of log-in credentials on a screen or by other low-tech means. In his research report, Visual Hacking Experimental Study, he cites how ‘certain situations are more risky. Documents on vacant desks and data visible on computer screens are most likely to be hacked.’ This research report was sponsored by 3M – which makes sense, since they sell privacy screens for computers, iPads and iPhones.
What is really needed is to make teams aware of the risk and vulnerabilities through education and training, through policy definitions and enforcement, and through constant reminders from leadership.
One startup company, Apozy, took a novel approach using gamification to incentivize employees to incorporate best practices in their day to day routines. Informatica’s own CISO, Bill Burns, is using an internal competition between departments to motivate management to incorporate best practices.
While we continue to invest in technology to automate the implementation and enforcement of policies through controls, we also need to look at who we are hiring and incorporate the security conversation into the onboarding process.
When recruiting, look to colleges and universities that offer courses and degrees in cybersecurity. (Check out the Ponemon Institute 2014 Best Schools for Cybersecurity). Arnold Federbaum, Adjunct Professor of Cyber Security at NYU School of Engineering, discusses Data Security Culture and Higher Education in a panel video recorded during the Informatica Secure@Source product launch.
Even the IRS has great training videos and podcasts to build awareness on potential risks of identity theft.
As we continue to see more data breach related news, it will be important to emphasize security mindedness in an organization’s culture, to build policies that make sense and carry the appropriate level of enforcement, and, if it is critical to your business, to prioritize hiring people with a formal education and background in cybersecurity.
In an RSA Conference session entitled IAPP: Engineering Privacy: Why Security Isn’t Enough, Sagi Leizerov, E&Y’s Privacy Practice leader, began with a plea:
‘We need effective ways to bring together privacy and security controls in an automated way.’
Privacy professionals, according to Sagi, essentially need help in determining the use of information – which is a foundational definition of data privacy. Security tools and controls can provide the information necessary to perform that type of investigation conducted by privacy officers. Yet as data proliferates, are the existing security tools truly up for the task?
In other sessions, such as A Privacy Primer for Security Officers, many speakers argued that Data Security projects get prioritized as a result of the need to comply with Data Privacy policies and legislation.
We are in an age where data proliferation is one of the major sources of pain for both Chief Information Security Officers and Chief Privacy and Risk Officers (CPO/CRO). Business systems that were designed to automate key business processes store sensitive and private information and are primary sources of data for business analytics. As more business users want to access that data to understand the state of their businesses, data naturally proliferates: into spreadsheets and presentations, emailed in and out of the corporate network, and potentially stored in a public cloud storage offering.
Even though the original intention for using this information was likely all above board, one security violation could potentially open up a can of worms for nefarious characters to take advantage of this data with malicious intent. Jeff Northrop, the CTO of the International Association of Privacy Professionals (IAPP), suggests we need to close the gap between security and privacy in a panel discussion with Larry Ponemon, founder of the Ponemon Institute.
Sagi concluded his session by stating ‘Be a voice of change in your organization. Pilot products, be courageous, give new ideas a chance.’ In the recent launch of Informatica Secure@Source, we discuss the need for more alignment between security and privacy teams and the industry seems to agree. Congratulations to the Informatica Secure@Source development team for their recent announcement of winning Gold Medal in the New Product and Service Category at the Info Security Products Guide 2015 Global Excellence Awards!
For more on the importance of Data Security Intelligence in Privacy, watch Larry Ponemon, Founder of the Ponemon Institute and Jeff Northrop, CTO IAPP discuss this topic with Arnold Federbaum, former CISO and Adjunct Professor, NYU, and Linda Hewlett, Sr Enterprise Security Architect, Santander Holdings USA.
This week at the RSA conference, Informatica’s CISO Bill Burns presented to a packed room of security professionals, coaching new and aspiring CISOs on which battles to fight by changing their frame of reference. This was, in my opinion, one of the most useful sessions of the day. Bill’s practical advice and insights made a lot of sense. Here are the top ideas I took away from the presentation.
The role of the CISO, at the end of the day, is to raise the bar of an organization’s security posture and leave it in a better place than when they arrived. With this as the context of his advice, he continued to review frames of reference a CISO should have when fighting for budget, resources, and mindshare.
Risk vs Threat
Focus on what you can control. You don’t know when the next zero day will be. You can’t predict when an attack will happen – but you prepare. Reduce the impact in the event of an attack. Conduct vulnerability assessments and change the conversation to things you can do.
Data vs Opinion
Use a data-driven approach to drive fact-based conversations. Use the scientific method to propose a hypothesis, experiment, conduct A/B tests, measure results, and prove/disprove your hypothesis. Make decisions to improve security based on the data and repeat. For example, test which message resonates with your end users. Send two emails with a security message: one that focuses on compliance and another that focuses on best practices that are simply the right thing to do. See which email users respond to, and use that message.
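The email experiment above is a classic two-proportion comparison. Here is a minimal sketch of how the result could be evaluated; the response counts are invented for illustration, and the 1.96 cutoff corresponds to the usual 95% confidence level.

```python
# Sketch of evaluating the A/B email test with a two-proportion z-test.
# Counts are hypothetical; only the experiment design is from the article.
from math import sqrt

def z_score(r_a: int, n_a: int, r_b: int, n_b: int) -> float:
    """Two-proportion z-test: is the difference in response rates real?"""
    p_a, p_b = r_a / n_a, r_b / n_b
    p = (r_a + r_b) / (n_a + n_b)                 # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error
    return (p_a - p_b) / se

# Variant A: compliance framing; Variant B: "right thing to do" framing
z = z_score(120, 1000, 165, 1000)
print("B wins" if z < -1.96 else "A wins" if z > 1.96 else "no clear winner")
```

The point of the ritual is that the next security-awareness campaign goes out with the message that measurably worked, not the one someone preferred.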
Relationships vs Transactions
Build relationships with your peers inside and outside the organization: take them out to lunch and ask them about their business. Remove subjectivity and opinions from your dialogue by leveraging 3rd party data and information from peers. For example, leverage your relationships and knowledge bases outside your organization to collect input on salaries, budgets, product reviews, successful training programs, feedback and your own sanity. Use that as part of your dialogue with your internal constituents to increase your relevance to their world while avoiding being viewed as transactional.
Business Impact vs Disruption
Speak to the business impact. Security can be a competitive advantage and it is a ‘must do’. Talk about the potential threat by looking at what happened to competitors and ask, what if that happened here? How would it disrupt our business? And have an answer at the ready, ‘My analysis shows that we could improve here versus there’. Connect the dots for the business.
Systems and Programs vs Tasks
Looking at all of the tasks that need to be completed can be daunting. Rather than focusing on the list of patches that need to be applied (you have to do that anyway), focus on the configuration management process and measure process improvements. Measure things like time to closure rather than the number of tasks.
For more information on Bill Burns’ recommendations and presentation, visit his session link.
To hear more about the changing role of the CISO, watch Larry Ponemon, Founder of the Ponemon Institute and Jeff Northrop, CTO IAPP discuss this topic with Arnold Federbaum, former CISO and Adjunct Professor, NYU, and Linda Hewlett, Sr Enterprise Security Architect, Santander Holdings USA.
Informatica announced Secure@Source last week, unveiling the industry’s first data security intelligence offering. At a time when ‘not knowing where sensitive and confidential data reside’ has, for two years in a row, been the number one thing that keeps security professionals up at night, according to The Ponemon Institute, the timing seems right for a capability, such as Data Security Intelligence, that gives line of sight to obscured threats.
Neuralytix conducted market research, entitled The Future State of Data Security Intelligence, where they define data security intelligence (DSI) as a framework for understanding the risk of sensitive or confidential data and recommending the optimal set of controls to mitigate that risk. DSI is comprised of technology that provides the definition, classification, discovery, and assessment phases of a data-centric security approach. They state:
By deploying data security intelligence in combination with data security controls, enterprises can gain active insight into where risks exist and proactively set controls to mitigate the impact in the event of a data breach.
The Enterprise Strategy Group further commented in the report, “Data‐centric Security: A New Information Security Perimeter”, authored by industry expert, Jon Oltsik:
To address modern threats and IT mobility, CISOs must adopt two new security perimeters around identity attributes and data-centric security. In this regard, sensitive data must be continuously monitored for situational awareness and risk management.
This launch precedes the security industry’s equivalent of the NFL’s Super Bowl: the RSA Conference, where the world talks security. Informatica will be there, debuting its first Data Security Intelligence offering, Secure@Source. The team should be so proud – this is by far one of the coolest products I have had the opportunity to be a part of. Here is a brief blurb on what Secure@Source is and does:
Secure@Source discovers, analyzes and visualizes data relationships, proliferation and sensitivity, detailing data risks and vulnerabilities so that data protection and monitoring can be focused where they secure data from external breaches and insider abuse. Secure@Source leverages proven data integration and quality capabilities to provide integrated views of data, independent of platform, across legacy, cloud, big data and mobile environments.
Secure@Source provides granular detail on what data has value, where the data resides, how it traverses the enterprise and how it should be protected. Informatica leverages market leading technology for data discovery and profiling, protection and retirement, and innovative analysis and visualizations for monitoring data security in real-time.
At a conference where the world talks security, I’m looking forward to engaging in conversations with you about getting smarter about Data Security Intelligence and eliminating blind spots. See you at the venue April 20-24, South Hall, Booth No. 2626.
Original article is posted at techcrunch.com
It’s probably no surprise to the security professional community that once again, identity theft is among the IRS’s Dirty Dozen tax scams. Criminals use stolen Social Security numbers and other personally identifiable information to file tax claims illegally, deposit the tax refunds to rechargeable debit cards, and vanish before the average citizen gets around to filing.
The IRS publishes its “Dirty Dozen” list to alert filers to the worst tax scams, and identity theft has continually topped that list since 2011. In 2012, the IRS implemented a preventive measure to catch fraud prior to actually issuing refunds, and issued more than 2,400 enforcement actions against identity thieves. With an aggressive campaign to fight identity theft, the IRS saved over $1.4 billion in 2011 and over $63 billion since October 2014.
That’s great progress – but consider that of the 117 million taxpayers who filed electronically in 2014, 80 million received an average of $2,851 directly deposited into their bank accounts: more than $228 billion changing hands electronically. The pessimist in me has to believe that cyber criminals are already plotting how to nab more Social Security numbers and e-filing logins to tap into that big pot of gold.
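The back-of-envelope arithmetic behind that total is easy to check, using only the figures cited above:

```python
# Quick sanity check of the electronic-refund figures cited above.
filers_direct_deposit = 80_000_000
avg_refund = 2_851  # USD per refund

total = filers_direct_deposit * avg_refund
print(f"${total / 1e9:.1f}B")  # roughly $228 billion moved electronically
```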
So where are criminals getting the data to begin with? Any organization that has employees and a human resources department collects and possibly stores Social Security numbers, birthdays, addresses and income either on-premises or in a cloud HR application. This information is everything a criminal would need to fraudulently file taxes. Any time a common business process is digitally transformed, or moved to the cloud, the potential risk of exposure increases.
As the healthcare industry transforms to electronic health and patient records, another abundant source of Social Security numbers and personally identifiable information increases the surface area of opportunity. When you look at the abundance of Social Security numbers stolen in major data breaches, such as the case with Anthem, you start to connect the dots.
One of my favorite dynamic infographics comes from the website Information is Beautiful, entitled ‘World’s Biggest Data Breaches.’ When you filter the data based on number of records versus sensitivity, the size of the bubbles indicates the severity. Even though the sensitivity score appears to be somewhat arbitrary, it does provide one way to assess the severity based on the type of information that was breached:
| Type of data breached | Sensitivity score |
| --- | --- |
| Just email address/online information | 1 |
| Credit card information | 300 |
| Email password/health records | 4,000 |
| Full bank account details | 50,000 |
What would be an interesting addition is how many records were sold on the black market that resulted in tax or insurance fraud.
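One plausible reading of the infographic’s bubble sizing is records multiplied by sensitivity score; that weighting is my assumption for illustration, though the scores themselves come from the table above.

```python
# Hypothetical severity weighting: records x sensitivity score.
# Scores are from the infographic's table; the multiplication is an
# assumption about how such a chart could size its bubbles.
SENSITIVITY = {
    "email/online info": 1,
    "credit card": 300,
    "email password/health records": 4000,
    "full bank account details": 50000,
}

def severity(records: int, data_type: str) -> int:
    return records * SENSITIVITY[data_type]

# 1M leaked email addresses vs 10k full bank account records:
print(severity(1_000_000, "email/online info"))       # 1,000,000
print(severity(10_000, "full bank account details"))  # 500,000,000
```

On this weighting the numerically smaller breach is 500 times more severe, which is exactly the intuition the sensitivity score is meant to capture.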
Cyber-security expert Brian Krebs, who was personally impacted by a criminal tax return filing last year, says we will likely see “more phony tax refund claims than last year.” With credentials for TurboTax and H&R Block marketed on black market websites for about 4 cents per identity, it is hard to disagree.
The Ponemon Institute published a survey last year, entitled The State of Data Centric Security. One research finding that sticks out is when security professionals were asked what keeps them up at night, and more than 50 percent said “not knowing where sensitive and confidential data reside.” As we enter full swing into tax season, what should security professionals be thinking about?
Data Security Intelligence promises to be the next big thing that provides a more automated and data-centric view into sensitive data discovery, classification and risk assessment. If you don’t know where the data is or its risk, how can you protect it? Maybe with a little more insight, we can at least reduce the surface area of exposed sensitive data.
The International Association of Privacy Professionals (IAPP) held its Global Privacy Summit in Washington DC March 4-6. The topic of Data-Centric Security was presented by Informatica’s Robert Shields, Product Marketing, Data Security Group. Here is a quick recap of the conversation in case you missed it.
In an age of the massive data breach, there is agreement between security and privacy professionals that we must redefine privacy policies and controls. What we are doing is just not working effectively. Network, Host and Endpoint Security needs to be strengthened by Data-Centric Security approaches. The focus needs to be on using data security controls such that they can be enforced no matter where sensitive or confidential data proliferates.
Data-Centric Security does not mean ‘encrypt it all’. That is completely impractical and introduces unnecessary cost and complexities. The approach can be simplified into four categorical steps: 1. Classify it, 2. Find it, 3. Assess its risk, 4. Protect it.
1. Classify it.
The idea behind Data-Centric Security is that, based on policy, an enterprise defines its classifications of what is sensitive and confidential and then applies controls to that set of data. For example, if the only classified and sensitive data that you store in your enterprise is employee data, then focus on just employee data. No need to boil the ocean in that case. However, if you have several data domains of sensitive and confidential data, you need to know where they reside and assess their risk to help prioritize your moves.
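At its simplest, classification can start with pattern matching against known sensitive-data formats. This is a heavily simplified sketch; real products use dictionaries, metadata and much richer techniques, and the patterns below are illustrative only.

```python
# A minimal sketch of the "classify it" step: tag values that match
# known sensitive-data patterns. Simplified for illustration.
import re
from typing import Optional

PATTERNS = {
    "US_SSN": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "CREDIT_CARD": re.compile(r"^\d{4}([ -]?\d{4}){3}$"),
    "EMAIL": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
}

def classify(value: str) -> Optional[str]:
    """Return the first sensitive-data class a value matches, if any."""
    for label, pattern in PATTERNS.items():
        if pattern.match(value):
            return label
    return None

print(classify("123-45-6789"))          # US_SSN
print(classify("4111 1111 1111 1111"))  # CREDIT_CARD
```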
2. Find it.
Discover where in your enterprise sensitive and classified data reside. This means looking at how data is proliferating from its source to multiple targets – and not just copies made for backup and disaster recovery purposes.
For example, if you have a data warehouse where sensitive and confidential data is being loaded through a transformation process, the data is still considered classified or sensitive, but its shape or form may have changed. You also need to know when data leaves the firewall and becomes viewable on a mobile device or accessible by a remote team, such as offshore development and support teams.
3. Assess its risk.
Next, you need to be able to assess the data’s risk based on the number of users who may have access to it, where those users are physically located, and any security controls that may already exist. If large volumes of sensitive data are potentially exposed to a large population in another country, you might consider that data more at risk than a small number of encrypted records residing in your protected data center. That helps you prioritize where to start implementing controls to maximize the return on your efforts.
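A risk score along those lines could combine record volume, breadth of access, location and existing controls. The weighting below is entirely an assumption for illustration, not a published formula.

```python
# Illustrative risk scoring for data stores, combining the factors
# described above. The weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class DataStore:
    name: str
    sensitive_records: int
    users_with_access: int
    offshore_access: bool
    encrypted: bool

def risk_score(s: DataStore) -> float:
    score = float(s.sensitive_records * s.users_with_access)
    if s.offshore_access:
        score *= 2      # wider physical exposure raises risk
    if s.encrypted:
        score *= 0.1    # an existing control mitigates risk
    return score

stores = [
    DataStore("prod warehouse", 5_000_000, 200, True, False),
    DataStore("HR system", 50_000, 10, False, True),
]
for s in sorted(stores, key=risk_score, reverse=True):
    print(s.name, f"{risk_score(s):,.0f}")
```

Sorting by score gives exactly the prioritized list of where to implement controls first.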
4. Protect it.
Once you have a sense of prioritization, you can then apply the appropriate, cost-effective controls that align with each asset’s level of risk. Place monitoring tools around the sensitive data and detect when usage patterns become unusual. Baseline normal user behavior, then raise an alert recommending a change to the applied controls when behavior deviates.
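The monitoring idea above can be sketched as baselining each user’s normal access volume and flagging large deviations. The data and the three-sigma threshold are invented for illustration.

```python
# Minimal anomaly sketch: alert when today's access count strays far
# from a user's historical baseline. Data and threshold are invented.
from statistics import mean, stdev

def is_anomalous(history: list, today: int, sigmas: float = 3.0) -> bool:
    """Alert when today exceeds the baseline by more than `sigmas` std devs."""
    mu, sd = mean(history), stdev(history)
    return today > mu + sigmas * max(sd, 1)  # floor sd to avoid zero-variance noise

normal_days = [40, 55, 48, 52, 45, 50, 47]  # records accessed per day
print(is_anomalous(normal_days, 60))    # within normal variation
print(is_anomalous(normal_days, 500))   # bulk export, raise an alert
```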
In a world where policies are defined and enforced based on data privacy regulations and standards, it only makes sense to align the right intelligence and controls to ensure proper enforcement. In reality these four steps are complex and they do require cross-functional teams to come together and agree on a strategy.
Who remembers their first game of Pong? Celebrating more than 40 years of innovation, gaming is no longer limited to monochromatic screens and dedicated, proprietary platforms. The PC gaming industry is expected to exceed $35bn by 2018. The phone and handheld game market is estimated to reach $34bn within 5 years and is quickly closing the gap. According to EEDAR, 2014 recorded more than 141 million mobile gamers in North America alone, generating $4.6B in revenue for mobile game vendors.
This growth has spawned a growing list of conferences specifically targeting gamers, game developers, the gaming industry and, more recently, gaming analytics! This past weekend in Boston, for example, was PAX East, where people of all ages and walks of life played games on consoles, PCs, handhelds, and good old fashioned boards. With my own children in attendance, the debate of commercial games versus indie favorites, such as Minecraft, dominates the dinner table.
Online games are where people congregate, collaborate, and generate petabytes of data daily. With the added bonus of geospatial data from smart phones, the opportunity for more advanced analytics only grows. Some of the basic metrics that determine whether a game is successful, according to Ninja Metrics, include:
- New Users, Daily Active Users, Retention
- Revenue per user
- Session length and number of sessions per user
Additionally, they provide predictive analytics, customer lifetime value, and cohort analysis. If this is your gig, there’s a conference for that as well – the Gaming Analytics Summit!
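Two of the basic metrics listed above, daily active users and day-1 retention, can be computed from a session event log. The schema and the toy data below are invented for illustration.

```python
# Sketch of computing DAU and day-1 retention from a toy event log.
# The (user_id, session_date) schema is an invented example.
from collections import defaultdict
from datetime import date, timedelta

events = [
    ("u1", date(2015, 4, 1)), ("u2", date(2015, 4, 1)),
    ("u1", date(2015, 4, 2)), ("u3", date(2015, 4, 2)),
]

def daily_active_users(events):
    dau = defaultdict(set)
    for user, day in events:
        dau[day].add(user)
    return {day: len(users) for day, users in dau.items()}

def day1_retention(events, cohort_day):
    """Share of users first seen on cohort_day who return the next day."""
    first_seen = {}
    for user, day in sorted(events, key=lambda e: e[1]):
        first_seen.setdefault(user, day)
    cohort = {u for u, d in first_seen.items() if d == cohort_day}
    next_day = {u for u, d in events if d == cohort_day + timedelta(days=1)}
    return len(cohort & next_day) / len(cohort) if cohort else 0.0

print(daily_active_users(events))                # 2 active users each day
print(day1_retention(events, date(2015, 4, 1)))  # only u1 returned: 0.5
```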
The focus of the Game Developers Conference, recently held in San Francisco, has shifted over the years from computer games to new gaming platforms that incorporate mobile, smartphone, and online components. Producing a successful game requires the following:
- The ability to connect to a variety of devices and platforms
- Using data to drive decisions and improve the user experience
- Ensuring privacy laws are adhered to
Developers are able to quickly access online gaming data and tweak or change their sprites’ attributes dynamically to maximize player experience.
When you look at what is happening in the gaming industry, you can start to see why colleges and universities like my own alma mater, WPI, now offer a computer science degree in Interactive Media and Game Design. The IMGD curriculum includes heavy coursework in data science, game theory, artificial intelligence and storyboarding. When I asked a WPI IMGD student what they were working on, they described mapping out decision trees that dictate what adversary to pop up based on the player’s history (sounds a lot like what we do in digital marketing…).
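A toy version of that decision logic might look like the following; the rules and attributes are entirely invented, not the student’s actual project.

```python
# Invented illustration of history-driven adversary selection, in the
# spirit of the decision trees described above.
def pick_adversary(history: dict) -> str:
    if history["deaths"] > history["wins"]:
        return "easy_grunt"          # struggling player: ease off
    if history["favorite_weapon"] == "ranged":
        return "shielded_melee"      # counter the player's go-to style
    return "ranged_sniper"

print(pick_adversary({"deaths": 8, "wins": 3, "favorite_weapon": "ranged"}))
```

Swap player history for customer history and adversaries for offers, and the resemblance to digital marketing is hard to miss.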
As we start to look at the Millennial Generation entering into the workforce, maybe we should look at our own recruiting efforts and consider game designers. They are masters in analytics and creativity with an appreciation for the importance of great data. Combining the magic and the math makes a great gaming experience. Who wouldn’t want that for their customers?
Informatica, over the last two years, successfully transformed from running 80% of its application portfolio on premises to 80% in the cloud. Success was based on two key criteria:
- Ensuring the SaaS-based processes are integrated with no disruption
- Ensuring data in the cloud remains available and accessible for analytics
With industry analysts predicting that the majority of new application deployments will be SaaS-based by 2017, the requirement of having connected data should not be negotiable. It is a must have. Most SaaS applications ensure businesses are able to keep processes integrated using connected and shared data through application programming interfaces (APIs).
If you are a consumer of SaaS applications, you probably know the importance of having clean, connected and secure data from the cloud. The promise of SaaS is improved agility. When data is not easily accessible, that promise is broken. With the plethora of options available in the SaaS ecosystem and marketplace, not having clean, connected and safe data is a compelling event for switching SaaS vendors.
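“Connected and shared data” ultimately means records from different SaaS apps can be joined on a common key. This is a deliberately tiny sketch; the systems, field names and data are made up for illustration.

```python
# Toy illustration of connected data across two hypothetical SaaS apps:
# join CRM and billing records on a shared customer key.
crm = [{"customer_id": 1, "name": "Acme"}, {"customer_id": 2, "name": "Globex"}]
billing = [{"customer_id": 1, "mrr": 1200}, {"customer_id": 2, "mrr": 450}]

def join_on(key, left, right):
    """Inner-join two record lists on a shared key field."""
    index = {row[key]: row for row in right}
    return [{**row, **index[row[key]]} for row in left if row[key] in index]

connected = join_on("customer_id", crm, billing)
print(connected[0])  # {'customer_id': 1, 'name': 'Acme', 'mrr': 1200}
```

When a vendor’s API exposes its data cleanly, this kind of join is trivial; when it doesn’t, the promise of SaaS agility breaks down exactly as described above.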
If you are in the SaaS application development industry, you probably know that building these APIs and connectors is a critical requirement for success. However, how do you decide which applications you should build connectors for when the ecosystem keeps changing? Investment in developing connectors and interfaces consumes resources and competes with developing competitive and differentiating features.
This week, Informatica launched its inaugural DataMania event in San Francisco, where the leading topic was SaaS application and data integration. Speakers from AWS, Adobe, App Dynamics, Dun & Bradstreet, and Marketo – to name a few – contributed to the discussion and confirmed that we are entering the era of the Data Ready Enterprise. Also during the event, Informatica announced the Connect-a-thon, a hackathon-like event where SaaS vendors can get connected to hundreds of cloud and on-premises apps.
Without a doubt, transitioning to a cloud and SaaS-based application architecture can only be successful if the applications are easily connectable with shared data. Here at Informatica, this was absolutely the case. Whether you are in the business of building SaaS applications or a consumer of them, consider the benefits of using a standard library of connectors, such as what Informatica Cloud offers, so you can focus your time and energy on innovation and the more strategic parts of your business.