Data Theft – Would You Pay The Ransom?
At Deloitte’s Experience Analytics conference this year, there was an interesting and entertaining lab focusing on how to prepare for, and respond to, a data theft. The format was a simulation of a data breach scenario: we were asked to imagine that we worked for a fictitious European airline that was being blackmailed. The blackmailers were threatening to publish personal information stolen from this fictitious airline if we didn’t pay a ransom of £250k. Attendees were divided into groups, and everyone participated in lively debate and discussion while choosing their team’s actions and responses. Whilst responses differed between groups, there was a clear common thread:
During the simulation, all teams wanted to delay almost all decisions and actions until they understood what data had been taken.
The simulation highlighted the importance of organisations having predefined procedures and policies for responding to these situations. In each group, there were differences of opinion on all topics pertaining to the challenge: Do we pay the ransom? Who are the stakeholders? When do we inform the regulators? When do we make a press announcement? With the clock ticking and only 72 hours to report the breach to the regulators, time is of the essence. The Deloitte lab highlighted that an actual data breach event would be a horrible time to discover an internal difference of opinion on which actions to take.
Based on my experience in the lab, even with data breach response policies and procedures in place, people are reluctant to take any specific action until they have a better understanding of what data has been taken. Typical questions people wanted answered were:
- What data is it? (Names, credit card numbers, email addresses, travel plans?)
- Where did it come from?
- Is it actually ours?
- How did the thieves get the data – and are they still in the system?
- Who exactly has been affected?
Whilst the thieves will probably answer the first question, it is the second that may be a sticking point in a real scenario. Given the highly complex IT environments of today’s enterprises, data is spread across many different systems, both on premises and in the cloud. Most industries are in the midst of a data-driven digital transformation, which increases the pressure to innovate rapidly, often using personal data to create more meaningful experiences for direct and indirect customers. Business units often spin up (cloud-based) analytics environments to answer specific questions or test theories. So today’s complex IT landscape is typically also a very dynamic one.
Finding where the leaked data came from can be a huge task, yet it must be done rapidly if you are to make informed choices in your response to a breach.
Informatica’s Secure@Source is designed to provide complete visibility into personal and sensitive data, with data discovery and classification across the enterprise. It also monitors data access and movement, and can identify suspicious behaviour. Secure@Source can significantly reduce the time it takes your organisation to answer the crucial questions: where did the data come from, how did the thieves get it, and do they still have access to the system?
In the lab we were told the source of the breach quickly: a test system whose logon credentials developers had inadvertently leaked via an online collaborative development portal. This reflects a common challenge that organisations need to overcome: developers want to use live data to test their systems, but test systems rarely enjoy the same level of protection as production systems. Data breaches frequently originate from non-production systems. Of course, this is not the only source of breaches; organisations should also consider access to production systems holding personal data. Data masking can be used as a prevention tool for data breaches. Dynamic and persistent data masking, combined with encryption, protect personal and sensitive data from unauthorised access, whether for application users, business intelligence, application testing, or outsourcing.
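To make the idea of persistent masking concrete, here is a minimal sketch of what masking a booking record might look like before it is copied into a test environment. This is purely illustrative and not how Secure@Source or any Informatica product works internally; the field names, the salted-hash pseudonymisation, and the card-number formatting are all assumptions made up for the example.

```python
import hashlib

def mask_value(value: str, salt: str = "demo-salt") -> str:
    """Deterministically pseudonymise a value: the same input always
    yields the same token, so joins across masked tables still work."""
    digest = hashlib.sha256((salt + value).encode()).hexdigest()[:8]
    return f"user_{digest}"

def mask_record(record: dict) -> dict:
    """Persistently mask the personal fields of a (hypothetical)
    booking record before it leaves the production environment."""
    sensitive = {"name", "email"}
    masked = {}
    for key, value in record.items():
        if key in sensitive:
            masked[key] = mask_value(value)
        elif key == "card_number":
            # Preserve the format and last four digits, hide the rest
            masked[key] = "****-****-****-" + value[-4:]
        else:
            # Non-personal fields pass through unchanged
            masked[key] = value
    return masked

booking = {
    "name": "Alice Example",
    "email": "alice@example.com",
    "card_number": "4111-1111-1111-1234",
    "route": "LHR-JFK",
}
print(mask_record(booking))
```

A masked copy like this keeps realistic shape and referential consistency for developers and testers, so a leak of the test system’s credentials (as in the lab scenario) would expose tokens rather than real passenger data.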
The lab was a fantastic experience for all involved. It highlighted the need to prepare for a scenario that requires you to take many decisions, involving internal and external stakeholders, in a short period of time. Our chairperson for the meeting – Tim Johnson, a partner in Deloitte’s reputation, crisis and resilience practice – added a lot of insight from his many years of experience. The key insight I took from Tim was the one thing most teams missed:
In a time of crisis, we should focus on what could be the best outcome from the situation – all decisions should work towards achieving this outcome.
In the case of our airline simulation, possible desired outcomes could be: avoiding regulatory fines, retaining confidence in the airline among our customer base, keeping bookings high despite the leak, or demonstrating that we have our customers’ best interests at heart. Different target outcomes will drive different behaviours or sequences of events.
As you can imagine, there are no right answers per se. Tim shared that in his experience, responses and actions really will depend on the actual situation. But back to the original question of this blog – in a similar situation, would you pay the ransom? In our simulation, most groups chose not to, citing reasons of principle or a lack of trust in blackmailers who have an obvious criminal (and hence dishonest) tendency. The decision was not unanimous in any group, and one group chose to pay the ransom because it wasn’t a large amount for an airline, and it might just work. It didn’t work. The bad guys leaked the data regardless of whether a group paid or not. Does this reflect reality?
Again, there is no definitive answer, as actual situations will vary. All the more reason to have data theft policies and procedures in place before any crisis, along with the ability to rapidly determine the source, cause, and scope of a data theft, so that you can make informed decisions with your specific response goals in mind.