At the recent Bosch Connected World conference in Berlin, Stefan Bungart, Software Leader Europe at GE, presented a very interesting keynote, “How Data Eats the World”, which I assume is a nod to Marc Andreessen’s statement that “software is eating the world”. One of the key points he addressed was that generating actionable insight from Big Data, securely and in real time, at every level from local to global, and at an industrial scale, will be the key to survival. Companies that do not invest in data now will eventually end up like the consumer companies that missed the Internet: by then it will be too late.
As software and the value of data become a larger part of the business value chain, the lines between industries are blurring, or as GE’s Chairman and CEO Jeff Immelt once put it: “If you went to bed last night as an industrial company, you’re going to wake up today as a software and analytics company.” This is true not only for industrial companies, but for many companies that produce “things”: cars, jet engines, boats, trains, lawn mowers, toothbrushes, nut-runners, computers, network equipment, and so on. GE, Bosch, Technicolor and Cisco are just a few of the industrial companies that offer an Internet of Things (IoT) platform, and by doing so they enter the domain of companies such as Amazon (AWS) and Google. Meanwhile, as Google and Apple move into new areas such as manufacturing cars and watches and offering insurance, service becomes the key differentiator. The best service offerings will be contingent upon the best analytics, and the best analytics require a complete and reliable data platform. Only companies that can leverage data will be able to compete and thrive in the future.
The idea behind this “servitization” is that instead of selling assets, companies offer services that utilize those assets. For example, Siemens offers hospitals a body-scanning service instead of selling them the MRI scanner, and Philips sells lighting services to cities and large companies, not light bulbs. These business models give suppliers a strong incentive to minimize disruption and repairs, because downtime now costs them money. It also becomes attractive to put as much device functionality as possible in software, so that upgrades and adjustments can be made without replacing physical components. All of this is made possible by the fact that the devices are connected, generate data, and can be monitored and managed remotely. The data is used to analyse functionality, power consumption and usage, but can also be used to predict malfunctions, plan maintenance proactively, and so on.
So what impact does this have on data and on IT? First of all, the volumes are immense. Whereas the total global volume of, for example, Twitter messages is around 150GB, ONE gas turbine with around 200 sensors generates close to 600GB per day! Yet according to IDC, only 3% of potentially useful data is tagged and less than 1% is currently analysed. Secondly, the structure of the data is not always straightforward: even similar devices can produce different message content because they may be running different software versions. This has an impact on backend processing and on the reliability of any analysis of the data.
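To make the software-version problem concrete, here is a minimal sketch of normalising device messages into one canonical shape before analysis. The field names, firmware versions and unit conventions are hypothetical, purely for illustration:

```python
# Sketch: normalising sensor messages from devices running different
# (hypothetical) firmware versions into one canonical record, so the
# backend can analyse them consistently.

def normalise(message: dict) -> dict:
    """Map a raw device message to a canonical record."""
    version = message.get("fw_version", "1.0")
    if version.startswith("1."):
        # assume v1 firmware reports temperature in Fahrenheit under "temp"
        celsius = (message["temp"] - 32) * 5.0 / 9.0
    else:
        # assume v2 firmware already reports Celsius under "temperature_c"
        celsius = message["temperature_c"]
    return {
        "device_id": message["id"],
        "temperature_c": round(celsius, 2),
        "fw_version": version,
    }

old = {"id": "turbine-7", "fw_version": "1.4", "temp": 212.0}
new = {"id": "turbine-7", "fw_version": "2.1", "temperature_c": 100.0}

# Both firmware generations now yield the same canonical reading
assert normalise(old)["temperature_c"] == normalise(new)["temperature_c"] == 100.0
```

In practice this mapping layer grows with every firmware release, which is exactly why schema drift across a fleet of devices affects the reliability of downstream analytics.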
The data also often needs to be put into context with other master data, such as locations or customers, for real-time decision making. This is a non-trivial task. Finally, governance is an aspect that needs top-level support. Questions like “Who owns the data? Who may see or use the data? What data needs to be kept or archived, and for how long?” need to be answered and governed in IoT projects with the same priority as the data in more traditional applications.
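As a rough illustration of that contextualisation step, the sketch below enriches an incoming device reading with master data before a decision is made. The device IDs, customers and fields are invented for the example:

```python
# Sketch: enriching a streaming device reading with master data
# (location, customer) before real-time decision making.
# All identifiers and values here are hypothetical.

MASTER_DATA = {
    "mri-042": {"location": "Hospital A, Berlin", "customer": "Charite"},
    "mri-043": {"location": "Hospital B, Munich", "customer": "Klinikum"},
}

def enrich(reading: dict) -> dict:
    """Merge a raw reading with the master data for its device."""
    context = MASTER_DATA.get(reading["device_id"], {})
    return {**reading, **context}

reading = {"device_id": "mri-042", "coil_temp_c": 4.2}
record = enrich(reading)
assert record["customer"] == "Charite"
assert record["coil_temp_c"] == 4.2
```

The hard part in a real deployment is not the lookup itself but keeping the master data complete, current and consistent across systems, which is why this is a non-trivial task.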
To summarize, managing data and mastering data governance is becoming one of the most important pillars of the companies that will lead the digital age. Companies that fail to do so risk becoming the next Blockbuster or Kodak: companies that didn’t adapt quickly enough. To avoid this, companies need to evaluate a data platform that can support a comprehensive data strategy, one that encompasses scalability, quality, governance, security, ease of use and flexibility, and that lets them choose the most appropriate data processing infrastructure, whether that is on premise, in the cloud, or, most likely, a hybrid of the two.
Google recently announced that it has built a data-after-death tool to allow consumers to decide what happens to their data after they die. The feature applies to Google’s email services, as well as its social networking site, Google Plus. Whilst Google claims the new tool will help users plan their digital afterlife and protect their privacy and security, internet users have expressed concern about what happens to their personal information after their death.
Google’s new tool is essentially the equivalent of a ‘digital will’ – giving an increasingly online population more autonomy to decide what happens to their data after they die. Of course, it’s natural for consumers to be nervous about whose hands their data falls into after they meet their maker. But, in today’s information economy, as we expose more and more personal information across a growing number of places, we have to accept that we’re going to leave a digital footprint behind after we’re gone. It will be up to businesses and consumers to work together in order to ensure this footprint is accessed and protected in an appropriate way.
When it comes to dealing with personal data, Google, as well as any other business that attempts to tackle this issue head on, will need an intelligent approach. As consumers, customers, employees and social networkers, our personal information is spread across various websites and data storage facilities, both in the cloud and on-premise. This data deluge has the potential to overwhelm the businesses that are busy collecting and making sense of it, and that’s why it’s so important they have solid data management and protection policies in place to sift through the masses and derive true data value, whilst ensuring they have the full permission of the consumer.
So if you’re in the business of collecting, managing and storing customer data, what steps would you put in place to ensure that information lives on in the way individuals want it to after they’ve gone?
It was recently reported that millions of tweets, Facebook status updates and personal information taken from eBay, Tripadvisor and Rightmove are to be preserved for the nation by The British Library and four other “legal deposit libraries”. This means they’ll have the right to collect and store everything that is published online through these means in the UK.
With estimates that this could make up to one billion pages a year available for research, there’s no doubt that this move will add to an ever-increasing mountain of digital information. The legal deposit libraries will have quite a task on their hands when it comes to sifting through, managing, monitoring and storing this data in order to accurately report on historical online events.
Of course, where more personal data is involved, the issue of data privacy comes hand in hand. With more tweets, status updates and personal online musings tracked, stored and preserved the pressure will be on the legal deposit libraries to adequately protect that data. Keeping digital records of historical data long term has never really been an issue before as, according to the British Library’s Richard Gibby, the average web page lasts just 75 days.
Undoubtedly, policies and procedures will be put in place in time to govern the documentation of internet history. This will pave the way for a new era of documenting important historical events, one in which digital information has the potential to take centre stage. However, as always, accuracy in reporting will be key, and in order to make this documentation project a success, the British Library will need to put in place solid data management policies to ensure it can sift through the masses and create accurate records, from a variety of data sources, that will stand the test of time.
There’s a lesson to learn here for businesses across every sector – what data management steps are you taking today to ensure that you’re set up to thrive in tomorrow’s world?
The term “big data” has been bandied around so much in recent months that, arguably, it’s lost a lot of meaning in the IT industry. Typically, IT teams have heard the phrase and know they need to be doing something, but that something isn’t being done. As IDC pointed out last year, there is a concerning shortage of trained big data technology experts, and failure to recognise the implications that not managing big data can have on the business is dangerous.

In today’s information economy, as increasingly digital consumers, customers, employees and social networkers, we’re handing over more and more personal information for businesses and third parties to collate, manage and analyse. On top of the growth in digital data, emerging trends such as cloud computing are having a huge impact on the amount of information businesses are required to handle and store on behalf of their customers.

Furthermore, it’s not just the amount of information that’s spiralling out of control: it’s also the way in which it is structured and used. There has been a dramatic rise in the amount of unstructured data, such as photos, videos and social media, which presents businesses with new challenges as to how to collate, handle and analyse it. As a result, information is growing exponentially. Experts now predict a staggering 4300% increase in annual data generation by 2020. Unless businesses put policies in place to manage this wealth of information, it will become worthless, and due to the often extortionate costs of storing it, it will instead end up having a huge impact on the business’ bottom line.

Maxed out data centres

Many businesses have limited resources to invest in physical servers and storage, and so are increasingly looking to data centres to store their information. As a result, data centres across Europe are quickly filling up.
Due to European data retention regulations, which dictate that information is generally stored for longer periods than in other regions such as the US, businesses across Europe have to wait a long time before they can archive their data. For instance, under EU law, telecommunications service and network providers are obliged to retain certain categories of data for a specific period of time (typically between six months and two years) and to make that information available to law enforcement where needed. With this in mind, it’s no surprise that investment in high-performance storage capacity has become a key priority for many.

Time for a clear out

So how can organisations deal with these storage issues? They can upgrade or replace their servers, parting with a lot of capital expenditure to bring in more processing power or more memory for Central Processing Units (CPUs). An alternative is to “spring clean” their information. Smart partitioning allows businesses to spend just one tenth of the amount required to purchase new servers and storage capacity, and to refocus how they organise their information. With smart partitioning capabilities, businesses can get the benefits of archiving even for information that is not yet eligible for archiving under EU retention regulations. Furthermore, application retirement frees up floor space, drives modernisation initiatives, and allows mainframe systems and older platforms to be replaced and legacy data to be migrated to virtual archives.

Before IT professionals go out and buy big data systems, they need to spring clean their information and make room for big data. Poor economic conditions across Europe have stifled innovation for a lot of organisations, as they have been forced to focus on staying alive rather than investing in R&D to improve operational efficiency. They are, therefore, looking for ways to squeeze more out of their already shrinking budgets.
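The retention-driven housekeeping described above can be sketched very simply: split records into those that must stay live and those old enough to move to cheap archive storage. The two-year cutoff and record layout below are illustrative assumptions, not a statement of what any regulation requires:

```python
# Sketch: partitioning records by age against a retention period, so
# data past the (assumed) two-year window can move to archive storage.
from datetime import date, timedelta

RETENTION = timedelta(days=2 * 365)  # illustrative two-year cutoff

def partition(records, today=None):
    """Split records into (live, archive) by age."""
    today = today or date.today()
    live, archive = [], []
    for rec in records:
        (archive if today - rec["created"] > RETENTION else live).append(rec)
    return live, archive

records = [
    {"id": 1, "created": date(2010, 1, 1)},
    {"id": 2, "created": date(2013, 3, 1)},
]
live, archive = partition(records, today=date(2013, 4, 1))
assert [r["id"] for r in live] == [2]
assert [r["id"] for r in archive] == [1]
```

Real smart-partitioning products operate at the database and application layer rather than record by record, but the principle is the same: age out what you no longer need on production-grade infrastructure.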
The likes of smart partitioning and application retirement offer businesses a real solution to the growing big data conundrum. So maybe it’s time you got your feather duster out, and gave your information a good clean out this spring?
Last November saw Google announce the findings of its twice-yearly Transparency Report. Interestingly, the findings revealed that government surveillance of online lives is on a sharp incline. In fact, governments around the world made nearly 21,000 requests for access to Google data in the first six months of 2012, a huge leap from the 12,539 requests reported in 2009. The US government was the biggest culprit, making 7,969 requests in the first six months of 2012, while Turkey made the most requests for content to be removed.
Despite the big numbers, this demand for data should come as no surprise. Information is increasingly being accepted as the currency of today, so it fits that demand for Google data would be so high. Data is an undeniably invaluable asset and both the private and the public sector are realising this.
Recently, the UK’s Parliament and the Internet conference brought together leading figures from Government, Parliament, academia and the industry to discuss and debate the most pressing policy issues facing the Internet.
As expected, data privacy and security was top of the agenda for much of the day, with a number of discussions highlighting the extent to which consumer data is being exposed to security risks and the need for the right legislation and protection to keep it safe.
Some interesting news hit UK headlines last year: companies could be made to give the public greater access to their personal transaction data in an electronic, portable and machine-readable format. That is, if the midata project has its way.
Launched in April 2011, midata is part of the UK Government’s consumer empowerment strategy, Better Choices: Better Deals. Essentially, it’s a partnership between government, consumer groups and major businesses. Its aim is to give consumers access to the data that they produce, from household utilities and banking to internet transactions and high street loyalty cards.
In my previous blog, I looked at the need among enterprises for application retirement. But, what kind of software solution is best for supporting effective application retirement?
It’s important to realise that retirement projects might start small, with one or two applications, and then quickly blossom into full-fledged rationalisation initiatives where hundreds of dissimilar applications are retired. So relying on individual application or database vendors for tools and support can easily lead to a fragmented and uneven retirement strategy and archiving environment. In any case, some major application vendors offer little or no archiving capability.
Whether the result of growth or acquisition, enterprises that have been around a decade or longer typically have large and complex information environments with lots of redundant and obsolete applications. But there’s always the worry that one day there will be a need to access the old applications’ data. So these applications are still managed and maintained even though the data is rarely needed, resulting in sizable costs in licence fees, maintenance, power, data centre space, backups, and precious IT time. In many companies, there are hundreds, even thousands, of obsolete or redundant applications, and the business continues to support them with expensive production-level infrastructure and SLAs.
But as we’re facing the longest double-dip recession for 50 years, businesses are being forced to think about reclaiming this extraneous spend for more strategic purposes by retiring any outdated applications, but without losing access to the data. Keeping data from dormant applications “live” as a safeguard is more than just good common sense. In many cases, keeping the data readily accessible is compulsory due to corporate, industry and governmental compliance demands. But you needn’t spend full production costs to do so.
In recent years, there have been a number of embarrassing, high profile data breach blunders. We all heard about the secret government documents detailing the UK’s policies for fighting global terrorist funding, drugs trafficking and money laundering, which were found on a London-bound train in June 2008. More recently, in 2011, Oliver Letwin faced fierce criticism after dumping documents on terrorism and national security into a bin in St. James Park in London, on no less than five occasions.
Whilst these extreme, high profile cases are rare, there are thousands of companies who have been found to mishandle confidential information relating to their customers. Indeed, nearly half of the 500+ senior IT professionals surveyed for some recent research into data security admitted they had experienced a data breach.