Tag Archives: data growth

Data archiving – time for a spring clean?

The term “big data” has been bandied around so much in recent months that, arguably, it has lost a lot of its meaning in the IT industry. IT teams have heard the phrase and know they need to be doing something, but that something often isn’t being done. As IDC pointed out last year, there is a concerning shortage of trained big data technology experts, and failing to recognise the implications that unmanaged big data can have on the business is dangerous.

In today’s information economy, as increasingly digital consumers, customers, employees and social networkers, we are handing over more and more personal information for businesses and third parties to collate, manage and analyse. On top of the growth in digital data, emerging trends such as cloud computing are having a huge impact on the amount of information businesses are required to handle and store on behalf of their customers. Furthermore, it’s not just the amount of information that’s spiralling out of control: it’s also the way in which it is structured and used. The dramatic rise in unstructured data, such as photos, videos and social media, presents businesses with new challenges in how to collate, handle and analyse it. As a result, information is growing exponentially: experts now predict a staggering 4,300% increase in annual data generation by 2020. Unless businesses put policies in place to manage this wealth of information, it will become worthless and, given the often extortionate cost of storing it, will instead end up weighing heavily on the business’s bottom line.

Maxed out data centres

Many businesses have limited resources to invest in physical servers and storage, so they are increasingly looking to data centres to store their information. As a result, data centres across Europe are quickly filling up. European data retention regulations generally require information to be stored for longer periods than in other regions such as the US, so businesses across Europe have to wait a long time before their data becomes eligible for archiving. For instance, under EU law, telecommunications service and network providers are obliged to retain certain categories of data for a specific period (typically between six months and two years) and to make that information available to law enforcement where needed. With this in mind, it’s no surprise that investment in high performance storage capacity has become a key priority for many.

Time for a clear out

So how can organisations deal with these storage issues? They can upgrade or replace their servers, parting with significant capital expenditure to bring in more processing power (CPUs) or memory. An alternative is to “spring clean” their information. Smart partitioning allows businesses to spend roughly one tenth of what new servers and storage capacity would cost, and instead refocus how they organise their information. With smart partitioning capabilities, businesses can get all the benefits of archiving for information that is not yet eligible to be archived (due to EU retention regulations). Application retirement, meanwhile, frees up floor space, drives modernisation initiatives, and allows mainframe systems and older platforms to be replaced and legacy data to be migrated to virtual archives. Before IT professionals go out and buy big data systems, they need to spring clean their information and make room for big data.
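
As a rough illustration of the partitioning idea only (this is not Informatica’s smart partitioning implementation; the record shape, the last-access field and the two-year cut-off are all hypothetical), the sketch below splits records into an “active” set that day-to-day queries touch and a “dormant” set that stays in place until it becomes eligible for archiving.

from datetime import date, timedelta

# Hypothetical cut-off: anything untouched for two years is "dormant".
ACTIVE_WINDOW = timedelta(days=2 * 365)

def partition_records(records, today=None):
    """Split records into active and dormant partitions by last-access date.

    `records` is an iterable of dicts with a 'last_accessed' date field;
    the field name is an assumption for this sketch.
    """
    today = today or date.today()
    active, dormant = [], []
    for rec in records:
        if today - rec["last_accessed"] <= ACTIVE_WINDOW:
            active.append(rec)
        else:
            dormant.append(rec)
    return active, dormant

# Routine reporting would run against `active` only, while `dormant`
# waits (untouched, still queryable) until retention rules allow it to
# be archived or retired.
orders = [
    {"id": 1, "last_accessed": date(2012, 11, 3)},
    {"id": 2, "last_accessed": date(2009, 5, 18)},
]
active, dormant = partition_records(orders, today=date(2013, 4, 1))
print(len(active), "active,", len(dormant), "dormant")
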
Poor economic conditions across Europe have stifled innovation at many organisations, which have been forced to focus on staying alive rather than investing in R&D to improve operational efficiency. They are therefore looking for ways to squeeze more out of already shrinking budgets. The likes of smart partitioning and application retirement offer businesses a real solution to the growing big data conundrum. So maybe it’s time you got the feather duster out and gave your information a good clean-out this spring?


Why Backups Are Terrible Archives

Businesses retain information in an enterprise data archive either for compliance – to adhere to data retention regulations – or because business users are afraid to let go of data they are used to having access to. Many IT teams have told us they retain data in archives because they are looking to cut infrastructure costs and do not have retention requirements clearly articulated by the business. As a result, enterprise data archiving has morphed into serving multiple purposes for IT: it eliminates the costs associated with maintaining aging data in production applications, lets business users access the information on demand, and adheres to whatever retention policies – if any – are known or defined.  (more…)
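
To make the retention point concrete, here is a minimal sketch of the kind of disposition check an archive (unlike a backup) has to be able to answer for each class of record; the record classes and retention periods below are invented placeholders, not real regulatory values.

from datetime import date

# Hypothetical retention periods per record class, in years (placeholders).
RETENTION_YEARS = {
    "telecom_traffic": 2,
    "invoice": 7,
    "web_log": 1,
}

def disposition(record_class, created, today=None):
    """Return 'retain' while a record is inside its retention period,
    otherwise 'eligible_for_purge'. Unknown classes default to retain."""
    today = today or date.today()
    years = RETENTION_YEARS.get(record_class)
    if years is None:
        return "retain"  # no defined policy: keep until the business decides
    # Naive year arithmetic; leap-day edge cases ignored in this sketch.
    expiry = created.replace(year=created.year + years)
    return "retain" if today < expiry else "eligible_for_purge"

print(disposition("invoice", date(2004, 6, 30), today=date(2013, 4, 1)))
# -> 'eligible_for_purge'
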


Lean Data Warehouse – Clean Up The Waste

Many years ago (over 30, to be precise) I recall walking the halls of more than one Fortune 500 company and seeing four-foot-high stacks of boxes of computer printouts in the hallways outside managers’ offices.  In fact, it was not uncommon to see pallet-loads of computer printouts in some companies. When I asked one manager what the reports were and why they had so many, he said, “We don’t look at the reports any more, but we don’t know how to get the data center to stop sending them.” (more…)


Dodd-Frank Legislation and Structured Data Retention

The “Dodd-Frank Wall Street Reform and Consumer Protection Act” was recently passed by the US federal government to regulate financial institutions. Under this legislation, more “watchdog” agencies will be auditing banks, lending and investment institutions to ensure compliance. As an example, an Office of Financial Research within the US Treasury will be responsible for collecting and analyzing data. The legislation brings with it a higher risk of fines for non-compliance. (more…)


Start Running Because The Data Tsunami Is Approaching

The phrase ‘data tsunami’ has been used by numerous authors in the last few months, and it’s difficult to find a more suitable analogy: what’s approaching is of such an increased order of magnitude that the IT industry’s current expectations for data growth will be swamped in the next few years.
However impressive a spectacle a tsunami is, it still wreaks havoc on those who are unprepared, or who believe they can tread water and simply float to the surface once the trouble has passed.

(more…)


Making Big Data A Little Smaller

Big Data. The term has certainly caught on, and the phenomenon is real. Every nanosecond of every day, more and more data is being created at ever-increasing speeds. And since storage has become so economical, in both cost and footprint, there are fewer compelling reasons for the enterprise to manage down the size of its databases. In response, new and emerging technologies such as universal database connectivity, complex event processing, connectivity to social network feeds and in-memory processing have been developed to better manage Big Data’s scale. While this is great news for the enterprise, it comes with some challenges with respect to business analytics. (more…)
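
As a toy illustration of the stream-side techniques mentioned above (this is no particular vendor’s engine; the event values and window size are invented), the sketch below keeps only an in-memory sliding window of recent events and exposes an aggregate, so downstream analytics see summaries rather than every raw record.

from collections import deque

class SlidingWindowAverage:
    """Keep the last `size` readings in memory and expose their average."""

    def __init__(self, size=1000):
        self.window = deque(maxlen=size)  # old events fall off automatically

    def add(self, value):
        self.window.append(value)

    def average(self):
        return sum(self.window) / len(self.window) if self.window else 0.0

# Feed a stream of (hypothetical) response-time events and report a
# rolling average instead of storing every raw measurement.
monitor = SlidingWindowAverage(size=5)
for ms in [120, 95, 210, 180, 130, 90]:
    monitor.add(ms)
print(round(monitor.average(), 1))  # average of the last five events
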


Efficiency Is The Name Of Today’s Game

In my last post I talked about airlines becoming more efficient (or not), and it got me thinking about how everything today is about efficiency – not a bad thing when you consider the growth in data volumes we’re seeing everywhere (go to YouTube and search for “exponential times” – there are some interesting videos). Efficiency is necessary for scale, but it’s also about better use of resources (think green). (more…)


Series: Architecting A Database Archiving Solution Part 5 (Final): Data Growth Assessments

As the final part of our series, Architecting A Database Archiving Solution, we will review a process I use to assess the existing total cost of ownership (TCO) of a client’s database application and to justify a database archiving solution. The key metrics I begin with are listed below and explained:

(more…)
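
The specific metrics are behind the cut, but the shape of the justification is simple arithmetic: compare the annual cost of keeping everything on production storage with the cost after moving inactive data to a cheaper archive tier. The sketch below shows that comparison; every figure and rate here is a made-up placeholder, not a benchmark.

# All figures are hypothetical placeholders for illustration only.
db_size_tb = 20.0               # current production database size
inactive_fraction = 0.70        # share of data that is rarely accessed
production_cost_per_tb = 25000  # annual cost per TB on tier-1 storage
archive_cost_per_tb = 4000      # annual cost per TB in the archive tier

current_cost = db_size_tb * production_cost_per_tb

archived_tb = db_size_tb * inactive_fraction
remaining_tb = db_size_tb - archived_tb
proposed_cost = (remaining_tb * production_cost_per_tb
                 + archived_tb * archive_cost_per_tb)

savings = current_cost - proposed_cost
print(f"Current annual storage cost: ${current_cost:,.0f}")
print(f"With archiving:              ${proposed_cost:,.0f}")
print(f"Estimated annual savings:    ${savings:,.0f}")
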


Series: Architecting A Database Archiving Solution Part 4: Archive Repository Options

Earlier in this series, Architecting a Database Archiving Solution, we discussed the Anatomy of a Database Archiving Solution and End User Access Requirements.  In this post we will review the archive repository options at a very high level. Each option has its pros and cons and needs to be evaluated in more detail to determine which is the best fit for your situation.
(more…)


Series: Architecting A Database Archiving Solution Part 3: End User Access & Performance Expectations

In my previous post in this series, Architecting a Database Archiving Solution, we discussed the major architecture components.  In this post, we will focus on how end user access requirements and expected performance service levels drive the core of the architecture discussion.

End user access requirements can be determined by answering the following questions when data is archived from a source database:

  • How long does the archived data need to be retained? The longer the retention period, the more the solution architecture needs to account for potentially significant data volumes and for technology upgrades or obsolescence. This will determine the cost of keeping data online in a database or an archive file, versus nearline or offline on other media such as tape (a rough volume projection is sketched below). (more…)
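
As a rough sketch of why the retention period matters to the architecture (the starting volume, growth rate and retention period below are all hypothetical), this projects how much data the archive has to hold before anything can be purged:

# Hypothetical inputs: adjust to the client's own growth assessment.
annual_archive_tb = 3.0    # data moved into the archive each year
annual_growth_rate = 0.20  # year-on-year growth in that volume
retention_years = 7        # nothing can be purged before this

total_tb = 0.0
yearly = annual_archive_tb
for year in range(1, retention_years + 1):
    total_tb += yearly
    print(f"Year {year}: archive holds ~{total_tb:.1f} TB")
    yearly *= 1 + annual_growth_rate
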