Category Archives: B2B

In a Data First World, IT must Empower Business Change!

You probably know this already, but I’m going to say it anyway: It’s time you changed your infrastructure. I say this because most companies are still running infrastructure optimized for ERP, CRM and other transactional systems. That’s all well and good for running IT-intensive, back-office tasks. Unfortunately, this sort of infrastructure isn’t great for today’s business imperatives of mobility, cloud computing and Big Data analytics.

Virtually all of these imperatives are fueled by information gleaned from potentially dozens of sources to reveal our users’ and customers’ activities, relationships and likes. Forward-thinking companies are using such data to find new customers, retain existing ones and increase their market share. The trick lies in translating all this disparate data into useful meaning. And to do that, IT needs to move beyond focusing solely on transactions, and instead shine a light on the interactions that matter to their customers, their products and their business processes.

They need what we at Informatica call a “Data First” perspective. You can check out my first blog post about being Data First here.

A Data First POV changes everything from product development, to business processes, to how IT organizes itself and —most especially — the impact IT has on your company’s business. That’s because cloud computing, Big Data and mobile app development shift IT’s responsibilities away from running and administering equipment, onto aggregating, organizing and improving myriad data types pulled in from internal and external databases, online posts and public sources. And that shift makes IT a more-empowering force for business change. Think about it: The ability to connect and relate the dots across data from multiple sources finally gives you real power to improve entire business processes, departments and organizations.

I like to say that the role of IT is now “big I, little t,” with that lowercase “t” representing both technology and transactions. But that role requires a new set of priorities. They are:

  1. Think about information infrastructure first and application infrastructure second.
  2. Create great data by design. Architect for connectivity, cleanliness and security. Check out the eBook Data Integration for Dummies.
  3. Optimize for speed and ease of use – SaaS and mobile applications change often. Click here to try Informatica Cloud for free for 30 days.
  4. Make data a team sport. Get tools into your users’ hands so they can prepare and interact with it.

I never said this would be easy, and there’s no blueprint for how to go about doing it. Still, I recognize that a little guidance will be helpful. In a few weeks, Informatica’s CIO Eric Johnson and I will talk about how we at Informatica practice what we preach.


Right Product, Right Customer, Right Place – The Informed Purchase Journey

The Informed Purchase Journey

The way we shop has changed. It’s hard to keep up with customer demands in a single channel, much less many. Selling products has changed and will keep changing. The video below shows how today’s customer takes The Informed Purchase Journey:

“Customers expect a seamless experience that makes it easy for them to engage at every touchpoint on their ‘decision journey.’ Informatica PIM is a key component in the transformation from a product-centric view to consumer-experience-driven marketing with more efficiency.” – Heather Hanson – Global Head of Marketing Technology at Electrolux

Selling products today is:

  • Shopper-controlled. It’s never been easier for consumers to compare products and prices. This has eroded old customer loyalty and means you have to earn every sale.
  • Global. If you’re selling your products in different regions, you’re facing complex localization and supply chain coordination.
  • Fast. Product lifecycles are short. Time-to-market is critical (and gets tougher the more channels you’re selling through).
  • SKU-heavy. Endless-aisle assortments are great for margins. That’s a huge opportunity, but product data overload due to the large number of SKUs and their attributes adds up to a huge admin burden.
  • Data driven. Product data alone is more than a handful to deal with. But you also need to know as much about your customers as you know about your products. And the explosion of channels and touch points doesn’t make it any easier to connect the dots.

Conversion Power – From Deal Breaker To Deal Maker

For years, a customer’s purchase journey was something of “An Unexpected Journey.” Lack of insight into the journey was a struggle for retailers and brands. The journey is fraught with more questions about products than ever before, even for fast-moving consumer goods.

Consumer behavior and the role of product information have changed with the advent of broadband and social buying. To understand how, let’s examine the way shoppers buy today.

  • According to Google’s Zero Moment of Truth (ZMOT) research, shoppers consult 10.4 sources on average.
  • Mobile shoppers who view customer content such as reviews show a 133% higher conversion rate.
  • Digital devices will influence 50% of in-store purchase behavior by the end of 2014 (Deloitte’s Digital Divide).

How Informatica PIM 7.1 turns information from deal breaker to deal maker

PIM 7.1 comes with new data quality dashboards, helping users such as category managers, marketing copywriters and e-commerce specialists do the right things. The quality dashboards point users to what they need to do next in order to get the data correct, published and ready for sales.
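
To make the dashboard idea concrete, here is a minimal, hypothetical sketch (not Informatica PIM code) of the kind of readiness check such a dashboard surfaces: it verifies a few required attributes per sales channel and reports which products are not yet ready for sales. All field names, channels and rules are illustrative assumptions.

```python
# Hypothetical sketch of a product data "readiness" check behind a quality dashboard.
# Field names, channels and rules are illustrative assumptions, not Informatica PIM APIs.

REQUIRED_BY_CHANNEL = {
    "webshop": ["sku", "title", "long_description", "image_url", "price"],
    "print_catalog": ["sku", "title", "short_description", "price"],
}

products = [
    {"sku": "A-100", "title": "Espresso Machine", "price": 299.0, "image_url": "x.jpg",
     "long_description": "19-bar pump espresso machine.", "short_description": "Espresso machine"},
    {"sku": "A-101", "title": "Milk Frother", "price": 49.0},  # missing several attributes
]

def readiness_report(products, required_by_channel):
    """Return, per channel, which products are ready and what is still missing."""
    report = {}
    for channel, required in required_by_channel.items():
        rows = []
        for p in products:
            missing = [f for f in required if not p.get(f)]
            rows.append({"sku": p["sku"], "ready": not missing, "missing": missing})
        ready = sum(r["ready"] for r in rows)
        report[channel] = {"ready_pct": 100.0 * ready / len(rows), "rows": rows}
    return report

for channel, stats in readiness_report(products, REQUIRED_BY_CHANNEL).items():
    print(f"{channel}: {stats['ready_pct']:.0f}% ready")
    for row in stats["rows"]:
        if not row["ready"]:
            print(f"  {row['sku']} missing: {', '.join(row['missing'])}")
```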

Eliminate Shelf Lag: The Early Product Closes the Sale

For vendors, this effectively means time-to-market: the availability of a product plus the time it takes to collect all relevant product information so you can display it to the customer (product introduction time).

The biggest threat is not the competition – it’s your own time-consuming, internal processes. We call this Shelf Lag, and it’s a big inhibitor of retailer profits. Here’s why:

  • You can’t sell what you can’t display.
  • Be ready to spin up new channels.
  • Watch your margins.

How Informatica PIM 7.1 speeds up product introduction and customer experience

“By 2017… customer experience is what buyers are going to use to make purchase decisions.” (Source: Gartner’s Hype Cycle for E-Commerce, 2013.) PIM 7.1 comes with new editable channel previews. These help business users such as marketers, translators, merchandisers and product managers envision how a product will look in the customer-facing webshop, catalog or other touchpoint. Getting products live online within seconds is key, because the customer always wants it now. For e-commerce product data, Informatica PIM is certified for IBM WebSphere Commerce, getting products ready for e-commerce within seconds.

The editable channel previews help professionals in product management, merchandising, marketing and e-commerce envision their products as customers will see them. This “what you see is what you get” (WYSIWYG) approach to product data management improves the customer shopping experience with the best, most authentic information. With the new e-commerce integration, Informatica speeds up time to market in e-business: the new standard integration (certified by IBM WebSphere Commerce) enables live updates of e-shops in real time.

The growing need for fast and secure collaboration across global enterprises is addressed by Informatica’s Business Process Management tool, which can now be used by PIM customers.

Intelligent insights: How relevant is our offering to your customers?

This is the age of annoyance and information overload. Each day, the average person has to handle more than 7,000 pieces of information. Only 25% of Americans say they are brand loyal. That means brands and retailers have to earn every new sale in a transparent world. In this context, information needs to be relevant to the recipient.

  • Where does the data come from? How can product information be auto-cleansed and classified into a taxonomy?
  • Is the supplier performance hitting our standards?
  • How can we mitigate risks like hidden costs and work with trusted suppliers only?
  • How can we build customer segmentations for marketing?
  • How can we personalize products and predict the customer’s next logical buy?

It is all about the Right Product. To the Right Person. In the Right Way. Learn more about the vision of the Intelligent Data Platform.

Informatica PIM Builds the Basis of Real Time Commerce Information

All these innovations massively speed up new product introduction and collaboration. As buyers today are always online and connected, PIM helps our customers serve the informed purchase journey with the right information, at the right touchpoint, in real time.

  1. Real-time commerce (certification with IBM WebSphere Commerce), which eliminates shelf lag
  2. Editable channel previews, which help envision how customers will view the product
  3. Data quality dashboards for improved conversion power, which means selling more with better information
  4. Business Process Management for better collaboration throughout the enterprise
  5. Accelerator for global data synchronization (GDSN, e.g. GS1 for food and CPG), which helps improve data quality and fulfill legal requirements

All this makes merchandizers more productive and increases average spend per customer.

Find out how the new release of Informatica PIM 7.1 helps you to unleash conversion power on the customer’s informed purchase journey.


How GS1 and PIM Help to Fulfill Legal Regulations and Feed Distribution Channels

Manufacturers and retailers are constantly challenged by the market and continually seek ways to optimize their business processes and improve their margins. Their challenges include the following:

  • Delays in getting products ordered
  • Delays in getting products displayed on the shelf
  • Out of stock issues
  • Constant pressure to comply with how information is exchanged with local partners
  • Pressure to comply with how information is exchanged with international distribution partners

Recently, new regulations have been mandated by governing bodies. These bodies include the US Food and Drug Administration (FDA) as well as European Union (EU) entities. One example of these regulations is EU Regulation 1169/2011. This regulation focuses on nutrition and contents for food labels.

How much would it mean to a supplier if they could reduce their “time to shelf?” What would it mean if they could improve their order and item administration?

GS1 and PIM

If you’re a supplier, and if these improvements would benefit you, you’ll want to explore solutions. In particular, you’d benefit from a solution which could do the following:

  • Make your business available to the widest possible audience, both locally and internationally
  • Eliminate the need to build individual “point to point” interfaces
  • Provide the ability to communicate both “one on one” with a partner and broadly with others
  • Eliminate product data inconsistencies
  • Improve data quality
  • Improve productivity

One such solution that can accomplish these things is Informatica’s combination of PIM and GDSN.

For Manufacturers

Manufacturers of CPG or food products have to adhere to strict compliance regulations. The new EU Regulation 1169/2011 on the provision of food information to consumers changes existing legislation on food labeling. The new rules take effect on December 13, 2014. The obligation to provide nutrition information will apply from December 13, 2016. The US Food & Drug Administration (FDA) enforces record-keeping and Hazard Analysis & Critical Control Points (HACCP) requirements.

In addition, information standards are a key factor in feeding distributors and retailers, as our customer Vitakraft says:

“For us as a manufacturer of pet food, the retailers and distributors are key distribution channels. With the GS1 Accelerator for Informatica PIM we connect with the Global Data Synchronization Network (GDSN). Leveraging GDSN we serve our retail and distribution partners with product information for all sales channels. Informatica helps us to meet the expectations of our business partners and customers in the e-business.”

Heiko Cichala, Product & Electronic Data Interchange Management

For Retailers

On one side, retailers such as supermarkets expect their vendors and manufacturers to provide all legally required information; on the other side, they are looking for strategies to leverage information for better customer service and experience (check out “the supermarket of tomorrow”).

Companies like the German food retailer Edeka offer apps for push marketing, or help match customers’ dietary and allergy profiles against QR-code-scanned products on the shopping list within the supermarket app.

The Informatica GS1 Accelerator

GS1, GDSN for PIM

The GS1 Accelerator from Informatica offers suppliers and manufacturers the capability to ensure their data is not only of high quality but also conforms to GS1 standards. The Informatica GDSN accelerator makes it possible to publish this high-quality data directly to a certified data pool for synchronisation with trading partners.

Data quality is ensured by the PIM system’s Data Quality rules engine, which leverages the GS1-conformant Global Product Classification hierarchy for communication with the data pools.

All GDSN-related activities are encapsulated within PIM and can be initiated from there. The product data can easily be transferred to the data pool and released to a specific trading partner, or made public for all recipients of a target market.
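
To illustrate the kind of rule a Data Quality rules engine might run before an item is released to the data pool, here is a small, hypothetical Python sketch. The GTIN check digit follows the standard GS1 modulo-10 algorithm; the required attributes are assumptions for illustration, not the accelerator’s actual rule set.

```python
# Hypothetical GS1-style readiness checks before releasing an item to a GDSN data pool.
# The GTIN check-digit logic follows the standard GS1 modulo-10 algorithm;
# the required attributes are illustrative assumptions, not the accelerator's rules.

def gtin_is_valid(gtin: str) -> bool:
    """Validate a GTIN-8/12/13/14 check digit (GS1 mod-10)."""
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    body, check = gtin[:-1], int(gtin[-1])
    # Weights alternate 3,1,3,1,... starting from the digit next to the check digit.
    total = sum(int(d) * (3 if i % 2 == 0 else 1) for i, d in enumerate(reversed(body)))
    return (10 - total % 10) % 10 == check

REQUIRED_ATTRIBUTES = ["gtin", "trade_item_description", "brand_name",
                       "target_market", "gpc_brick_code"]

def gdsn_release_issues(item: dict) -> list:
    """Collect issues that would block publishing this item to the data pool."""
    issues = [f"missing {a}" for a in REQUIRED_ATTRIBUTES if not item.get(a)]
    if item.get("gtin") and not gtin_is_valid(item["gtin"]):
        issues.append("invalid GTIN check digit")
    return issues

item = {"gtin": "4006381333931", "trade_item_description": "Pet food 1kg",
        "brand_name": "ExampleBrand", "target_market": "276"}  # gpc_brick_code missing
print(gdsn_release_issues(item) or "ready for data pool release")
```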


Health Plans, Create Competitive Differentiation with Risk Adjustment

Exploring Risk Adjustment as a Source of Competitive Differentiation

Risk adjustment is a hot topic in healthcare. Today, I interviewed my colleague Noreen Hurley to learn more. Noreen, tell us about your experience with risk adjustment.

Before I joined Informatica, I worked for a health plan in Boston. I managed several programs, including the CMS Five Star Quality Rating System and Risk Adjustment Redesign. We recognized the need for a robust diagnostic profile of our members in support of risk adjustment. However, because the information resides in multiple sources, gathering and connecting the data presented many challenges. I see the opportunity for health plans to transform risk adjustment.

As risk adjustment becomes an integral component of healthcare, I encourage health plans to create a core competency around the development of diagnostic profiles. This should be the case for health plans and ACOs. This profile is the source of reimbursement for an individual, and it is also the basis for clinical care management. Augmented with social and demographic data, the profile can create a roadmap for successfully engaging each member.

Why is risk adjustment important?

Risk Adjustment is increasingly entrenched in the healthcare ecosystem.  Originating in Medicare Advantage, it is now applicable to other areas.  Risk adjustment is mission critical to protect financial viability and identify a clinical baseline for  members.

What are a few examples of the increasing importance of risk adjustment?

1) The Centers for Medicare and Medicaid Services (CMS) continues to increase its focus on risk adjustment. CMS is evaluating the value provided to the Federal government and beneficiaries. It has questioned the efficacy of home assessments and challenged health plans to provide a value statement beyond the harvesting of diagnosis codes that results solely in revenue enhancement. Illustrating additional value has been a challenge. Integrating data across the health plan will help address this challenge and derive value.

2) Marketplace members will also require risk adjustment calculations. After the first three years, the three “R’s” will dwindle down to one “R”: when Reinsurance and Risk Corridors end, we will be left with Risk Adjustment. To succeed with this new population, health plans need a clear strategy to obtain, analyze and process data. CMS processing delays make risk adjustment even more difficult. A health plan’s ability to manage this information will be critical to success.

3) Dual Eligibles, Medicaid members and ACOs also rely on risk adjustment for profitability and improved quality.

With an enhanced diagnostic profile — one that is accurate, complete and shared — I believe it is possible to enhance care, deliver appropriate reimbursements and provide coordinated care.

How can payers better enable risk adjustment?

  • Facilitate timely analysis of accurate data from a variety of sources, in any  format.
  • Integrate and reconcile data from initial receipt through adjudication and  submission.
  • Deliver clean and normalized data to business users.
  • Provide an aggregated view of master data about members, providers and the relationships between them to reveal insights and enable a differentiated level of service.
  • Apply natural language processing to capture insights otherwise trapped in text-based notes (a simplified sketch follows this list).
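
As a toy illustration of that last point, the sketch below uses a simplified regular expression to pull candidate ICD-10-style diagnosis codes out of a free-text care note. A real clinical NLP pipeline would also need negation handling, context and validation against the actual code set; the pattern and the sample note are assumptions for illustration only.

```python
import re

# Toy illustration only: a simplified pattern for ICD-10-style codes (letter, two digits,
# optional decimal extension). Real clinical NLP needs negation detection, context,
# and validation against the actual ICD-10 code set.
ICD10_LIKE = re.compile(r"\b[A-TV-Z]\d{2}(?:\.\d{1,4})?\b")

note = ("Member seen for follow-up. History of E11.9 (type 2 diabetes) and I10; "
        "no evidence of J45 exacerbation today.")

candidate_codes = sorted(set(ICD10_LIKE.findall(note)))
print(candidate_codes)  # ['E11.9', 'I10', 'J45'] - candidates for risk-adjustment review
```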

With clean, safe and connected data,  health plans can profile members and identify undocumented diagnoses. With this data, health plans will also be able to create reports identifying providers who would benefit from additional training and support (about coding accuracy and completeness).

What will clean, safe and connected data allow?

  • Allow risk adjustment to become a core competency and source of differentiation.  Revenue impacts are expanding to lines of business representing larger and increasingly complex populations.
  • Educate, motivate and engage providers with accurate reporting.  Obtaining and acting on diagnostic data is best done when the member/patient is meeting with the caregiver.  Clear and trusted feedback to physicians will contribute to a strong partnership.
  • Improve patient care, reduce medical cost, increase quality ratings and engage members.

An Introduction to Big Data and Hadoop from Informatica University

A full house, lots of funny names and what does it all mean?

Cloudera, Appfluent and Informatica partnered today at Informatica World in Las Vegas to deliver a one-day training session, Introduction to Hadoop and Big Data. A technology overview, best practices, and how to get started were on the agenda. Of course, we needed to start off with a little history. Processing and computing were important in the old days, and even then they were hard to do and very expensive.

Today it’s all about scalability. What Cloudera does is “spread the data and spread the processing,” with Hadoop optimized for scanning lots of data. It’s the Hadoop Distributed File System (HDFS) that slices up the data: it takes a slice of data, then takes another slice. MapReduce is then used to spread the processing. How does spreading the data and the processing help us with scalability?

When we spread the data and the processing, we also need to index the data. How do we do this? We add gets and puts: get a row, put a row. Basically, this is what helps us find a row of data easily. Processing millions of rows of data is more and more a reality for many businesses. Once we can find and process a row of data easily, we can focus on our data analysis.
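
To make “spread the data and spread the processing” concrete, here is a minimal, self-contained Python sketch of the MapReduce idea: slice the input (as HDFS does with blocks), map each slice independently, shuffle the intermediate results by key, and reduce per key. It is a local simulation for illustration, not actual Hadoop code.

```python
from collections import defaultdict

# A local, toy simulation of the MapReduce idea behind Hadoop:
# slice the data (HDFS blocks), map each slice independently (spread the processing),
# shuffle intermediate pairs by key, then reduce per key.

lines = ["big data needs big infrastructure",
         "data spread across the cluster",
         "spread the processing"]

def mapper(line):
    # Emit (word, 1) for every word in one slice of the input.
    for word in line.split():
        yield word, 1

def reducer(word, counts):
    # Aggregate all counts for one key.
    return word, sum(counts)

# "Slice" the data: here, one line per slice; HDFS would use large blocks.
slices = [[line] for line in lines]

# Map phase: each slice could run on a different node.
intermediate = [pair for s in slices for line in s for pair in mapper(line)]

# Shuffle: group values by key, as Hadoop does between map and reduce.
groups = defaultdict(list)
for word, count in intermediate:
    groups[word].append(count)

# Reduce phase.
word_counts = dict(reducer(w, c) for w, c in groups.items())
print(word_counts)  # e.g. {'big': 2, 'data': 2, 'spread': 2, ...}
```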

Data analysis: what’s important to you and your business? Appfluent gives us a map for identifying data and workloads to offload and archive to Hadoop. It helps us assess what is not necessary to load into the data warehouse. With the exponential growth in the volume and variety of data, the data warehouse will soon cost too much unless we identify what to load and what to offload.

Informatica has the tools to help you process your data – tools that understand Hadoop and that you already use today. This helps you manage these volumes of data in a cost-effective way. Add to that the ability to reuse what you have already developed, and these new tools and technologies become exciting.

In this Big Data and Hadoop session, #INFA14, you will learn:

  • Common terminologies used in Big Data
  • Technologies, tools, and use cases associated with Hadoop
  • How to identify and qualify the most appropriate jobs for Hadoop
  • Options and best practices for using Hadoop to improve processes and increase efficiency

Live action at Informatica World 2014, May 12, 9:00 am – 5:00 pm, and updates at:

Twitter: @INFAWorld


Where Is My Broadband Insurance Bundle?

As I continue to counsel insurers about master data, they all agree immediately that it is something they need to get their hands around fast. If you ask participants in a workshop at any carrier, no matter whether life, P&C, health or excess, they all raise their hands when I ask, “Do you have a broadband bundle at home for internet, voice and TV as well as wireless voice and data?”, followed by “Would you want your company to be the insurance version of this?”

Buying insurance like broadband

Now let me be clear: while communication service providers offer very sophisticated bundles, they are still grappling with a comprehensive view of a client across all services (data, voice, text, residential, business, international, TV, mobile, etc.) and each of their touch points (website, call center, local store). They are also miles away from including any sort of meaningful network data (jitter, dropped calls, failed call setups, etc.).

Similarly, my insurance investigations typically touch most of the frontline consumer (business and personal) contact points, including agencies, marketing (incl. CEM & VOC) and the service center. On all of these we typically see a significant lack of productivity, given that policy, billing, payments and claims systems are service-line specific, while supporting functions from lead development and underwriting to claims adjudication often handle more than one type of claim.

This lack of performance is worsened by sub-optimal campaign response and conversion rates. Because touchpoint-enabling CRM applications also suffer from a lack of complete or consistent contact preference information, interactions may violate local privacy regulations. In addition, service centers may capture leads only to log them into a black-box AS/400 policy system, where they disappear.

Here again we often hear that the fix could just happen by scrubbing data before it goes into the data warehouse.  However, the data typically does not sync back to the source systems so any interaction with a client via chat, phone or face-to-face will not have real time, accurate information to execute a flawless transaction.

On the insurance IT side we also see enormous overhead: from scrubbing every database, from source via staging to the analytical reporting environment, every month or quarter, to one-off clean-up projects for the next acquired book of business. For a mid-sized, regional carrier (ca. $6B net premiums written) we find an average of $13.1 million in annual benefits from a central customer hub. This figure results in an ROI of between 600% and 900%, depending on requirement complexity, distribution model, IT infrastructure and service lines. This number includes some baseline revenue improvements, productivity gains, and cost avoidance as well as cost reduction.

On the health insurance side, my clients have complained about regional data sources contributing incomplete (often driven by local process and law) and incorrect data (name, address, etc.) to untrusted reports from membership, claims and sales data warehouses. This makes budgeting for items such as nurse-staffed medical advice lines, sales compensation planning and even identifying high-risk members (now driven by the Affordable Care Act) a true mission impossible, and it makes life challenging for the pricing teams.

Over in the life insurance category, whole and universal life plans now encounter a situation where high-value clients first faced lower-than-expected yields due to the low interest rate environment, on top of front-loaded fees and the front-loading of the cost of the term component. Now, as bonds are forecast to decrease in value in the near future, publicly traded carriers will likely be forced to sell bonds before maturity to make good on term life commitments and whole life minimum yield commitments to keep policies in force.

This means that insurers need a full profile of clients as they experience life changes like a move, loss of job, a promotion or birth.   Such changes require the proper mitigation strategy, which can be employed to protect a baseline of coverage in order to maintain or improve the premium.  This can range from splitting term from whole life to using managed investment portfolio yields to temporarily pad premium shortfalls.

Overall, without a true, timely and complete picture of a client and his or her personal and professional relationships over time, and of which strategies were presented, considered appealing and ultimately put in force, how will margins improve? Surely, social media data can help here, but it should be a second step after mastering what is already available in-house. What are some of your experiences with how carriers have tried to collect and use core customer data?

Disclaimer:
Recommendations and illustrations contained in this post are estimates only and are based entirely upon information provided by the prospective customer  and on our observations.  While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer is dependent upon a variety of factors, many of which are not under Informatica’s control and nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized and no warrantee or representation of success, either express or implied, is made.

Sensational Find – $200 Million Hidden in a Teenager’s Bedroom!

That tag line got your attention – did it not? Last week I talked about how companies are trying to squeeze more value out of their asset data (e.g. equipment of any kind) and the systems that house it. I also highlighted the fact that IT departments in many companies with physical-asset-heavy business models have tried (and often failed) to create a consistent view of asset data in a new ERP or data warehouse application. These environments are neither equipped to deal with all life cycle aspects of asset information, nor do they fix the root of the data problem in the sources, i.e. where the stuff is and what it looks like. It is like a teenager whose parents have spent thousands of dollars buying him the latest garments, but he always wears the same three outfits because he cannot find the other ones in the pile he hoards under his bed. And now they have bought him a smartphone to fix it. So before you buy him the next black designer shirt, maybe it would be good to find out how many of the same designer shirts he already has, what state they are in and where they are.

Finding the asset in your teenager’s mess

Recently, I had the chance to work on a similar problem with a large overseas oil & gas company and a North American utility. Both are by definition asset-heavy, very conservative in their business practices, highly regulated, very much dependent on outside market forces such as the oil price, and geographically very dispersed – and thus, by default, a classic system-integration spaghetti dish.

My challenge was to find out where the biggest opportunities were in terms of harnessing data for financial benefit.

The initial sense in oil & gas was that most of the financial opportunity hidden in asset data was in G&G (geophysical & geological) and the least on the retail side (lubricants and gas for sale at operated gas stations). On the utility side, the go-to area for opportunity appeared to be maintenance operations. Let’s say that I was about right with these assertions, but that there were a lot more skeletons in the closet with diamond rings on their fingers than I anticipated.

After talking extensively with a number of department heads in the oil company – starting with the IT folks running half of the 400 G&G applications, the ERP instances (turns out there were 5, not 1) and the data warehouses (3) – I queried the people in charge of lubricant and crude plant operations, hydrocarbon trading, finance (tax, insurance, treasury) as well as supply chain, production management, land management and HSE (health, safety, environmental).

The net-net was that the production management people said there was no issue, as they had already cleaned up the ERP instance around customer and asset (well) information. The supply chain folks also indicated that they had used another vendor’s MDM application to clean up their vendor data, which, funnily enough, was not put back into the procurement system responsible for ordering parts. The data warehouse/BI team was comfortable that they cleaned up any information for supply chain, production and finance reports before dimension and fact tables were populated for any data marts.

All of this was pretty much a series of denial sessions on your 12-step road to recovery as the IT folks had very little interaction with the business to get any sense of how relevant, correct, timely and useful these actions are for the end consumer of the information.  They also had to run and adjust fixes every month or quarter as source systems changed, new legislation dictated adjustments and new executive guidelines were announced.

While every department tried to run semi-automated, monthly clean-up jobs with scripts and some off-the-shelf software to fix their particular situation, the corporate (holding) company and any downstream consumers had no consistent basis for making sensible decisions on where and how to invest without throwing another legion of bodies (by now over 100 FTEs in total) at the same problem.

So at every stage of the data flow from sources to the ERP to the operational BI and lastly the finance BI environment, people repeated the same tasks: profile, understand, move, aggregate, enrich, format and load.

Despite the departmental clean-up efforts, areas like production operations did not know with certainty (even after their clean-up) how many well heads and bores they had, where they were downhole, and who last changed a characteristic as mundane as the well name, and why (governance, location match).

Marketing (Trading) was surprisingly open about their issues. They could not process incoming, anchored crude shipments into inventory, or assess who owned the counterparty they sold to and what payment terms were appropriate given the associated credit or concentration risk (reference data, hierarchy management). As a consequence, operating cash accuracy was low despite ongoing process improvements, incurring opportunity cost.

Operational assets like rig equipment had excess insurance coverage (location, operational data linkage), and fines paid to local governments for incorrectly filing or not renewing work visas were not refunded for up to two years, incurring opportunity cost (employee reference data).

A big chunk of savings was locked up in unplanned NPT (non-production time), because inconsistent, incorrect well data triggered incorrect maintenance intervals. Similarly, OEM-specific DCS (drill control system) component software lacked a central reference data store, so alerts were not triggered before components failed. If you add on top of that the lack of linkage between data served by thousands of sensors via well logs and PI historians, and their ever-changing roll-ups for operations and finance, the resulting chaos is complete.

One approach we employed around NPT improvements was to take the revenue-from-production figure from their 10-K and combine it with the industry benchmark for the number of NPT days per 100 days of production (typically about 30% across average depths, onshore and offshore). Then you overlay a benchmark (if they don’t know their own number) of how many of those NPT days were due to bad data rather than equipment failure and the like; fix even a portion of that and you get big numbers. A rough worked example follows.
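
Here is a rough, hypothetical worked example of that back-of-the-envelope calculation. Every number below is an illustrative assumption, not a figure from an actual 10-K or client engagement.

```python
# Hypothetical back-of-the-envelope NPT (non-production time) savings estimate.
# All figures are illustrative assumptions, not actual client or 10-K numbers.

revenue_from_production = 5_000_000_000   # annual revenue from production, from the 10-K (assumed)
npt_rate = 0.30                           # ~30 NPT days per 100 production days (industry benchmark)
npt_due_to_bad_data = 0.10                # assumed share of NPT caused by bad data, not equipment
fixable_fraction = 0.20                   # assume we can eliminate a fifth of the data-driven NPT

# Rough simplification: treat NPT as a proportional drag on production revenue.
revenue_lost_to_npt = revenue_from_production * npt_rate
annual_recoverable = revenue_lost_to_npt * npt_due_to_bad_data * fixable_fraction

print(f"Revenue exposed to NPT: ${revenue_lost_to_npt:,.0f}")
print(f"Recoverable per year:   ${annual_recoverable:,.0f}")
print(f"Over five years:        ${annual_recoverable * 5:,.0f}")
```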

When I sat back and looked at all the potential, it came to more than $200 million in savings over 5 years – and this before any sensor data from rig equipment, like the myriad of siloed applications running within a drill control system, is integrated and leveraged via a Hadoop cluster to influence operational decisions like drill string configuration or azimuth.

Next time I’ll share some insight into the results of my most recent utility engagement but I would love to hear from you what your experience is in these two or other similar industries.

Disclaimer:
Recommendations contained in this post are estimates only and are based entirely upon information provided by the prospective customer  and on our observations.  While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer is dependent upon a variety of factors, many of which are not under Informatica’s control and nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized and no warrantee or representation of success, either express or implied, is made.

What is In-Database Archiving in Oracle

In my last blog on this topic, I discussed several areas where a database archiving solution can complement, or help you better leverage, the Oracle In-Database Archiving feature. For an introduction to the new In-Database Archiving feature in Oracle 12c, refer to Part 1 of my blog on this topic.

Here, I will discuss additional areas where a database archiving solution can complement the new Oracle In-Database Archiving feature (a brief sketch of the underlying row-archival SQL follows the list):

  • Graphical UI for ease of administration – In-Database Archiving is currently a technical feature of the Oracle database, and it is not easily visible or manageable outside of the DBA persona. This is where a database archiving solution provides a more comprehensive set of graphical user interfaces (GUIs) that make this feature easier to monitor and manage.
  • Enabling application of In-Database Archiving for packaged applications and complex data models – Concepts such as business entities (transactional records composed of related tables, which maintain data and referential integrity as you archive, move, purge, and retain data) and business rules that determine when data has become inactive and can therefore be safely archived allow DBAs to apply this new Oracle feature to more complex data models. Also, the availability of application accelerators (prebuilt metadata of business entities and business rules for packaged applications) enables the application of In-Database Archiving to packaged applications like Oracle E-Business Suite, PeopleSoft, Siebel, and JD Edwards.
  • Data growth monitoring and analysis – Available in some database archiving solutions, this enables monitoring and tracking of data growth trends and identifies which tables, modules, and business entities are the largest and fastest growing, so you can focus your ILM policies on them.
  • Performance monitoring and analysis – Also available in some database archiving solutions, this allows Oracle administrators to easily and more meaningfully monitor and analyze database and application performance. They can identify the root cause of performance issues, and from there administrators can define smart partitioning policies to segment data (i.e., mark it as inactive) and monitor the policy’s impact on query performance. This capability helps you identify which sets of records should potentially be marked as inactive and segmented.
  • Automatic purging of unused or aged data based on policies – database archiving solutions allow administrators to define ILM policies to automate the purging of records that are truly no longer used and have been in the inactive state for some time.
  • Optimal data organization, placement, and purging, leveraging Oracle partitioning – a database archiving solution like Informatica Data Archive is optimized to leverage Oracle partitioning to optimally move data to inactive tablespaces, and purge inactive data by dropping or truncating partitions.  All of these actions are automated based on policies, again eliminating the need for scripting by the DBA.
  • Extreme compression to reduce cost and storage capacity consumption – up to 98% (90%-95% on average) compression is available in some database archiving solutions as compared to the 30%-60% compression available in native database compression.
  • Compliance management – Enforcement of retention and disposal policies with the ability to apply legal holds on archived data are part of a comprehensive database archiving solution.
  • Central policy management, across heterogeneous databases – a database archiving solution helps you to manage data growth, improve performance, reduce costs, ensure compliance to retention regulations, and define and apply data management policies across multiple heterogeneous database types, beyond Oracle.
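
For readers who have not seen the native feature itself, here is a brief sketch of the Oracle 12c row-archival SQL that the capabilities above build on, wrapped in a small Python script using the python-oracledb driver. The table name, predicate and connection details are hypothetical, and a production archiving solution would drive such statements through policies rather than ad-hoc scripts.

```python
# Sketch of the native Oracle 12c In-Database Archiving (row archival) statements
# that a database archiving solution builds on. Table name, predicate and connection
# details are hypothetical; adjust for your environment.
import oracledb  # python-oracledb driver

conn = oracledb.connect(user="app", password="secret", dsn="dbhost/orclpdb1")  # assumed credentials
cur = conn.cursor()

# 1. Enable row archival on a table; Oracle adds the hidden ORA_ARCHIVE_STATE column.
cur.execute("ALTER TABLE orders ROW ARCHIVAL")

# 2. Mark old rows inactive instead of deleting them ('0' = active, non-zero = archived).
cur.execute("""
    UPDATE orders
       SET ora_archive_state = '1'
     WHERE order_date < ADD_MONTHS(SYSDATE, -36)
""")
conn.commit()

# 3. By default, queries only see active rows; widen visibility when needed.
cur.execute("ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL")     # include archived rows
cur.execute("ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ACTIVE")  # back to active rows only
```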

Get Your Data Butt Off The Couch and Move It

Data is everywhere.  It’s in databases and applications spread across your enterprise.  It’s in the hands of your customers and partners.  It’s in cloud applications and cloud servers.  It’s on spreadsheets and documents on your employees’ laptops and tablets.  It’s in smartphones, sensors and GPS devices.  It’s in the blogosphere, the twittersphere and your friends’ Facebook timelines.
