Category Archives: Data Migration

What’s Driving Core Banking Modernization?

When’s the last time you visited your local bank branch and spoke to a human being? How about talking to your banker over the phone? Can’t remember? Well, you’re not alone, and don’t worry, it’s not a bad thing. The days of operating physical branches staffed with expensive workers to greet and serve customers are giving way to more modern, customer-friendly mobile banking applications that let consumers deposit checks from their phones, apply for a mortgage and sign closing documents electronically, and even skip the trip to the ATM by paying with mobile payment solutions like Apple Pay. In fact, a new report titled ‘Bricks + Clicks: Building the Digital Branch,’ from Jeanne Capachin and Jim Marous, takes an in-depth look at how banks and credit unions are changing their branch and customer channel strategies to meet the demands of today’s digital banking customer.

Why am I talking about this? These market trends are dominating the CEO and CIO agenda in today’s banking industry. I just returned from the 2015 IDC Asian Financial Congress event in Singapore, where the digital journey for the next-generation bank was a major agenda item. According to IDC Financial Insights, global banks will invest $31.5B USD in core banking modernization to enable these services, improve operational efficiency, and position themselves to better compete on technology and convenience across markets. Core banking modernization initiatives are complex, costly, and fraught with risk. Let’s take a closer look. (more…)

Posted in Application Retirement, Architects, Banking & Capital Markets, Data Migration, Data Privacy, Data Quality, Vertical

Informatica joins new ServiceMax Marketplace – offers rapid, cost-effective integration with ERP and Cloud apps for Field Service Automation

Informatica Partners with ServiceMax

To deliver flawless field service, companies often require integration across multiple applications for various work processes.  A good example is automatically ordering and shipping parts through an ERP system so they arrive ahead of a timely field service visit.  Informatica has partnered with ServiceMax, the leading field service automation solution, and has joined the new ServiceMax Marketplace to offer customers integration solutions for many ERP and Cloud applications frequently involved in ServiceMax deployments.  Composed of Cloud Integration Templates built on Informatica Cloud for common customer integration “patterns”, these solutions shorten the ServiceMax implementation cycle, contain its cost, and help customers realize the full potential of their field service initiatives.

Existing members of the ServiceMax Community can see a demo or take advantage of a free 30-day trial that provides the full capabilities of Informatica Cloud Integration for ServiceMax, with prebuilt connectors to hundreds of third-party systems including SAP, Oracle, Salesforce, NetSuite and Workday, powered by the Informatica Vibe virtual data machine for near-universal access to cloud and on-premise data.  The Informatica Cloud Integration for ServiceMax solution:

  • Accelerates ERP integration by as much as 85% through prebuilt Cloud templates focused on key work processes and the objects common to both systems
  • Synchronizes key master data such as Customer Master, Material Master, Sales Orders, Plant information, Stock history and others (see the sketch after this list)
  • Enables simplified implementation and customization through easy-to-use interfaces
  • Eliminates the need for IT intervention during configuration and deployment of ServiceMax integrations
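
To make the idea of an integration “pattern” more concrete, here is a minimal sketch of what a customer-master field mapping might look like conceptually. It is written in plain Python purely for illustration: the field names, systems and normalization rule are assumptions, not the actual ServiceMax or Informatica template definitions.

```python
# Hypothetical sketch of a customer-master sync "pattern": field names and
# mapping rules are illustrative assumptions, not the actual template content.

ERP_TO_FIELD_SERVICE_MAP = {
    "KUNNR": "account_external_id",   # ERP customer number -> account key
    "NAME1": "account_name",
    "STRAS": "shipping_street",
    "ORT01": "shipping_city",
    "PSTLZ": "shipping_postal_code",
}

def map_customer_master(erp_record: dict) -> dict:
    """Translate one ERP customer-master row into a field-service account payload."""
    payload = {target: erp_record.get(source, "").strip()
               for source, target in ERP_TO_FIELD_SERVICE_MAP.items()}
    # Normalize the postal code so downstream matching is consistent.
    payload["shipping_postal_code"] = payload["shipping_postal_code"].replace(" ", "")
    return payload

if __name__ == "__main__":
    sample = {"KUNNR": "0000012345", "NAME1": "Acme Corp ", "STRAS": "1 Main St",
              "ORT01": "Springfield", "PSTLZ": "94 105"}
    print(map_customer_master(sample))
```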

We look forward to working with ServiceMax through the ServiceMax Marketplace to help joint customers deliver Flawless Service!

Posted in 5 Sales Plays, Business Impact / Benefits, Cloud, Cloud Application Integration, Cloud Data Integration, Cloud Data Management, Data Integration, Data Integration Platform, Data Migration, Data Synchronization, Operational Efficiency, Professional Services, SaaS

How to Ace Application Migration & Consolidation (Hint: Data Management)

Myth Vs Reality: Application Migration & Consolidation (No, it’s not about dating)

Will your application consolidation or migration go live on time and on budget?  According to Gartner, “through 2019, more than 50% of data migration projects will exceed budget and/or result in some form of business disruption due to flawed execution.”[1]  That is a scary number by any measure. A colleague of mine put it well: ‘I wouldn’t get on a plane that had a 50% chance of failure.’ So should you be losing sleep over your migration or consolidation project? Well, that depends.  Are you the former CIO of Levi Strauss who, according to Harvard Business Review, was forced to resign after a botched SAP migration project and a $192.5 million earnings write-off?[2]  If so, perhaps you would feel a bit apprehensive. Otherwise, I say you can be cautiously optimistic, provided you go into it with a healthy dose of reality. Make sure you have a good understanding of the potential pitfalls and how to address them.  You need an appreciation for the myths and realities of application consolidation and migration.

First off, let me get one thing off my chest.  If you don’t pay close attention to your data throughout the application consolidation or migration process, you are almost guaranteed delays and budget overruns. Data consolidation and migration is at least 30%-40% of the application go-live effort. We have learned this by helping customers deliver over 1,500 projects of this type.  What’s worse, if you are not super meticulous about your data, you can be sure you will encounter unhappy business stakeholders at the end of this treacherous journey. The users of your new application expect all their business-critical data to be there at the end of the road. All the bells and whistles in your new application will count for naught if the data falls apart.  Imagine, if you will, students’ transcripts gone missing, or your frequent-flyer balance 100,000 miles short!  Need I say more?  Now, you may already be guessing where I am going with this.  That’s right, we are talking about the myths and realities related to your data!  Let’s explore a few of these.

Myth #1: All my data is there.

Reality #1: It may be there… But can you get it? If you want to find, access and move all the data out of your legacy systems, you must have a good set of connectivity tools to easily and automatically find, access and extract the data from your source systems. You don’t want to hand-code this for each source.  Ouch!

Myth #2: I can just move my data from point A to point B.

Reality #2: You can try that approach if you want.  However, you might not be happy with the results.  The reality is that there can be significant gaps and format mismatches between the data in your legacy system and the data required by your new application. Additionally, you will likely need to assemble data from disparate systems. You need sophisticated tools to profile, assemble and transform your legacy data so that it is purpose-fit for your new application.
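
As a rough illustration of that profiling and transformation step, the sketch below profiles a legacy extract and reshapes a few fields toward a target schema using pandas. The file name, column names and target fields are assumptions for illustration only; a real project would rely on purpose-built data integration tooling rather than a hand-rolled script.

```python
# A minimal profiling-and-mapping sketch using pandas, assuming a legacy
# customer extract in CSV form; column names and target schema are
# illustrative, not taken from any specific application.
import pandas as pd

legacy = pd.read_csv("legacy_customers.csv", dtype=str)

# Profile: how complete is each column, and how many distinct values does it hold?
profile = pd.DataFrame({
    "non_null_pct": legacy.notna().mean().round(3) * 100,
    "distinct_values": legacy.nunique(),
})
print(profile)

# Transform toward the target application's schema (hypothetical field names).
target = pd.DataFrame({
    "customer_id": legacy["CUST_NO"].str.lstrip("0"),
    "full_name": legacy["FIRST_NM"].str.title() + " " + legacy["LAST_NM"].str.title(),
    "country_code": legacy["COUNTRY"].str.upper().str[:2],
})
target.to_csv("staged_customers.csv", index=False)
```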

Myth #3: All my data is clean.

Reality #3:  It’s not. And here is a tip: you had better profile, scrub and cleanse your data before you migrate it. You don’t want to put a shiny new application on top of questionable data. In other words, let’s get a fresh start on the data in your new application!

Myth #4: All my data will move over as expected.

Reality #4: It will not.  Any time you move and transform large sets of data, there is room for logical or operational errors and surprises.  The best way to avoid this is to automatically validate that your data has moved over as intended.
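
Here is one minimal way to think about that validation step: reconcile row counts and per-column fingerprints between the staged source data and what actually landed in the new application. The file names and columns are assumptions carried over from the earlier sketch; dedicated data validation tooling would go well beyond this.

```python
# A simple reconciliation sketch (assumed file and column names): compare row
# counts and per-column checksums between the staged extract and the migrated
# data to flag anything that did not move over as intended.
import hashlib
import pandas as pd

def column_fingerprint(series: pd.Series) -> str:
    """Order-independent fingerprint of a column's values."""
    joined = "\n".join(sorted(series.fillna("").astype(str)))
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()

source = pd.read_csv("staged_customers.csv", dtype=str)
target = pd.read_csv("migrated_customers.csv", dtype=str)  # exported from the new app

assert len(source) == len(target), f"Row count mismatch: {len(source)} vs {len(target)}"

for column in source.columns:
    if column_fingerprint(source[column]) != column_fingerprint(target[column]):
        print(f"Column '{column}' differs between source and target -- investigate")
```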

Myth #5: It’s a one-time effort.

Reality #5: ‘Load and explode’ is a formula for disaster.  Our proven methodology recommends that you first prototype your migration path and identify a small subset of the data to move over. Then test it, tweak your model, try it again and gradually expand.  More importantly, your application architecture should not be a one-time effort.  It is a work in progress and really an ongoing journey.  Regardless of where you are on this journey, we recommend paying close attention to managing your application’s data foundation.

As you can see, there is a multitude of data issues that can plague an application consolidation or migration project and lead to its doom.  These potential challenges are not always recognized and understood early on.  This perception gap is a root cause of project failure. This is why we are excited to host Philip Russom of TDWI in our upcoming webinar to discuss data management best practices and methodologies for application consolidation and migration. If you are undertaking any IT modernization or rationalization project, such as consolidating applications or migrating legacy applications to the cloud or to an ‘on-prem’ application such as SAP, this webinar is a must-see.

So what’s your reality going to be like?  Will your project run like a dream or will it escalate into a scary nightmare? Here’s hoping for the former.  And also hoping you can join us for this upcoming webinar to learn more:

Webinar with TDWI:
Successful Application Consolidation & Migration: Data Management Best Practices.

Date: Tuesday, March 10, 10 am PT / 1 pm ET

Don’t miss out: register today!

[1] Gartner report, “Best Practices Mitigate Data Migration Risks and Challenges,” December 9, 2014.

[2] Harvard Business Review, “Why Your IT Project May Be Riskier Than You Think.”

Posted in Data Integration, Data Migration, Data Quality, Enterprise Data Management

Software Modernization Strategies

A Level-Up – Software Modernization

Every year, I get a replacement desk calendar to help keep all of our activities straight – and for a family of four, that is no easy task. I start by taking all of the little appointment cards the dentist, orthodontist, pediatrician and GP give us for appointments that fall beyond the current calendar dates, and I transcribe them all. Then I go through last year’s calendar to transfer any information that is relevant to this year’s calendar. And finally, I put the calendar down in the basement next to the previous years’ calendars so I can refer back to them if I need to. Last year’s calendar contains a lot of useful information, but it can no longer help me organize this year’s schedules.

In a very loose way, this is very similar to application retirement. Many larger health plans have existing systems that were created several years (sometimes even several decades) ago. These legacy systems have been customized to reflect the health plan’s very specific business processes. They may be hosted on costly hardware, developed in antiquated software languages and reliant on a few developers who are very close to retirement. The cost of supporting these (most likely) antiquated systems can divert valuable dollars away from innovation.

The process that I use to move appointment and contact data from one calendar to the next works for me, but it is relatively small in scale. Imagine if I were trying to do this for an entire organization without losing context, detail or accuracy!

There are several methodologies for determining the best software modernization strategy for your organization, including:

  • Architecture Driven Modernization (ADM) is the initiative to standardize views of the existing systems in order to enable common modernization activities like code analysis and comprehension, and software transformation.
  • SABA (Bennett et al., 1999) is a high-level framework for planning the evolution and migration of legacy systems, taking into account both organizational and technical issues.
  • SRRT (Economic Model for Software Rewriting and Replacement Times), from Chan et al. (1996), is a formal model for determining optimal software rewrite and replacement timings based on versatile metrics data.
  • And if all else fails: Model Driven Engineering (MDE) is being investigated as an approach for reverse engineering and then forward engineering software code.

My calendar migration process evolved over time; your method for software modernization, however, should be well planned prior to the go-live date for the new software system.

Posted in Application ILM, Application Retirement, Business Impact / Benefits, Data Archiving, Data Migration, data replication, Database Archiving, Healthcare, Operational Efficiency

Take These Steps to Avoid Wasting Your Marketing Technology Budget

Don’t Waste Your Marketing Tech Budget

This year, the irresistible pull of digital marketing met an unstoppable force: Girl Scout cookies. It’s an $800 million-a-year fundraiser that is only expected to grow with the newly announced addition of digital sales.

The New York Times reports that beginning this month and into January, for the first time, the Girl Scouts of America will be able to sell Thin Mints and other favorites online through invite-only websites. The websites will be accompanied by a mobile app, giving customers new digital options.

As the Girl Scouts expand from a door-to-door approach to include a newly introduced digital program, it’s just one more sign of where marketing trends are heading.

From digital cookies to digital marketing technology:

If 2014 is the year of the digital cookie, then 2015 will be the year of marketing technology. Here are just a few of the strongest indicators:

  • A study found that 67% of marketing departments plan to increase spending on technology over the next two years, according to the Harvard Business Review.
  • Gartner predicts that by 2017, CMOs will outspend CIOs on IT-related expenses.
  • Also by 2017, one-third of the total marketing budget will be dedicated to digital marketing, according to survey results from Teradata.
  • A new LinkedIn/Salesforce survey found that 56% of marketers see their relationships with the CIO as very important or critical.
  • Social media is a mainstream channel for marketers, making technology for measuring and managing this channel of paramount importance. This is not just true of B2C companies: 75% of high-level executive B2B buyers used social media to make purchasing decisions, according to a 2014 survey by market research firm IDC.

From social to analytics to email marketing, much of what marketers see in technology offerings is often labeled as “cloud-based.” While cloud technology has many features and benefits, what are we really saying when we talk about the cloud?

What the cloud means… to marketers.

Beginning around 2012, multitudes of businesses in many industries began attaching “the cloud” to their products or services as a feature or a benefit. Whether or not the business truly was cloud-based was not always clear, which led to the term “cloudwashing.” We hear so much about the cloud that it is easy to overlook what it really means and what the benefits really are.

The cloud is more than a buzzword – and in particular, marketers need to know what it truly means to them.

For marketers, “the cloud” has many benefits. A service that is cloud-based gives you amazing flexibility and choices over the way you use a product or service:

  • A cloud-enabled product or service can be integrated into your existing systems. For marketers, this can range from integration into websites, marketing automation systems, CRMs, point-of-sale platforms, and any other business application.
  • You don’t have to learn a new system, the way you might when adopting a new application, software package, or other enterprise system. You won’t have to set aside a lot of time and effort for new training for you or your staff.
  • Due to the flexibility that lets you integrate anywhere, you can deploy a cloud-based product or service across all of your organization’s applications or processes, increasing efficiencies and ensuring that all of your employees have access to the same technology tools at the same time.
  • There’s no need to worry about ongoing system updates, as those happen automatically behind the scenes.

In 2015, marketers should embrace the convenience of cloud-based services, as they help put the focus on benefits instead of spending time managing the technology.

Are you using data quality in the cloud?

If you are planning to move data out of an on-premise application or software to a cloud-based service, you can take advantage of this ideal time to ensure these data quality best practices are in place.

Verify and cleanse your data first, before it is moved to the cloud. Since it’s likely that your move to the cloud will make this data available across your organization — within marketing, sales, customer service, and other departments — applying data quality best practices first will increase operational efficiency and bring down costs from invalid or unusable data.

There may be more to add to this list, depending on the nature of your own business. Make sure that:

  • Postal addresses are valid, accurate, current and complete
  • Email addresses are valid
  • Telephone numbers are valid, accurate, and current
  • All data fields are consistent and every individual data element is clearly defined, to increase the effectiveness of future data analysis
  • Missing data is filled in
  • Duplicate contact and customer records are removed

Once you have cleansed and verified your existing data and moved it to the cloud, use a real-time verification and cleansing solution at the point of entry or point of collection to ensure good data quality across your organization on an ongoing basis.
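
As a small illustration of what point-of-entry checks involve, the sketch below validates an inbound contact record for email syntax, phone length and duplicates. The field names and rules are simplified assumptions; a commercial verification service would also confirm deliverability and accuracy, not just syntax.

```python
# A minimal point-of-entry validation sketch, assuming a simple contact form
# payload; the checks shown (email syntax, phone normalization, duplicate key)
# are illustrative stand-ins for a full verification service.
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def normalize_phone(raw: str) -> str:
    """Keep digits only; a real service would also verify the number is live."""
    return re.sub(r"\D", "", raw)

def validate_contact(record: dict, seen_keys: set) -> list:
    """Return a list of data quality issues for one inbound contact record."""
    issues = []
    if not EMAIL_RE.match(record.get("email", "")):
        issues.append("invalid email syntax")
    phone = normalize_phone(record.get("phone", ""))
    if len(phone) < 10:
        issues.append("phone number too short")
    dedupe_key = (record.get("email", "").lower(), phone)
    if dedupe_key in seen_keys:
        issues.append("possible duplicate record")
    seen_keys.add(dedupe_key)
    return issues

seen = set()
print(validate_contact({"email": "jane@example.com", "phone": "(415) 555-0142"}, seen))
print(validate_contact({"email": "jane@example.com", "phone": "415-555-0142"}, seen))  # flagged as duplicate
```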

The biggest roadblock to effective marketing technology: bad data.

Budgeting for marketing technology is going to become a bigger and bigger piece of the pie (or cookie, if you prefer) for B2C and B2B organizations alike. The first step all marketers need to take to make sure those investments fully pay off and don’t go to waste is great customer data.

Marketing technology is fueled by data. A recent Harvard Business Review article listed some of the most important marketing technologies. They included tools for analytics, conversion, email, search engine marketing, remarketing, mobile, and marketing automation.

What do they all have in common? These tools all drive customer communication, engagement, and relationships, all of which require valid and actionable customer data to work at all.

You can’t plan your marketing strategy off of data that tells you the wrong things about who your customers are, how they prefer to be contacted, and what messages work the best. Make data quality a major part of your 2015 marketing technology planning to get the most from your investment.

Marketing technology is going to be big in 2015 — where do you start?

With all of this in mind, how can marketers prepare for their technology needs in 2015? Get started with this free virtual conference from MarketingProfs that is totally focused on marketing technology.

This great event includes a keynote from Teradata’s CMO, Lisa Arthur, on “Using Data to Build Strong Marketing Strategies.” Register here for the December 12 Marketing Technology Virtual Conference from MarketingProfs.

Even if you can’t make it live that day at the virtual conference, it’s still smart to sign up so you receive on-demand recordings from the sessions when the event ends. Register now!

Posted in Cloud, DaaS, Data Migration, Data Quality, Data Services

Getting Value Out of Data Integration

This post is by Philip Howard, Research Director, Bloor Research.

Live Bloor Webinar, Nov 5

One of the standard metrics used to support buying decisions for enterprise software is total cost of ownership. Typically, the other major metric is functionality. However, functionality is ephemeral. Not only does it evolve with every new release, but while particular features may be relevant to today’s project, there is no guarantee that those same features will be applicable to tomorrow’s needs. A broader metric than functionality is capability: how suitable is this product for a range of different project scenarios, and will it support both simple and complex environments?

Earlier this year Bloor Research published some research into the data integration market, which investigated exactly these issues: how often were tools reused, how many targets and sources were involved, and for what sort of projects were products deemed suitable? We then compared these findings with the total cost of ownership figures that we also captured in our survey. I will be discussing the results of our research live with Kristin Kokie, who is the interim CIO of Informatica, on Guy Fawkes’ Day (November 5th). I don’t promise anything explosive, but it should be interesting and I hope you can join us. The discussion will be vendor-neutral (mostly: I expect that Kristin has a degree of bias).

To register for the webinar, click here.

Posted in Data Integration, Data Integration Platform, Data Migration

Not Just For Play, Western Union Puts Hadoop to Work

Put Hadoop to Work

Everyone’s talking about Hadoop for empowering analysts to quickly experiment, discover, and predict new insights.  But Hadoop isn’t just for play.  Leading enterprises like Western Union are putting Hadoop to work on their most mission-critical data pipelines.  Last week at Strata + Hadoop World, we had a chance to hear how Western Union uses Cloudera’s Hadoop-based enterprise data hub and Informatica to deliver faster, simpler, and cleaner data pipelines.

At Western Union, a multi-billion dollar global financial services and communications company, data is recognized as a core asset.  Like many other financial services firms, Western Union thrives on data, both for harvesting new business opportunities and for managing its internal operations.  And like many other enterprises, Western Union isn’t just ingesting data from relational data sources.  It is mining a number of new information-rich sources like clickstream data and log data.  With Western Union’s scale and speed demands, the data pipeline just has to work so the company can optimize customer experience across multiple channels (retail, online, mobile, and so on) to grow the business.

Let’s level-set on how important scale and speed are to Western Union.  Western Union processes more than 29 financial transactions every second.  Analytical performance simply can’t be the bottleneck for extracting insights from this blazing velocity of data.  So to maximize the performance of their data warehouse appliance, Western Union offloaded data quality and data integration workloads onto a Cloudera Hadoop cluster.  Using the Informatica Big Data Edition, Western Union capitalized on the performance and scalability of Hadoop while unleashing the productivity of their Informatica developers.

Informatica Big Data Edition enables data-driven organizations to profile, parse, transform, and cleanse data on Hadoop with a simple visual development environment, prebuilt transformations, and reusable business rules.  So instead of hand-coding one-off scripts, developers can easily create mappings without worrying about the underlying execution platform.  Raw data can be easily loaded into Hadoop using Informatica Data Replication and Informatica’s suite of PowerExchange connectors.  After the data is prepared, it can be loaded into a data warehouse appliance to support high-performance analysis.  It’s a win-win solution for both data managers and data consumers.  Using Hadoop and Informatica, the right workloads are processed by the right platforms so that the right people get the right data at the right time.
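
For readers who want a feel for the kind of workload being pushed down to the cluster, here is a generic PySpark sketch of a cleanse-and-prepare job. To be clear, this is not Informatica Big Data Edition or its generated mappings; the paths and column names are assumptions used purely to illustrate the offload pattern.

```python
# A generic PySpark sketch of the kind of cleanse-and-transform workload that
# gets offloaded to a Hadoop cluster before loading a warehouse appliance.
# Paths and column names are assumptions for illustration only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clickstream_prep").getOrCreate()

raw = spark.read.json("hdfs:///landing/clickstream/2014-10-*")

cleansed = (
    raw.dropDuplicates(["event_id"])                       # remove replayed events
       .filter(F.col("customer_id").isNotNull())           # drop rows missing the join key
       .withColumn("event_ts", F.to_timestamp("event_ts")) # standardize timestamps
       .withColumn("channel", F.lower(F.trim("channel")))  # normalize channel labels
)

# Write the prepared data where the warehouse load process can pick it up.
cleansed.write.mode("overwrite").parquet("hdfs:///curated/clickstream/")
```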

Using Informatica’s Big Data solutions, Western Union is transforming the economics of data delivery, enabling data consumers to create safer and more personalized experiences for Western Union’s customers.  Learn how the Informatica Big Data Edition can help put Hadoop to work for you.  And download a free trial to get started today!

Posted in Data Migration, Hadoop

The King of Benchmarks Rules the Realm of Averages

A mid-sized insurer recently approached our team for help. They wanted to understand where they fell short in making their case to their executives. Specifically, they had proposed that fixing their customer data was key to supporting the executive team’s highly aggressive 3-year growth plan (a plan calling for 3x today’s revenue).  Given this core organizational mission – aside from being a warm and fuzzy place to work that supports its local community – the slam-dunk argument here seems simple.  Just reducing the data migration effort around the next acquisition, or avoiding the ritual annual one-off data clean-up project, already pays for any tool set enhancing data acquisition, integration and hygiene.  Will it get you to 3x today’s revenue?  It probably won’t.  What will help are the following:

Making the Math Work (courtesy of Scott Adams)

Hard cost avoidance via software maintenance or consulting elimination is the easy part of the exercise. That is why CFOs love it and focus so much on it.  It is easy to grasp and immediate (aka next quarter).

Soft cost reductions, like staff redundancies, are a bit harder.  Despite these being viable, in my experience very few decision makers want to work on a business case to lay off staff.  My team has had one so far. They look at these savings as freed-up capacity, which can be redeployed more productively.  Productivity is also a bit harder to quantify, as you typically have to understand how data travels and gets worked on between departments.

However, revenue effects are even harder, and esoteric to many people, as they include projections.  They are often considered “soft” benefits, although they outweigh the other areas by 2-3 times in terms of impact.  Ultimately, every organization runs its strategy based on projections (see the insurer in my first paragraph).

The hardest thing to quantify is risk. Not only is it based on projections – often from a third party (Moody’s, TransUnion, etc.) – but few people understand it. More often than not, clients won’t even accept you investigating this area if you don’t have an advanced degree in insurance math. Nevertheless, risk can generate extra “soft” cost avoidance (beefing up a reserve account balance creates opportunity cost) but also revenue (realizing a risk premium previously ignored).  Often risk profiles change due to relationships, which can be links to new “horizontal” information (transactional attributes) or “vertical” (hierarchical) information from the parent-child relationships of an entity and the parent’s or children’s transactions.

Given the above, my initial advice to the insurer would be to look at the heartache of their last acquisition, use a benchmark for IT productivity gains from improved data management capabilities (typically 20-26% – Yankee Group), and there you go.  This is just the IT side, so consider increasing the upper range by 1.4x (Harvard Business School), as every attribute change (say, last mobile view date) requires additional meetings at the manager, director and VP level.  These people’s time gets increasingly more expensive.  You could also use Aberdeen’s benchmark of 13 hours per average master data attribute fix instead.

You can also look at productivity areas, which are typically overly measured.  Let’s assume a call center rep spends 20% of the average call time of 12 minutes (depending on the call type – account or bill inquiry, dispute, etc.) understanding:

  • Who the customer is
  • What he bought online and in-store
  • If he tried to resolve his issue on the website or store
  • How he uses equipment
  • What he cares about
  • If he prefers call backs, SMS or email confirmations
  • His response rate to offers
  • His/her value to the company

If he spends that 20% of every call stringing together insights from five applications and twelve screens, instead of seeing the same information in one frame in seconds in every application he touches, you have just freed up 20% of his hourly compensation.

Then look at the software, hardware, maintenance and ongoing management of the likely customer record sources (pick the worst-quality and best-quality ones based on your current understanding), which will end up in a centrally governed instance.  Per DAMA, every duplicate record will cost you between $0.45 (party) and $0.85 (product) per transaction (edit touch).  At the very least each record will be touched once a year (likely 3-5 times), so multiply your duplicate record count by that and you have your savings from de-duplication alone.  You can also use Aberdeen’s benchmark of 71 serious errors per 1,000 records, meaning the chance of transactional failure, and the effort required to fix it (a percentage of one or more FTEs’ daily workday), is high.  If this does not work for you, run a data profile with one of the many tools out there.
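
To make that arithmetic concrete, here is a small worked example. The duplicate count and touch frequency are made-up inputs you would replace with your own profiling results; the per-touch costs are the DAMA benchmark figures quoted above.

```python
# Worked example of the de-duplication savings math above; the duplicate count
# and touches-per-year are hypothetical inputs, the per-touch costs are the
# DAMA benchmark figures quoted in the text.
duplicate_records = 250_000        # assumed: found by profiling your customer sources
cost_per_touch_low = 0.45          # party records ($ per edit touch, per DAMA)
cost_per_touch_high = 0.85         # product records
touches_per_year = 3               # each record touched at least once, likely 3-5 times

savings_low = duplicate_records * cost_per_touch_low * touches_per_year
savings_high = duplicate_records * cost_per_touch_high * touches_per_year
print(f"Annual de-duplication savings: ${savings_low:,.0f} to ${savings_high:,.0f}")
# -> Annual de-duplication savings: $337,500 to $637,500
```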

If the sign says it – do it!

If standardization of records (zip codes, billing codes, currency, etc.) is the problem, ask your business partner how many customer contacts (calls, mailings, emails, orders, invoices or account statements) fail outright and/or require validation because of these attributes.  Once again, if you apply the productivity gains mentioned earlier, there are your savings.  If you look at the number of orders whose payment or revenue recognition gets delayed by a week or a month, and at the average order amount, you can quantify how much profit (multiply by operating margin) you would be able to pull into the current financial year from the next one.

The same is true for speeding up the introduction of a new product, or a change to an existing one, so it generates profits earlier.  Note that the time value of funds realized earlier is too small to matter in most instances, especially in the current interest environment.

If emails bounce back or snail mail gets returned (no such address, no such name at this address, no such domain, no such user at this domain), (e)mail verification tools can help reduce the bounces. If every mail piece (forget email, due to its minuscule cost) costs $1.25 – and this will vary by type of mailing (catalog, promotional post card, statement letter) – incorrect or incomplete records are wasted cost.  If you can, use the fully loaded print cost, including third-party data prep and returns handling.  You will never capture all cost inputs, but take a conservative stab.

If it was an offer, reduced bounces should also improve your response rate (also true for email now). Prospect mail response rates are typically around 1.2% (Direct Marketing Association), whereas phone response rates are around 8.2%.  If you know that your current response rate is half that (for argument’s sake) and you send out 100,000 emails, of which 1.3% (Silverpop) have customer data issues, then fixing 81-93% of them (our experience) will drop the bounce rate to under 0.3%, meaning more emails will arrive and be relevant. This, in turn, multiplied by a standard conversion rate of 3% (MarketingSherpa; industry and channel specific) and the average order value (your data), multiplied by operating margin, gets you a benefit value for revenue.
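
Putting rough numbers on that chain of multiplications, the worked example below uses the benchmark rates quoted in the paragraph plus an assumed average order value and operating margin, which you would replace with your own figures.

```python
# Worked example of the email benefit math above; the average order value and
# operating margin are assumed inputs, the rest are the benchmark figures quoted.
emails_sent = 100_000
bad_data_rate = 0.013           # 1.3% of records have customer data issues (Silverpop)
fix_rate = 0.87                 # assume ~87% of those issues get fixed (81-93% range)
conversion_rate = 0.03          # standard conversion rate (MarketingSherpa)
avg_order_value = 150.00        # assumed: take this from your own order data
operating_margin = 0.10         # assumed

recovered_emails = emails_sent * bad_data_rate * fix_rate
extra_orders = recovered_emails * conversion_rate
revenue_benefit = extra_orders * avg_order_value
profit_benefit = revenue_benefit * operating_margin
print(f"Recovered deliverable emails: {recovered_emails:,.0f}")
print(f"Incremental orders: {extra_orders:,.1f}")
print(f"Revenue benefit: ${revenue_benefit:,.0f}; profit benefit: ${profit_benefit:,.0f}")
# -> ~1,131 recovered emails, ~34 orders, ~$5,090 revenue, ~$509 profit
```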

If product data and inventory carrying cost or supplier spend are your issue, find out how many supplier shipments you receive every month and the average cost of a part (or cost range), then apply the Aberdeen master data failure rate (71 in 1,000) to use cases around missing or incorrect supersession or alternate part data, to assess the value of a single shipment’s overspend.  You can also just use the ending inventory amount from the 10-K report and apply a 3-10% improvement (Aberdeen) in a top-down approach. Alternatively, apply 3.2-4.9% to your annual supplier spend (KPMG).

You could also investigate the expediting or return cost of shipments in a period due to incorrectly aggregated customer forecasts, wrong or incomplete product information or wrong shipment instructions in a product or location profile. Apply Aberdeen’s 5% improvement rate and there you go.

Consider that a North American utility told us that just fixing their 200 Tier 1 suppliers’ product information achieved an increase in discounts from $14 to $120 million. They also found that fixing one basic attribute out of sixty in one part category saves them over $200,000 annually.

So what ROI percentages would you find tolerable or justifiable for, say, an EDW project, a CRM project, or a new claims system? What annual savings or new revenue would you be comfortable with?  What is the craziest improvement you have seen come to fruition that nobody expected?

Next time, I will add some more “use cases” to the list and look at some philosophical implications of averages.

Posted in Business Impact / Benefits, Business/IT Collaboration, Data Integration, Data Migration, Data Quality, Enterprise Data Management, Master Data Management

Mainframe Connectivity? Are you kidding me?

I know I normally don’t write about topics like this, but this topic just kept coming up for some reason in the past few customer visits I had, so I thought I would share what I recently learned about mainframe connectivity with you.

Ah yes, the Old Mainframe. It just won’t go away. Which means there is still valuable data sitting in it. And that leads to a question I have been asked repeatedly in the past few weeks: why should an organization use a tool like Informatica PowerExchange to extract data from a mainframe when you can also do it with a script that extracts the data as a flat file?

So below, thanks to Phil Line, Informatica’s Product Manager for Mainframe Connectivity, are the top ten reasons to use PowerExchange over hand-coding a flat-file extraction.

1) Data will be “fresh” as of the time the data is needed – not already old based on when the extraction was run.

2) Any data extracted directly from files will be exactly as the file held it; any additional processes needed to extract or transfer data to LUW (Linux, UNIX and Windows) could potentially alter the original formats.

3) The consuming application can get the data when it needs it; there wouldn’t be any scheduling issues between creating the extract file and then being able to use it.

4) There is less work to do if PowerExchange reads the data directly from the mainframe; data type processing as well as potential code page issues are all handled by PowerExchange.

5) Unlike files created with FTP-type processes, where problems can cut short the expected data transfer, PowerExchange and PowerCenter provide log messages to ensure that all data has been processed.

6) The consumer can select only the data that the consuming application needs; filtering reduces the amount of data being transferred and limits potential security exposure.

7) Any access to mainframe-based data can be secured according to the security tools in place on the mainframe; PowerExchange is fully compliant with the RACF, ACF2 and Top Secret security products.

8) Using Informatica’s PowerExchange along with Informatica consuming tools (PowerCenter, Mercury, etc.) provides a much simpler and cleaner architecture. The simpler the architecture, the easier it is to find problems and to audit the processes that touch the data.

9) PowerExchange generally helps avoid the usual bottlenecks associated with getting data off the mainframe: programmers are not needed to create the extract processes, new schedules don’t need to be created to ensure that the extracts run, and when changes are necessary they can be controlled by the business group consuming the data.

10) It helps rein in mainframe data extraction processes that are still being run even though no one uses the generated data, because the original system that requested the data has become obsolete.

Posted in Data Migration, Data Synchronization, Data Transformation, Database Archiving

The Swiss Army Knife of Data Integration

Back in 1884, a man had a revolutionary idea; he envisioned a compact knife that was lightweight and would combine the functions of many stand-alone tools into a single tool. This idea became what the world has known for over a century as the Swiss Army Knife.

This creative thinking to solve a problem came from the Swiss Army’s request to build a soldier knife.  In the end, the solution was all about getting the right tool for the right job in the right place. In many cases soldiers didn’t need industrial-strength tools; all they really needed was a compact and lightweight tool to get the job at hand done quickly.

Putting this into perspective with today’s world of Data Integration, using enterprise-class data integration tools for the smaller data integration project is overkill and typically out of reach for the smaller organization. However, these smaller data integration projects are just as important as those larger enterprise projects, and they are often the innovation behind a new way of business thinking. The traditional hand-coding approach to addressing the smaller data integration project is not scalable, not repeatable and prone to human error; what’s needed is a compact, flexible and powerful off-the-shelf tool.

Thankfully, over a century after the world embraced the Swiss Army Knife, someone at Informatica was paying attention to revolutionary ideas. If you’ve not yet heard the news about the Informatica platform, a version called PowerCenter Express has been released, and it is free of charge. You can use it to handle an assortment of what I’d characterize as high-complexity / low-volume data integration challenges and experience a subset of the Informatica platform for yourself. I’d emphasize that PowerCenter Express doesn’t replace the need for Informatica’s enterprise-grade products, but it is ideal for rapid prototyping, profiling data, and developing quick proofs of concept.

PowerCenter Express provides a glimpse of the evolving Informatica platform by integrating four Informatica products into a single, compact tool. There are no database dependencies and the product installs in just under 10 minutes. Much to my own surprise, I use PowerCenter Express quite often in the various aspects of my job with Informatica. I have it installed on my laptop so it travels with me wherever I go. It starts up quickly, so it’s ideal for getting a little work done on an airplane.

For example, recently I wanted to explore building some rules for an upcoming proof of concept on a plane ride home so I could claw back some personal time for my weekend. I used PowerCenter Express to profile some data and create a mapping.  And this mapping wasn’t something I needed to throw away and recreate in an enterprise version after my flight landed. Vibe, Informatica’s build-once/run-anywhere, metadata-driven architecture, allows me to export a mapping I create in PowerCenter Express to one of the enterprise versions of Informatica’s products such as PowerCenter, Data Quality or Informatica Cloud.

As I alluded to earlier in this article, because it is a free offering I honestly didn’t expect too much from PowerCenter Express when I first started exploring it. However, based on my own positive experiences, I now like to think of PowerCenter Express as the Swiss Army Knife of Data Integration.

To start claiming back some of your personal time, get started with the free version of PowerCenter Express, found on the Informatica Marketplace at:  https://community.informatica.com/solutions/pcexpress

Business Use Case for PowerCenter Express

Posted in Architects, Data Integration, Data Migration, Data Transformation, Data Warehousing, PowerCenter, Vibe