Category Archives: Data Transformation

The Case for Smart Data: When Big Data Isn’t Enough

Every two years, the typical company doubles the amount of data it stores. However, this data is inherently “dumb.” Acquiring more of it only compounds its lack of intellect.

When revitalizing your business, I won’t ask to look at your data – not even a little bit. Instead, I look at how you use the data. What I want to know is this:

How much of your day-to-day operations are driven by your data?

The Case for Smart Data

I recently learned that 7-Eleven Japan has pushed decision-making down to the store level – in fact, to the level of clerks. Store clerks decide what goes on the shelves in their individual 7-Eleven stores. These clerks push incredible inventory turns. Some 70% of the products on the shelves are new to stores each year. As a result, this chain has been the most profitable Japanese retailer for 30 years running.


Instead of just reading the data and making wild guesses about why something works and something else doesn’t, these clerks acquired the skill of looking at the quantitative and the qualitative and connecting the dots. Data told them what people were talking about, how it related to their products and how much weight it carried. You can achieve this as well. To do so, you must introduce a culture that emphasizes discipline around processes. A disciplined process culture uses:

  1. A template approach to data with common processes, reuse of components, and a single face presented to customers
  2. Employees who consistently follow standard procedures

If you cannot develop such company-wide consistency, you will not gain the benefits of ERP or CRM systems.

Make data available to the masses. Like at 7-Eleven Japan, don’t centralize the data decision-making process. Instead, push it out to the ranks. By putting these cultures and practices into play, businesses can use data to run smarter.


In a Data First World, Knowledge Really Is Power!


I have two quick questions for you. First, can you name the top three factors that will increase your sales or boost your profit? And second, are you sure about that?

That second question is a killer because most people — no matter if they’re in marketing, sales or manufacturing — rely on incomplete, inaccurate or just plain wrong information. Regardless of industry, we’ve been fixated on historic transactions because that’s what our systems are designed to provide us.

“Moneyball: The Art of Winning an Unfair Game” gives a great example of what I mean. The book (not the movie) describes Billy Beane hiring MBAs to map out the factors that would win a baseball game. They discovered something completely unexpected: That getting more batters on base would tire out pitchers. It didn’t matter if batters had multi-base hits, and it didn’t even matter if they walked. What mattered was forcing pitchers to throw ball after ball as they faced an unrelenting string of batters. Beane stopped looking at RBIs, ERAs and even home runs, and started hiring batters who consistently reached first base. To me, the book illustrates that the most useful knowledge isn’t always what we’ve been programmed to depend on or what is delivered to us via one app or another.

For years, people across industries have turned to ERP, CRM and web analytics systems to forecast sales and acquire new customers. By their nature, such systems are transactional, forcing us to rely on history as the best predictor of the future. Sure it might be helpful for retailers to identify last year’s biggest customers, but that doesn’t tell them whose blogs, posts or Tweets influenced additional sales. Isn’t it time for all businesses, regardless of industry, to adopt a different point of view — one that we at Informatica call “Data-First”? Instead of relying solely on transactions, a data-first POV shines a light on interactions. It’s like having a high knowledge IQ about relationships and connections that matter.

A data-first POV changes everything. With it, companies can unleash the killer app, the killer sales organization and the killer marketing campaign. Imagine, for example, if a salesperson meeting a new customer knew that person’s concerns, interests and business connections ahead of time. Couldn’t that knowledge — gleaned from Tweets, blogs, LinkedIn connections, online posts and transactional data — provide a window into the problems the prospect wants to solve?

That’s the premise of two startups I know about, and it illustrates how a data-first POV can fuel innovation for developers and their customers. Today, we’re awash in data-fueled things that are somehow attached to the Internet. Our cars, phones, thermostats and even our wristbands are generating and gleaning data in new and exciting ways. That’s knowledge begging to be put to good use. The winners will be the ones who figure out that knowledge truly is power, and wield that power to their advantage.


Best Practices for Using PowerExchange CDC for Oracle

This post was written by guest author Justin Passofaro, Principal Data Management Consultant at SSG, a consulting practice focused on innovative ways to leverage data for better business decisions.

Configuring your Oracle environment for PowerExchange CDC can be challenging, but there are some best practices you can follow that will greatly simplify the process. There are two major factors to consider: the latency requirements for your data and the ability to restart your environment.

Data Latency Requirements

The first factor that will affect the latency of your data is the location of your PowerExchange CDC installation. From a best practice perspective, it is optimal to install the PowerExchange Listener on the source database server, as this eliminates the need to pass data across the network and provides the smallest amount of latency from source to target.

The volume of data that PowerExchange CDC has to process can also have a significant impact on performance. Several items besides the changed data itself can affect performance, including, but not limited to, Oracle catalog dumps, Oracle workload monitor customizations and other non-Oracle tools that use the redo logs. You should review all the processes that access the Oracle redo logs and make an effort to minimize them in terms of both volume and frequency. For example, you could monitor the redo log switches and the creation of archived log files to see how busy the source database is. Knowing the size of your production archive logs and how often they are created provides the information necessary to properly configure PowerExchange CDC.
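
To make that monitoring concrete – this is my own sketch, not something from the post or from SSG – the following Python snippet summarizes redo log switches per hour, assuming the cx_Oracle driver and a read-only account that can query V$LOG_HISTORY:

    # Minimal sketch (assumes cx_Oracle is installed and the account can read V$LOG_HISTORY).
    # Counts redo log switches per hour over the last week to gauge how busy the
    # source database is before sizing PowerExchange CDC.
    import cx_Oracle

    SWITCHES_SQL = """
        SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
               COUNT(*)                               AS log_switches
        FROM   v$log_history
        WHERE  first_time > SYSDATE - 7
        GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
        ORDER  BY 1
    """

    def redo_switches_per_hour(dsn, user, password):
        """Return (hour, switch_count) rows for the last seven days."""
        with cx_Oracle.connect(user=user, password=password, dsn=dsn) as conn:
            cur = conn.cursor()
            cur.execute(SWITCHES_SQL)
            return cur.fetchall()

    if __name__ == "__main__":
        # Connection details are placeholders; substitute your own.
        for hour, switches in redo_switches_per_hour("dbhost/ORCLPDB1", "pwx_monitor", "secret"):
            print(f"{hour}:00  {switches} log switches")

A similar query against V$ARCHIVED_LOG (for example, on COMPLETION_TIME and BLOCKS * BLOCK_SIZE) shows how large the archive logs are and how often they are created.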

Environment Restart Ability

When certain changes are made to the source database environment, the PowerExchange CDC process will need to be stopped and restarted. The amount of time this restart takes should be considered whenever this needs to occur. PowerExchange CDC must be restarted when any of the following changes occur:

- A change is made to the schema or to a table that is part of the CDC process
- An existing Capture Registration is changed
- A change is made to the PowerExchange configuration files
- An Oracle patch is applied
- An operating system patch or upgrade is applied
- A PowerExchange version upgrade or service pack is applied

If you are using CDC with LogMiner, a copy of the Oracle catalog must be written to the archive logs for the process to function properly. The frequency of these copies is site-specific and will affect the amount of time it takes to restart the CDC process.
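
For reference, the catalog copy itself is standard Oracle LogMiner mechanics: DBMS_LOGMNR_D.BUILD with the STORE_IN_REDO_LOGS option writes the LogMiner dictionary into the redo stream. A hedged sketch of scripting that step – again assuming cx_Oracle and a suitably privileged account, and not a substitute for the PowerExchange documentation – might look like this:

    # Sketch only: takes a LogMiner dictionary (catalog) copy into the redo stream.
    # Check the PowerExchange documentation for the exact procedure and privileges
    # your version expects before automating this.
    import cx_Oracle

    def copy_catalog_to_redo(dsn, user, password):
        with cx_Oracle.connect(user=user, password=password, dsn=dsn) as conn:
            cur = conn.cursor()
            cur.execute(
                "BEGIN DBMS_LOGMNR_D.BUILD("
                "OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS); END;"
            )

    copy_catalog_to_redo("dbhost/ORCLPDB1", "pwx_admin", "secret")  # placeholder credentials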

Once your PowerExchange CDC process is in production, any change to the environment must go through extensive impact analysis to ensure that the integrity of the data and transactions remains intact upon restart. It is also critically important to understand which configurable parameters in the PowerExchange configuration files can help restart performance.

Even with the challenges of configuring PowerExchange CDC for Oracle, there are trusted and proven methods that can significantly improve your ability to complete the process and gain real-time or near-real-time access to your data. At SSG, we’re committed to using best-practice methodology in our PowerExchange Baseline Deployments. In addition, we provide in-depth knowledge transfer to give end users a solid foundation for optimizing PowerExchange functionality.

Visit the Informatica Marketplace to learn more about SSG’s Baseline Deployment offerings.


Mainframe Connectivity? Are you kidding me?


I know I normally don’t write about topics like this, but it kept coming up in my past few customer visits, so I thought I would share what I recently learned about mainframe connectivity.

Ah yes, the Old Mainframe. It just won’t go away – which means there is still valuable data sitting in it. And that leads to a question I have been asked repeatedly in the past few weeks: why should an organization use a tool like Informatica PowerExchange to extract data from a mainframe when a script could simply extract the data as a flat file?

So below, thanks to Phil Line, Informatica’s Product Manager for Mainframe connectivity, are the top ten reasons to use PowerExchange over hand-coding a flat-file extraction.

1) Data will be “fresh” as of the time the data is needed – not already old based on when the extraction was run.

2) Data read directly from the source files arrives as the files hold it; any additional processes needed to extract and transfer data to LUW could potentially alter the original formats.

3) The consuming application can get the data when it needs it; there wouldn’t be any scheduling issues between creating the extract file and then being able to use it.

4) There is less work to do when PowerExchange reads the data directly from the mainframe; data type processing and potential code page issues are all handled by PowerExchange.

5) Unlike files created with FTP-type processes, where problems can cut short the expected data transfer, PowerExchange and PowerCenter provide log messages to ensure that all data has been processed.

6) The consumer can select only the data needed by the consuming application; filtering reduces the amount of data being transferred and limits potential security exposure.

7) Access to mainframe-based data can be secured with the security tools already in place on the mainframe; PowerExchange is fully compliant with the RACF, ACF2 and Top Secret security products.

8) Using Informatica’s PowerExchange along with Informatica consuming tools (PowerCenter, Mercury, etc.) provides a much simpler and cleaner architecture. The simpler the architecture, the easier it is to find problems and to audit the processes that touch the data.

9) PowerExchange generally helps avoid the usual bottlenecks associated with getting data off the mainframe: programmers are not needed to create extract processes, no new schedules have to be created to ensure the extracts run, and when changes are necessary they can be controlled by the business group consuming the data.

10) It helps identify and retire mainframe data extraction processes that are still being run even though no one uses the generated data anymore because the original system that requested it has become obsolete.


The Swiss Army Knife of Data Integration


Back in 1884, a man had a revolutionary idea; he envisioned a compact knife that was lightweight and would combine the functions of many stand-alone tools into a single tool. This idea became what the world has known for over a century as the Swiss Army Knife.

This creative problem-solving came from a request by the Swiss Army to build a soldier’s knife. In the end, the solution was all about getting the right tool for the right job in the right place. In many cases soldiers didn’t need industrial-strength tools; all they really needed was a compact, lightweight tool to get the job at hand done quickly.

Putting this into perspective with today’s world of data integration, using enterprise-class data integration tools for smaller data integration projects is overkill and typically out of reach for the smaller organization. However, these smaller projects are just as important as the larger enterprise projects, and they are often the innovation behind a new way of business thinking. The traditional hand-coding approach to the smaller data integration project is not scalable, not repeatable and prone to human error; what’s needed is a compact, flexible and powerful off-the-shelf tool.

Thankfully, over a century after the world embraced the Swiss Army Knife, someone at Informatica was paying attention to revolutionary ideas. In case you haven’t yet heard the news about the Informatica platform: a version called PowerCenter Express has been released, free of charge, so you can use it to handle an assortment of what I’d characterize as high-complexity, low-volume data integration challenges and experience a subset of the Informatica platform for yourself. I’d emphasize that PowerCenter Express doesn’t replace the need for Informatica’s enterprise-grade products, but it is ideal for rapid prototyping, profiling data and developing quick proofs of concept.

PowerCenter Express provides a glimpse of the evolving Informatica platform by integrating four Informatica products into a single, compact tool. There are no database dependencies, and the product installs in just under 10 minutes. Much to my own surprise, I use PowerCenter Express quite often in the course of my job with Informatica. I have it installed on my laptop, so it travels with me wherever I go. It starts up quickly, so it’s ideal for getting a little work done on an airplane.

For example, I recently wanted to explore building some rules for an upcoming proof of concept on a plane ride home so I could claw back some personal time for my weekend. I used PowerCenter Express to profile some data and create a mapping. And this mapping wasn’t something I needed to throw away and recreate in an enterprise version after my flight landed. Vibe, Informatica’s build-once/run-anywhere metadata-driven architecture, allows me to export a mapping I create in PowerCenter Express to one of the enterprise versions of Informatica’s products, such as PowerCenter, DataQuality or Informatica Cloud.

As I alluded to earlier in this article, because it is a free offering I honestly didn’t expect too much from PowerCenter Express when I first started exploring it. However, after my own positive experiences, I now like to think of PowerCenter Express as the Swiss Army Knife of Data Integration.

To start claiming back some of your personal time, get started with the free version of PowerCenter Express on the Informatica Marketplace at https://community.informatica.com/solutions/pcexpress



4 Steps to Bring Big Data to the Business


By now, the business benefits of effectively leveraging big data have become well known. Enhanced analytical capabilities, greater understanding of customers, and ability to predict trends before they happen are just some of the advantages. But big data doesn’t just appear and present itself. It needs to be made tangible to the business. All too often, executives are intimidated by the concept of big data, thinking the only way to work with it is to have an advanced degree in statistics.

There are ways to make big data more than an abstract concept that can only be loved by data scientists. Four of these ways were recently covered in a report by David Stodder, director of business intelligence research for TDWI, as part of TDWI’s special report on What Works in Big Data.

Go real-time

The time is ripe for experimentation with real-time, interactive analytics technologies, Stodder says. The next major step in the movement toward big data is enabling real-time or near-real-time delivery of information. Delivering real-time data has been a challenge in BI for years, with limited success, he notes. The good news is that the Hadoop framework, originally built for batch processing, now includes interactive querying and streaming applications, he reports. This opens the way for real-time processing of big data.
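
To make “streaming applications” a little less abstract – this example is mine, not from the TDWI report – here is the canonical Spark Structured Streaming word count in Python, which continuously updates counts as new lines arrive on a socket:

    # Sketch: minimal Spark Structured Streaming job (continuous word count).
    # Assumes a Spark installation and something writing lines to localhost:9999,
    # e.g. `nc -lk 9999`.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import explode, split

    spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

    lines = (spark.readStream.format("socket")
             .option("host", "localhost").option("port", 9999).load())

    words = lines.select(explode(split(lines.value, " ")).alias("word"))
    counts = words.groupBy("word").count()

    query = (counts.writeStream.outputMode("complete")
             .format("console").start())
    query.awaitTermination()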

Design for self-service

Interest in self-service access to analytical data continues to grow. “Increasing users’ self-reliance and reducing their dependence on IT are broadly shared goals,” Stodder says. “Nontechnical users—those not well versed in writing queries or navigating data schemas—are requesting to do more on their own.” There is an impressive array of self-service tools and platforms now appearing on the market. “Many tools automate steps for underlying data access and integration, enabling users to do more source selection and transformation on their own, including for data from Hadoop files,” he says. “In addition, new tools are hitting the market that put greater emphasis on exploratory analytics over traditional BI reporting; these are aimed at the needs of users who want to access raw big data files, perform ad-hoc requests routinely, and invoke transformations after data extraction and loading (that is, ELT) rather than before.”

Encourage visualization

Nothing gets a point across faster than having data points visually displayed – decision-makers can draw inferences within seconds. “Data visualization has been an important component of BI and analytics for a long time, but it takes on added significance in the era of big data,” Stodder says. “As expressions of meaning, visualizations are becoming a critical way for users to collaborate on data; users can share visualizations linked to text annotations as well as other types of content, such as pictures, audio files, and maps to put together comprehensive, shared views.”

Unify views of data

Users are working with many different data types these days, and are looking to bring this information into a single view – “rather than having to move from one interface to another to view data in disparate silos,” says Stodder. Unstructured data – graphics and video files – can also provide a fuller context to reports, he adds.


And Now, Time for Real Disruptive Innovation


Lately, there’s been a raging debate over Clayton Christensen’s definition of “disruptive innovation,” and whether it is the key to ultimate success in markets. Christensen, author of the ground-breaking book The Innovator’s Dilemma, says industry-leading firms tend to pursue high-margin, revenue-producing business, leaving the lower-end, less profitable parts of their markets to new, upstart players. For established leaders, there’s often not enough profit in selling to under-served or unserved markets. What happens, however, is that the upstarts gradually move up the food chain with their new business models, eating into the leaders’ positions and either chasing them upstream or out of business.

The interesting thing is that many of the upstarts do not even intend to take on the market leader in the segment. Christensen cites the classic example of Digital Equipment Corporation in the 1980s, which was unable to make the transition from large, expensive enterprise systems to smaller, PC-based equipment. The PC upstarts in this case did not take on Digital directly – rather they addressed unmet needs in another part of the market.

Christensen wrote and published The Innovator’s Dilemma more than 17 years ago, but his message keeps reverberating across the business world. Recently, Jill Lepore questioned some of the thinking that has evolved around disruptive innovation in a New Yorker article. “Disruptive innovation is a theory about why businesses fail. It’s not more than that. It doesn’t explain change. It’s not a law of nature,” she writes. Christensen responded with a rebuttal to Lepore’s thesis, noting that “disruption doesn’t happen overnight,” and that “[Disruptive innovation] is not a theory about survivability.”

There is something Lepore points out that both she and Christensen can agree on: “disruption” is being oversold and misinterpreted on a wide scale these days. Every new product that rolls out is now branded as “disruptive.” As stated above, the true essence of disruption is creating new markets where the leaders would not tread.

Data itself can potentially be a source of disruption, as data analytics and information emerge as strategic business assets. While the ability to provide data analysis at real-time speeds, or make new insights possible isn’t disruption in the Christensen sense, we are seeing the rise of new business models built around data and information that could bring new leaders to the forefront. Data analytics can either play a role in supporting this movement, or data itself may be the new product or service disrupting existing markets.

We’ve already been seeing this disruption taking place within the publishing industry, for example – companies or sites providing real-time or near real-time services such as financial updates, weather forecasts and classified advertising have displaced traditional newspapers and other media as information sources.

Employing data analytics as a tool for insights never before available within an industry sector may also be part of disruptive innovation. Tesla Motors, for example, is disruptive to the automotive industry because it manufactures entirely electric cars. But the formula for its success is its use of massive amounts of data from the array of in-vehicle devices to assure quality and efficiency.

Likewise, data-driven disruption may be occurring in places where innovation has historically been difficult. For example, it’s long been speculated that some of the digital giants, particularly Google, are poised to enter the long-staid insurance industry. If this were to happen, Google would not enter as a typical insurance company with a new web-based spin. Rather, the company would employ new techniques of data gathering, insight and analysis to offer consumers an entirely new model – one based on data. As Christopher Hernaes recently related in TechCrunch, Google’s ability to collect and mine data on homes, businesses and autos gives it a unique value proposition in the industry’s value chain.

We’re in an era in which Christensen’s model of disruptive innovation has become a way of life. Increasingly, it appears that enterprises that are adept at recognizing and acting upon the strategic potential of data may be joining the ranks of the disruptors.


Garbage In, Treasure Out

Even in “good” data there is a lot of garbage. Take, for example, a person’s name. John could also be spelled Jon or Von (I have a high school sports trophy to prove it). Schmidt could become Schmitt or Smith. In Hungarian, my name is Janos Kovacs. Human beings entering data make errors in spelling, phonetics and keypunching. We also have to deal with variations associated with compound and account names, abbreviations, nicknames, prefix and suffix variations, foreign names, and missing elements. As long as humans are involved in entering data, there will be a significant amount of garbage in any database. So how do we turn this gibberish into gems of information?
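
To make the problem concrete – an illustrative toy, not Informatica’s matching engine – here is a small Python sketch that combines a crude phonetic key with a string-similarity score, enough to show why Schmidt, Schmitt and Smith become candidates for the same identity:

    # Toy sketch of fuzzy name matching: a crude Soundex-style key plus an
    # edit-similarity score. Real identity resolution handles nicknames,
    # prefixes/suffixes, compound names and much more.
    from difflib import SequenceMatcher

    def soundex(name: str) -> str:
        """Crude Soundex-style key: spelling variants often share the same code."""
        name = name.upper()
        groups = {"BFPV": "1", "CGJKQSXZ": "2", "DT": "3", "L": "4", "MN": "5", "R": "6"}

        def code(ch):
            return next((digit for letters, digit in groups.items() if ch in letters), "")

        out, prev = [], code(name[0])
        for ch in name[1:]:
            digit = code(ch)
            if digit and digit != prev:
                out.append(digit)
            prev = digit
        return (name[0] + "".join(out) + "000")[:4]

    def name_similarity(a: str, b: str) -> float:
        """Blend phonetic agreement with raw string similarity (0.0 to 1.0)."""
        phonetic = 1.0 if soundex(a) == soundex(b) else 0.0
        return 0.5 * phonetic + 0.5 * SequenceMatcher(None, a.lower(), b.lower()).ratio()

    for pair in [("Schmidt", "Schmitt"), ("Schmidt", "Smith"), ("John", "Jon"), ("John", "Von")]:
        print(pair, round(name_similarity(*pair), 2))

Even this toy clusters Schmidt, Schmitt and Smith together; the John/Von trophy case scores much lower, which is exactly why production identity resolution layers nickname tables, prefix and suffix handling, and probabilistic weighting on top of simple scores like these.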



Get Your Data Butt Off The Couch and Move It

Data is everywhere. It’s in databases and applications spread across your enterprise. It’s in the hands of your customers and partners. It’s in cloud applications and on cloud servers. It’s in spreadsheets and documents on your employees’ laptops and tablets. It’s in smartphones, sensors and GPS devices. It’s in the blogosphere, the twittersphere and your friends’ Facebook timelines.


Hierarchical Data – More Than Just XML

A recent Aberdeen Group Analyst Insight paper found that 50% of survey respondents were already integrating hierarchical data sources, with another 13% planning to add this capability in the next 12 months. The notable trend is that, among organizations currently integrating XML data, nearly a third are integrating or planning to integrate other hierarchical sources: the need to integrate JSON leads, with COBOL records and Google Protocol Buffers close behind. Apache Avro is not widely integrated yet, but it shows the biggest growth in planned integrations and in the number of projects.
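
As a toy illustration of what integrating hierarchical sources involves – the record layout here is invented for the example and has nothing to do with the Aberdeen study – this Python sketch reads the same customer record from XML and from JSON and normalizes both into one common structure:

    # Sketch: normalize "the same" hierarchical record from XML and JSON into one
    # Python dict, the kind of mapping a hierarchical-data integration tool performs.
    import json
    import xml.etree.ElementTree as ET

    XML_DOC = """
    <customer id="42">
      <name>Janos Kovacs</name>
      <orders><order sku="A-1" qty="2"/></orders>
    </customer>
    """

    JSON_DOC = '{"id": 42, "name": "Janos Kovacs", "orders": [{"sku": "A-1", "qty": 2}]}'

    def from_xml(doc: str) -> dict:
        root = ET.fromstring(doc)
        return {
            "id": int(root.get("id")),
            "name": root.findtext("name"),
            "orders": [
                {"sku": o.get("sku"), "qty": int(o.get("qty"))}
                for o in root.findall("./orders/order")
            ],
        }

    def from_json(doc: str) -> dict:
        return json.loads(doc)

    # Both sources land in the same canonical shape.
    assert from_xml(XML_DOC) == from_json(JSON_DOC)

The point of a dedicated hierarchical-data capability is to do this kind of mapping declaratively and at scale, for formats such as COBOL records, Avro and Protocol Buffers as well as XML and JSON.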
