Tag Archives: SAP

We Are Sports – SportScheck Omnichannel Retail

Are you a manager working in fashion, B2C, or retail? This blog provides an overview of what companies can learn about omnichannel retail from SportScheck.

SportScheck is one of Germany’s most successful multichannel businesses. The company, a Ventana Research Innovation Award winner, is an equipment and clothing specialist for almost every sport, and its website gets over 52 million hits per year, making it one of the most successful online stores in Germany.

Each year, more than a million customers sign up for the mail-order business, while over 17 million customers visit its brick-and-mortar stores (source). These figures speak to the success of SportScheck’s multichannel strategy. SportScheck also strives to deliver innovative concepts across all of its sales channels, always aiming to provide customers with the best possible shopping experience. This philosophy can only be carried out in conjunction with modern systems landscapes and optimized processes.

Complete, reliable, and attractive information – across every channel – is the key to a great customer experience and better sales. It’s hard to keep up with customer demands in a single channel, much less across multiple channels. Download The Informed Purchase Journey: the informed purchase journey requires getting the right product to the right customer at the right place. Enjoy the video!

What Is the Business Initiative at SportScheck?

  • Provide customers with the same deals across all sales channels via a centralized location for all product information
  • Improve customer service in all sales channels with perfect product data
  • Ensure customers have enough product information to make a purchase without the order being returned

Intelligent and Agile Processes are Key to Success

“Good customer service, whether online, in-store, or in print, needs perfect product data,” said Alexander Pischetsrieder in an interview. Until now, the Munich-based sporting goods retailer had no centralized system for product data. After extensive research and evaluation, the company decided to implement the product information management (PIM) system from Informatica.

The main reason for introducing the Informatica Product Information Management (PIM) solution was its support for a true multichannel strategy: customers should have access to the same deals across all sales channels. Beyond making a breadth of information available, customer service remains key.

In times where information is THE killer app, the key challenges are keeping information up to date and ensuring efficient processes. In a retail scenario, product catalog onboarding starts with PIM to get the latest product information. An always up-to-date dataset in the relevant systems is a further prerequisite, allowing companies to react immediately to market movements and implement marketing requirements as quickly as possible. Data must be exchanged between the systems practically in real time. For more detail on how SportScheck solved the technical integration between SAP ERP and Informatica PIM, see the case study referenced at the end of this post.

Product Data Equals Demonstrated Expertise

“I am convinced that a well-presented product with lots of pictures and details sells better. For us, this signals knowing our product. That sets us apart from the large discount stores,” notes Alexander Pischetsrieder. “In the end, we have to ask: who is the customer going to trust? We gain trust here with our product knowledge and our love of sports in general. Just like our motto says, ‘We get our fans excited.’ By offering a professional search engine, product comparisons, and many other features, PIM adds value not only in ecommerce – and that gets us excited!”

Benefits for SportScheck

  • Centralized location for all product information across all sales channels
  • An agile system that is capable of interweaving the different retail processes across sales channels into a smooth, cross-channel function
  • Self-Service portal for agencies and suppliers with direct upload to the PIM system

For German readers I can highly recommend this video on the customer use case. If you are interested in more details, ask me on Twitter @benrund.

PS: This blog is based on the PIM case study on SportScheck.


Who’s the Top MDM Vendor – IBM, SAP, Oracle, or Informatica?

The MDM space is filled by a number of well-known players. In addition to Informatica, you’ll also find IBM, Oracle, SAP, and a host of other vendors. But who’s the leader of this space?

The Information Difference is an analyst firm that specializes in MDM, and it has been watching the MDM space for years. Recently, The Information Difference compared 12 different MDM vendors, ranked them using 200 criteria, and published its findings in a report on the MDM landscape for Q2 2013. The firm compared the vendors’ MDM offerings across six categories: data governance, business rules, data quality, data storage, data provision, and data movement.



Exploring the Informatica ILM Nearline 6.1A

In today’s post, I want to write about Informatica ILM Nearline 6.1A. Although the nearline concept is not new, it is still not widely known. It represents the logical evolution for business applications, data warehouses, and information lifecycle approaches that have struggled to maintain acceptable performance levels in the face of the increasingly intense “data tsunami” looming over today’s business world. Whereas older archiving solutions based their viability on the declining prices of hardware and storage, ILM Nearline 6.1A embraces the dynamism of a software and services approach to fully leverage the potential of large enterprise data architectures.

Looking back, we can now see that the older data management solutions presented a paradox: in order to mitigate performance issues and meet Service Level Agreements (SLA) with users, they actually prevented or limited ad-hoc access to data. On the basis of system monitoring and usage statistics, this inaccessible data was then declared to be unused, and this was cited as an excuse for locking it away entirely. In effect, users were told: “Since you can’t get at it, you can’t use it, and therefore we’re not going to give it to you”!

ILM Nearline 6.1A, by contrast, allows historical data to be accessed with near-online speeds, empowering business analysts to measure and perfect key business initiatives through analysis of actual historical details. In other words, ILM Nearline 6.1A gives you all the data you want, when and how you want it (without impacting the performance of existing warehouse reporting systems!).

Aside from the obvious economic and environmental benefits of this software-centric approach and the associated best practices, the value of ILM Nearline 6.1A can be assessed in terms of the core proposition cited by Tim O’Reilly when he coined the term “Web 2.0”:

“The value of the software is proportional to the scale and dynamism of the data it helps to manage.”

In this regard, ILM Nearline 6.1A provides a number of important advantages over prior methodologies:

Keeps data accessible: ILM Nearline 6.1A enables optimal performance from the online database while keeping all data easily accessible. This massively reduces the work required to identify, access and restore archived data, while minimizing the performance hit involved in doing so in a production environment.

Keeps the online database “lean”: Because data archived to ILM Nearline 6.1A can still be easily accessed by users at near-online speeds, much more recent data can be moved out of the online system than would be possible with traditional archiving. This results in far better online system performance and greater flexibility to further support user requirements without performance trade-offs. It is also a big win for customers moving their systems to HANA.

Relieves data management stress: Data can be moved to ILM Nearline 6.1A without the substantial ongoing analysis of user access patterns that is usually required by archiving products. The process is typically based on a rule as simple as “move all data older than x months from the ten largest InfoProviders”.

Mitigates administrative risk: Unlike archived data, ILM Nearline 6.1A data requires little or no additional ongoing administration, and no additional administrative intervention is required to access it.

Lets analysts be analysts: With ILM Nearline 6.1A, far less time is taken up in gaining access to key data and “cleansing it”, so much more time can be spent performing “what if” scenarios before recommending a course of action for the company. This improves not only the productivity but also the quality of work of key business analysts and statistical gurus.

Copes with data structure changes: ILM Nearline 6.1A can easily deal with data model changes, making it possible to query data structured according to an older model alongside current data. With archive data, this would require considerable administrative work.

Leverages existing storage environments: Compared to older archiving products/strategies, the high degree of compression offered by ILM Nearline 6.1A greatly increases the amount of information that can be stored as well as the speed at which it can be accessed.

Keeps data private and secure: ILM Nearline 6.1A has privacy and security features that protect key information from being seen by ad-hoc business analysts (for example: names, social security numbers, credit card information).

In short, ILM Nearline 6.1A offers a significant advantage over other nearline and archiving technologies. When data needs to be removed from the online database in order to improve performance, but still needs to be readily accessible by users to conduct long-term analysis, historical reporting, or to rebuild aggregates/KPIs/InfoCubes for period-over-period analysis, ILM Nearline 6.1A is currently the only workable solution available.

In my next post, I’ll discuss more specifically how implementing the ILM Nearline 6.1A solution can benefit your business apps, data warehouses and your business processes.



Under the hood: decommissioning an SAP system with Informatica Data Archive for Application Retirement

If you have reached this blog, you are probably already familiar with the reasons why you need to do some housecleaning on your old applications. If not, this subject has been explored in other discussions, like this one from Claudia Chandra.

All the explanations below are based on Informatica Data Archive for application retirement.

Very often, customers are surprised to learn that Informatica’s solution for application retirement can also decommission SAP systems. The market has the feeling that SAP is different, “another beast” – and it really is!

A typical SAP landscape requires software licenses, maintenance contracts, and hardware for the transactional application itself, the corresponding data warehouse and databases, operating systems, servers, storage, and any additional software and hardware licenses you may have on top of the application. Your company may want to retire older versions of the application or consolidate multiple instances in order to save costs. Our engineering group has some very experienced SAP resources, including myself, with more than 16 years of hands-on work with SAP technology, and we were able to simplify the SAP retirement process so that the Informatica Data Archive solution decommissions SAP like any other type of application.

Next are the steps to decommission an SAP system using Informatica Data Archive.

Let’s start with some facts: SAP has some “special” tables that can only be read by the SAP kernel itself. In a typical SAP ECC 6.0 system, around 9% of all tables fall into these categories, representing around 6,000 tables.

More specifically, these tables are known as “cluster” and “pool” tables; I added a third category for transparent tables that have a binary column (RAW data type) which only the SAP application can unravel.

1)    Mining

In this step, we collect all the metadata of the SAP system being retired, including all transparent, cluster, and pool tables, with all columns and data types. This metadata is kept with the data in the optimized archive.
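To make the idea of metadata mining more concrete, here is a minimal sketch (not the product’s actual implementation) that reads table and column definitions from the SAP data dictionary tables DD02L (table catalog) and DD03L (field catalog) over plain JDBC. The connection URL, credentials, and schema name are placeholders chosen for illustration; the real mining step is performed by Informatica Data Archive itself.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Minimal sketch: list transparent, cluster and pool tables and their columns
// from the SAP data dictionary. Connection details and schema are placeholders.
public class SapDictionaryMiner {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:oracle:thin:@saphost:1521:ECC"; // hypothetical source database
        try (Connection con = DriverManager.getConnection(url, "sapuser", "secret")) {
            // DD02L holds the table catalog; TABCLASS distinguishes TRANSP/CLUSTER/POOL.
            String tableSql =
                "SELECT TABNAME, TABCLASS FROM SAPSR3.DD02L " +
                "WHERE TABCLASS IN ('TRANSP','CLUSTER','POOL')";
            try (PreparedStatement ps = con.prepareStatement(tableSql);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    String table = rs.getString("TABNAME");
                    System.out.println(rs.getString("TABCLASS") + " table: " + table);
                    printColumns(con, table);
                }
            }
        }
    }

    // DD03L holds the field catalog: column names, data types and lengths.
    private static void printColumns(Connection con, String table) throws Exception {
        String colSql =
            "SELECT FIELDNAME, DATATYPE, LENG FROM SAPSR3.DD03L " +
            "WHERE TABNAME = ? ORDER BY POSITION";
        try (PreparedStatement ps = con.prepareStatement(colSql)) {
            ps.setString(1, table);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("  %s %s(%s)%n",
                        rs.getString("FIELDNAME"),
                        rs.getString("DATATYPE"),
                        rs.getString("LENG"));
                }
            }
        }
    }
}
```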

2)    Extraction from source

Informatica Data Archive 6.1.x is able to connect to all database servers certified by SAP, to retrieve rows from the transparent tables.

On the SAP system, an ABAP agent must be installed. It contains the programs developed by Informatica to read all the rows from the special tables and archive files and to pull all the attachments in their original format. These programs are delivered as an SAP transport, which is imported into the SAP system before the decommissioning process begins.

Leveraging the Java connector publicly available through the SAP portal (SAP JCo), Informatica Data Archive connects to an SAP application server on the system being decommissioned and makes calls to the programs imported through the transport. The tasks are performed in background threads, and the whole retirement process running in the SAP system is monitored from the Informatica Data Archive environment, including all logging and status information.
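As a rough illustration of the RFC mechanism described above, the sketch below shows how a Java client can call a remote-enabled ABAP function module through SAP JCo. The destination name RETIRE_SRC and the function module Z_INFA_READ_CLUSTER (and its parameters) are hypothetical placeholders; the actual programs delivered with the Informatica transport and their interfaces are not shown here.

```java
import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;
import com.sap.conn.jco.JCoFunction;
import com.sap.conn.jco.JCoTable;

// Minimal sketch of an RFC call via SAP JCo. "RETIRE_SRC" refers to a destination
// definition (host, system number, client, logon data). Function module name and
// parameters are hypothetical.
public class RfcReadClusterTable {
    public static void main(String[] args) throws Exception {
        JCoDestination dest = JCoDestinationManager.getDestination("RETIRE_SRC");

        JCoFunction fn = dest.getRepository().getFunction("Z_INFA_READ_CLUSTER");
        if (fn == null) {
            throw new IllegalStateException("Function module not found in repository");
        }

        // Ask the ABAP side for the rows of a logical cluster table, e.g. BSEG.
        fn.getImportParameterList().setValue("IV_TABNAME", "BSEG");
        fn.execute(dest);  // synchronous RFC executed on the SAP application server

        // Read the result table returned by the function module.
        JCoTable rows = fn.getTableParameterList().getTable("ET_ROWS");
        for (int i = 0; i < rows.getNumRows(); i++) {
            rows.setRow(i);
            System.out.println(rows.getString("LINE"));
        }
    }
}
```

In a real setup, the destination properties (application server host, system number, client, and credentials) would come from a *.jcoDestination file or a registered DestinationDataProvider.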

Extraction of table rows from the database

Below you can see the SAP table types and how our solution deals with each of them:

  • Cluster tables (logical table: BSEG; physical table: RFBLG) – The engine reads all rows of the logical table through the SAP application layer and stores them in the archive store as if the table existed in the database as a physical table. It also reads all rows of the physical table and stores them as they are, purely as insurance, since that data cannot be read without an SAP system up and running.
  • Pool tables (logical table: A016; physical table: KAPOL) – Handled in the same way as cluster tables: the logical rows are read through the SAP application layer, and the physical table is additionally stored as-is.
  • Transparent tables with a RAW field (e.g. PCL2, STXL) – The engine creates a new table in the archive store and reads all rows from the original table with the RAW field unraveled. It also reads all rows of the original tables PCL2 and STXL and stores them as they are, purely as insurance, since that data cannot be read without an SAP system up and running.

Informatica Data Archive extracts the data of all tables, regardless of their type.

Table rows in archive files

Another source of table rows is the archived data. SAP has its own archiving framework, which is based on the creation of archive files, also known as ADK files. These files store table rows in a proprietary compacted SAP format, which can only be read by ABAP code running in an SAP system.

Once created, these files are located in the file system and can be stored in external storage using an ArchiveLink implementation.

The Informatica Data Archive engine also reads the table rows from all ADK files, independent of their location, as long as the files are accessible by the SAP application being retired. These table rows will be stored in the archive store as well, along with the original table.

Very important: after the SAP system is retired, any ArchiveLink implementation can be retired as well, along with the storage that was holding the ADK files.

3)    Attachments

Business transactions in SAP systems can have attachments linked to them. SAP Generic Object Services (GOS) is a way to upload documents, add notes to a transaction, and add relevant URLs, all referencing a business document such as a purchase order or a financial document. Some other SAP applications, like CRM, have their own mechanisms for attaching documents, complementing the GOS features.

All these methods can store the attachments in the SAP database, in the SAP Knowledge Provider (KPro), or externally in storage systems via an ArchiveLink implementation.

Informatica’s engine is able to download all the attachment files, notes and URLs as discrete files, independent of where they are stored, keeping the relationship to the original business document. The relationship is stored in a table created by Informatica in the archive store, which contains the key of the business document and the link to the attachments, notes and URLs that were assigned to it in the original SAP system.

All these files are stored in the archive store, along with the structured data – or tables.

4)    Load into optimized archive

All data and attachments are then loaded into Informatica’s optimized archive. The archive store compresses the archived data by up to 98%; a 1 TB source database, for example, would shrink to roughly 20 GB in the archive.

5)    Search and data visualization

All structured data is accessible through JDBC/ODBC, like any other relational database. Users also have the option to use the search capability that comes with the product, which allows them to run simple queries and view data as business entities.
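To illustrate what JDBC access to the archive looks like in practice, here is a minimal sketch that queries retired SAP data with a plain SQL statement. The JDBC driver class, connection URL, port, and credentials below are placeholders, not the product’s actual values; check the Informatica Data Archive documentation of your installation for the real driver name and URL format.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal sketch: querying retired SAP data in the archive store over JDBC.
// Driver class, URL, credentials and host are placeholders.
public class QueryRetiredSapData {
    public static void main(String[] args) throws Exception {
        Class.forName("com.informatica.fas.jdbc.Driver");            // hypothetical driver class
        String url = "jdbc:infafas://archivehost:8500/SAP_RETIRED";   // hypothetical URL

        try (Connection con = DriverManager.getConnection(url, "report_user", "secret");
             Statement st = con.createStatement();
             // BKPF is the SAP accounting document header table, preserved as-is in the archive.
             ResultSet rs = st.executeQuery(
                 "SELECT BELNR, BUKRS, GJAHR FROM BKPF WHERE GJAHR = '2009'")) {
            while (rs.next()) {
                System.out.printf("Document %s, company code %s, fiscal year %s%n",
                    rs.getString("BELNR"), rs.getString("BUKRS"), rs.getString("GJAHR"));
            }
        }
    }
}
```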

Another option is to use the integrated reporting capability within the product, which allows users to create pixel-perfect reports with drag-and-drop tools, querying the data using SQL and displaying it as business entities defined in prebuilt SAP application accelerators.

Informatica also has a collection of reports for SAP to display data for customers, vendors, general ledger accounts, assets and financial documents.

Some customers prefer to use their own corporate standard 3rd party reporting tool. That is also possible as long as the tool can connect to JDBC/ODBC sources, which is a market standard for connecting to databases.

Hopefully this blog helped you to understand what Informatica Data Archive for Application Retirement does to decommission an SAP system. If you need any further information, please comment below. Thank you.


Keeping Your SAP HANA Costs Under Control

Adopting SAP HANA can offer significant new business value, but it can also be an expensive proposition. If you are contemplating or in the process of moving to HANA, it’s worth your time to understand your options for Nearlining your SAP data.  The latest version of Informatica ILM Nearline, released in February, has been certified by SAP and can run with SAP BW systems running on HANA or any relational database supported by SAP.

Nearlining your company’s production SAP BW before migrating to a HANA-based BW offers huge savings potential. Even if your HANA project has already started, nearlining the production data will help keep database growth flat. We have customers that have actually been able to shrink InfoProviders by enforcing strict data retention rules on the data stored in the live database.

Informatica World is around the corner, and I will be there with my peers to demo and talk about the latest version of Informatica ILM Nearline. Click here to learn more about Informatica World 2013 and make sure you sign up for one of my Hands-On Lab sessions on this topic. See you at the Aria in Las Vegas in June.


Managing Your SAP BW Growth With a Nearline Strategy

SAP’s data warehouse solution (SAP BW) provides enterprises the ability to easily build a warehouse over their existing operational systems with pre-defined extraction and reporting objects and methods. Data that is loaded into SAP BW is stored in a layered architecture which encourages reusability of data throughout the system in a standardized way. SAP’s implementation also enables easy audits of data delivery mechanisms that are used to produce various reports within the system.

To allow enterprises to achieve this level of standardization and auditability, SAP BW must persistently store large amounts of data within the different layers of its architecture. Managing the size of the objects within these layers becomes increasingly important as the system grows, to ensure high levels of performance for end-user queries and data delivery.


Informatica 9.5 for Big Data Challenge #1: Cloud

Just five years ago, there was a perception held by many in our industry that the world of data for enterprises was simplifying. This was in large part due to the wave of consolidation among application vendors. With SAP and Oracle gobbling up the competition to build massive, monolithic application stacks, the story was that this consolidation would simplify data integration and data management.


Dynamic Data Masking in a Nutshell

I’ve been asked numerous times how Dynamic Data Masking works, so here it is – the Dynamic Data Masking process. Believe me, it’s simple…

The use case: IT personnel, developers, consultants, and outsourced support teams have access to production business applications (SAP, PeopleSoft, Oracle) or clones/backups that contain sensitive customer and credit card information.

We cannot block their access, as they are required to ensure application performance, but we need to secure the data they are accessing.

These are the initial installation steps required:

  1. Install Informatica Dynamic Data Masking on the database server or on a dedicated server; it acts as a proxy.
  2. Import one of our predefined rule sets prepared for the application/data, or create your own custom rules.
  3. Define the roles/responsibilities that need to be anonymized, using predefined hooks to Active Directory/LDAP and application responsibilities.

Now how does Dynamic Data Masking work?

  1. User requests are intercepted in real time by the Dynamic Data Masking server software.
  2. User roles and responsibilities are evaluated, and if the rules flag the request as requiring masking, Dynamic Data Masking rewrites the query to return masked/scrambled personal information (see the sketch below). No application changes, no database changes – completely transparent.
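To illustrate the rewrite idea, here is a small, hypothetical before/after example. The table, columns, and Oracle-style masking expressions are invented for illustration; the real rewrite is applied in flight by the Dynamic Data Masking rule engine based on the configured rules and the caller’s role.

```java
// Minimal sketch of the query-rewrite idea behind Dynamic Data Masking.
// Table and column names are hypothetical.
public class MaskingRewriteExample {
    public static void main(String[] args) {
        // What the developer or support engineer actually sends:
        String original =
            "SELECT customer_name, ssn, credit_card_no FROM customers WHERE id = 42";

        // What the masking proxy could forward to the database for a user whose
        // role matches a masking rule: real values never leave the database.
        String rewritten =
            "SELECT customer_name, " +
            "       'XXX-XX-' || SUBSTR(ssn, 8) AS ssn, " +                       // keep last 4 digits
            "       '************' || SUBSTR(credit_card_no, 13) AS credit_card_no " +
            "FROM customers WHERE id = 42";

        System.out.println("Original : " + original);
        System.out.println("Rewritten: " + rewritten);
    }
}
```

The application still issues its normal query; only the statement that reaches the database changes, which is why no application or database changes are needed.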

Sounds simple – yes it is.

Other common use cases include protecting development and reporting tool access to production databases, anonymizing data warehouse reports and design tools, and securing production clones and training environments.

See? Simple!


2012 Cloud Integration Predictions – Data, MDM, BI, Platform and IT as a Service

I spent last weekend reading Geoffrey Moore’s new book, Escape Velocity: Free Your Company’s Future from the Pull of the Past. Then on Sunday, the New York Times published this article about salesforce.com: A Leader in the Cloud Gains Rivals. Clearly “The Big Switch” is on. With this as a backdrop, the need for a comprehensive cloud data management strategy has surfaced as a top IT imperative heading into the New Year – How and when do you plan to move data to the cloud? How will you prevent SaaS silos? How will you ensure your cloud data is trustworthy, relevant and complete? What is your plan for longer-term cloud governance and control?

These are just a few of the questions you need to think through as you develop your short-, medium-, and long-term cloud strategy. Here are my predictions for what else should be on your 2012 cloud integration radar.


Does Asia Get IT?

Tony Young, CIO in Hong Kong

I recently returned from China and Hong Kong after having met with several CIOs, media and analysts, as well as delivering keynotes focused on customer centricity. When I return to the US after traveling, I’m often asked about the state of IT in the geography I was just in. I’ve been to both China and Hong Kong several times over the past few years, and from my perspective, IT is maturing at a very rapid pace in that region.

During prior trips to Asia, it felt like the old days of data processing. I would speak with senior IT leaders, and they were more concerned with the “blocking and tackling” of IT than with how IT could provide a strategic competitive advantage. Specifically in China, IT leadership was comfortable scaling by applying people to the problem rather than using commercial software.
