Category Archives: Data Integration

Just In Time For the Holidays: How The FTC Defines Reasonable Security

Recently the International Association of Privacy Professionals (IAPP, www.privacyassociation.org) published a white paper analyzing the Federal Trade Commission’s (FTC) data security and breach enforcement actions. These actions involve organizations from the finance, retail, technology, and healthcare industries in the United States.

From this analysis in “What’s Reasonable Security? A Moving Target,” IAPP extrapolated the best practices from the FTC’s enforcement actions.

While the white paper and article indicate that “reasonable security” is a moving target, they do provide recommendations that will help organizations assess and baseline their current data security efforts. Of particular interest is the focus on data-centric security, from overall enterprise assessment to the careful control of access by employees and third parties. Here are some of the recommendations derived from the FTC’s enforcement actions that call for data-centric security:

  • Perform assessments to identify reasonably foreseeable risks to the security, integrity, and confidentiality of personal information collected and stored on the network, online or in paper files.
  • Limited access policies curb unnecessary security risks and minimize the number and type of network access points that an information security team must monitor for potential violations.
  • Limit employee access to (and copying of) personal information, based on the employee’s role.
  • Implement and monitor compliance with policies and procedures for rendering information unreadable or otherwise secure in the course of disposal. Securely disposed information must not practicably be read or reconstructed.
  • Restrict third party access to personal information based on business need, for example, by restricting access based on IP address, granting temporary access privileges, or similar procedures.

How does Data Centric Security help organizations achieve this inferred baseline? 

  1. Data Security Intelligence (Secure@Source, coming Q2 2015) provides the ability to “…identify reasonably foreseeable risks.”
  2. Data Masking (Dynamic and Persistent Data Masking) provides the controls to limit employees’ and third parties’ access to information (a toy sketch of role-based masking follows this list).
  3. Data Archiving provides the means for the secure disposal of information.
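
To make the masking idea concrete, here is a minimal sketch of role-based masking in Python. It is an illustration only, not how Dynamic or Persistent Data Masking is actually implemented; the field names and roles are hypothetical.

```python
# Minimal sketch of role-based masking, not a product implementation.
import re

SENSITIVE_FIELDS = {"ssn", "credit_card", "email"}

def mask_value(value: str) -> str:
    """Keep the last four characters, mask everything else."""
    return re.sub(r".(?=.{4})", "*", value)

def mask_record(record: dict, role: str) -> dict:
    """Return a copy of the record, masking sensitive fields for non-privileged roles."""
    if role in {"fraud_analyst", "dba_admin"}:      # hypothetical privileged roles
        return dict(record)
    return {k: mask_value(v) if k in SENSITIVE_FIELDS else v
            for k, v in record.items()}

if __name__ == "__main__":
    customer = {"name": "Jane Roe", "ssn": "123-45-6789", "city": "Austin"}
    print(mask_record(customer, role="support_rep"))
    # {'name': 'Jane Roe', 'ssn': '*******6789', 'city': 'Austin'}
```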

Other data-centric security controls include encryption for data at rest and in motion, and tokenization for securing payment card data. All of these controls help organizations secure their data, whether a threat originates internally or externally. And based on the never-ending news of data breaches and attacks this year, it is a matter of when, not if, your organization will be significantly breached.

For 2015, “Reasonable Security” will require ongoing analysis of sensitive data and the deployment of corresponding data-centric security controls to ensure that organizations keep pace with this “Moving Target.”


Empowering Your Organization with 3 Views of Customer Data

According to Accenture’s 2013 Global Consumer Pulse Survey, “85 percent of customers are frustrated by dealing with a company that does not make it easy to do business with them, 84 percent by companies promising one thing, but delivering another; and 58 percent are frustrated with inconsistent experiences from channel to channel.”

Consumers expect more from the companies they do business with. In response, many companies are shifting from managing their business based on an application-, account- or product-centric approach to a customer-centric approach. And this is one of the main drivers for master data management (MDM) adoption. According to a VP of Data Strategy & Services at one of the largest insurance companies in the world, “Customer data is the lifeblood of a company that is serious about customer-centricity.” So, better managing customer data, which is what MDM enables you to do, is a key to the success of any customer-centricity initiative. MDM provides a significant competitive differentiation opportunity for any organization that’s serious about improving customer experience. It enables customer-facing teams to assess the value of any customer, at the individual, household or organization level.

Among the myriad business drivers of a customer-centricity initiative, key benefits include an enhanced customer experience that leads to higher customer loyalty and greater share of wallet, more effective cross-sell and upsell targeting that increases revenue, and improved regulatory compliance.

To truly achieve all the benefits expected from a customer-first, customer-centric strategy, we need to look beyond the traditional approaches of data quality and MDM implementations, which often consider only one foundational (yet important) aspect of the technology solution. The primary focus has always been to consolidate and reconcile internal sources of customer data with the hope that this information brought under a single umbrella of a database and a service layer will provide the desired single view of customer. But in reality, this data integration mindset misses the goal of creating quality customer data that is free from duplication and enriched to deliver significant value to the business.

Today’s MDM implementations need to take their focus beyond mere data integration to be successful. In the following sections, I will explain three customer views that can be built incrementally to get the most out of your MDM solution. When implemented fully, these customer views act as key ingredients for improving the execution of your customer-centric business functions.

Trusted Customer View

The first phase of the solution should cover the creation of a trusted customer view. This view gives your organization the ability to see complete, accurate, and consistent customer information.

In this stage, you take the best information from all the applications and compile it into a single golden profile. You not only use data integration technology for this, but also employ data quality tools to ensure the correctness and completeness of the customer data. Advanced matching, merging, and a trust framework are used to derive the most up-to-date information about your customer. You also guarantee that the golden record you create is accessible to the business applications and systems of choice, so everyone with the authority can leverage the single version of the truth.

At the end of this stage, you will be able to say with confidence that John D. at 123 Main St and Johnny Doe at 123 Main Street, who both do business with you, are in fact the same individual rather than two different customers.
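
To illustrate the kind of matching involved, here is a toy sketch using only the Python standard library. Production MDM matching engines use far richer, configurable rules; the records, weights, and threshold below are made up for the example.

```python
# A toy record-matching sketch; real matching adds phonetic rules, survivorship, etc.
from difflib import SequenceMatcher

def normalize(text: str) -> str:
    """Lowercase and expand a few common street abbreviations (illustrative list)."""
    replacements = {"st": "street", "rd": "road", "ave": "avenue"}
    tokens = [replacements.get(t.strip(".,"), t.strip(".,")) for t in text.lower().split()]
    return " ".join(tokens)

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def likely_same_customer(rec1: dict, rec2: dict, threshold: float = 0.75) -> bool:
    """Combine name and address similarity into a simple match score."""
    score = 0.5 * similarity(rec1["name"], rec2["name"]) + \
            0.5 * similarity(rec1["address"], rec2["address"])
    return score >= threshold

a = {"name": "John D.",    "address": "123 Main St"}
b = {"name": "Johnny Doe", "address": "123 Main Street"}
print(likely_same_customer(a, b))   # True with these sample records
```

In a real implementation, the trust framework would also weigh source reliability and recency when deciding which values survive in the golden record.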

Customer Relationships View

The next level of visibility is a view into the customer’s relationships. It takes advantage of the single customer view and layers in all valuable family and business relationships, as well as account and product information. Revealing these relationships is where the real value of multidomain MDM technology comes into play.

At the end of this phase, you not only see John Doe’s golden profile, but also the products he holds. He might have a personal checking account with the Retail Bank, a mortgage from the Mortgage line of business, and brokerage and trust accounts with the Wealth Management division. You can see that John has his own consulting firm. You can see he has a corporate credit card and checking account with the Commercial division under the name John Doe Consulting Company.

At the end of this phase, you will have a consolidated view of all important relationship information that will help you evaluate the true value of each customer to your organization.

Customer Interactions and Transactions View

The third level of visibility is in the form of your customer’s interactions and transactions with your organization.

During this phase, you tie together the transactional information, historical data, and social interactions your customer has with your organization to further enhance the system. Building this view opens up a whole new world of opportunities because you can see everything related to your customer in one central place. With this comprehensive view, when John Doe calls your call center, you know how valuable he is to your business, which product he just bought from you (transactional data), and what problem he is facing (social interactions).

A widely accepted rule of thumb holds that 80 percent of your company’s future revenue will come from 20 percent of your existing customers. Many organizations are trying to ensure they are doing everything they can to retain existing customers and grow wallet share. Starting with the trusted customer view is the first step toward keeping your existing customers. Once you have established all three views discussed here, you can arm your customer-facing teams with a comprehensive view of customers so they can:

  • Deliver the best customer experiences possible at every touch point,
  • Improve customer segmentation for tailored offers and boost marketing and sales productivity,
  • Increase cross-sell and up-sell success, and
  • Streamline regulatory reporting.

Achieving the three views discussed here requires a solid data management platform. You not only need industry-leading multidomain MDM technology, but also tools that help you integrate data, control quality, and connect all the dots. These technologies should work together seamlessly to make your implementation easier and help you gain rapid benefits. Therefore, choose your data management platform carefully. To learn more about MDM vendors, read the recently released Gartner Magic Quadrant for MDM of Customer Data Solutions.

-Prash (@MDMGeek)

www.mdmgeek.com


10 Insights From The Road To Data Governance

I routinely have the pleasure of working with Terri Mikol, Director of Data Governance, UPMC. Terri has been spearheading data governance for three years at UPMC. As a result, she has a wealth of insights to offer on this hot topic. Enjoy her top 10 lessons learned from UPMC’s data governance journey:

1. You already have data stewards.

Commonly, health systems think they can’t staff data governance the way UPMC has because of a lack of funding. In reality, people are already doing data governance everywhere across your organization! You don’t have to secure headcount; you locate these people within the business, formalize data governance as part of their jobs, and provide them with tools to improve and manage their efforts.

2. Multiple types of data stewards ensure all governance needs are being met.

Three types of data stewards were identified and tasked across the enterprise:

I. Data Steward. Create and maintain data/business definitions. Assist with defining data and mappings along with rule definition and data integrity improvement.

II. Application Steward. One steward is named per application sourcing enterprise analytics. Populate and maintain inventory, assist with data definition and prioritize data integrity issues.

III. Analytics Steward. Named for each team providing analytics. Populate and maintain inventory, reduce duplication and define rules and self-service guidelines.

3. Establish IT as an enabler.

IT, instead of taking action on data governance or being the data governor, has become an enabler of data governance by investing in and administering tools that support metadata definition and master data management.

4. Form a governance council.

UPMC formed a governance council of 29 executives—yes, that’s a big number, but UPMC is a big organization. The council is clinically led. It is co-chaired by two CMIOs and includes Marketing, Strategic Planning, Finance, Human Resources, the Health Plan, and Research. The council signs off on and prioritizes policies. Decision-making authority has to come from somewhere.

5. Avoid slowing progress with process.

In these still-early days, only 15 minutes of monthly council meetings are spent on policy and guidelines; discussion and direction take priority. For example, a recent agenda item was “Length of Stay.” The council agreed a single owner would coordinate across Finance, Quality and Care Management to define and document an enterprise definition for “Length of Stay.”

6. Use examples.

Struggling to get buy-in from the business about the importance of data governance? An example everyone can relate to is “Test Patient.” For years, in her business intelligence role, Terri worked with “Test Patient” records. Investigation revealed that these fake patients end up in places they should not. There was no standard for the creation or removal of test patients, which meant that test patients and their costs, outcomes, and so on were included in analysis and reporting that drove decisions inside and outside UPMC. The governance program created a policy for testing in production should the need arise.

7. Make governance personal through marketing.

Terri holds monthly round tables with business and clinical constituents. These have been a game changer: Once a month, for two hours, ten business invitees meet and talk about the program. Each attendee shares a data challenge, and Terri educates them on the program and illustrates how the program will address each challenge.

8. Deliver self-service.

Providing self-service empowers your users to gain access to and control of the data they need to improve their processes. The only way to deliver self-service business intelligence is to make metadata, master data, and data quality transparent and accessible across the enterprise.

9. IT can’t do it alone.

Initially, IT was resistant to giving up control, but now the team understands that it doesn’t have the knowledge or the time to effectively do data governance alone.

10. Don’t quit!

Governance can be complicated, and it may seem like little progress is being made. Terri keeps spirits high by reminding folks that the only failure is quitting.

Getting started? Assess the data governance maturity of your organization here: http://governyourdata.com/


Considering Data Integration? Also Consider Data Security Best Practices

It seems you can’t go a week without hearing about some major data breach, many of which make front-page news.  The most recent was from the State of California, which reported a large number of data breaches in that state alone.  “The number of personal records compromised by data breaches in California surged to 18.5 million in 2013, up more than six times from the year before, according to a report published [late October 2014] by the state’s Attorney General.”

California reported a total of 167 data breaches in 2013, up 28 percent from 2012.  Two major data breaches caused most of this uptick: the Target attack that was reported in December 2013, and the LivingSocial attack that occurred in April 2013.  This year, you can add the Home Depot data breach to that list, as well as the recent breach at the US Post Office.

So, what the heck is going on?  And how does this news impact data integration?  Should we be concerned as we place more and more data on public clouds, or within big data systems?

Almost all of these breaches were made possible by traditional systems whose security technology and security operations fell far enough behind that outside attackers found a way in.  You can count on many more of these attacks, as enterprises and governments don’t look at security for what it is: an ongoing activity that may require massive and systemic changes to make sure the data is properly protected.

As enterprises and government agencies stand up cloud-based systems, and new big data systems, either inside (private) or outside (public) of the enterprise, there are some emerging best practices around security that those who deploy data integration should understand.  Here are a few that should be on the top of your list:

First, start with Identity and Access Management (IAM) and work your way backward.  These days, most cloud and non-cloud systems are complex distributed systems.  That makes IAM clearly the best security model and best practice to follow with the emerging use of cloud computing.

The concept is simple: provide a security approach and technology that enables the right individuals to access the right resources, at the right times, for the right reasons.  The concept follows the principle that everything and everyone gets an identity.  This includes humans, servers, APIs, applications, data, etc.  Once those identities are established, it’s just a matter of defining which identities can access other identities, and creating policies that define the limits of that relationship.
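
As a rough illustration of that principle, the following sketch gives every actor and resource an identity and checks a simple policy before allowing an action. It is a minimal model, not any vendor’s IAM implementation; the identity kinds and rules are assumptions made for the example.

```python
# A minimal identity-and-policy sketch; real IAM adds authentication, token
# issuance, auditing, and far richer policy languages.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Identity:
    name: str
    kind: str          # "human", "service", "api", "dataset", ...

@dataclass
class Policy:
    # (accessor kind, resource kind) -> set of allowed actions
    rules: dict = field(default_factory=dict)

    def allows(self, accessor: Identity, resource: Identity, action: str) -> bool:
        return action in self.rules.get((accessor.kind, resource.kind), set())

policy = Policy(rules={
    ("human", "dataset"):   {"read"},
    ("service", "dataset"): {"read", "write"},
})

analyst = Identity("j.doe", "human")
billing_db = Identity("billing-records", "dataset")

print(policy.allows(analyst, billing_db, "read"))    # True
print(policy.allows(analyst, billing_db, "write"))   # False
```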

Second, work with your data integration provider to identify solutions that work best with their technology.  Most data integration solutions address security in one way, shape, or form.  Understanding those solutions is important to secure data at rest and in flight.

Finally, splurge on monitoring and governance.  Many of the issues around this growing number of breaches come down to system managers’ inability to spot and stop attacks.  Creative approaches to monitoring system and network utilization, as well as data access, will allow those in IT to spot most attacks and correct the issues before they go nuclear.  Typically, an increasing number of breach attempts lead up to the complete breach.
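
A toy version of that monitoring idea is sketched below: count failed access attempts per source and flag anything above a threshold. Real monitoring correlates many more signals; the event format and threshold here are illustrative assumptions.

```python
# A toy monitor that flags an unusual burst of failed access attempts.
from collections import Counter

FAILED_LOGIN_THRESHOLD = 20          # per source, per monitoring window (illustrative)

def suspicious_sources(events):
    """events: iterable of {'source_ip': str, 'outcome': 'success' | 'failure'}."""
    failures = Counter(e["source_ip"] for e in events if e["outcome"] == "failure")
    return [ip for ip, count in failures.items() if count >= FAILED_LOGIN_THRESHOLD]

# Example: 25 failed attempts from one address inside the window trips the alert.
events = [{"source_ip": "203.0.113.9", "outcome": "failure"}] * 25
print(suspicious_sources(events))    # ['203.0.113.9']
```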

The issue and burden of security won’t go away.  Systems will continue to move to public and private clouds, and data will continue to migrate to distributed big data environments.  And that means the need for data integration and data security will continue to explode.


Big Data Driving Data Integration at the NIH

The National Institutes of Health announced new grants to develop big data technologies and strategies.

“The NIH multi-institute awards constitute an initial investment of nearly $32 million in fiscal year 2014 by NIH’s Big Data to Knowledge (BD2K) initiative and will support development of new software, tools and training to improve access to these data and the ability to make new discoveries using them, NIH said in its announcement of the funding.”

The grants will address issues around Big Data adoption, including:

  • Locating data and the appropriate software tools to access and analyze the information.
  • Lack of data standards, or low adoption of standards across the research community.
  • Insufficient policies to facilitate data sharing while protecting privacy.
  • Unwillingness to collaborate that limits the data’s usefulness in the research community.

Among the tasks funded is the creation of a “Perturbation Data Coordination and Integration Center.”  The center will provide support for data science research that focuses on interpreting and integrating data from different data types and databases.  In other words, it will make sure the data moves to where it should move, in order to provide access to the information that research scientists need.  Fundamentally, these are data integration practices and technologies.

This is very interesting because the movement into big data systems often drives a reevaluation of, or even new interest in, data integration.  As the data becomes strategically important, the need to provide core integration services becomes even more important.

The project at the NIH will be interesting to watch, as it progresses.  These are the guys who come up with the new paths to us being healthier and living longer.  The use of Big Data provides the researchers with the advantage of having a better understanding of patterns of data, including:

  • Patterns of symptoms that lead to the diagnosis of specific diseases and ailments.  Doctors may get these data points one at a time.  With structured and unstructured data pooled together, researchers can find correlations, and thus provide better guidelines to the physicians who see the patients (a toy illustration of this kind of co-occurrence counting follows this list).
  • Patterns of cures that are emerging around specific treatments.  The ability to determine what treatments are most effective, by looking at the data holistically.
  • Patterns of failure.  When the outcomes are less than desirable, what seems to be a common issue that can be identified and resolved?
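
The co-occurrence counting mentioned in the first bullet can be illustrated with a small sketch; the patient records and field names below are fabricated for the example, and real research would work with de-identified data at far larger scale.

```python
# A toy co-occurrence count: which symptoms appear most often alongside a diagnosis?
from collections import Counter
from itertools import chain

records = [
    {"diagnosis": "influenza", "symptoms": ["fever", "cough", "fatigue"]},
    {"diagnosis": "influenza", "symptoms": ["fever", "headache"]},
    {"diagnosis": "migraine",  "symptoms": ["headache", "nausea"]},
]

def symptom_frequency(records: list, diagnosis: str) -> Counter:
    """Count how often each symptom appears in records with the given diagnosis."""
    matching = (r["symptoms"] for r in records if r["diagnosis"] == diagnosis)
    return Counter(chain.from_iterable(matching))

print(symptom_frequency(records, "influenza").most_common(2))
# [('fever', 2), ('cough', 1)]
```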

Of course, the uses of big data technology are limitless, when considering the value of knowledge that can be derived from petabytes of data.  However, it’s one thing to have the data, and another to have access to it.

Data integration should always be systemic to all big data strategies, and the NIH clearly understands this to be the case.  Thus, they have funded data integration along with the expansion of their big data usage.

Most enterprises will follow much the same path in the next 2 to 5 years.  Information provides a strategic advantage to businesses.  In the case of the NIH, it’s information that can save lives.  Can’t get much more important than that.


If Data Projects Weather, Why Not Corporate Revenue?

Every fall, Informatica sales leadership puts together its strategy for the following year.  The revenue target is typically a function of the number of sellers, the addressable market size and key accounts in a given territory, average spend and conversion rate given prior years’ experience, and so on.  This straightforward math has not changed in probably decades, but it assumes that the underlying data are 100% correct (a rough sketch of this arithmetic follows the list below). This data includes:

  • Number of accounts with a decision-making location in a territory
  • Related IT spend and prioritization
  • Organizational characteristics like legal ownership, industry code, credit score, annual report figures, etc.
  • Key contacts, roles and sentiment
  • Prior interaction (campaign response, etc.) and transaction (quotes, orders, payments, products, etc.) history with the firm
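
As a rough sketch of that arithmetic, the function below combines a few of the inputs listed above into a territory target. The formula and figures are illustrative assumptions, not Informatica’s actual planning model.

```python
# Illustrative only: a simplified territory revenue target, with made-up figures.
def revenue_target(accounts_in_territory: int,
                   conversion_rate: float,
                   average_spend: float,
                   sellers: int,
                   accounts_per_seller: int) -> float:
    """Target = reachable accounts x historical conversion rate x average spend."""
    reachable = min(accounts_in_territory, sellers * accounts_per_seller)
    return reachable * conversion_rate * average_spend

target = revenue_target(accounts_in_territory=400,
                        conversion_rate=0.25,
                        average_spend=150_000,
                        sellers=5,
                        accounts_per_seller=60)
print(f"${target:,.0f}")   # $11,250,000
```

If the account counts, hierarchies, or conversion history feeding a formula like this are wrong, every downstream plan built on the target inherits the error.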

Every organization, no matter if it is a life insurer, a pharmaceutical manufacturer, a fashion retailer or a construction company knows this math and plans on getting somewhere above 85% achievement of the resulting target.  Office locations, support infrastructure spend, compensation and hiring plans are based on this and communicated.

We Are Not Modeling the Global Climate Here

So why is it that when it is an open secret that the underlying data is far from perfect (accurate, current and useful) and corrupts outcomes, too few believe that fixing it has any revenue impact?  After all, we are not projecting the climate for the next hundred years here with a thousand plus variables.

If corporate hierarchies are incorrect, your spend projections will be off because they rest on incorrect territory targets, credit terms, and discount strategies.  If every client touch point does not have a complete picture of cross-departmental purchases and campaign responses, your customer acquisition cost will be too high, as you will contact the wrong prospects with irrelevant offers.  If billing, tax, or product codes are incorrect, your billing will be off; this is a classic telecommunications example worth millions every month.  If your equipment location and configuration data are wrong, maintenance schedules will be incorrect, and every hour of production interruption will cost an industrial manufacturer of wood pellets or oil millions.

Also, if industry leaders enjoy an upsell ratio of 17% and you experience 3%, data (assuming you have no formal upsell policy because it would violate your independent middleman relationships) will have a lot to do with it.

The challenge is not the fact that data can create revenue improvements but how much given the other factors: people and process.

Every industry laggard can identify a few FTEs who spend 25% of their time putting one-off data repositories together for some compliance, M&A, customer, or marketing analytics effort.  Organic revenue growth from net-new or previously unrealized revenue should be the focus of any data management initiative.  Don’t get me wrong; purposeful recruitment (people), comp plans, and training (processes) are important as well.  Few people doubt that people and process drive revenue growth.  However, few believe the data being fed into these processes has an impact.

This is a head scratcher for me. An IT manager at a US upstream oil firm once told me that it would be ludicrous to think data has a revenue impact.  They just fixed data because it is important so his consumers would know where all the wells are and which ones made a good profit.  Isn’t that assuming data drives production revenue? (Rhetorical question)

A CFO at a smaller retail bank said during a call that his account managers know their clients’ needs and history. There is nothing more good data can add in terms of value.  And this happened after twenty other folks at his bank including his own team delivered more than ten use cases, of which three were based on revenue.

Hard cost (materials and FTE) reduction is easy to grasp, and cost avoidance is a leap of faith to a degree, but revenue is no less concrete.  Otherwise, why not just throw the dice and see how revenue looks next year without a central customer database?  Let every department have each account executive gather their own data, structure it the way they want, put it on paper, and make hard copies for distribution to HQ.  This is not about paper versus electronic, but about the inability to reconcile data from many sources, which is even harder on paper than it is electronically.

Have you ever heard of any organization moving back to the Fifties and competing today?  That would be a fun exercise.  Thoughts, suggestions?  I would be glad to hear them.


Getting Value Out of Data Integration

This post is by Philip Howard, Research Director, Bloor Research.

Live Bloor Webinar, Nov 5

One of the standard metrics used to support buying decisions for enterprise software is total cost of ownership. Typically, the other major metric is functionality. However, functionality is ephemeral. Not only does it evolve with every new release, but while particular features may be relevant to today’s project, there is no guarantee that those same features will be applicable to tomorrow’s needs. A broader metric than functionality is capability: how suitable is this product for a range of different project scenarios, and will it support both simple and complex environments?

Earlier this year Bloor Research published research into the data integration market that investigated exactly these issues: how often were tools reused, how many targets and sources were involved, and for what sort of projects were products deemed suitable? We then compared these with the total cost of ownership figures we also captured in our survey. I will be discussing the results of our research live with Kristin Kokie, the interim CIO of Informatica, on Guy Fawkes’ Day (November 5th). I don’t promise anything explosive, but it should be interesting and I hope you can join us. The discussion will be vendor neutral (mostly: I expect that Kristin has a degree of bias).

To register for the webinar, click here.


5 Key Factors for Successful Print Publishing

In his recent article, “The catalog is dead – long live the catalog,” Informatica’s Ben Rund spoke about how printed catalogs are positioned as a piece of the omnichannel puzzle and are a valuable touch point on the connected customer’s informed purchase journey. The overall response was far greater than we could have hoped for, and we would like to thank all those who participated. Seeing how much interest this topic generated, we decided to investigate further, in order to find out which factors can help make print publishing successful.

5 Key Factors for Successful Print Publishing Projects

Today’s digital world impacts every facet of our lives. Deloitte recently reported that approximately 50% of purchases are influenced by our digital environment. Often, companies have no idea how much in savings can be generated by producing printed catalogues that leverage pre-existing data sources; the research at www.pim-roi.com describes several such examples. Looking back at many successful projects, Michael Giesen and his team realized the potential to generate substantial savings when the focus is on optimizing “time to market” (if, of course, business teams operate asynchronously!).

For this blog entry, we interviewed Michael Giesen, IT Consultancy and Project Management at Laudert, to get his thoughts on the key factors behind the success of print publishing projects. We asked Michael to share his experience with the leading factors in running successful print publishing projects. Michael also provides insight into which steps to prioritize and which pitfalls to avoid at all costs, in order to ensure the best results.

1. Publication Analysis

How are objects in print (like products) structured today? What about individual topics and the design of creative pages? How is the placement of tables, headings, prices, and images organized nowadays? Are there standards? If so, what can be standardized, and how? To get an overall picture, you have to thoroughly examine these points. You must do so for all the content elements involved in the layout, ensuring that, in the future, they can be used for dynamic publishing. It is conceivable that individual elements, such as titles or pages used in subject areas, could be excluded and reused in separate projects. Gaining the ability to automate catalog creation potentially requires compromising in certain areas; we shall discuss this later. In the future, product information will probably be presented with far fewer variations, for example 4 table types instead of 24. Great, now we are on the right path!

2. Data Source Analysis

Where is the data used in today’s printed material being sourced from? If possible or needed, are there several data sources that need to be combined? How is pricing handled? What about product attributes, or the structure of product description tables for an individual item? Is all the marketing content, with its variations, included as well? What about numerous product images or multiple languages? What about seasonally adjusted texts that pull from external sources?

This case requires a very detailed analysis, leading us to the following question:

What is the role and the value of storing product information using a standardized method in print publishing?

The benefits of utilizing such processes should be clear by now: the more standards are in place, the more time you will save and the greater your ability to generate positive ROI. Companies that currently operate complex systems supporting well-structured data are already ahead in the game. Furthermore, yielding positive results doesn’t necessarily require them to start from scratch and rebuild from the ground up. As a matter of fact, companies that have already invested in database systems (e.g., MS SQL) can leverage their existing infrastructures.

3. Process Analysis

In this section of our analysis, we get right down to the details: What does the production process look like, from the initial layout phase to the final release process? Who is responsible for the “how”? Who maintains the linear progression? Who has the responsibilities and release rights? Lastly, where are the bottlenecks? Are there safeguard mechanisms in place? Once all these roles and processes have been put in place and have received the right resources, we can advance to the next step of our analysis. You are ready to tackle the next key factor: implementation.

4. Implementation

Here you should be adventurous, creative, and open minded, since compromise might be needed. If your existing data sources do not meet the requirements, a solution must be found! A certain technical, creative pragmatism may facilitate short- and medium-term planning (see point 2). You must extract and prepare your data sources for a printed medium, such as a catalog. The priint:suite of WERK II has proven itself as a robust all-round solution for database publishing and Web2Print. All-inclusive PIM solutions, such as Informatica PIM, already have a standard interface to priint:suite available. Depending on the specific requirements, an important decision must then be made: Is there a need for an InDesign Server? Simply put, it enables the fully automatic production of large-volume objects and offers accurate data previews. While slightly less featured, WERK II PDF renderers offer similar functionality at a significantly more affordable price.

Based on the software and interfaces selected, an optimized process that supports your system can be developed and structured to be fully automated if needed. For individual groups of goods, templates can be defined, and placeholders and page layouts developed. Production can start!

5. Selecting an Implementation Partner

In order to facilitate a smooth transition from day one, the support of a partner to carry out the implementation should be considered from the beginning. Since practical expertise, even more than technology, is what delivers maximum process efficiency, it is recommended that you inquire about a potential partner’s references. Getting insight from existing customers will provide you with feedback about their experience and successes. Any potential partner will be pleased to put you in touch with their existing customers.

What are Your Key Factors for Successful Print Publishing?

I would like to know your thoughts on this topic. Has anyone tried PDF renderers other than WERK II, such as Codeware’s XActuell? Furthermore, if there are other factors you think are important to managing successful print publishing, feel free to mention them in the comments. I’d be happy to discuss them here or on Twitter at @nicholasgoupil.


More Evidence That Data Integration Is Clearly Strategic

A recent study from Epicor Software Corporation surveyed more than 300 IT and business decision-makers.  The study results highlighted the biggest challenges and opportunities facing Australian businesses. The independent research report “From Business Processes to Product Distribution” was based upon a survey of Australian organizations with more than 20 employees.

Key findings from the report include:

  • 65% of organizations cite data processing and integration as hampering distribution capability, with nearly half claiming their existing software and ERP is not suitable for distribution.
  • Nearly two-thirds of enterprises have some form of distribution process, involving products or services.
  • More than 80% of organizations have at least some problem with product or service distribution.
  • More than 50% of CIOs in organizations with distribution processes believe better distribution would increase revenue and optimize business processes, with a further 38% citing reduced operating costs.

The core finding: “With better data integration comes better automation and decision making.”

This report is one of many I’ve seen over the years that come to the same conclusion.  Most of those involved with the operations of the business don’t have access to the key data points they need; thus they can’t automate tactical decisions, and they also can’t “mine” the data to understand the true state of the business.

The more businesses deal with building and moving products, the more data integration becomes an imperative.  As stated in this survey, as well as others, the large majority cite “data processing and integration as hampering distribution capabilities.”

Of course, these issues go well beyond Australia.  Most enterprises I’ve dealt with have some gap between the need to share key business data to support business processes and decision support, and what currently exists in terms of data integration capabilities.

The focus here is on the multiple values that data integration can bring.  This includes:

  • The ability to track everything as it moves from manufacturing, to inventory, to distribution, and beyond.  You can bind these movements to core business processes, such as automatically reordering parts to make more products and refill inventory.
  • The ability to see into the past and into the future.  Emerging approaches to predictive analytics finally allow businesses to see into the future, and also to see what went truly right and truly wrong in the past.

While data integration technology has been around for decades, most businesses that both manufacture and distribute products have not taken full advantage of this technology.  The reasons range from perceptions around affordability, to the skills required to maintain the data integration flow.  However, the truth is that you really can’t afford to ignore data integration technology any longer.  It’s time to create and deploy a data integration strategy, using the right technology.

This survey is just an instance of a pattern.  Data integration was considered optional in the past.  With today’s emerging notions around the strategic use of data, clearly, it’s no longer an option.


Ebola: Why Big Data Matters

The Ebola virus outbreak in West Africa has now claimed more than 4,000 lives and has entered the borders of the United States. While emergency response teams, hospitals, charities, and non-governmental organizations struggle to contain the virus, could big data analytics help?

A growing number of Data Scientists believe so.

If you recall the cholera outbreak in Haiti in 2010 after the tragic earthquake, a joint research team from the Karolinska Institute in Sweden and Columbia University in the US analyzed calling data from two million mobile phones on the Digicel Haiti network. This enabled the United Nations and other humanitarian agencies to understand population movements during the relief operations and during the subsequent cholera outbreak. They could allocate resources more efficiently and identify areas at increased risk of new cholera outbreaks.

Mobile phones are widely owned even in the poorest countries in Africa, and they are a rich source of data in regions where other reliable sources are sorely lacking. Senegal’s Orange Telecom provided Flowminder, a Swedish non-profit organization, with anonymized voice and text data from 150,000 mobile phones. Using this data, Flowminder drew up detailed maps of typical population movements in the region.

Today, authorities use this information to evaluate the best places to set up treatment centers and check-posts, and to issue travel advisories in an attempt to contain the spread of the disease.

The first drawback is that this data is historic. Authorities really need to be able to map movements in real time, especially since people’s movements tend to change during an epidemic.

The second drawback is that the scope of the data provided by Orange Telecom is limited to a small region of West Africa.

Here is my recommendation to the Centers for Disease Control and Prevention (CDC):

  1. Increase the area of data collection to the entire region of West Africa, which covers over 2.1 million cell-phone subscribers.
  2. Collect mobile phone mast activity data to pinpoint where calls to helplines are mostly coming from, and to draw population heat maps and track population movement (a toy aggregation of this kind is sketched after this list). A sharp increase in calls to a helpline is usually an early indicator of an outbreak.
  3. Overlay this data on census data to build up a richer picture.
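
The heat-map idea in the second recommendation can be sketched with a simple grid aggregation. The coordinates and records below are fabricated for illustration, and a real analysis would of course work with anonymized data at far larger scale.

```python
# A toy aggregation of call records into a coarse population heat map.
from collections import Counter

def heatmap(call_records: list, cell_size: float = 0.5) -> Counter:
    """Bucket call events into lat/lon grid cells of roughly cell_size degrees."""
    grid = Counter()
    for rec in call_records:                       # rec: {'lat': float, 'lon': float}
        cell = (round(rec["lat"] / cell_size) * cell_size,
                round(rec["lon"] / cell_size) * cell_size)
        grid[cell] += 1
    return grid

calls = [{"lat": 8.48, "lon": -13.23},   # near Freetown (illustrative point)
         {"lat": 8.51, "lon": -13.20},
         {"lat": 6.31, "lon": -10.80}]   # near Monrovia (illustrative point)
print(heatmap(calls).most_common(1))
# [((8.5, -13.0), 2)] -> the densest cell, a candidate area for a treatment center
```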

The most positive impact we can have is to help emergency relief organizations and governments anticipate how a disease is likely to spread. Until now, they had to rely on anecdotal information, on-the-ground surveys, police, and hospital reports.
