Category Archives: Business Impact / Benefits

Amazon Web Services and Informatica Deliver Data-Ready Cloud Computing Infrastructure for Every Business

At re:Invent 2014 in Las Vegas, Informatica and AWS announced a broad strategic partnership to deliver data-ready cloud computing infrastructure to businesses of any type and size.

Informatica’s comprehensive portfolio across Informatica Cloud and PowerCenter connects to multiple AWS Data Services, including Amazon Redshift, RDS, DynamoDB, S3, EMR and Kinesis – the broadest pre-built connectivity available to AWS Data Services. Informatica and AWS offerings are pre-integrated, enabling customers to rapidly and cost-effectively implement data warehousing, large-scale analytics, lift-and-shift, and other key use cases in cloud-first and hybrid IT environments. Now, any company can use Informatica’s portfolio to get a plug-and-play on-ramp to the cloud with AWS.

Economical and Flexible Path to the Cloud

As business information needs intensify and data environments become more complex, the combination of AWS and Informatica enables organizations to increase the flexibility and reduce the costs of their information infrastructures through:

  • More cost-effective data warehousing and analytics – Customers benefit from lower costs and increased agility when unlocking the value of their data with no on-premise data warehousing/analytics environment to design, deploy and manage.
  • Broad, easy connectivity to AWS – Customers gain full flexibility in integrating data from any Informatica-supported data source (the broadest set of sources supported by any integration vendor) through the use of pre-built connectors for AWS.
  • Seamless hybrid integration – Hybrid integration scenarios across Informatica PowerCenter and Informatica Cloud data integration deployments are able to connect seamlessly to AWS services.
  • Comprehensive use case coverage – Informatica solutions for data integration and warehousing, data archiving, data streaming and big data across cloud and on-premise applications mesh with AWS solutions such as RDS, Redshift, Kinesis, S3, DynamoDB, EMR and other AWS ecosystem services to drive new and rapid value for customers.

New Support for AWS Services

Informatica introduced a number of new Informatica Cloud integrations with AWS services, including connectors for Amazon DynamoDB, Amazon Elastic MapReduce (Amazon EMR) and Amazon Simple Storage Service (Amazon S3), to complement the existing connectors for Amazon Redshift and Amazon Relational Database Service (Amazon RDS).
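
The mechanics such a connector automates are easy to picture. Here is a minimal, hypothetical sketch (not Informatica’s implementation) of the common S3-to-Redshift load pattern, using boto3 and psycopg2; the bucket, cluster, table and IAM role names are invented:

```python
import boto3
import psycopg2

# Stage the file in Amazon S3 first; Redshift loads fastest from S3.
s3 = boto3.client("s3")
s3.upload_file("orders.csv", "my-staging-bucket", "loads/orders.csv")

# Then issue a COPY so the cluster pulls the file in parallel across slices.
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="loader", password="...",
)
with conn, conn.cursor() as cur:
    cur.execute("""
        COPY orders
        FROM 's3://my-staging-bucket/loads/orders.csv'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-loader'
        CSV IGNOREHEADER 1;
    """)
```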

Additionally, the latest Informatica PowerCenter release for Amazon Elastic Compute Cloud (Amazon EC2) includes support for:

  • PowerCenter Standard Edition and Data Quality Standard Edition
  • Scaling options – Grid, high availability, pushdown optimization, partitioning
  • Connectivity to Amazon RDS and Amazon Redshift
  • Domain and repository databases in Amazon RDS, per the current database PAM (Product Availability Matrix)

To learn more, try our 60-day free Informatica Cloud trial for Amazon Redshift.

If you’re in Vegas for re:Invent, Nov. 11-14, please come by Booth #1031 in the Venetian/Sands Expo Hall.


Decrease Salesforce Data Prep Time With Project Springbok

Account executives update opportunities in Salesforce all the time. As opportunities close, payment information arrives in the financial system. Account executives then spend hours combining the two data sets to prepare them for differential analysis, often in a prolonged back-and-forth dialogue with IT. This takes time and effort, and it can delay the sales process.

What if you could spend less time preparing your Salesforce data and more time analyzing it?

Informatica has a vision to solve this challenge by providing self-service data to non-technical users. Earlier this year, we announced our Intelligent Data Platform (IDP). One of the key projects in the IDP, code-named “Springbok,” uses an Excel-like search interface to let business users find and shape the data they need.

Informatica’s Project Springbok is a faster, better and, most importantly, easier way to intelligently work with data for any purpose. Springbok guides non-technical users through a data preparation process in a self-service manner. It makes intelligent recommendations and suggestions, based on the specific data they’re using.

To see this in action, we welcome you to join us as we partner with Halak Consulting, LLC for an informative webinar on November 18th at 10am PST. You will hear from the Springbok VP of Strategy and from an experienced Springbok user about how Springbok can benefit you.

So REGISTER for the webinar today!

For another perspective, see the post “Imagine Not Needing to do a VLookup ever again!” from Deepa Patel, Salesforce.com MVP.


To Determine The Business Value of Data, Don’t Talk About Data

The title of this article may seem counterintuitive, but the reality is that the business doesn’t care about data.  They care about their business processes and outcomes that generate real value for the organization. All IT professionals know there is huge value in quality data and in having it integrated and consistent across the enterprise.  The challenge is how to prove the business value of data if the business doesn’t care about it. (more…)


If Data Projects Weather, Why Not Corporate Revenue?

Every fall, Informatica sales leadership puts together its strategy for the following year.  The revenue target is typically a function of the number of sellers, the addressable market size and key accounts in a given territory, average spend and conversion rate given prior years’ experience, and so on.  This straightforward math has not changed in probably decades, but it assumes that the underlying data are 100% correct. This data includes:

  • Number of accounts with a decision-making location in a territory
  • Related IT spend and prioritization
  • Organizational characteristics like legal ownership, industry code, credit score, annual report figures, etc.
  • Key contacts, roles and sentiment
  • Prior interaction (campaign response, etc.) and transaction (quotes, orders, payments, products, etc.) history with the firm

Every organization, whether it is a life insurer, a pharmaceutical manufacturer, a fashion retailer or a construction company, knows this math and plans on achieving somewhere above 85% of the resulting target.  Office locations, support infrastructure spend, compensation and hiring plans are based on this and communicated accordingly.

We Are Not Modeling the Global Climate Here

So why is it that, when it is an open secret that the underlying data is far from perfect (accurate, current and useful) and corrupts outcomes, so few believe that fixing it has any revenue impact?  After all, we are not projecting the climate for the next hundred years with a thousand-plus variables.

If corporate hierarchies are incorrect, your spend projections, territory targets, credit terms and discount strategy will be off.  If every client touch point lacks a complete picture of cross-departmental purchases and campaign responses, your customer acquisition cost will be too high, as you will contact the wrong prospects with irrelevant offers.  If billing, tax or product codes are incorrect, your billing will be off; this is a classic telecommunications example worth millions every month.  If your equipment location and configuration data are wrong, maintenance schedules will be incorrect, and every hour of production interruption will cost an industrial manufacturer of wood pellets or oil millions.

Also, if industry leaders enjoy an upsell ratio of 17% and you experience 3%, data will have a lot to do with it (assuming you have no formal upsell policy because it would violate your independent-middleman relationships).

The challenge is not whether data can create revenue improvements, but how much, given the other two factors: people and process.

Every industry laggard can identify a few FTEs who spend 25% of their time putting together one-off data repositories for some compliance, M&A, customer or marketing analytics purpose.  Organic revenue growth, from net-new or previously unrealized revenue, is what any data management initiative should focus on.  Don’t get me wrong; purposeful recruitment (people), comp plans and training (processes) are important as well.  Few people doubt that people and process drive revenue growth.  However, few believe the data being fed into these processes has an impact.

This is a head-scratcher for me. An IT manager at a US upstream oil firm once told me that it would be ludicrous to think data has a revenue impact.  They just fixed data because it is important, so that his consumers would know where all the wells are and which ones made a good profit.  Isn’t that assuming data drives production revenue? (Rhetorical question.)

A CFO at a smaller retail bank said during a call that his account managers know their clients’ needs and history, and that there is nothing more good data can add in terms of value.  This came after twenty other folks at his bank, including his own team, had delivered more than ten use cases, of which three were based on revenue.

Hard cost reduction (materials and FTEs) is easy to quantify, and cost avoidance is a leap of faith to a degree, but revenue is not any less concrete.  Otherwise, why not just throw the dice and see what revenue looks like next year without a central customer database?  Let every account executive in every department gather their own data, structure it the way they want, put it on paper and make hard copies for distribution to HQ.  This is not about paper versus electronic; it is about the inability to reconcile data from many sources, a problem that is only one step worse on paper than in scattered electronic copies.

Have you ever heard of an organization moving back to the Fifties and competing today?  That would be a fun exercise.  Thoughts, suggestions?  I would be glad to hear them.


Is it Enterprise Architecture or Wall Art?

Just last week, I visited a client for whom I had been consulting on and off for several years. On the meeting room wall, I saw their Enterprise Architecture portfolio, beautifully designed and printed on a giant sheet of paper. My host proudly informed me how much she had enjoyed putting that diagram together in 2009.

I jokingly reminded her of the famous notion of “art for art’s sake,” an apt phrase for what many architects are doing when populating frameworks. Indeed, when we refer to Enterprise Architecture, we must remember that the term ‘architecture’ is, itself, a metaphor.

In a tough economy, when competition is increasingly global and marketplaces are shifting, the ability to make tough decisions is going to be essential. Opportunities to save costs are going to be prized, and architecture invariably helps companies save money. The ability to reuse, and thus rapidly seize the next related business opportunity, is also going to be highly valued.

The thing you have to be careful of is that if you see your markets disappearing, if your product is outdated, or if your whole industry is redefining itself, as we have seen in fields like media, you have to be ready to innovate. Architecture can restrict your innovative gene by saying, “Wait, wait, wait. We want to slow down. We want to do things on our platform.” That can be very dangerous if you are really facing disruptive technology or market changes.

Albert Camus, in his famous essay “The Myth of Sisyphus,” reinterpreted the central theme of the Sisyphus myth. Similarly, we need to challenge the myths of Enterprise Architecture, and of enterprise system/solution architecture in general, not meekly accept them.

IEEE says, “A key premise of this metaphor is that important decisions may be made early in system development in a manner similar to the early decision-making found in the development of civil architecture projects.”

Keep asking yourself, “When is what we built that’s stable actually constraining us too much? When is it preventing important innovation?” For many architects, that’s going to be tough, because you start to love the architecture, the standards, and the discipline. You love what you’ve created, but if it isn’t right for the market you’re facing, you have to be ready to let it go and go seize the next opportunity.

The central message is as follows: ‘documenting’ architecture in various layers of abstraction for the sake of ‘completeness’ is plainly ridiculous. This is especially true when producing the artifacts takes so long that the whole collection is obsolete on completion.


5 Key Factors for Successful Print Publishing

Product Information Management Facilitates Catalog Print Publishing

In his recent article, “The catalog is dead – long live the catalog,” Informatica’s Ben Rund spoke about how printed catalogs fit into the omnichannel puzzle as a valuable touch point on the connected customer’s informed purchase journey. The response was far greater than we could have hoped for, and we would like to thank all those who participated. Seeing how much interest this topic generated, we decided to investigate further and find out which factors help make print publishing successful.

5 Key Factors for Successful Print Publishing Projects

Today’s digital world impacts every facet of our lives. Deloitte recently reported that approximately 50% of purchases are influenced by our digital environment. Yet companies often have no idea how much they can save by producing printed catalogs that leverage pre-existing data sources; the research at www.pim-roi.com describes several such examples. Looking back at many successful projects, Michael Giesen and his team at Laudert saw the greatest potential for savings when the focus is on optimizing “time to market” (if, of course, business teams operate asynchronously!).

For this blog entry, we interviewed Michael Giesen, IT Consultancy and Project Management at Laudert, to get his thoughts on the key factors behind successful print publishing projects: which steps to prioritize, and which pitfalls to avoid at all costs in order to ensure the best results.

1. Publication Analysis

How are objects in print (like products) structured today? What about individual topics and the design of creative pages? How is the placement of tables, headings, prices and images organized? Are there standards? If so, what can be standardized, and how? To get an overall picture, you have to examine these points thoroughly for all the content elements involved in the layout, ensuring that they can be used for dynamic publishing in the future. It is conceivable that individual elements, such as titles or pages used in subject areas, could be excluded and reused in separate projects. Gaining the ability to automate catalog creation may require compromise in certain areas; we shall discuss this later. In the future, product information might be presented with 4 table types instead of 24, for example, with very little need to apply changes. Great, now we are on the right path!

2. Data Source Analysis

Where is the data used in today’s printed material sourced from? Are there several data sources that need to be combined? How is pricing handled? What about product attributes, or the structure of product description tables for an individual item? Is all the marketing content, with its subsequent variations, included as well? What about numerous product images, or multiple languages? What about seasonally adjusted texts that pull from external sources?

This case requires a very detailed analysis, leading us to the following question:

What is the role and the value of storing product information using a standardized method in print publishing?

The benefits of such standardization should be clear by now: the more standards are in place, the more time you will save and the greater your ability to generate positive ROI. Companies that already run systems holding well-structured data are ahead in the game. Better still, positive results don’t require starting from scratch: companies that have already invested in database systems (e.g., MSSQL) can leverage their existing infrastructure.
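
As a concrete illustration, here is a minimal pyodbc sketch of sourcing catalog content from such an existing MSSQL system; the server, tables and columns are invented for illustration:

```python
import pyodbc

# Connect to the existing product database (hypothetical server and schema).
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql.example.com;DATABASE=pim;UID=catalog;PWD=..."
)
cursor = conn.cursor()

# Pull one well-structured record per catalog entry, ready for layout.
cursor.execute("""
    SELECT p.sku, p.name, p.description, pr.list_price, i.image_path
    FROM products p
    JOIN prices pr ON pr.sku = p.sku AND pr.price_list = 'catalog'
    JOIN images i ON i.sku = p.sku AND i.usage = 'print'
    ORDER BY p.category, p.sku
""")
rows = cursor.fetchall()
```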

3. Process Analysis

In this section of our analysis, we get right down to the details: What does the production process look like, from the initial layout phase to the final release? Who is responsible for the “how”? Who maintains the linear progression? Who holds responsibility and release rights? Where are the bottlenecks, and are there safeguard mechanisms in place? Once all these roles and processes have been defined and given the right resources, you are ready to tackle the next key factor: implementation.

4. Implementation

Here you should be adventurous, creative and open-minded, since compromise might be needed. If your existing data sources do not meet the requirements, a solution must be found; a certain technical, creative pragmatism can facilitate short- and medium-term planning (see point 2). You must extract and prepare your data sources for the printed medium, a catalog, for example. The priint:suite from WERK II has proven itself as a robust all-round solution for database publishing and Web2Print, and all-inclusive PIM solutions such as Informatica PIM already offer a standard interface to priint:suite. Depending on the specific requirements, an important decision must then be made: Is there a need for an InDesign Server? Simply put, it enables the fully automatic production of large-volume objects and offers an accurate data preview. The WERK II PDF renderer, while slightly less featured, offers similar functionality at a significantly more affordable price.

Based on the software and interfaces selected, an optimized process that supports your systems can be developed, structured to be fully automated if needed. For individual groups of goods, templates can be defined and placeholders and page layouts developed (a generic sketch follows below). Production can start!
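
The generic sketch mentioned above: template-driven rendering of one product group. This uses Python’s string.Template purely for illustration; it is not the priint:suite or InDesign Server API, and the field names are invented:

```python
from string import Template

# One placeholder-driven template per product row (hypothetical fields).
PRODUCT_ROW = Template("$sku  $name  $price EUR")

def render_group(heading, products):
    """Render one group of goods as a block of catalog copy."""
    lines = [heading, "-" * len(heading)]
    lines += [PRODUCT_ROW.substitute(p) for p in products]
    return "\n".join(lines)

print(render_group("Garden Tools", [
    {"sku": "GT-100", "name": "Spade", "price": "24.90"},
    {"sku": "GT-101", "name": "Rake", "price": "17.50"},
]))
```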

5. Selecting an Implementation Partner

To ensure a smooth transition from day one, consider from the outset engaging a partner to carry out the implementation. Since practical expertise, even more than technology, delivers maximum process efficiency, ask any potential partner for references. Insight from existing customers will give you feedback on their experience and successes, and a good partner will be pleased to put you in touch with them.

What are Your Key Factors for Successful Print Publishing?

I would like to know your thoughts on this topic. Has anyone tried PDF renderers other than WERK II’s, such as Codeware’s XActuell? If there are other factors you think are important to successful print publishing, feel free to mention them in the comments. I’d be happy to discuss here or on Twitter at @nicholasgoupil.


Fast and Fasterer: Screaming Streaming Data on Hadoop

This is a guest blog post, written by Dale Kim, Director of Product Marketing at MapR Technologies.

Recently published research shows that “faster” is better than “slower.” The point, ladies and gentlemen, is that speed, for lack of a better word, is good. But granted, you won’t always have the need for speed. My Lamborghini is handy when I need to elude the Bakersfield fuzz on I-5, but it does nothing for my Costco trips. There, I go with capacity and haul home my 30-gallon tubs of ketchup in my Ford F150. (Note: this is a fictitious example; I don’t actually own an F150.)

But if speed is critical, like in your data streaming application, then Informatica Vibe Data Stream and the MapR Distribution including Apache™ Hadoop® are the technologies to use together. But since Vibe Data Stream works with any Hadoop distribution, my discussion here is more broadly applicable. I first discussed this topic earlier this year during my presentation at Informatica World 2014. In that talk, I also briefly described architectures that include streaming components, like the Lambda Architecture and enterprise data hubs. I recommend that any enterprise architect should become familiar with these high-level architectures.

Data streaming deals with a continuous flow of data, often at a fast rate. As you might’ve suspected by now, Vibe Data Stream, based on Informatica Ultra Messaging technology, is great for that. With its roots in high-speed trading in capital markets, Ultra Messaging quickly and reliably gets high-value data from point A to point B. Vibe Data Stream adds management features to make it consumable by the rest of us, beyond stock trading. Not surprisingly, Vibe Data Stream can be used anywhere you need to quickly and reliably deliver data (just don’t use it for sharing your cat photos, please), and that’s what I discussed at Informatica World. Let me discuss two examples I gave.

Large Query Support. Let’s first look at “large queries.” I don’t mean the stuff you type into search engines, which is typically no more than 20 characters. I’m referring to an environment where the query is a huge block of data. For example, what if I have an image of an unidentified face, and I want to send it to a remote facial recognition service and immediately get the identity? The image would be the query, the facial recognition system could run on Hadoop for fast divide-and-conquer processing, and the result would be the person’s name. Many similar use cases could leverage a high-speed, reliable data delivery system along with a fast processing platform to get immediate answers to a data-heavy question.
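
Vibe Data Stream’s own API is not shown here, but the “large query” pattern itself is easy to sketch. Here is a hypothetical example using plain HTTP via requests, with an invented endpoint standing in for a Hadoop-backed recognition service:

```python
import requests

# The "query" is a large binary payload: the image itself.
with open("unknown_face.jpg", "rb") as f:
    image_bytes = f.read()

# Ship the heavy query out; read back the small answer.
resp = requests.post(
    "https://recognition.example.com/identify",   # hypothetical service
    data=image_bytes,
    headers={"Content-Type": "image/jpeg"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json().get("name"))  # e.g. "Jane Doe"
```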

Data Warehouse Onload. For another example, we turn to our old friend the data warehouse. If you’ve been following the industry talk about data warehouse optimization, you know that pumping high-speed data directly into your data warehouse is not an efficient use of that high-value system. Instead, pipe your fast data streams into Hadoop, run some complex aggregations, then load the processed data into your warehouse. You might also consider offloading large processing jobs from your data warehouse onto Hadoop. As you process and aggregate that data, you create a data flow cycle in which enriched data returns to the warehouse, giving your end users efficient analysis on comprehensive data sets.
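
A rough sketch of that onload cycle, using PySpark for the Hadoop-side aggregation; the paths, schema and JDBC target are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("warehouse-onload").getOrCreate()

# Raw stream captures landed on Hadoop (hypothetical path and schema).
events = (spark.read.json("hdfs:///streams/clickstream/2014/11/*")
          .withColumn("event_time", F.to_timestamp("event_time")))

# Run the complex aggregation where the cheap compute is.
hourly = (events
          .groupBy(F.window("event_time", "1 hour"), "product_id")
          .agg(F.count("*").alias("views"),
               F.sum("revenue").alias("revenue")))

# Only the small, enriched aggregate lands in the warehouse.
(hourly.write.format("jdbc")
       .option("url", "jdbc:postgresql://dw.example.com/analytics")
       .option("dbtable", "hourly_product_stats")
       .mode("append")
       .save())
```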

Hopefully this stirs up ideas on how you might deploy high speed streaming in your enterprise architecture. Expect to see many new stories of interesting streaming applications in the coming months and years, especially with the anticipated proliferation of internet-of-things and sensor data.

To learn more about Vibe Data Stream, you can find it on the Informatica Marketplace.

Ebola: Why Big Data Matters

The Ebola virus outbreak in West Africa has now claimed more than 4,000 lives and has reached the United States. While emergency response teams, hospitals, charities, and non-governmental organizations struggle to contain the virus, could big data analytics help?

A growing number of data scientists believe so.

Recall the cholera outbreak in Haiti in 2010, after the tragic earthquake: a joint research team from the Karolinska Institute in Sweden and Columbia University in the US analyzed calling data from two million mobile phones on the Digicel Haiti network. This enabled the United Nations and other humanitarian agencies to understand population movements during the relief operations and the subsequent cholera outbreak, so they could allocate resources more efficiently and identify areas at increased risk of new cholera outbreaks.

Mobile phones are widely owned, even in the poorest countries in Africa, and they are a rich source of data in regions where other reliable sources are sorely lacking. Senegal’s Orange Telecom provided Flowminder, a Swedish non-profit organization, with anonymized voice and text data from 150,000 mobile phones. Using this data, Flowminder drew up detailed maps of typical population movements in the region.

Today, authorities use this information to evaluate the best places to set up treatment centers, check-posts, and issue travel advisories in an attempt to contain the spread of the disease.

The first drawback is that this data is historic. Authorities really need to be able to map movements in real time, especially since people’s movements tend to change during an epidemic.

The second drawback is that the scope of the data provided by Orange Telecom is limited to a small region of West Africa.

Here is my recommendation to the Centers for Disease Control and Prevention (CDC):

  1. Increase the area of data collection to the entire region of West Africa, which covers over 2.1 million cell-phone subscribers.
  2. Collect mobile phone mast activity data to pinpoint where calls to helplines are mostly coming from, and to draw population heat maps and population movements. A sharp increase in calls to a helpline is usually an early indicator of an outbreak.
  3. Overlay this data on census data to build up a richer picture (a toy sketch of steps 2 and 3 follows this list).
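
The toy sketch promised above, in pandas; the files and column names are invented. It scores regions by per-capita helpline call volume against census population:

```python
import pandas as pd

calls = pd.read_csv("mast_activity.csv")   # columns: mast_id, region, is_helpline
census = pd.read_csv("census.csv")         # columns: region, population

# Step 2: aggregate helpline call volume per region from mast activity.
helpline = (calls[calls["is_helpline"] == 1]
            .groupby("region").size()
            .rename("helpline_calls").reset_index())

# Step 3: overlay on census data and normalize to a per-capita "heat" score.
heat = helpline.merge(census, on="region")
heat["calls_per_10k"] = heat["helpline_calls"] / heat["population"] * 10000

# Regions with the sharpest per-capita spikes are early-warning candidates.
print(heat.sort_values("calls_per_10k", ascending=False).head(10))
```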

The most positive impact we can have is to help emergency relief organizations and governments anticipate how a disease is likely to spread. Until now, they had to rely on anecdotal information, on-the-ground surveys, police, and hospital reports.


Go On, Flip Your Division of Labor: More Time Analyzing and Less Time Prepping Data

Are you in Sales Operations or Marketing Operations? A sales representative or manager? A marketing professional? If so, it’s no secret that you benefit greatly from the power of performing your own analysis at your own rapid pace. When you have a hunch, you can easily test it by visually analyzing data in Tableau without involving IT. When you face tight timeframes in which to gain business insight from data, being able to do it yourself, in the time you have available and without technical roadblocks, makes all the difference.

Self-service business intelligence is powerful!  However, we all know it can be even more powerful. When putting together an analysis, you spend about 80% of your time assembling data and just 20% of your time analyzing it to test your hunch or gain your business insight. You don’t need to accept this anymore. We want you to know that there is a better way!

We want you to flip your division of labor: spend more than 80% of your time analyzing data to test your hunch or gain your business insight, and less than 20% of your time putting together data for your Tableau analysis! That’s right. You like it. No, you love it. No, you are ready to run laps around your chair in sheer joy!! And you should feel this way: you can now spend more time on the higher-value activity of gaining business insight from the data, and even find copious time to spend with your family. How’s that?

Project Springbok is a visionary new product designed by Informatica with the goal of making data access and data quality obstacles a thing of the past.  Springbok is meant for the Tableau user: a data person who would rather spend their time visually exploring information and finding insight than struggling with complex calculations or waiting for IT. Project Springbok lets you put together your data, rapidly, for subsequent analysis in Tableau, and it tells you things about your data that even you may not have known, through intelligent suggestions presented to the user.

Let’s take a quick tour:

  • Project Springbok tells you that you have a date column and that you likely want to derive the Year and Quarter for your analysis (Fig. 1). If you so wish, with a single click, voila, you have your corresponding years and even quarters, and it all happens in mere seconds. A far cry from the 45 minutes it would have taken a fluent Excel user with VLOOKUPs. (A pandas equivalent is sketched below.)

Fig. 1

VALUE TO A MARKETING CAMPAIGN PROFESSIONAL: Rapidly validate and accurately complete your segmentation list before you analyze your segments in Tableau. Base your segments on trusted data that did not take you days to validate and enrich.
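
For comparison, the manual pandas equivalent of the Fig. 1 suggestion, with a hypothetical file and column name:

```python
import pandas as pd

df = pd.read_csv("opportunities.csv", parse_dates=["close_date"])
df["Year"] = df["close_date"].dt.year        # what the one-click suggestion derives
df["Quarter"] = df["close_date"].dt.quarter  # 1-4, ready for segmentation
```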

  • Then Project Springbok will tell you that you have two datasets that could be joined on a common key present in each dataset, email for example, and ask whether you would like to join them (Fig. 2). If you agree with Project Springbok’s suggestion, voila, the datasets are joined in a mere few seconds. Again, a far cry from the 45 minutes it would have taken a fluent Excel user with VLOOKUPs. (A pandas equivalent is sketched below.)

Fig. 2

VALUE TO A SALES REPRESENTATIVE OR SALES MANAGER: You can now access your Salesforce.com data (Fig. 3) and effortlessly combine it with ERP data to understand your true quota attainment. Never miss quota again due to a revenue split, be it territory or otherwise. Best of all, keep your attainment dataset refreshed and know exactly which data point changed when your true attainment changes.

Fig. 3
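
And the manual pandas equivalent of the join suggestion, again with hypothetical file and column names:

```python
import pandas as pd

sfdc = pd.read_csv("salesforce_opps.csv")  # includes an "email" column
erp = pd.read_csv("erp_payments.csv")      # includes an "email" column

# The common-key join Springbok proposes, done by hand.
combined = sfdc.merge(erp, on="email", how="left")
```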

  • Then, if you want, Project Springbok will tell you that you have emails in the dataset, which you may or may not have known, and, more importantly, ask whether you wish to determine which emails can actually be mailed to. If you proceed, Springbok will not only check each email for correct structure (Fig. 4) but will soon also determine whether the email is active and one you can expect a response from. How long would that have taken you to do? (A rough stand-in for the structural check is sketched below.)

VALUE TO A TELESALES REPRESENTATIVE OR MARKETING EMAIL CAMPAIGN SPECIALIST: Ever thought you had a great email list and then found out most emails bounced? Now, confidently determine which emails you will truly be able to deliver to, before you send the message. Email prospects who you know are actually at the company, confident that you have their correct email addresses. You can then easily push the dataset into Tableau to analyze trends in email list health.

Fig. 4

And, in case you were wondering, there is no training or installation required for Project Springbok. The 80% of your time you used to spend on data preparation shrinks considerably, and that is after using only a few of Springbok’s capabilities. One more thing: you can export directly from Project Springbok into Tableau via the “Export to Tableau TDE” menu item (Fig. 5). Project Springbok creates a Tableau TDE file; just double-click it to open Tableau and test your hunch or gain your business insight.

Fig. 5

Here are some other things you should know, to convince you that you, too, can spend no more than 20% of your time putting together data for your subsequent Tableau analysis:

  • Springbok Sign-Up is Free
  • Springbok automatically finds problems with your data, and lets you fix them with a single click
  • Springbok suggests useful ways for you to combine different datasets, and lets you combine them effortlessly
  • Springbok suggests useful summarizations of your data, and lets you follow through on the summarizations with a single click
  • Springbok allows you to access data from your cloud or on-premise systems with a few clicks, and then automatically keeps it refreshed. It will even tell you what data changed since the last time you saw it
  • Springbok allows you to collaborate by sharing your prepared data with others
  • Springbok easily exports your prepared data directly into Tableau for immediate analysis. You do not have to tell Tableau how to interpret the prepared data
  • Springbok requires no training or installation

Go on. Shift your division of labor in the right direction, fast. Sign up for Springbok and stop wasting precious time on data preparation. http://bit.ly/TabBlogs

———-

Are you going to be at Dreamforce this week in San Francisco?  Interested in seeing Project Springbok working with Tableau in a live demonstration?  Visit the Informatica or Tableau booths and see the power of these two solutions working hand in hand. Informatica is in Booth #N1216 and Booth #9 in the Analytics Zone; Tableau is in Booth #N2112.
