Tag Archives: data

Remembering Big Data Gravity – Part 1

If you’ve wondered why so many companies are eager to control data storage, the answer can be summed up in a simple term: data gravity. Ultimately, where data is determines where the money is. Services and applications are nothing without it.

Dave McCrory introduced his idea of Data Gravity in a blog post back in 2010. The core idea was – and is – interesting. More recently, Data Gravity featured in this year’s EMC World keynote. But, beyond the observation that large or valuable agglomerations of data exert a pull that tends to see them grow in size or value, what is a recognition of Data Gravity actually good for?

As a concept, Data Gravity is closely associated with the current enthusiasm for Big Data. And, like Big Data, the term’s real-world connotations can be unhelpful almost as often as they are helpful. Big Data is usually said to exhibit at least three characteristics: Volume, Velocity, and Variety. Various other V’s, including Value, are mentioned from time to time, but with less consistency. Yet Big Data’s name says it’s all about size, as if the speed with which data must be ingested, processed, or excreted were less important, and the complexity and diversity of the data didn’t matter at all.

On its own, the size of a data set is unimportant. Coping with lots of data certainly raises some not-insignificant technical challenges, but the community is actually doing a good job of coming up with technically impressive solutions. The interesting aspect of a huge data set isn’t its size, but the very different modes of working that become possible when you begin to unpick the complex interrelationships between data elements.

Sometimes, Big Data is the vehicle by which enough data is gathered about enough aspects of enough things from enough places for those interrelationships to become observable against the background noise. Other times, Big Data is the background noise, and any hope of insight is drowned beneath the unending stream of petabytes.

To a degree, Data Gravity falls into the same trap. More gravity must be good, right? And more mass leads to more gravity. Mass must be connected to volume, in some vague way that was explained when I was 11, and which involves STP. Therefore, bigger data sets have more gravity. This means that bigger data sets are better data sets. That assertion is clearly nonsense, but luckily, it’s not actually what McCrory is suggesting. His arguments are more nuanced than that, and potentially far more useful.
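For reference, the physical analogy being leaned on here is Newton’s law of universal gravitation, in which attraction grows with the two masses and falls off with the square of the distance between them:

```latex
F = G\,\frac{m_1 m_2}{r^2}
```

As I read it, McCrory’s Data Gravity formulation borrows this shape and substitutes data-centric stand-ins for mass and distance, which is why the network-level “vital statistics” discussed below appear in his equations.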

Instinctively, I like that the equation attempts to move attention away from ‘the application’ toward the pools of data that support many, many applications at once. The data is where the potential lies. Applications are merely the means to unlock that potential in various ways. So maybe notions of Potential Energy from elsewhere in Physics need to figure here.

But I’m wary of the emphasis given to real numbers that are simply the underlying technology’s vital statistics: network latency, bandwidth, request sizes, numbers of requests, and the rest. I realize that these are the measurable things we have, but I feel that more abstract notions of value need to figure just as prominently.

So I’m left reaffirming my original impression that Data Gravity is “interesting”. It’s also intriguing, and I keep feeling that it should be insightful. I’m just not — yet — sure exactly how. Is a resource with a Data Gravity of 6 twice as good as a resource with a Data Gravity of 3? Does a data set with a Data Gravity of 15 require three times as much investment/infrastructure/love as a data set scoring a humble 5? It’s unlikely to be that simple, but I do look forward to seeing what happens as McCrory begins to work with the parts of our industry that can lend empirical credibility to his initial dabbling in mathematics.

If real numbers show the equations to stand up, all we then need to do is work out what the numbers mean. Should an awareness of Data Gravity change our behavior, should it validate what gut feel led us to do already, or is it just another ‘interesting’ and ultimately self-evident number that doesn’t take us anywhere?

I don’t know, but I will continue to explore. You can contact me on Twitter: @bigdatabeat.


CFO Rising: CFOs Show They Are Increasingly Business Oriented

At the CFO Rising West Conference on October 30th and 31st, there were sessions on managing capital expenditures, completing an IPO, and even managing margin and cash flow. However, the keynote presenters did not spend much time on these topics. Instead, they focused on how CFOs need to help their firms execute better. Here is a quick summary of the suggestions made by CFOs in broadcasting, consumer goods, retail, healthcare, and medical devices.

The Modern CFO is Strategic

The Broadcasting CFO started his talk by saying he was not at the conference to explain why CFOs need to move from being “bean counters to strategic advisors”. He said, “let’s face it, the modern CFO is a strategic CFO”. Agreeing with this viewpoint, the Consumer Goods CFO said that finance organizations have a major role to play in business transformation. Finance, after all, is the place to drive corporate improvement as well as business productivity and efficiency.

CFOs Talked About Their Business’ Issues

The Retailer CFO talked like a marketing person. He said retail today is all about driving a multichannel customer experience, and to do this, finance increasingly needs to provide real business value. Data, he said, is therefore critical to the retailer’s ability to serve customers better. He claimed that customers are changing how they buy, what they want to buy, and when they want to buy. In his words, we are being disrupted, and it is better to understand and respond to these trends; we are trying, therefore, to build a better model of ecommerce.

Meanwhile, the Medical Devices CFO said that as a supplier to medical device vendors, “what we do is compete with our customers’ engineering staffs”. And the Consumer Goods CFO added that it is important for finance to drive sustained business transformation.

CFOs Want To Improve Their Business’ Ability To Execute

The Medical Devices CFO said CFOs need to look for “earlier execution points” and for the drivers of behavior change. As a key element of this, he suggested that CFOs need to develop “early warning indicators”. CFOs need to actively look at the ability to achieve objectives. With sales, that means asking: what deals do we have in the pipe? What size are these deals? And at what success rate will these deals close? Only with this information can the CFO derive an expected company growth rate. He then asked the CFOs in the room to identify themselves and, with their hands in the air, asked whether they are helping to create a company that executes. He laid down the gauntlet by asserting that if you are not creating a company that executes, you are going to be looking at cutting costs sooner rather than later.

The Retailer CFO agreed. He said that today we need to focus on how to win a market, and we need to be asking business questions, including:

  • How should we deploy resources to deliver against our firm’s value proposition?
  • How do we know when we win?

CFOs Claim Ownership For Enterprise Performance Measurement

The Retail CFO said that finance needs to own “the facts for the organization”—the metrics and KPIs. This, he claims, is how CFOs will earn their seat at the CEO’s table. He said that in the past CFOs have tended to be stoic, but this now needs to change.

The Medical Devices CFO agreed and said enterprises shouldn’t be tracking 150 things—they need to pare it down to 12-15. Whatever they choose to measure, they need to be able to answer the who, the what, and the when. He said that in an execution culture people need to know the targets, and they need measurable goals. He also asserted that business metrics are needed over financial metrics. The Consumer Goods CFO agreed, saying that financial measures alone would discover that “a house is on fire after half the house had already burned down”. The Healthcare CFO picked up on this idea and talked about the importance of finance driving value scorecards and monthly benchmarks of performance improvement. The Broadcaster CFO went further and suggested the CFO’s role is that of a value optimizer.

CFOs Own The Data and Drive a Fact-based, Strategic Company Culture

The Retail CFO discussed the need to drive a culture of insight. This means that data absolutely matters to the CFO. He honestly admits that finance organizations have not used data well enough, but he claims finance needs to make the time to truly become data centric. He said, “I do not consider myself a data expert,” but finance needs to own “enterprise data and the integrity of this data”. He said as well that finance needs to ensure there are no data silos. He summarized by saying finance needs to use data to make sure that resources are focused on the right things, decisions are based on facts, and metrics are simple and understandable. “In finance, we need to use data to increasingly drive business outcomes.”

CFOs Need to Drive a Culture That Executes for Today and the Future

Honestly, I never thought that I would hear this from a group of CFOs. The Retail CFO said we need to ensure that the big ideas do not get lost, and we need to speed up the prosecution of business activities. We need to drive the exponential things (which means positioning our assets and resources) and, at the same time, the linear things that can deliver a 1% improvement in execution or a 1% reduction in cost. Meanwhile, the Medical Devices CFO discussed the present value of liabilities such as rework, lawsuits, and warranty costs. He said that finance leaders need to ensure things are done right today so the business doesn’t have problems a year from now. “If you give doing it right the first time a priority, you can reduce warranty reserve, and this can directly impact corporate operating income.”

CFOs Need to Lead on Ethics and Compliance

The Medical Devices CFO said that CFOs also need to have high ethics and drive compliance. The Retail CFO discussed how finance needs to make the business transparent: transparent about what is working and what is not working. At the same time, the CFO’s role is to ensure the integrity of the organization. The Broadcaster CFO asserted much the same thing, saying that CFOs need to take a stakeholder approach to how they do business.

Final Remarks

On the whole, the CFOs at CFO Rising are showing the way forward for the modern CFO: one who uses data to drive present and future performance, ethics and compliance, and business transparency. This is a big change from the historical controller approach and mentality. I once asked a boss what I needed to do to be promoted to Vice President; my boss said that I needed to move from being a technical specialist to being a business person. Today’s CFOs clearly show that they are business people first.

Related links

Solution Brief: The Intelligent Data Platform

Related Blogs
CFOs Move to Chief Profitability Officer
CFOs Discuss Their Technology Priorities
The CFO Viewpoint upon Data
How CFOs can change the conversation with their CIO?
New type of CFO represents a potent CIO ally
Competing on Analytics
The Business Case for Better Data Connectivity
Twitter: @MylesSuer


Data First: Five Tips To Reduce the Risk of A Breach

This article was originally published on www.federaltimes.com

November – that time of the year. This year, November 1 was the start of Election Day weekend and the associated endless barrage of political ads. It also marked the end of Daylight Saving Time. But, perhaps more prominently, it marked the beginning of the holiday shopping season. Winter holiday decorations erupted in stores even before Halloween decorations were taken down. There were commercials and ads, free shipping on this, sales on that, singing, and even the first appearance of Santa Claus.

However, it’s not all joy and jingle bells. The kickoff to this holiday shopping season may also remind many of the countless credit card breaches at retailers that plagued last year’s shopping season and beyond. The breaches at Target, where almost 100 million customer records were compromised, and at Neiman Marcus, Home Depot, and Michaels exemplify the urgent need for retailers to aggressively protect customer information.

In addition to the holiday shopping season, November also marks the next round of open enrollment for the ACA healthcare exchanges. Therefore, to avoid falling victim to the next data breach, government organizations, as much as retailers, need to have data security top of mind.

According to the New York Times (Sept. 4, 2014), “for months, cyber security professionals have been warning that the healthcare site was a ripe target for hackers eager to gain access to personal data that could be sold on the black market. A week before federal officials discovered the breach at HealthCare.gov, a hospital operator in Tennessee said that Chinese hackers had stolen personal data for 4.5 million patients.”

Acknowledging the inevitability of further attacks, companies and organizations are taking action. For example, the National Retail Federation created the NRF IT Council, which is made up of 130 technology-security experts focused on safeguarding personal and company data.

Is government doing enough to protect personal, financial and health data in light of these increasing and persistent threats? The quick answer: no. The federal government as a whole is not meeting the data privacy and security challenge. Reports of cyber attacks and breaches are becoming commonplace, and warnings of new privacy concerns in many federal agencies and programs are being discussed in Congress, Inspector General reports and the media. According to a recent Government Accountability Office report, 18 out of 24 major federal agencies in the United States reported inadequate information security controls. Further, FISMA and HIPAA are falling short, and antiquated security protocols, such as encryption, are not keeping up with the sophistication of attacks. Government must follow the lead of industry and look to new and advanced data protection technologies, such as dynamic data masking and continuous data monitoring, to prevent and thwart potential attacks.
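To make “dynamic data masking” a little more concrete, here is a minimal, vendor-neutral sketch of the idea: sensitive fields are obfuscated on the way out of the data store, so downstream users and non-production environments see masked values while the stored data is untouched. The record layout and masking rules below are invented for illustration.

```python
import re

def mask_ssn(ssn: str) -> str:
    """Show only the last four digits of a Social Security number."""
    return "XXX-XX-" + ssn[-4:]

def mask_card(card: str) -> str:
    """Show only the last four digits of a payment card number."""
    digits = re.sub(r"\D", "", card)
    return "**** **** **** " + digits[-4:]

def mask_email(email: str) -> str:
    """Hide the local part of an email address."""
    local, _, domain = email.partition("@")
    return local[:1] + "***@" + domain

# Masking rules keyed by field name; unlisted fields pass through unchanged.
MASKERS = {"ssn": mask_ssn, "credit_card": mask_card, "email": mask_email}

def dynamic_mask(row: dict) -> dict:
    """Apply masking at query time, leaving the underlying source intact."""
    return {field: MASKERS.get(field, lambda v: v)(value) for field, value in row.items()}

record = {"name": "Jane Doe", "ssn": "123-45-6789",
          "credit_card": "4111 1111 1111 1111", "email": "jane.doe@example.gov"}
print(dynamic_mask(record))
# {'name': 'Jane Doe', 'ssn': 'XXX-XX-6789',
#  'credit_card': '**** **** **** 1111', 'email': 'j***@example.gov'}
```

Commercial dynamic data masking products apply rules like these inside the database or a proxy layer, driven by user roles; the sketch is only meant to show that masked views and untouched sources can coexist.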

These five principles can be implemented by any agency to curb the likelihood of a breach:

1. Expand the appointment and authority of CSOs and CISOs at the agency level.

2. Centralize the agency’s data privacy policy definition and implement on an enterprise level.

3. Protect all environments from development to production, including backups and archives.

4. Data and application security must be prioritized at the same level as network and perimeter security.

5. Data security should follow data through downstream systems and reporting.

So, as the season of voting, rollbacks, on-line shopping events, free shipping, Black Friday, Cyber Monday and healthcare enrollment begins, so does the time for protecting personally identifiable information, financial information, credit cards and health information. Individuals, retailers, industry and government need to think about data first and stay vigilant and focused.

This article was originally published on www.federaltimes.com. Please view the original listing here


Decrease Salesforce Data Prep Time With Project Springbok

Account Executives update opportunities in Salesforce all the time. As opportunities close, payment information is received in the financial system. Normally, account executives spend hours trying to combine the two sets of data to prepare it for differential analysis. Often, there is a prolonged, back-and-forth dialogue with IT. This takes time and effort, and can delay the sales process.
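For a sense of what that manual prep involves, here is roughly the do-it-yourself route sketched in pandas; the file names and columns are hypothetical, and real exports are usually far messier:

```python
import pandas as pd

# Hypothetical exports: closed opportunities from Salesforce, payments from the financial system.
opps = pd.read_csv("salesforce_opportunities.csv")   # OpportunityId, Account, Amount, CloseDate
payments = pd.read_csv("erp_payments.csv")           # OpportunityId, PaidAmount, PaidDate

# Line up each opportunity with what was actually received.
merged = opps.merge(payments, on="OpportunityId", how="left")
merged["Variance"] = merged["Amount"] - merged["PaidAmount"].fillna(0)

merged.to_csv("opportunity_vs_payment.csv", index=False)
```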

What if you could spend less time preparing your Salesforce data and more time analyzing it?

Informatica has a vision to solve this challenge by providing self-service data to non-technical users. Earlier this year, we announced our Intelligent Data Platform. One of the key projects in the IDP, code-named “Springbok”, uses an Excel-like search interface to let business users find and shape the data they need.

Informatica’s Project Springbok is a faster, better and, most importantly, easier way to intelligently work with data for any purpose. Springbok guides non-technical users through a data preparation process in a self-service manner. It makes intelligent recommendations and suggestions, based on the specific data they’re using.

To see this in action, we welcome you to join us as we partner with Halak Consulting, LLC for an informative webinar. The webinar will take place on November 18th at 10am PST. You will learn from the Springbok VP of Strategy and from an experienced Springbok user about how Springbok can benefit you.

So REGISTER for the webinar today!

For another perspective, see the post “Imagine Not Needing to do a VLookup Ever Again!” from Deepa Patel, Salesforce.com MVP.


Has Hadoop Crossed The Chasm? Thoughts About Strata 2014

Well, it’s been a little over a week since the Strata conference, so I thought I should give some perspective on what I learned. I think it was summed up at my first meeting, on the first morning of the conference. The meeting was with a financial services company that has significant experience with Hadoop. The first words out of their mouths were, “Hadoop is hard.”

Later in the conference, after a Western Union representative spoke about their Hadoop deployment, the speaker was mobbed with end-user questions and comments. The audience was thrilled to hear about an actual operational deployment: not just a sandbox, but a production Hadoop deployment at a company that is over 160 years old.

The market is crossing the chasm from early adopters who love to hand code (and the macho culture of proving they can do the hard stuff) to more mainstream companies that want to use technology to solve real problems. These mainstream companies aren’t afraid to admit that it is still hard. For the early adopters, nothing is ever hard. They love hard. But the mainstream market doesn’t view it that way.  They don’t want to mess around in the bowels of enabling technology.  They want to use the technology to solve real problems.  The comment from the financial services company represents the perspective of the vast majority of organizations. It is a sign Hadoop is hitting the mainstream market.

More proof we have moved to a new phase? Cloudera announced they were going from shipping six versions a year down to just three. I have been saying for a while that we will know Hadoop is real when the distribution vendors stop shipping every two months and move to a more typical enterprise software release schedule. It isn’t that Hadoop engineering efforts have slowed down; the platform is still evolving very rapidly. It is just that real customers are telling the Hadoop suppliers that they won’t upgrade as fast, because they have real business projects running and they can’t do it. So for those of you who are disappointed by the “slow down,” don’t be. To me, this is news that Hadoop is reaching critical mass.

Technology is closing the gap to allow organizations to use Hadoop as a platform without having to maintain an army of Hadoop experts. That is what Informatica does for data parsing, data integration, data quality and data lineage (recent product announcement). In fact, the number one demo at the Informatica booth at Strata was the demonstration of “end to end” data lineage, tracing data from the original source all the way through how it was loaded and then transformed within Hadoop. This is purely an enterprise-class capability that becomes more interesting and important when you actually go into true production.

Informatica’s goal is to hide the complexity of Hadoop so companies can get on with the work of using the platform with the skills they already have in house.  And from what I saw from all of the start-up companies that were doing similar things for data exploration and analytics and all the talk around the need for governance, we are finally hitting the early majority of the market.  So, for those of you who still drop down to the underlying UNIX OS that powers a Mac, the rest of us will keep using the GUI.   To the extent that there are “fit for purpose” GUIs on top of Hadoop, the technology will get used by a much larger market.

So congratulations Hadoop, you have officially crossed the chasm!

P.S. See me on theCUBE talking about a similar topic at: youtu.be/oC0_5u_0h2Q


Fast and Fasterer: Screaming Streaming Data on Hadoop

Guest Post by Dale Kim

This is a guest blog post, written by Dale Kim, Director of Product Marketing at MapR Technologies.

Recently published research shows that “faster” is better than “slower.” The point, ladies and gentlemen, is that speed, for lack of a better word, is good. But granted, you won’t always have the need for speed. My Lamborghini is handy when I need to elude the Bakersfield fuzz on I-5, but it does nothing for my Costco trips. There, I go with capacity and haul home my 30-gallon tubs of ketchup with my Ford F150. (Note: this is a fictitious example, I don’t actually own an F150.)

But if speed is critical, as in your data streaming application, then Informatica Vibe Data Stream and the MapR Distribution including Apache™ Hadoop® are the technologies to use together. And since Vibe Data Stream works with any Hadoop distribution, my discussion here is more broadly applicable. I first discussed this topic earlier this year during my presentation at Informatica World 2014. In that talk, I also briefly described architectures that include streaming components, like the Lambda Architecture and enterprise data hubs. I recommend that any enterprise architect become familiar with these high-level architectures.

Data streaming deals with a continuous flow of data, often at a fast rate. As you might’ve suspected by now, Vibe Data Stream, based on the Informatica Ultra Messaging technology, is great for that. With its roots in high speed trading in capital markets, Ultra Messaging quickly and reliably gets high value data from point A to point B. Vibe Data Stream adds management features to make it consumable by the rest of us, beyond stock trading. Not surprisingly, Vibe Data Stream can be used anywhere you need to quickly and reliably deliver data (just don’t use it for sharing your cat photos, please), and that’s what I discussed at Informatica World. Let me discuss two examples I gave.

Large Query Support. Let’s first look at “large queries.” I don’t mean the stuff you type on search engines, which are typically no more than 20 characters. I’m referring to an environment where the query is a huge block of data. For example, what if I have an image of an unidentified face, and I want to send it to a remote facial recognition service and immediately get the identity? The image would be the query, the facial recognition system could be run on Hadoop for fast divide-and-conquer processing, and the result would be the person’s name. There are many similar use cases that could leverage a high speed, reliable data delivery system along with a fast processing platform, to get immediate answers to a data-heavy question.

Data Warehouse Onload. For another example, we turn to our old friend the data warehouse. If you’ve been following all the industry talk about data warehouse optimization, you know pumping high speed data directly into your data warehouse is not an efficient use of your high value system. So instead, pipe your fast data streams into Hadoop, run some complex aggregations, then load that processed data into your warehouse. You might also consider offloading large processing jobs from your data warehouse onto Hadoop to free up that high value system. As you process and aggregate that data, you create a data flow cycle where you return enriched data back to the warehouse. This gives your end users efficient analysis on comprehensive data sets.
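A minimal sketch of that onload pattern, assuming the streamed events have already landed in HDFS as JSON and the warehouse is reachable over JDBC; the paths, table names, and credentials are placeholders, and this is independent of any particular streaming product:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("warehouse-onload").getOrCreate()

# Raw, high-velocity events previously streamed into Hadoop.
events = spark.read.json("hdfs:///streams/payments/2014/11/*")

# Do the heavy aggregation on the Hadoop cluster, not in the warehouse.
daily = (events
         .groupBy(F.to_date("event_time").alias("day"), "merchant_id")
         .agg(F.count("*").alias("txn_count"),
              F.sum("amount").alias("total_amount")))

# Load only the compact, enriched result into the data warehouse.
(daily.write
      .format("jdbc")
      .option("url", "jdbc:postgresql://warehouse.example.com/dw")
      .option("dbtable", "daily_merchant_summary")
      .option("user", "etl_user")
      .option("password", "secret")
      .mode("append")
      .save())
```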

Hopefully this stirs up ideas on how you might deploy high speed streaming in your enterprise architecture. Expect to see many new stories of interesting streaming applications in the coming months and years, especially with the anticipated proliferation of internet-of-things and sensor data.

To learn more about Vibe Data Stream, you can find it on the Informatica Marketplace.


More Evidence That Data Integration Is Clearly Strategic

A recent study from Epicor Software Corporation surveyed more than 300 IT and business decision-makers.  The study results highlighted the biggest challenges and opportunities facing Australian businesses. The independent research report “From Business Processes to Product Distribution” was based upon a survey of Australian organizations with more than 20 employees.

Key findings from the report include:

  • 65% of organizations cite data processing and integration as hampering distribution capability, with nearly half claiming their existing software and ERP is not suitable for distribution.
  • Nearly two-thirds of enterprises have some form of distribution process, involving products or services.
  • More than 80% of organizations have at least some problem with product or service distribution.
  • More than 50% of CIOs in organizations with distribution processes believe better distribution would increase revenue and optimize business processes, with a further 38% citing reduced operating costs.

The core finding: “With better data integration comes better automation and decision making.”

This report is one of many I’ve seen over the years that come to the same conclusion. Most of those involved in the operations of the business don’t have access to the key data points they need; thus they can’t automate tactical decisions, and they can’t “mine” the data to understand the true state of the business.

The more a business deals with building and moving products, the more data integration becomes an imperative. As stated in this survey, as well as others, the large majority cite “data processing and integration as hampering distribution capabilities.”

Of course, these issues go well beyond Australia. Most enterprises I’ve dealt with have a gap between the need to share key business data to support business processes and decision support, and what currently exists in terms of data integration capabilities.

The focus here is on the multiple kinds of value that data integration can bring. These include:

  • The ability to track everything as it moves from manufacturing, to inventory, to distribution, and beyond. You can then bind this visibility to core business processes, such as automatically reordering parts to make more products and refill inventory (a minimal sketch of such a rule follows this list).
  • The ability to see into the past, and to see into the future. Emerging approaches to predictive analytics allow businesses to finally see into the future, and also to see what went truly right and truly wrong in the past.
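Here is the minimal reorder rule promised above. It assumes the integrated feeds have already been reduced to simple per-part records; the field names and thresholds are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class PartStatus:
    part_id: str
    on_hand: int            # current inventory, from the warehouse system
    daily_usage: float      # average consumption, from manufacturing data
    lead_time_days: int     # supplier lead time, from the procurement system

def needs_reorder(p: PartStatus, safety_days: float = 5.0) -> bool:
    """Reorder when stock can't cover supplier lead time plus a safety buffer."""
    reorder_point = p.daily_usage * (p.lead_time_days + safety_days)
    return p.on_hand <= reorder_point

parts = [
    PartStatus("GEAR-7", on_hand=120, daily_usage=10, lead_time_days=7),
    PartStatus("BOLT-3", on_hand=900, daily_usage=15, lead_time_days=10),
]

for part in parts:
    if needs_reorder(part):
        # In a real integration this would raise a purchase order automatically.
        print(f"Reorder {part.part_id}")
```

The rule itself is trivial; the hard part, and the data integration value, is getting on-hand counts, usage rates, and lead times out of three different systems and into one place.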

While data integration technology has been around for decades, most businesses that both manufacture and distribute products have not taken full advantage of this technology.  The reasons range from perceptions around affordability, to the skills required to maintain the data integration flow.  However, the truth is that you really can’t afford to ignore data integration technology any longer.  It’s time to create and deploy a data integration strategy, using the right technology.

This survey is just an instance of a pattern.  Data integration was considered optional in the past.  With today’s emerging notions around the strategic use of data, clearly, it’s no longer an option.


What is the Silver Lining in Cloud for Financial Services?

This was a great week of excitement and innovation here in San Francisco, starting with the San Francisco Giants winning the National League Pennant for the 3rd time in 5 years on the same day Salesforce’s Dreamforce 2014, its largest customer conference, wrapped up with over 140K attendees from all over the world talking about the new Customer Success Platform.

Salesforce has come a long way from its humble beginnings as the new kid on the cloud front for CRM. The integrated sales, marketing, support, collaboration, application, and analytics capabilities of the Salesforce Customer Success Platform exemplify innovation and significant business value for various industries, and I see them as especially promising for today’s financial services industry. However, like any new business application, the value a business gains from it depends on having the right data available.

The reality is that SaaS adoption by financial institutions has not been as quick as in other industries, due to privacy concerns, regulations that govern what data can reside in public infrastructure, limits on the ability to customize to fit their business needs, cultural barriers within larger institutions that insist critical business applications reside on-premise for control and management purposes, and the challenges of integrating data between existing systems and SaaS applications. However, experts are optimistic that the industry may have turned the corner. Gartner (NYSE:IT) asserts more than 60 percent of banks worldwide will process the majority of their transactions in the cloud by 2016. Let’s take a closer look at some of these challenges and what’s required to overcome them when adopting cloud solutions to power your business.

Challenge #1: Integrating and sharing data between SaaS and on-premise applications must not be taken lightly

Banks and insurance companies considering new SaaS-based CRM, marketing, and support applications from Salesforce and others must weigh the importance of migrating and sharing data between cloud and on-premise applications in their investment decisions. Migrating existing customer, account, and transaction history data is often done by IT staff through custom extracts, scripts, and manual data validations, which can carry invalid information over from legacy systems and, in many cases, make these new application investments useless.

For example, customer type descriptions from one or many existing systems may each be correct in their respective databases, and collapsing them into a common field in the target application seems easy to do. Unfortunately, these transformation rules can be complex, and that complexity increases when dealing with tens if not hundreds of applications during the migration and synchronization phase. Having capable solutions to support the testing, development, quality management, validation, and delivery of existing data from old to new is not only good practice, but a proven way of avoiding costly workarounds and business pain in the future.
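As a toy illustration of what even the “simple” customer-type case involves once two source systems encode the same concept differently (the codes and mappings below are invented):

```python
# Hypothetical source-specific codes for the same business concept.
CRM_CUSTOMER_TYPES = {"IND": "Individual", "CORP": "Corporate", "SMB": "Small Business"}
CLAIMS_CUSTOMER_TYPES = {"P": "Individual", "B": "Corporate", "B2": "Small Business"}

def to_canonical(source: str, code: str) -> str:
    """Map a source-system customer type code to the canonical target value."""
    mapping = {"crm": CRM_CUSTOMER_TYPES, "claims": CLAIMS_CUSTOMER_TYPES}[source]
    try:
        return mapping[code.strip().upper()]
    except KeyError:
        # Unknown codes must be flagged for data stewards, not silently migrated.
        raise ValueError(f"Unmapped customer type {code!r} from {source}")

print(to_canonical("crm", "corp"))    # Corporate
print(to_canonical("claims", "P"))    # Individual
```

Multiply this by dozens of fields and hundreds of systems, plus the exceptions a simple lookup cannot express, and the need for proper testing, validation, and quality tooling becomes clear.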

Challenge #2: Managing and sharing a trusted source of business information across the enterprise

As new SaaS applications are adopted, it is critical to understand how best to govern and synchronize common business information, such as customer contact details (e.g. address, phone, email), across the enterprise. Most banks and insurance companies have multiple systems that create and update critical customer contact information, many of which reside on-premise. For example, when an insurance customer updates a phone number or email address while filing a claim, the claims specialist will often update only the claims system, given the siloed nature of many traditional banking and insurance companies. This is where Master Data Management comes in: it is purpose-built to identify changes to master data, including customer records, in one or many systems, update the customer master record, and share that update with the other systems that need it, which is essential for business continuity and success.
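A highly simplified sketch of that detect, update and propagate loop follows. The system names and in-memory “golden record” store are stand-ins; a real MDM hub adds matching, survivorship rules, and history:

```python
from datetime import datetime

# Stand-in golden record store: customer_id -> master contact attributes.
master = {"C-1001": {"phone": "555-0100", "email": "pat@example.com"}}

# Stand-in downstream systems that must stay in sync with the master.
downstream = {"crm": {}, "claims": {}, "billing": {}}

def on_contact_change(system: str, customer_id: str, field: str, value: str) -> None:
    """Detect a change in any source system, update the golden record, propagate it."""
    record = master.setdefault(customer_id, {})
    if record.get(field) == value:
        return  # nothing actually changed
    record[field] = value
    record["last_updated"] = datetime.utcnow().isoformat()
    for name, store in downstream.items():
        if name != system:  # don't echo the change back to its origin
            store.setdefault(customer_id, {})[field] = value

# A claims specialist captures a new phone number while filing a claim...
on_contact_change("claims", "C-1001", "phone", "555-0199")
print(downstream["crm"]["C-1001"])   # {'phone': '555-0199'} - CRM now sees the update
```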

In conclusion, SaaS adoption will continue to grow in financial services, as it has across other industries. The silver lining in the cloud is your data, and the technology that supports its consumption and distribution across the enterprise. Banks and insurance companies investing in new SaaS solutions will operate in a hybrid environment made up of cloud applications and core transaction systems that reside on-premise. To ensure these investments yield value for the business, it is important to invest in a capable and scalable data integration platform to integrate, govern, and share data across a hybrid ecosystem. To learn more about how to deal with these challenges, click here and download a complimentary copy of the new “Salesforce Integration for Dummies”.


Go On, Flip Your Division of Labor: More Time Analyzing and Less Time Prepping Data

Are you in Sales Operations or Marketing Operations, or are you a Sales Representative, Sales Manager, or Marketing Professional? It’s no secret that if you are, you benefit greatly from the power of performing your own analysis, at your own rapid pace. When you have a hunch, you can easily test it out by visually analyzing data in Tableau without involving IT. When you are faced with tight timeframes in which to gain business insight from data, being able to do it yourself in the time you have available, and without technical roadblocks, makes all the difference.

Self-service Business Intelligence is powerful! However, we all know it can be even more powerful. When putting together an analysis, you typically spend about 80% of your time preparing data, and only 20% of your time analyzing it to test out your hunch or gain your business insight. You don’t need to accept this anymore. We want you to know that there is a better way!

We want you to Flip Your Division of Labor: spend more than 80% of your time analyzing data to test your hunch or gain your business insight, and less than 20% of your time putting together data for your Tableau analysis! That’s right. You like it. No, you love it. No, you are ready to run laps around your chair in sheer joy!! And you should feel this way. You can now spend more time on the higher-value activity of gaining business insight from the data, and even find copious time to spend with your family. How’s that?

Project Springbok is a visionary new product designed by Informatica with the goal of making data access and data quality obstacles a thing of the past. Springbok is meant for the Tableau user: a data person who would rather spend their time visually exploring information and finding insight than struggling with complex calculations or waiting for IT. Project Springbok allows you to put together your data, rapidly, for subsequent analysis in Tableau. It tells you things about your data that even you may not have known, through Intelligent Suggestions that it presents to the user.

Let’s take a quick tour (for the curious, a rough do-it-yourself sketch of these same steps appears just after the tour):

  • Project Springbok tells you that you have a date column and that you likely want to obtain the Year and Quarter for your analysis (Fig. 1). And if you so wish, with a single click, voila, you have your corresponding years and even quarters, all in mere seconds. A far cry from the 45 minutes it would have taken a fluent user of Excel using VLOOKUPs.

Fig. 1

VALUE TO A MARKETING CAMPAIGN PROFESSIONAL: Rapidly validate and accurately complete your segmentation list, before you analyze your segments in Tableau. Base your segments on trusted data that did not take you days to validate and enrich.

  • Then Project Springbok will tell you that you have two datasets that could be joined on a common key, email for example, and will ask whether you would like to move forward and join them (Fig. 2). If you agree with Project Springbok’s suggestion, voila, the datasets are joined in a mere few seconds. Again, a far cry from the 45 minutes it would have taken a fluent user of Excel using VLOOKUPs.

Fig. 2

VALUE TO A SALES REPRESENTATIVE OR SALES MANAGER: You can now access your Salesforce.com data (Fig. 3) and effortlessly combine it with ERP data to understand your true quota attainment. Never miss quota again due to a revenue split, be it territory or otherwise. Best of all, keep your attainment dataset refreshed and know exactly which data point changed whenever your true attainment changes.

Fig. 3

  • Then, if you want, Project Springbok will tell you that you have emails in the dataset, which you may or may not have known; more importantly, it will ask whether you wish to determine which emails can actually be mailed to. If you proceed, not only will Springbok check each email for correct structure (Fig. 4), it will very soon also determine whether the email is indeed active, and one you can expect a response from. How long would that have taken you to do?

VALUE TO A TELESALES REPRESENTATIVE OR MARKETING EMAIL CAMPAIGN SPECIALIST: Ever thought you had a great email list and then found out most emails bounced? Now, confidently determine which emails you will truly be able to deliver to, before you send the message. Email prospects who you know are actually at the company, and be confident you have their correct email addresses. You can then easily push the dataset into Tableau to analyze trends in email list health.

Fig. 4

And, in case you were wondering, there is no training or install required for Project Springbok. The 80% of your time you used to spend on data preparation shrinks considerably, and that is after using only a few of Springbok’s capabilities. One more thing: you can even export directly from Project Springbok into Tableau via the “Export to Tableau TDE” menu item (Fig. 5). Project Springbok creates a Tableau TDE file, and you just double-click it to open Tableau and test out your hunch or gain your business insight.

Fig. 5
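As promised, here is the rough do-it-yourself equivalent of the tour steps above, sketched in pandas with made-up file and column names; Springbok’s point is that you shouldn’t have to write any of this yourself:

```python
import pandas as pd

# Hypothetical inputs: a campaign list and a Salesforce contact export.
campaign = pd.read_csv("campaign_list.csv")         # includes: email, signup_date
contacts = pd.read_csv("salesforce_contacts.csv")   # includes: email, account, owner

# Tour step 1: derive Year and Quarter from a date column (Fig. 1).
campaign["signup_date"] = pd.to_datetime(campaign["signup_date"])
campaign["Year"] = campaign["signup_date"].dt.year
campaign["Quarter"] = campaign["signup_date"].dt.quarter

# Tour step 2: join the two datasets on the common email key (Fig. 2).
combined = campaign.merge(contacts, on="email", how="left")

# Tour step 3: check each email for plausible structure (Fig. 4). A real
# deliverability check would also need an MX lookup or a verification service.
combined["email_ok"] = combined["email"].astype(str).str.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

# Tour step 4: hand the prepared data to Tableau. A true TDE export needs the
# Tableau SDK, so a plain CSV stands in for it here.
combined.to_csv("prepared_for_tableau.csv", index=False)
```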

Here are some other things you should know, to convince you that you, too, can spend no more than 20% of your time putting together data for your subsequent Tableau analysis:

  • Springbok Sign-Up is Free
  • Springbok automatically finds problems with your data, and lets you fix them with a single click
  • Springbok suggests useful ways for you to combine different datasets, and lets you combine them effortlessly
  • Springbok suggests useful summarizations of your data, and lets you follow through on the summarizations with a single click
  • Springbok allows you to access data from your cloud or on-premise systems with a few clicks, and then automatically keeps it refreshed. It will even tell you what data changed since the last time you saw it
  • Springbok allows you to collaborate by sharing your prepared data with others
  • Springbok easily exports your prepared data directly into Tableau for immediate analysis. You do not have to tell Tableau how to interpret the prepared data
  • Springbok requires no training or installation

Go on. Shift your division of labor in the right direction, fast. Sign-Up for Springbok and stop wasting precious time on data preparation. http://bit.ly/TabBlogs

———-

Are you going to be at Dreamforce this week in San Francisco? Interested in seeing Project Springbok working with Tableau in a live demonstration? Visit the Informatica or Tableau booths and see the power of these two solutions working hand-in-hand. Informatica is in Booth #N1216 and Booth #9 in the Analytics Zone. Tableau is located in Booth N2112.
