Category Archives: Business Impact / Benefits

Is your social media investment hampered by your “data poverty”?

Why is nobody measuring the “bag of money” portion? Source: emediate.com

Recently, I talked with a company that had allocated millions of dollars for paid social media promotion. Their hope was that a massive investment in Twitter and Facebook campaigns would lead to “more eyeballs” for their online gambling sites. Although they had internal social media expertise, they lacked a comprehensive partnership with IT. In addition, they lacked a properly funded policy vision. As a result, when asked how much of their socially-driven traffic resulted in actual sales, their answer was a resounding “No Idea.” I attribute this to “data poverty.”

There is a key reason that they were unable to quantify the ROI of their promotion: their business model is, by design, “data poor.” Although a great deal of customer data was available to them, they elected not to use it. They could have used available data to identify “known players” as well as individuals with “playing potential.” There was no law prohibiting them from acquiring this data. However, they were uncomfortable obtaining a higher degree of attribution beyond name, address, e-mail and age. They feared that customers would view them as a commercial counterpart to the NSA. As a result, key data elements like net worth, lifetime value, credit risk, location, marital status, employment status, number of friends/followers and property value were not considered when targeting potential users on social media. So, though the Social Media team considered this granular targeting to be a “dream-come-true,” others within the organization considered it too “1984.”

In addition to their hesitation to leverage available data, they were limited by their dependence on a third-party IT provider. This lack of self-sufficiency created data quality issues, which limited their productivity. Ultimately, this dependency prevented them from capitalizing on new market opportunities in a timely way.

It should have been possible for them to craft a multi-channel approach: serving up promoted Tweets, banner ads and mobile application ads; tracking the click-through, IP and timestamp information from each one; and generating a BCR for redeeming a promotional offer at a retail location.
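
To make the missing measurement concrete, here is a minimal sketch, in Python, of what last-touch attribution across those channels could look like. All class, field and function names are hypothetical illustrations, not anything from the company’s systems; real matching would key on logins or device IDs rather than bare IP addresses.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PromoEvent:
    """One click-through on a promoted Tweet, banner ad or mobile app ad."""
    campaign_id: str
    channel: str          # e.g. "twitter", "banner", "mobile_app"
    ip_address: str
    clicked_at: datetime

@dataclass
class Sale:
    """A completed transaction on the gambling site."""
    ip_address: str
    amount: float
    closed_at: datetime

def attribute_sales(events: list[PromoEvent], sales: list[Sale]) -> dict[str, float]:
    """Naive last-touch attribution: credit each sale to the most recent
    prior click from the same IP. Even this crude join answers the question
    they could not: how much socially-driven traffic resulted in actual sales."""
    revenue_by_channel: dict[str, float] = {}
    for sale in sales:
        prior = [e for e in events
                 if e.ip_address == sale.ip_address and e.clicked_at <= sale.closed_at]
        if prior:
            last_touch = max(prior, key=lambda e: e.clicked_at)
            revenue_by_channel[last_touch.channel] = (
                revenue_by_channel.get(last_touch.channel, 0.0) + sale.amount)
    return revenue_by_channel
```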

Strategic channel allocation would certainly have triggered additional sales. In fact, when we applied click-through, CAC and conversion benchmarks to their available transactional information, we modeled over $8 million in additional sales and $3 million in customer acquisition cost savings. In addition to the financial benefits, strategic channel allocation would have generated more data (and resulting insights) about their prospects and customers than they had when they began.
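
The arithmetic behind such a model is straightforward funnel math. The sketch below shows only its shape: the function and its inputs are hypothetical, and the sample numbers are chosen merely to reproduce the order of magnitude quoted above, not taken from the actual engagement.

```python
def model_incremental_sales(impressions: float,
                            click_through_rate: float,
                            conversion_rate: float,
                            avg_order_value: float,
                            current_cac: float,
                            modeled_cac: float) -> tuple[float, float]:
    """Funnel arithmetic behind a channel-allocation model:
    impressions -> clicks -> conversions -> revenue, plus the saving
    from a lower customer acquisition cost (CAC). All inputs are
    benchmarks supplied by the analyst."""
    clicks = impressions * click_through_rate
    new_customers = clicks * conversion_rate
    incremental_revenue = new_customers * avg_order_value
    cac_savings = new_customers * (current_cac - modeled_cac)
    return incremental_revenue, cac_savings

# Purely illustrative inputs -- not the figures from the engagement:
revenue, savings = model_incremental_sales(
    impressions=50_000_000, click_through_rate=0.008,
    conversion_rate=0.05, avg_order_value=400.0,
    current_cac=180.0, modeled_cac=30.0)
print(revenue, savings)   # 8,000,000.0 and 3,000,000.0
```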

But, because they were hesitant to use all the data available to them, they failed to capitalize on their opportunities. Don’t let this happen to you. Make a strategic policy change to become a data-driven company.

Beyond the revenue gains of targeted social marketing, there are other reasons to become a data-driven company. Clean data can help you correctly identify ideal channel partners. This company failed to use sufficient data to properly select and retain their partners. Hundreds of channel partners were removed without proper, data-driven confirmation. Reasons for this removal included things like “death of owner,” “fire,” and “unknown.” To ensure more thorough vetting, the company could have used data points like the owner’s age, past business endeavors and legal proceedings. They could also have studied location-fenced attributes like footfall, click-throughs and sales cannibalization risk. In fact, when we modeled the potential overall annual savings, across all business scenarios, of becoming a data-driven company, the amount approached $40 million.

Would a $40 million savings inspire you to invest in your data? Would that amount be enough to motivate you to acquire, standardize, deduplicate, link, hierarchically structure and enrich YOUR data? It’s a no-brainer. But it requires a policy shift to make your data work for you. Without this, it’s all just “potential.”
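
As a flavor of what “standardize” and “deduplicate” mean in practice, here is a deliberately oversimplified sketch. Production data quality tools do this with fuzzy matching and rich survivorship rules; exact-key matching, as shown, is only a first pass, and every field name here is hypothetical.

```python
import re

def standardize(record: dict) -> dict:
    """Normalize the fields that matching will key on."""
    rec = dict(record)
    rec["name"] = " ".join(rec["name"].strip().lower().split())
    rec["email"] = rec["email"].strip().lower()
    rec["postcode"] = re.sub(r"\s+", "", rec.get("postcode", "")).upper()
    return rec

def deduplicate(records: list[dict]) -> list[dict]:
    """Collapse records that agree on a simple match key."""
    best: dict[tuple, dict] = {}
    for rec in map(standardize, records):
        key = (rec["email"], rec["postcode"])
        completeness = sum(bool(v) for v in rec.values())
        # Survivorship rule (simplified): keep the most complete record.
        if key not in best or completeness > sum(bool(v) for v in best[key].values()):
            best[key] = rec
    return list(best.values())
```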

Do you have stories about companies that recently switched from traditional operations to smarter, data-driven operations? If so, I’d love to hear from you.

Disclaimer: Recommendations and illustrations contained in this post are estimates only and are based entirely upon information provided by the prospective customer  and on our observations and benchmarks.  While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer is dependent upon a variety of factors, many of which are not under Informatica’s control and nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized and no warranty or representation of success, either express or implied, is made.


If You Want Business to “Own” the Data, You Need to Build an Architecture for the Business

If you build an IT Architecture, it will be a constant uphill battle to get business users and executives engaged and taking ownership of data governance and data quality. In short, you will struggle to maximize the information potential in your enterprise. But if you develop an Enterprise Architecture that starts with a business and operational view, the dynamics change dramatically. To make this point, let’s take a look at a case study from Cisco. (more…)


Retail Interview: From Product Information to Product Performance

Five questions to Arkady Kleyner, Executive VP & Co-Founder of Intricity LLC on how retailers can manage the transition from Product Information to Product Performance.

Arkady Kleyner

Arkady, you recently came back from the National Retail Federation conference.  What are some of the issues that retailers are struggling with these days?

Arkady Kleyner: There are some interesting trends happening right now in retail.  Amazon’s presence is creating a lot of disruption which is pushing traditional retailers to modernize their customer experience strategies.  For example, most Brick and Mortar retailers have a web presence, but they’re realizing that web presence can’t just be a second arm to their business.  To succeed, they need to integrate their web presence with their stores in a very intimate way.  To make that happen, they really have to peel back the onion down to the fundamentals of how product data is shared and managed.

In the good old days, Brick and Mortar retailers could live with a somewhat disconnected product catalog, because they were always ultimately picking from physical goods. However, in an integrated Web and Brick & Mortar environment, retailers must be far more accurate in their product catalog. The customer’s entire product selection process may happen online, with the goods then picked up at the store. So you can see where retailers need to be far more disciplined with their product data. This is really where a Product Information Management tool is critical: with so many SKUs to manage, retailers really need a process that makes sense from end to end for onboarding a product and communicating it to the customer. And that is the foundation of building an integrated customer experience.

In times of the digital customer, who is always online and connected, we announced “commerce relevancy” as the next era of omnichannel: tailoring sales and marketing better to customers. What information are you seeing as important when creating a better customer shopping experience?

Arkady Kleyner: This is another paradigm in the integrated customer experience that retailers are trying to get their heads around. To appreciate how involved this is, just consider what a company like Amazon is doing. They have millions of customers, millions of products and thousands of partners. It’s literally a many-to-many-to-many relationship. And this is why Amazon is eating everybody alive. They know what products their customers like, they know how to reach those customers with those products, and they make it easy to buy when you do. This isn’t something that Amazon created overnight, but the requirements are no different for the rest of retail. Retailers need to ramp up the same type of capacity and reach. For example, if I sell jewelry, I may be selling it on my own company store, but I may also have five other partnering sites, including Amazon. Additionally, I may be using a dozen different advertising methods to drive demand. Now multiply that by the number of jewelry products I sell and you have a massive hairball of complexity. This is what we mean when we say that retailers need to be far more disciplined with their product data. Having a Product Information Management process that spans the onboarding of products all the way through to the digital communication of those products is critical to a retailer staying relevant.

In which businesses do you see the need for more efficient product catalog management and channel convergence?

Arkady Kleyner: There is a huge opportunity out there for the existing Brick & Mortar retailers that embrace an integrated customer experience.  Amazon is not the de facto winner.  We see a future where the store near you actually IS the online store.  But to make that happen, Brick and Mortar retailers need to take a serious step back and treat their product data with the same reverence as they treat the product itself.  This means a well-managed process for onboarding, de-duping, and categorizing their product catalog, because all the customer marketing efforts are ultimately an extension of that catalog.

Which performance indicators are important? How can retailers profit from it?

Arkady Kleyner: There are two layers of performance indicators that are important. The first is Operational Intelligence. This is the intelligence that determines which product should be shown to whom. This is all based on customer profiling of purchase history. The second is Strategic Intelligence. This type of intelligence is the kind that helps you make overarching decisions on things like:
- Maximizing the product margin by analyzing shipping and warehousing options (sketched in code after this list)
- Understanding product performance by demographics and regions
- Providing flash reports for Sales and Marketing
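
As promised in the list above, here is a toy sketch of the margin question a Strategic Intelligence layer answers. The warehouse names and figures are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class FulfillmentOption:
    warehouse: str
    shipping_cost: float   # per unit
    storage_cost: float    # per unit, allocated from warehousing spend

def best_margin(sale_price: float, unit_cost: float,
                options: list[FulfillmentOption]) -> tuple[FulfillmentOption, float]:
    """Which shipping/warehousing option maximizes per-unit margin for a SKU?"""
    def margin(opt: FulfillmentOption) -> float:
        return sale_price - unit_cost - opt.shipping_cost - opt.storage_cost
    best = max(options, key=margin)
    return best, margin(best)

# Invented figures for one SKU:
options = [FulfillmentOption("east-dc", shipping_cost=4.10, storage_cost=0.90),
           FulfillmentOption("west-dc", shipping_cost=2.75, storage_cost=1.40)]
print(best_margin(sale_price=59.99, unit_cost=31.00, options=options))
```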

Which tools are needed to streamline product introduction but also achieve sales numbers?

Arkady Kleyner: Informatica is one of the few vendors that cares about data the same way retailers care about their products. So if you’re a retailer and you need to treat your product data with the same reverence as your physical products, you should consider leveraging Informatica as a partner. Their platform for managing product data is designed to encapsulate the entire process of onboarding, de-duping, categorizing and syndicating product data. Additionally, Informatica PIM provides a platform for managing all the digital media assets, so Marketing teams are able to focus on strategy rather than tactics. We’ve also worked with Informatica’s data integration products to bring in the performance data from Point of Sale systems for both strategic and tactical uses. On the tactical side, we’ve used this to integrate inventories between Web and Brick & Mortar so customers can have an integrated experience. On the strategic side, we’ve integrated Warehouse Management Systems with labor-cost tracking systems to provide a 360-degree view of product costing, including shipping and storage, to drive higher per-unit margins.

You can hear more from Arkady in our webinar “The Streamlined SKU: Using Analytics for Quick Product Introductions” on Tuesday, March 4, 2014.


Death of the Data Scientist: Silver Screen Fiction?

Maybe the word “death” is a bit strong, so let’s say “demise” instead. Recently I read an article in the Harvard Business Review about how Big Data and Data Scientists will rule the world of the 21st-century corporation and how they will have to operate for maximum value. The thing I found rather disturbing was that it takes a PhD – probably a few of them – in a variety of math areas to give executives the necessary insight to make better decisions, ranging from what product to develop next to who to sell it to, and where.

Who will walk the next long walk.... (source: Wikipedia)

Don’t get me wrong – this is mixed news for any enterprise software firm helping businesses locate, acquire, contextually link, understand and distribute high-quality data. The existence of such a high-value role validates product development, but it also limits adoption. It is also great news that data has finally gathered the attention it deserves. But I am starting to ask myself why it always takes individuals with a “one-in-a-million” skill set to add value. What happened to the democratization of software? Why is the design starting point for enterprise software not always similar to B2C applications, like an iPhone app – i.e., simpler is better? Why is it always such a gradual “Cold War” evolution instead of a near-instant French Revolution?

Why do development environments for Big Data not accommodate limited or existing skills, but always accommodate the most complex scenarios? Well, the answer could be that the first customers are very large, very complex organizations with super-complex problems they have been unable to solve so far. If analytical apps have become a self-service proposition for business users, data integration should be as well. So why does access to a lot of fast-moving and diverse data require scarce Pig or Cassandra developers to get the data into an analyzable shape, and a PhD to query and interpret patterns?

I realize new technologies start with a foundation and, as they spread, supply will attempt to catch up to create an equilibrium. However, this is about a problem which has existed for decades in many industries, such as the oil & gas, telecommunication, public and retail sectors. Whenever I talk to architects and business leaders in these industries, they chuckle at “Big Data” and tell me “yes, we got that – and by the way, we have been dealing with this reality for a long time.” By now I would have expected that the skill (cost) side of turning data into meaningful insight would have been driven down more significantly.

Informatica has made a tremendous push in this regard with its “Map Once, Deploy Anywhere” paradigm. I cannot wait to see what’s next – and I just saw something recently that got me very excited. Why, you ask? Because at some point I would like to see at least a business super-user pummel terabytes of transaction and interaction data into an environment (Hadoop cluster, in-memory DB…) and massage it so that a self-created dashboard gets him or her where they need to go. This should include questions like: “Where is the data I need for this insight?”, “What is missing and how do I get to that piece in the best way?” and “How do I want it to look when I share it?” All that should be required is semi-experienced knowledge of Excel and PowerPoint to get your hands on advanced Big Data analytics. Don’t you think? Do you believe that this role will disappear as quickly as it has surfaced?


And now for the rest of the data…

In the first two issues I spent time looking at the need for states to pay attention to the digital health and safety of their citizens, followed by the oft-forgotten need to understand and protect non-production data. This is data that has often proliferated and been ignored or forgotten.

In many ways, non-production data is simpler to protect. Development and test systems can usually work effectively with realistic but not real PII data, and realistic but not real volumes of data. On the other hand, production systems need the real production data, complete with the wealth of information that enables individuals to be identified – and that presents a huge risk. If and when that data is compromised, either deliberately or accidentally, the consequences can be enormous: in the impact on individual citizens and also in the cost of remediation to the state. Many will remember the massive South Carolina data breach of late 2012, when over the course of two days a 74 GB database was downloaded and stolen; around 3.8 million taxpayers and 1.9 million dependents had their Social Security information stolen, and 3.3 million bank account details were “lost.” The citizens’ pain didn’t end there, as the company South Carolina picked to help its citizens seems to have tried to exploit the situation.

encryption protects against theft - unless the key is stolen too

The biggest problem with securing production data is that there are numerous legitimate users and uses of that data, and most often just a small number of potentially malicious or accidental attempts at inappropriate or dangerous access. So the question is: how does a state agency protect its citizens’ sensitive data while at the same time ensuring that legitimate uses and users continue – without performance impacts or any disruption of access? Obviously, each state needs to make its own determination as to what approach works best for it.

This video does a good job of explaining the scope of the overall data privacy/security problem and also reviews a number of successful approaches to protecting sensitive data in both production and non-production environments. What you’ll find is that database encryption is just the start: it is fine if the database is “stolen” (unless, of course, the key is stolen along with the data!). Encryption locks the data away in the same way that a safe protects physical assets – but the same problem exists: if the key is stolen with the safe, then all bets are off. And legitimate users are usually able, quite easily, to deliberately breach and steal the sensitive contents; it is these latter occasions we need to understand and protect against. Given that the majority of data breaches are “inside jobs,” we need to ensure that authorized users (end-users, DBAs, system administrators and so on) with legitimate access have access only to the data they absolutely need – no more and no less.
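
Purely as an illustration of that least-privilege principle – not Informatica’s actual implementation – here is a toy dynamic-masking sketch in which each role sees only the fields it genuinely needs; the roles, fields and masking rules are all hypothetical.

```python
# Role-based field masking: a toy illustration of least-privilege access.
MASKING_RULES = {
    "support_agent": {"ssn": lambda v: "***-**-" + v[-4:],
                      "account_no": lambda v: "*" * (len(v) - 4) + v[-4:]},
    "dba": {"ssn": lambda v: "<masked>",
            "account_no": lambda v: "<masked>",
            "name": lambda v: "<masked>"},
}

def read_record(record: dict, role: str) -> dict:
    """Return the record as the given role is allowed to see it.
    Support agents keep the last digits they need for verification;
    administrators, who need the rows but not the identities, see
    nothing sensitive at all."""
    rules = MASKING_RULES.get(role, {})
    return {field: rules[field](value) if field in rules else value
            for field, value in record.items()}

citizen = {"name": "J. Smith", "ssn": "123-45-6789", "account_no": "9988776655"}
print(read_record(citizen, "support_agent"))
print(read_record(citizen, "dba"))
```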

So we have reached the end of the first series. In the first blog we looked at the need for states to place the same emphasis on the digital health and welfare of their citizens as they do on their physical and mental health. In the second we looked at the oft-forgotten area of non-production (development, testing, QA etc.) data. In this third and final piece we looked at the need for, and some options for, complete protection of the sensitive data that remains in production.


History Repeats Itself Through Business Intelligence (Part 2)

In a previous blog post, I wrote about how, when business “history” is reported via Business Intelligence (BI) systems, it’s usually too late to make a real difference. In this post, I’m going to talk about how business history becomes much more useful when combined operationally and in real time.

E. P. Thompson, a historian, pointed out that all history is the history of unintended consequences. His theory was that history is not always recorded in documents but is ultimately derived from examining cultural meanings, as well as the structures of society, through hermeneutics (the interpretation of texts), semiotics, and the many forms and signs of the times. He concluded that history is created by people’s subjectivity and is therefore ultimately represented as they REALLY live.

The same can be extrapolated to businesses. However, the BI systems of today capture only a minuscule piece of the larger pie of knowledge representation that may be gained from things like meetings, videos, sales calls, anecdotal win/loss reports, shadow IT projects, 10-Ks and 10-Qs, even company blog posts ;-) The point is: how can you better capture the essence of meaning, and perhaps importance, from the everyday non-database events taking place in your company and its activities – in other words, how it REALLY operates?

One of the keys to figuring out how businesses really operate is identifying and utilizing the undocumented RULES that underlie almost every business. Select company employees, often veterans, know these rules intuitively. Every company has them; if you watch them, they just have a knack for getting projects pushed through the system, or making customers happy, or diagnosing a problem in a short time and with little fanfare. They just know how things work and what needs to be done.

These rules have been, and still are, difficult to quantify and apply – or “data-ify,” if you will. Certain companies (and hopefully Informatica) will end up being major players in the race to data-ify these non-traditional rules and events, in addition to helping companies make sense of big data in a whole new way. Daydreaming about it, it’s not hard to imagine business systems that will eventually be able to understand the optimization rules of a business, account for possible unintended scenarios or consequences, and then apply them at the time when they are most needed. Anyhow, that’s the goal of a new generation of Operational Intelligence systems.
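
To daydream a little further: one plausible way to “data-ify” such a rule is as an event-condition-action triple evaluated against a stream of business events. The sketch below is a generic illustration, not RulePoint syntax, and the rule it encodes is invented.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A veteran's tacit know-how, captured as event-condition-action."""
    name: str
    condition: Callable[[dict], bool]
    action: Callable[[dict], None]

def evaluate(rules: list[Rule], event: dict) -> None:
    """Run every matching rule against an incoming business event."""
    for rule in rules:
        if rule.condition(event):
            rule.action(event)

# A veteran "just knows" that big orders from brand-new accounts stall
# in credit review unless someone expedites them:
expedite = Rule(
    name="expedite-large-first-orders",
    condition=lambda e: e["type"] == "order" and e["amount"] > 50_000
                        and e["customer_age_days"] < 30,
    action=lambda e: print(f"notify credit team to fast-track {e['order_id']}"))

evaluate([expedite], {"type": "order", "amount": 80_000,
                      "customer_age_days": 7, "order_id": "SO-1042"})
```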

In my final post on the subject, I’ll explain how it works and the business problems it solves (in a nutshell). And if I’ve managed to pique your curiosity and you want to hear about Operational Intelligence sooner, tune in to a webinar we’re having TODAY at 10 AM PST. Here’s the link.

http://www.informatica.com/us/company/informatica-talks/?commid=97187


Murphy’s First Law of Bad Data – If You Make A Small Change Without Involving Your Client – You Will Waste Heaps Of Money

I have not used my personal encounter with bad data management for over a year, but a couple of weeks ago I was compelled to revive it. Why, you ask? Well, a complete stranger started receiving my friend’s text messages – including the ones I sent – and it took days for him to detect it; a week later, nobody at this North American wireless operator had been able to fix it. This coincided with a meeting I had with a European telco’s enterprise architecture team. There was no better way to illustrate to them how a customer reacts, and the risk to their operations, when communication breaks down due to just one tiny thing changing – say, his address (or in the SMS case, some random SIM mapping – another type of address).

Imagine the cost of other bad data (thecodeproject.com)

In my case, I moved about 250 miles within the United States a couple of years ago, and this seemingly common experience triggered a plethora of communication screw-ups across every merchant a residential household engages with frequently, e.g. your bank, your insurer, your wireless carrier, your average retail clothing store, etc.

For more than two full years after my move to a new state, the following things continued to pop up on a monthly basis due to my incorrect customer data:

  • My old satellite TV provider got to me (the correct person) but with a misspelled last name at my correct, new address.
  • My bank put me in a bit of a pickle: they sent “important tax documentation” that I did not want to open, as my new tenants’ names (in the house I had just vacated) were on the letter – but with my new home’s address.
  • My mortgage lender sent me a refinancing offer to my new address (right person, right address) but with my wife’s name as well as mine completely butchered.
  • My wife’s airline, where she enjoys the highest level of frequent-flyer status, continually mailed her offers duplicating her last name as her first name.
  • A high-end furniture retailer sent two 100-page glossy catalogs, probably costing $80 each, to our address – one for me, one for her.
  • A national health insurer sent “sensitive health information” (as disclosed on the envelope) to my new residence’s address, but for the prior owner.
  • My legacy operator turned on the wrong premium channels on half my set-top boxes.
  • The same operator sent me an SMS the next day thanking me for switching to electronic billing as part of my move – which I did not sign up for – followed by payment notices (as I did not get my invoice in the mail). When I called this error out over the next three months, phoning their contact center and indicating how much revenue I generate for them across all services, they countered with “sorry, we don’t have access to the wireless account data,” “you will see it change on the next bill cycle” and “you show as paper billing in our system today.”

Ignoring the potential for data privacy lawsuits, you start wondering how long you have to be a customer, and how much money you need to spend with a merchant (and they need to waste), for them to take changes to your data more seriously. And these are not even merchants to whom I am brand new – these guys have known me and taken my money for years!

One thing I nearly forgot… these mailings all happened at least once a month on average, sometimes twice, over two years. If I do some quick pigeon math here, I would estimate the postage and production cost alone to run into the hundreds of dollars.

However, the most egregious trespass belonged to my homeowner’s insurance carrier (HOI), who was also my mortgage broker. They had a double whammy in store for me. First, I received a cancellation notice from the HOI for my old residence indicating they had cancelled my policy because the last payment was not received, and that any claims would be denied as a consequence. Then, my new residence’s HOI advised that they had added my old home’s HOI to my account.

After wondering what I could have possibly done to trigger this, I called all four parties (not three, as the mortgage firm did not share data with the insurance broker side – surprise, surprise) to find out what had happened.

It turned out that I had to explain and prove to all of them how one party’s data change during my move had erroneously exposed me to liability. It felt like the old days, when seedy telco salespeople needed only your name and phone number, associated with some sort of promotion you never took part in (the back of a raffle card to win a new car), to switch your long-distance carrier and present you with a $400 bill the following month. Yes, that also happened to me… many years ago. Here again, the consumer had to do all the legwork when someone (not an automatic process!) switched some entry without any oversight or review, triggering hours of wasted effort on their side and mine.

We can argue all day long about whether these screw-ups are due to bad processes or bad data, but in all reality, even processes are triggered by some sort of underlying event – something as mundane as a database field’s flag being updated when your last purchase puts you in a new marketing segment.

Now imagine you get married and your wife changes her name. With all the company-internal (CRM, billing, ERP), free public (property tax), commercial (credit bureaus, mailing lists) and social media data sources out there, you would think such everyday changes could get picked up more quickly and automatically. If not automatically, should there not at least be some sort of trigger to kick off a “governance” process – something along the lines of “email/call the customer if attribute X has changed” or “please log into your account and update your information – we heard you moved”? If American Express was able to detect ten years ago that someone had purchased $500 worth of product with your credit card at a gas station or some lingerie website known for fraudulent activity, why not your bank or insurer, who know even more about you? And yes, that happened to me as well.
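
That “email/call the customer if attribute X has changed” trigger is simple to sketch. The following is a minimal illustration, assuming hypothetical field names and with a print statement standing in for a real workflow step.

```python
WATCHED_ATTRIBUTES = {"last_name", "street_address", "postcode"}

def detect_changes(old: dict, new: dict) -> list[str]:
    """Return the watched attributes whose values differ between the
    previous and current versions of a customer record."""
    return [attr for attr in WATCHED_ATTRIBUTES if old.get(attr) != new.get(attr)]

def governance_hook(customer_id: str, old: dict, new: dict) -> None:
    """Kick off the 'we heard you moved -- please confirm' process instead
    of letting a silent update ripple through billing and mailing systems."""
    changed = detect_changes(old, new)
    if changed:
        # Stand-in for a real workflow step (email, call task, review queue).
        print(f"open verification task for {customer_id}: changed {sorted(changed)}")

governance_hook("CUST-007",
                old={"last_name": "Mueller", "postcode": "10001"},
                new={"last_name": "Mueller", "postcode": "94105"})
```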

Tell me about one of your “data-driven” horror scenarios.


5 Wishes for the New Year

  1. Business and IT: Stop dissing each other.  We all do it.  Despite any platitudes about business-IT alignment, there is always griping behind closed doors.  Let’s all promise to go the entire month of January without saying anything negative about the other team, and on a weekly basis express gratitude or provide positive feedback.
  2. Don’t let the hype fool you. Big Data. The Internet of Things. Cloud/Social/Mobile (which has seemingly morphed into a single word). Hype? Definitely yes. Vaporware? Sometimes. Ignore it until “it’s real”? Definitely not. There are kernels of reality hidden in most of the hype. You have to find those kernels, and then let your mind open up to what the potential is in your own realm.
  3. Marry right brain with left brain.  Most of us are heavy left brain people when we’re on the job.  And while being data-driven, analytical and methodical are important, what separates the innovators from the followers is the spark of intuition, wisdom or creativity that is based on facts and knowledge, but not bound by it.
  4. Use social to discuss issues and gain knowledge rather than while away time.  Social media has been extremely powerful for connecting people. But a shockingly high percentage of the social content is trivial—following celebrities; sharing selfies; updating friends on the latest meal eaten; lodging complaints about various first world problems.  What if we diverted 25% of the social media time we spend on frivolous trivia to intellectual engagement and intelligent discussions about real issues?  What could we change in our society if that power was unleashed?
  5. Use data for good. There are many uses for all the data flowing around us. Many of them are transformative – changing business models, revolutionizing industries, in some cases changing society. However, most of the uses being discussed today focus on corporate profit as opposed to societal good – think of all the investment in better targeting marketing offers to consumers. There is absolutely nothing wrong with utilizing data to grow business, and healthy businesses provide jobs, foster innovation and drive economic growth. However, business profit should not be our only goal. A few people and organizations (such as DataKind) are thinking about how to use data for good – meaning societal good. If a few more of us carve out a portion of our time and brain power to focus on potential ways data can be harnessed to benefit our broader community, imagine the impact we could have on education, healthcare, the environment, economic hardship and the other myriad challenges we face around the world. Perhaps this wish is the most Pollyannaish of them all, but I’ll keep doing what I can to forward the cause.

True Facts About Informatica RulePoint Real-Time Integration

Shhhh… RulePoint Programmer Hard at Work

End of year. Out with the old, in with the new. A time when everyone gets their ducks in a row, clears the pipe and gets ready for the New Year. For R&D, one of the gating events driving the New Year is the annual sales kickoff, where we present the new features to Sales so they can better communicate a product’s road map and value to potential buyers. All well and good. But part of the process is to fill out a Q and A that explains the product “Value Prop” – and they only gave us four lines. I think the answer also helps determine speaking slots and priority.

So here’s the question I had to fill out:

FOR SALES TO UNDERSTAND THE PRODUCT BETTER, WE ASK THAT YOU ANSWER THE FOLLOWING QUESTION:

WHAT IS THE PRODUCT VALUE PROPOSITION AND ARE THERE ANY SIGNIFICANT DEPLOYMENTS OR OTHER CUSTOMER EXPERIENCES YOU HAVE HAD THAT HAVE HELPED TO DEFINE THE PRODUCT OFFERING?

Here’s what I wrote:

Informatica RULEPOINT is a real-time integration and event processing software product that is deployed very innovatively by many businesses and vertical industries.  Its value proposition is that it helps large enterprises discover important situations from their droves of data and events and then enables users to take timely action on discovered business opportunities as well as stop problems while or before they happen.

Here’s what I wanted to write:

RulePoint is scalable, low-latency, flexible and extensible, and was born in the pure and exotic wilds of the Amazon from the minds of natives who have never once spoken out loud – only programmed. RulePoint captures the essence of the true wisdom of the greatest sages of yesteryear. It is the programming equivalent of what Esperanto tried, linguistically, but failed to accomplish.

As to high availability (HA), there has never been anything in the history of software as available as RulePoint. Madonna’s availability only pales in comparison to RulePoint’s. We are talking 8 nines, cubed and then squared ( ;-) ). Oracle = Unavailable. IBM = Unavailable. Informatica RulePoint = Available.

RulePoint works hard, but plays hard too.  When not solving those mission critical business problems, RulePoint creates Arias worthy of Grammy nominations. In the wee hours of the AM, RulePoint single-handedly prevented the outbreak and heartbreak of psoriasis in East Angola.

One of the little known benefits of RulePoint is its ability to train the trainer, coach the coach and play the player. Via chalk talks? No, RulePoint uses mind melds instead.  Much more effective. RulePoint knows Chuck Norris.  How do you think Chuck Norris became so famous in the first place? Yes, RulePoint. Greenpeace used RulePoint to save dozens of whales, 2 narwhal, a polar bear and a few collateral penguins (the bear was about to eat the penguins).  RulePoint has been banned in 16 countries because it was TOO effective.  “Veni, Vidi, RulePoint Vici” was Julius Caesar’s actual quote.

The inspiration for Gandalf in the Lord of the Rings? RulePoint. IT heads worldwide shudder with pride when they hear the name RulePoint mentioned and know that they acquired it. RulePoint is stirred but never shaken. RulePoint is used to train the Sherpas that help climbers reach the highest of heights. RulePoint cooks Minute rice in 20 seconds.

The running of the bulls in Pamplona every year – what do you think they are running from? Yes, RulePoint. RulePoint put the Vinyasa back into Yoga. In fact, RulePoint will eventually create a new derivative called Full Contact Vinyasa Yoga, and it will eventually supplant gymnastics in the 2028 Summer Olympic games.

The laws of physics were disproved last year by RulePoint.  RulePoint was drafted in the 9th round by the LA Lakers in the 90s, but opted instead to teach math to inner city youngsters. 5 years ago, RulePoint came up with an antivenin to the Black Mamba and has yet to ask for any form of recompense. RulePoint’s rules bend but never break. The stand-in for the “Mind” in the movie “A Beautiful Mind” was RulePoint.

RulePoint will define a new category for the Turing award and will name it the 2Turing Award.  As a bonus, the 2Turing Award will then be modestly won by RulePoint and the whole category will be retired shortly thereafter.  RulePoint is… tada… the most interesting software in the world.

But I didn’t get to write any of these true facts and product differentiators on the form. No room.

Hopefully I can still get a primo slot to talk about RulePoint.

 

And so, from all of the RulePoint and Emerging Technologies team, including sales and marketing, here’s hoping you have a great holiday season and a Happy New Year!

 


Understand Customer Intentions To Manage The Experience

I recently had a lengthy conversation with a business executive at a European telco. His biggest concern was not only to understand the motivations and related characteristics of consumers, but to gain this insight much faster than before. Given available resources and current priorities, this is unattainable for many operators.

Unlike a few years ago – remember the time before the iPad – his organization today is awash with data points from millions of devices, hundreds of device types and many applications.

What will he do next?


One way for him to understand consumer motivation, and therefore intentions, is to get a better view of a user’s network and all related interactions and transactions. This includes the family household, friends and business network (also a type of household). The purpose of householding is to capture social and commercial relationships in a grouping of individuals (or businesses, or both mixed together) in order to identify patterns (context), which can be exploited to better serve a customer: a new individual product or bundle upsell, or a push of relevant apps, audio and video content.
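
Mechanically, householding is a grouping problem: treat each captured relationship (family tie, shared address, shared account) as an edge between individuals, and a household falls out as a connected component. Here is a minimal sketch with invented names; real matching rules are, of course, far richer.

```python
from collections import defaultdict

def build_households(relationships: list[tuple[str, str]]) -> list[set[str]]:
    """Group individuals into households: every pair with a captured
    relationship is an edge, and each connected component is one household."""
    graph: dict[str, set[str]] = defaultdict(set)
    for a, b in relationships:
        graph[a].add(b)
        graph[b].add(a)
    households, seen = [], set()
    for person in graph:
        if person in seen:
            continue
        component, stack = set(), [person]
        while stack:                      # depth-first walk of one component
            p = stack.pop()
            if p not in component:
                component.add(p)
                stack.extend(graph[p] - component)
        seen |= component
        households.append(component)
    return households

edges = [("anna", "ben"), ("ben", "chris"), ("dara", "eli")]
print(build_households(edges))   # [{'anna', 'ben', 'chris'}, {'dara', 'eli'}]
```

Choosing the hub of each household – the holder of the most contracts versus the bill payer, a choice discussed below – is then just a ranking function over each component’s members.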

Let’s add another layer of complexity: understanding not only who a subscriber is, who he knows, how often he interacts with those contacts and which services he has access to via one or more devices, but also where he physically is at the moment he interacts. You may also combine this with customer service and (summarized) network performance data to understand who is high-value, high-overhead and/or high in customer experience. Most importantly, you will also be able to assess who will do what next, and why.

Some of you may be thinking, “Oh gosh, the next NSA program in the making.” Well, it may sound like it, but the reality is that this data is out there today, available and interpretable if cleaned up, structured, linked and served in real time. Not only do data quality, ETL, analytical and master data systems provide the data backbone for this reality, but process-based systems dealing with the systematic real-time engagement of consumers are the tools that make it actionable. If you add some sort of privacy rules using database- or application-level masking technologies, most of us would feel more comfortable about this proposition.

This may feel like a massive project but, as with many things in IT life, it depends on how you scope it. I am a big fan of incrementally mastering more and more attributes of certain customer segments, business units and geographies, where lessons learnt can be replicated over and over to scale. Moreover, I am a big fan of figuring out what you are trying to achieve before even attempting to tackle it.

The beauty behind a “small” data backbone – more about “small data” in a future post – is that if a certain concept does not pan out in terms of effort or result, you have just wasted a small pile of cash instead of the $2 million for a complete throw-away. For example: if you initially decided that the central linchpin in your household hub-and-spoke model is the person who owns the most contracts with you, rather than the person who pays the bills every month or who has the largest average monthly bill, moving to an alternative perspective does not impact all services, all departments and all clients. Nevertheless, the role of each user in the network must be defined over time to achieve context, i.e. who is a contract signee, who is a payer, who is a user, who is an influencer, who is an employer, etc.

Why is this important to a business? Because without knowing who consumes, who pays for and who influences the purchase or change of a service/product, how can one create the right offers and target them at the right individual?

However, in order to make this initial call about household definition and scope, or to look at the options available and sensible, you have to consider social and cultural conventions, what you are trying to accomplish commercially, and your current data set’s ability to achieve anything without a massive enrichment program. A couple of years ago, at a Middle Eastern operator, it was very clear that the local patriarchal society dictated that the center of this hub-and-spoke model was the oldest non-retired male in the household, as all contracts, down to those of children of cousins, would typically run under his name. The goal was to capture extended-family relationships more accurately and completely in order to create and sell new family-type bundles for greater market penetration and to maximize usage of new bandwidth capacity.

As a parallel track, aside from further rollout to other departments, customer segments and geographies, you may also want to start thinking like another European operator I engaged with a couple of years ago. They were trying to outsource some data validation and enrichment to their subscribers, which allowed for more accurate and timely capture of changes, often lifestyle changes (moves, marriages, new jobs). The operator could then offer new bundles and roaming upsells. As a side effect, it also created a sense of empowerment and engagement in the client base.

I see bits and pieces of this in use when I switch on my home communication systems, running a broadband signal through my Xbox or set-top box into my TV for Netflix, Hulu and gaming. Moreover, a US cable operator actively promotes a “moving” package to help make sure you do not miss a single minute of entertainment when relocating.

Every time I switch on my TV now, content is suggested to me. If telecommunication services were a bit more competitive in the US (an odd thing to say in every respect) and prices came down to European levels, I would actually take advantage of the offer. And then there is the log-on pop-up asking me to subscribe to (or troubleshoot) a channel I have already subscribed to. I wonder who, or what automated process, switched that flag.

Ultimately, there cannot be a good customer experience without understanding customer intentions. I would love to hear stories from other practitioners about what they have seen in this respect.
