Tag Archives: Big Data

When It Comes to Data Integration Skills, Big Data and Cloud Projects Need the Most Expertise

Looking for a data integration expert? Join the club. As cloud computing and big data become more desirable within the Global 2000, an abundance of data integration talent is required to make both cloud and big data work properly.

The fact of the matter is that you can’t deploy a cloud-based system without some sort of data integration as part of the solution. Whether it’s on-premises-to-cloud, cloud-to-cloud, or intra-company integration across private clouds, these projects need someone who knows what they are doing when it comes to data integration.


While many cloud projects were launched without a clear understanding of the role of data integration, most people understand it now. As companies become more familiar with the cloud, they learn that data integration is key to the solution. For this reason, it’s important for teams to have at least some data integration talent.

The same goes for big data projects. Massive amounts of data need to be loaded into massive databases. You can’t do these projects using ad-hoc technologies anymore. The team needs someone with integration knowledge, including what technologies to bring to the project.
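Even a stripped-down version of such a pipeline shows where the integration decisions live: extraction, staging, and the hand-off to the target. Below is a minimal sketch, assuming a hypothetical orders table in an on-premises SQLite store; every name in it is illustrative, not a real deployment:

    # A minimal, hypothetical on-premise-to-cloud batch load sketch.
    import csv
    import sqlite3

    def extract_to_staging(db_path: str, staging_csv: str) -> int:
        """Pull rows from an on-premise store into a flat staging file."""
        conn = sqlite3.connect(db_path)
        try:
            rows = conn.execute(
                "SELECT id, name, amount FROM orders"  # hypothetical table
            ).fetchall()
        finally:
            conn.close()
        with open(staging_csv, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["id", "name", "amount"])
            writer.writerows(rows)
        return len(rows)

    # The cloud-side load would follow, e.g. an S3 upload via
    # boto3.client("s3").upload_file(staging_csv, bucket, key),
    # or a bulk-load call on the target cloud database.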

Generally speaking, big data systems are built around data integration solutions. Similar to cloud, the use of data integration architectural expertise should be a core part of the project. I see big data projects succeed and fail, and the biggest cause of failure is the lack of data integration expertise.

The demand for data integration talent has exploded with the growth of both big data and cloud computing. A week does not go by that I’m not asked for the names of people who have data integration, cloud computing and big data systems skills. I know several people who fit that bill; however, they all have jobs and recently got raises.

The scary thing is, if these jobs go unfilled by qualified personnel, project directors may hire individuals without the proper skills and experience. Or worse, they may not hire anyone at all. If they plod along without the expertise required, in a year they’ll wonder why the systems are not sharing data the way they should, resulting in a big failure.

So, what can organizations do? You can find or build the talent you need before starting important projects. Thus, now is the time to begin the planning process, including how to find and hire the right resources. This might even mean internal training, hiring mentors or outside consultants, or working with data integration technology providers. Do everything necessary to make sure you get data integration done right the first time.


That’s not ‘Big Data’, that’s MY DATA!

For the past few years, the press has been buzzing about the potential value of Big Data. However, there is little coverage focusing on the data itself – how do you get it, is it accurate, and who can be trusted with it?

We are the source of the data that is so often spoken about – we, our children, our friends and relatives, and especially the people we know on Facebook or LinkedIn. Over 40% of Big Data projects are in the sales and marketing arena, relying on personal data as a driving force. While machines have no choice but to provide data when requested, people do have a choice. We can choose not to provide data, to purposely obscure it, or to make it up entirely.

So, how can you ensure that your organization is receiving real information? Active participation is needed to ensure a constant flow of accurate data to feed your data-hungry algorithms and processes. While click-stream analysis does not require individual identification, follow-up sales and marketing campaigns will have limited value if the public at large is supplying false names and made-up information.
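One practical response is a plausibility check at the point of capture. Here is a toy sketch; the placeholder names and disposable-email domains are invented for illustration, not a production ruleset:

    # A toy plausibility check for self-reported personal data.
    DISPOSABLE_DOMAINS = {"mailinator.com", "example.com", "test.com"}
    PLACEHOLDER_NAMES = {"test test", "john doe", "mickey mouse", "asdf asdf"}

    def looks_fabricated(name: str, email: str) -> bool:
        name_l = " ".join(name.lower().split())
        domain = email.rsplit("@", 1)[-1].lower() if "@" in email else ""
        return (
            name_l in PLACEHOLDER_NAMES
            or any(ch.isdigit() for ch in name_l)   # digits in a name
            or domain in DISPOSABLE_DOMAINS
            or not domain                           # unusable address
        )

    print(looks_fabricated("Mickey Mouse", "mm@mailinator.com"))  # True
    print(looks_fabricated("Ana Ruiz", "ana.ruiz@acme.example"))  # False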

BCG has identified a link between trust and data sharing: 

“We estimate that those that manage this issue well [creating trust] should be able to increase the amount of consumer data they can access by at least five to ten times in most countries.”[i]

With that in mind, how do you create the trust that will entice people to share data? The principles behind common data privacy laws provide guidelines. These include:  accountability, purpose identification and disclosure, collection with knowledge and consent, data accuracy, individual access and correction, as well as the right to be forgotten.

But there are challenges in personal data stewardship – in part because the current world of Big Data analysis is far from stable. In the ongoing search for the value of Big Data, new technologies, tools and approaches are being piloted. Experimentation is still required, which means moving data around between storage technologies and analytical tools, and giving ever-growing teams of analysts unprecedented access to data in terms of quantity, detail and variety. This experimentation should not be discouraged, but it must not degrade the accuracy or security of your customers’ personal data.

How do you measure up? If I made contact and asked for the sum total of what you knew about me, and how my data was being used – how long would it take to provide this information? Would I be able to correct my information? How many of your analysts can view my personal data and how many copies have you distributed in your IT landscape? Are these copies even accurate?
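Answering those questions amounts to running a subject access request across every system that holds a copy of the data. Here is a toy version, with invented system names and records:

    # A toy subject-access-request report: how many copies of one person's
    # data exist, and in which systems. All names and records are made up.
    from collections import defaultdict

    SYSTEMS = {
        "crm":       [{"email": "jane@example.com", "name": "Jane Doe"}],
        "marketing": [{"email": "jane@example.com", "segment": "gold"}],
        "analytics": [{"email": "jane@example.com", "clicks": 1402}],
    }

    def access_report(email: str) -> dict:
        report = defaultdict(list)
        for system, records in SYSTEMS.items():
            for rec in records:
                if rec.get("email") == email:
                    report[system].append(sorted(rec.keys()))
        return dict(report)

    print(access_report("jane@example.com"))
    # {'crm': [['email', 'name']], 'marketing': [['email', 'segment']], ...}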

Through our data quality, data mastering and data masking tools, Informatica can deliver a coordinated approach to managing your customers’ personal data and build trust by ensuring the safety and accuracy of that data. With Informatica managing your customers’ data, your internal team can focus its attention on analytics. Analytics built on accurate data can help develop the customer loyalty and engagement that is vital to both the future security of your business and the continued collection of accurate data to feed your Big Data analysis.
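To make the masking idea concrete without presuming any particular product’s API, here is a generic sketch of deterministic pseudonymization: the same input always yields the same masked value, so records still join across copies while the real value stays out of the analytics tier.

    # A generic masking sketch (not any vendor's API): deterministic
    # pseudonyms preserve referential integrity across copies.
    import hashlib

    def mask_email(email: str, secret: str = "rotate-me") -> str:
        digest = hashlib.sha256((secret + email.lower()).encode()).hexdigest()
        return f"user_{digest[:12]}@masked.invalid"

    print(mask_email("jane@example.com"))  # same input -> same pseudonym
    print(mask_email("jane@example.com"))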


[i] The Trust Advantage: How to Win with Big Data; bcg.perspectives November 2013


Talk Amongst Yourselves: Why Twitter Needs #DataChat

Within government organizations, technologists are up against a wall of sound. In one ear, they hear consumers cry for faster, better service.

In the other, they hear administrative talk of smaller budgets and scarcer resources.

As stringent requirements for both transparency and accountability grow, this paradox of pressure increases.

Sometimes, the best way to cope is to TALK to somebody.

What if you could ask other data technologists candid questions like:

  • Do you think government regulation helps or hurts the sharing of data?
  • Do you think government regulators balance the privacy needs of the public with commercial needs?
  • What are the implications of big data government regulation, especially for users?
  • How can businesses expedite government adoption of the cloud?
  • How can businesses help government overcome the security risks associated with the cloud?
  • How should the policy frameworks for handling big data differ between the government and the private sector?

What if you could tell someone who understood? What if they had sweet suggestions, terrific tips, stellar strategies for success? We think you can. We think they will.

That’s why Twitter needs a #DataChat.

Twitter Needs #DataChat

Third Thursdays, 3:00 PM ET

What on earth is a #DataChat?
Good question. It’s a Twitter chat – a public dialog, at a set time, on a set topic. It’s something like a crowd-sourced discussion. Any Twitter user can participate simply by including the applicable hashtag in each tweet. Our hashtag is #DataChat. We’ll connect on Twitter on the third Thursday of each month to share struggles, victories and advice about data governance. We’re going to begin this week, Thursday, April 17, at 3:00 PM Eastern Time. For our first chat, we’ll discuss topics related to data technologies in government organizations.

Why don’t you join us? Tell us about it. Mark your calendar. Bring a friend.

Because, sometimes, you just need someone to talk to.


Should Legal (Or Anyone Else Outside of IT) Be in Charge of Big Data?

A few years back, there was a movement in some businesses to establish “data stewards” – individuals who would sit at the heart of the enterprise and make it their job to assure that the data being consumed by the organization is of the highest possible quality, is secure, is contextually relevant, and is capable of interoperating across any applications that need to consume it. While the data steward concept came along when everything was relational and structured, these individuals are now earning their pay when it comes to managing the big data boom.

The rise of big data is creating more than simple headaches for data stewards; it is creating turf wars across enterprises. As pointed out in a recent article in The Wall Street Journal, there isn’t yet a lot of clarity as to who owns and cares for such data. Is it IT? Is it the lines of business? Is it legal? There are arguments that can be made for all jurisdictions.

In organizations these days, for example, marketing executives are generating, storing and analyzing large volumes of their own data within content management systems and social media analysis solutions. Many marketing departments even have their own IT budgets. Along with marketing, of course, everyone else within enterprises is seeking to pursue data analytics to better run their operations as well as foresee trends.

Typically, data has been under the domain of the CIO, the person who oversaw the collection, management and storage of information. In the Wall Street Journal article, however, it’s suggested that legal departments may be the best caretakers of big data, since big data poses a “liability exposure,” and legal departments are “better positioned to understand how to use big data without violating vendor contracts and joint-venture agreements, as well as keeping trade secrets.”

However, legal being legal, it’s likely that insightful data may end up getting locked away, never to see the light of day. Others may argue the IT department needs to retain control, but there again, IT isn’t trained to recognize information that may set the business on a new course.

Focusing on big data ownership isn’t just an academic exercise. The future of the business may depend on the ability to get on top of big data. Gartner, for one, predicts that within the next three years, at least a third of Fortune 100 organizations will experience an information crisis, “due to their inability to effectively value, govern and trust their enterprise information.”

This ability to “value, govern and trust” goes way beyond the traditional maintenance of data assets that IT has specialized in over the past few decades. As Gartner’s Andrew White put it: “Business leaders need to manage information, rather than just maintain it. When we say ‘manage,’ we mean ‘manage information for business advantage,’ as opposed to just maintaining data and its physical or virtual storage needs. In a digital economy, information is becoming the competitive asset to drive business advantage, and it is the critical connection that links the value chain of organizations.”

For starters, then, it is important that the business have full say over what data needs to be brought in, what data is important for further analysis, and what should be done with data once it gains in maturity. IT, however, needs to take a leadership role in assuring the data meets the organization’s quality standards, and that it is well-vetted so that business decision-makers can be confident in the data they are using.

The bottom line is that big data is a team effort, involving the whole enterprise. IT has a role to play, as does legal, as do the lines of business.


9 Ways to Bring Big Data to Life for End-Users

Let’s face it, big data – or data of any size, format or shape – is nothing more than a bunch of digital bits that occupy space on a disk somewhere. To be useful to the business, end-users need to be able to access it, and to pull out and assemble the nuggets of information they need. Data needs to be brought to life.

That’s the theme of a webcast I recently had the opportunity to co-present with Tableau Software, titled “Making Big Data User-Centric.” In fact, there’s a lot more to it than making data user-centric – big data should be a catalyst that fires people’s imaginations, enabling them to explore new avenues that were never open before.

Many organizations are beginning their journey into the new big data analytics space, and are starting to discover all the possibilities it offers. But, in an era where data is now scaling into the petabyte range, it’s more than technology. It’s a disruptive force, and with disruption comes new opportunities for growth.

Here are nine ways to make this innovative disruption possible:

1. Remember that “data” is not “information.” Too many people think that data itself is a valuable commodity. However, that is like taking oil right out of the ground and trying to sell it at gas stations – it’s not usable. It needs to be processed, refined, and packaged for delivery. It needs to be unified for eventual delivery and presentation. And, finally, to give information its value, it needs to tell a story.

2. Make data sharable across the enterprise. Big data – like all types of data – tends to drift naturally into silos within departments across enterprises. For years, people have struggled to break down these silos and provide a single view of all relevant data. Now there’s a way to do it – through a unified service layer. Think of all the enterprisey things that have come to the forefront in recent years – service-oriented architecture, data virtualization, search technologies. No matter how you do it, the key is to provide a way for data to be made available across enterprise walls. (A minimal sketch of this idea follows the list.)

3. Use analytics to push the innovation envelope. Big data analytics enables end-users to ask questions and consider options that weren’t possible within standard, relational data environments.

4. Encourage critical thinking among data users. Business users have powerful tools at their disposal, and access to data they’ve never had before. It’s more important than ever to consider where the information came from, its context, and other potential sources that are not in the enterprise’s data stream.

5. Develop analytical skills across the board. Surveys I have conducted in partnership with Unisphere Research find that barely 10% of organizations offer self-service BI on a widespread basis. This needs to change. Everybody is working with information and data, and everyone needs to understand the implications of the information and data with which they are working.

6. Promote self-service. Analytic capabilities should be delivered on a self-service basis. End-users are accustomed to information being delivered to them at Google speeds, making the processes they deal with at work – requesting reports from their IT departments, setting up queries – seem downright antiquated, as well as frustrating.

7. Make it visual. Yes, graphical displays of data have been around for more than a couple of decades now. But now, there is an emerging class of front-end visualization tools that convert data points into visual displays – often stunning – that enable users to spot anomalies or trends within seconds.

8. Make it mobile. Just about everyone now carries a mobile device from which they can access data from any place. It’s now possible to offer mobile analytics ranging from key performance indicator monitoring to drill-down navigation, data selection, data filtering, and alerts.

9. Make it social. There are two ways to look at big data analytics and social media. First, there’s the social media data itself. BI and analytics efforts would be missing a big piece of the picture if they did not address the wealth of social media data flowing through organizations. This includes sentiment analysis and other applications that monitor interactions on external social media sites to determine reactions to new products or predict customer needs. But there’s also the collaboration aspect: the ability to share insights and discoveries with peers and partners. Either way, it takes many minds working together to effectively pull information from all that data.
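Returning to point 2, here is the unified-service-layer idea in miniature: one call fans out to several silos and assembles a single view. The silo lookups below are stand-ins for real systems, not any particular product:

    # A toy "unified service layer": one call, one view across silos.
    def crm_lookup(cust_id):        # silo 1 (hypothetical)
        return {"name": "Jane Doe", "tier": "gold"}

    def billing_lookup(cust_id):    # silo 2 (hypothetical)
        return {"balance": 120.50}

    def support_lookup(cust_id):    # silo 3 (hypothetical)
        return {"open_tickets": 2}

    def unified_customer_view(cust_id: str) -> dict:
        view = {"customer_id": cust_id}
        for source in (crm_lookup, billing_lookup, support_lookup):
            view.update(source(cust_id))  # real systems would reconcile here
        return view

    print(unified_customer_view("C-1001"))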


Analytics 3.0 – The Emerging Data Economy

In recent times, the big Internet companies – the Googles, Yahoos and eBays – have proven that it is possible to build a sustainable business on data analytics, in which corporate decisions and actions are seamlessly guided by an analytics culture based on data, measurement and quantifiable results. Now, two of the top data analytics thinkers say we are reaching a point where non-tech, non-Internet companies are on their way to becoming analytics-driven organizations in a similar vein, as part of an emerging data economy.

Post-Big Data

In a report written for the International Institute for Analytics, Thomas Davenport and Jill Dyché divulge the results of their interviews with 20 large organizations, in which they find big data analytics to be well integrated into the decision-making cycle. “Large organizations across industries are joining the data economy,” they observe. “They are not keeping traditional analytics and big data separate, but are combining them to form a new synthesis.”

Davenport and Dyché call this new state of management “Analytics 3.0,” in which the concept and practices of competing on analytics are no longer confined to data management and IT departments or quants – analytics is embedded into all key organizational processes. That means major, transformative effects for organizations. “There is little doubt that analytics can transform organizations, and the firms that lead the 3.0 charge will seize the most value,” they write.

Analytics 3.0 is the most recent of three distinct phases in the way data analytics has been applied to business decision-making, Davenport and Dyché say. The first two “eras” looked like this:

  1. Analytics 1.0, prevalent between 1954 and 2009, was based on relatively small and structured data sources from internal corporate sources.
  2. Analytics 2.0, which arose between 2005 and 2012, saw the rise of the big Web companies – the Googles and Yahoos and eBays – which were leveraging big data stores and employing prescriptive analytics to target customers and shape offerings. This time span was also shaped by a growing interest in competing on analytics, in which data was applied to strategic business decision-making. “However, large companies often confined their analytical efforts to basic information domains like customer or product, that were highly-structured and rarely integrated with other data,” the authors write.
  3. In the Analytics 3.0 era, analytical efforts are being integrated with other data types, across enterprises.

This emerging environment “combines the best of 1.0 and 2.0—a blend of big data and traditional analytics that yields insights and offerings with speed and impact,” Davenport and Dyché say. The key trait of Analytics 3.0 “is that not only online firms, but virtually any type of firm in any industry, can participate in the data-driven economy. Banks, industrial manufacturers, health care providers, retailers—any company in any industry that is willing to exploit the possibilities—can all develop data-based offerings for customers, as well as supporting internal decisions with big data.”

Davenport and Dyché describe how one major trucking and transportation company has been able to implement low-cost sensors for its trucks, trailers and intermodal containers, which “monitor location, driving behaviors, fuel levels and whether a trailer/container is loaded or empty. The quality of the optimized decisions [the company] makes with the sensor data – dispatching of trucks and containers, for example – is improving substantially, and the company’s use of prescriptive analytics is changing job roles and relationships.”
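Reduced to a toy, a prescriptive dispatch rule over such sensor data might look like the sketch below; the thresholds and field names are assumptions for illustration, not the company’s actual model:

    # A toy prescriptive rule in the spirit of the trucking example.
    def dispatch_action(reading: dict) -> str:
        if reading["fuel_pct"] < 15:
            return "route to nearest fuel stop"
        if not reading["loaded"] and reading["miles_to_depot"] < 50:
            return "assign nearest pending pickup"
        return "continue route"

    print(dispatch_action({"fuel_pct": 12, "loaded": True, "miles_to_depot": 200}))
    print(dispatch_action({"fuel_pct": 60, "loaded": False, "miles_to_depot": 20}))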

New technologies and methods are helping enterprises enter the Analytics 3.0 realm, including “a variety of hardware/software architectures, including clustered parallel servers using Hadoop/MapReduce, in-memory analytics, and in-database processing,” the authors add. “All of these technologies are considerably faster than previous generations of technology for data management and analysis. Analyses that might have taken hours or days in the past can be done in seconds.”
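The MapReduce programming model behind several of those architectures fits in a few lines once it is shrunk to a single process: map each record to key-value pairs, then reduce per key. A cluster shards the same two phases across machines; this toy version only illustrates the shape:

    # The MapReduce idea in miniature, on made-up channel events.
    from collections import defaultdict

    events = [("web", 1), ("mobile", 1), ("web", 1), ("store", 1), ("web", 1)]

    def map_phase(records):
        for channel, count in records:
            yield channel, count        # identity map in this toy

    def reduce_phase(pairs):
        totals = defaultdict(int)
        for key, value in pairs:
            totals[key] += value
        return dict(totals)

    print(reduce_phase(map_phase(events)))  # {'web': 3, 'mobile': 1, 'store': 1}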

In addition, a key characteristic of big data analytics-driven enterprises is the ability to fail fast – to deliver partial outputs to project stakeholders with great frequency. With the rise of new ‘agile’ analytical methods and machine learning techniques, organizations can deliver “insights at a much faster rate” and maintain “an ongoing sense of urgency.”

Perhaps most importantly, big data and analytics are integrated and embedded into corporate processes across the board. “Models in Analytics 3.0 are often being embedded into operational and decision processes, dramatically increasing their speed and impact,” Davenport and Dyché state. “Some are embedded into fully automated systems based on scoring algorithms or analytics-based rules. Some are built into consumer-oriented products and features. In any case, embedding the analytics into systems and processes not only means greater speed, but also makes it more difficult for decision-makers to avoid using analytics—usually a good thing.”

The report is available here.


Are You Ready for the Massive Wave of Data?

Leo Eweani makes the case that the data tsunami is coming.  “Businesses are scrambling to respond and spending accordingly. Demand for data analysts is up by 92%; 25% of IT budgets are spent on the data integration projects required to access the value locked up in this data “ore” – it certainly seems that enterprise is doing The Right Thing – but is it?”

Data is exploding within most enterprises.  However, most enterprises have no clue how to manage this data effectively.  While you would think that an investment in data integration would be an area of focus, many enterprises don’t have a great track record in making data integration work.  “Scratch the surface, and it emerges that 83% of IT staff expect there to be no ROI at all on data integration projects and that they are notorious for being late, over-budget and incredibly risky.”


My core message is that enterprises need to ‘up their game’ when it comes to data integration. This recommendation is based on the amount of data growth we’ve already experienced, and will experience in the near future. Indeed, a “data tsunami” is on the horizon, and most enterprises are ill-prepared for it.

So, how do you get prepared? While many would say it’s all about buying anything and everything when it comes to big data technology, the best approach is to splurge on planning. This means defining exactly what data assets are in place now and will be in place in the future, and how they should or will be leveraged.

To face the forthcoming wave of data, certain planning aspects and questions about data integration rise to the top:

Performance, including data latency. Or, how quickly does the data need to flow from point or points A to point or points B? As the volume of data rises, the data integration engines have to keep up (a small latency probe follows these points).

Data security and governance.  Or, how will the data be protected both at-rest and in-flight, and how will the data be managed in terms of controls on use and change?

Abstraction, and removing data complexity.  Or, how will the enterprise remap and re-purpose key enterprise data that may not currently exist in a well-defined and functional structure?

Integration with cloud-based data.  Or, how will the enterprise link existing enterprise data assets with those that exist on remote cloud platforms?
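Picking up the performance question from the first point, even a crude probe makes the latency budget concrete. A toy sketch, with an invented two-second budget:

    # A toy latency probe: stamp records on entry, check elapsed time on
    # arrival. The two-second budget is an illustrative assumption.
    import time

    LATENCY_BUDGET_SECS = 2.0

    def stamp(record: dict) -> dict:
        record["ingest_ts"] = time.monotonic()
        return record

    def check_latency(record: dict) -> bool:
        elapsed = time.monotonic() - record["ingest_ts"]
        if elapsed > LATENCY_BUDGET_SECS:
            print(f"late by {elapsed - LATENCY_BUDGET_SECS:.2f}s")
            return False
        return True

    rec = stamp({"order_id": 42})
    # ... pipeline stages would run here ...
    print(check_latency(rec))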

While this may seem like a complex and risky process, think through the problems, leverage the right technology, and you can remove the risk and complexity.  The enterprises that seem to fail at data integration do not follow that advice.

I suspect the explosion of data will be the biggest challenge enterprise IT faces in many years. While a few will take advantage of their data, most will struggle, at least initially. Which route will you take?


Death of the Data Scientist: Silver Screen Fiction?

Maybe the word “death” is a bit strong, so let’s say “demise” instead. Recently I read an article in the Harvard Business Review about how Big Data and Data Scientists will rule the world of the 21st-century corporation and how they have to operate for maximum value. The thing I found rather disturbing was that it takes a PhD – probably a few of them – in a variety of math areas to give executives the necessary insight to make better decisions, ranging from what product to develop next to who to sell it to and where.

Who will walk the next long walk…. (source: Wikipedia)

Don’t get me wrong – this is mixed news for any enterprise software firm helping businesses locate, acquire, contextually link, understand and distribute high-quality data. The existence of such a high-value role validates product development, but it also limits adoption. It is also great news that data has finally gathered the attention it deserves. But I am starting to ask myself why it always takes individuals with a “one-in-a-million” skill set to add value. What happened to the democratization of software? Why is the design starting point for enterprise software not always similar to B2C applications, like an iPhone app – i.e., simpler is better? Why is it always such a gradual “Cold War” evolution instead of a near-instant French Revolution?

Why do development environments for Big Data not accommodate limited or existing skills, instead of always catering to the most complex scenarios? Well, the answer could be that the first customers are very large, very complex organizations with super-complex problems they have been unable to solve so far. If analytical apps have become a self-service proposition for business users, data integration should be as well. So why does access to a lot of fast-moving and diverse data require scarce Pig or Cassandra developers to get the data into an analyzable shape, and a PhD to query and interpret patterns?

I realize new technologies start with a foundation, and as they spread, supply will attempt to catch up to create an equilibrium. However, this is a problem that has existed for decades in many industries, such as the oil and gas, telecommunications, public and retail sectors. Whenever I talk to architects and business leaders in these industries, they chuckle at “Big Data” and tell me, “yes, we got that – and by the way, we have been dealing with this reality for a long time.” By now I would have expected that the skill (cost) side of turning data into meaningful insight would have been driven down more significantly.

Informatica has made a tremendous push in this regard with its “Map Once, Deploy Anywhere” paradigm. I cannot wait to see what’s next – and I just saw something recently that got me very excited. Why, you ask? Because at some point I would like to see at least a business-super user pummel terabytes of transaction and interaction data into an environment (Hadoop cluster, in-memory DB…) and massage it so that a self-created dashboard gets him or her where he or she needs to go. This should include concepts like: “Where is the data I need for this insight?”, “What is missing, and how do I get to that piece in the best way?”, “How do I want it to look so I can share it?” All that should be required is a semi-experienced knowledge of Excel and PowerPoint to get your hands on advanced Big Data analytics. Don’t you think? Do you believe that this role will disappear as quickly as it has surfaced?
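What might that business-super-user flow reduce to once the plumbing is hidden? Something close to the sketch below: raw transactions in, a dashboard-ready table out. The CSV layout here is a made-up example:

    # A toy self-service aggregation: transactions -> dashboard table.
    import csv
    import io
    from collections import defaultdict

    RAW = io.StringIO(
        "region,amount\nWest,100\nEast,250\nWest,75\nEast,25\n"
    )

    totals = defaultdict(float)
    for row in csv.DictReader(RAW):
        totals[row["region"]] += float(row["amount"])

    for region, amount in sorted(totals.items()):
        print(f"{region:8} {amount:10.2f}")  # feed a chart instead of print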


History Repeats Itself Through Business Intelligence (Part 2)


In a previous blog post, I wrote about how, when business “history” is reported via Business Intelligence (BI) systems, it’s usually too late to make a real difference. In this post, I’m going to talk about how business history becomes much more useful when it is combined operationally and in real time.

E. P. Thompson, a historian, pointed out that all history is the history of unintended consequences. His theory was that history is not always recorded in documents, but is ultimately derived from examining cultural meanings and the structures of society through hermeneutics (the interpretation of texts), semiotics, and the many forms and signs of the times. He concluded that history is created by people’s subjectivity and is therefore ultimately represented as they REALLY live.

The same can be extrapolated to businesses. However, the BI systems of today capture only a minuscule piece of the larger pie of knowledge representation that may be gained from things like meetings, videos, sales calls, anecdotal win/loss reports, shadow IT projects, 10-Ks and 10-Qs, even company blog posts ;-) . The point is: how can you better capture the essence of meaning, and perhaps importance, from the everyday non-database events taking place in your company and its activities – in other words, how it REALLY operates?

One of the keys to figuring out how businesses really operate is identifying and utilizing the undocumented RULES that underlie almost every business. Select company employees, often veterans, know these rules intuitively. Every company has them; if you watch them, you’ll see they just have a knack for getting projects pushed through the system, making customers happy, or diagnosing a problem in a short time and with little fanfare. They just know how things work and what needs to be done.

These rules have been, and still are, difficult to quantify and apply – or “data-ify,” if you will. Certain companies (and hopefully Informatica) will end up being major players in the race to datify these non-traditional rules and events, in addition to helping companies make sense of big data in a whole new way. But daydreaming about it, it’s not hard to imagine business systems that will eventually be able to understand the optimization rules of a business, account for possible unintended scenarios or consequences, and then apply them when they are most needed. Anyhow, that’s the goal of a new generation of Operational Intelligence systems.
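A first step toward datifying such rules is treating them as data: conditions and actions that a system evaluates against live events. A toy sketch, with invented rules:

    # A toy version of "datifying" a veteran's rule: conditions and
    # actions as data, evaluated against live events.
    RULES = [
        (lambda e: e["type"] == "order" and e["value"] > 10_000,
         "route to senior approver"),
        (lambda e: e["type"] == "ticket" and e["age_days"] > 7,
         "escalate before renewal call"),
    ]

    def evaluate(event: dict) -> list:
        return [action for cond, action in RULES if cond(event)]

    print(evaluate({"type": "order", "value": 25_000}))
    print(evaluate({"type": "ticket", "age_days": 9}))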

In my final post on the subject, I’ll explain how it works and the business problems it solves (in a nutshell). And if I’ve managed to pique your curiosity and you want to hear about Operational Intelligence sooner, tune in to a webinar we’re having TODAY at 10 AM PST. Here’s the link.

http://www.informatica.com/us/company/informatica-talks/?commid=97187


True Facts About Informatica RulePoint Real-Time Integration


Shhhh… RulePoint Programmer Hard at Work

End of year. Out with the old, in with the new. A time when everyone gets their ducks in a row, clears the pipe and gets ready for the New Year. For R&D, one of the gating events driving the New Year is the annual sales kickoff, where we present the new features to Sales so they can better communicate a product’s road map and value to potential buyers. All well and good. But part of the process is to fill out a Q&A that explains the product “Value Prop” – and they only gave us four lines. I think the answer also helps determine speaking slots and priority.

So here’s the question I had to fill out:

FOR SALES TO UNDERSTAND THE PRODUCT BETTER, WE ASK THAT YOU ANSWER THE FOLLOWING QUESTION:

WHAT IS THE PRODUCT VALUE PROPOSITION AND ARE THERE ANY SIGNIFICANT DEPLOYMENTS OR OTHER CUSTOMER EXPERIENCES YOU HAVE HAD THAT HAVE HELPED TO DEFINE THE PRODUCT OFFERING?

Here’s what I wrote:

Informatica RULEPOINT is a real-time integration and event processing software product that is deployed very innovatively by many businesses and vertical industries.  Its value proposition is that it helps large enterprises discover important situations from their droves of data and events and then enables users to take timely action on discovered business opportunities as well as stop problems while or before they happen.
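Four lines leave no room for specifics, so, separately from the form, here is the flavor of the kind of detection described above – not RulePoint’s syntax, just plain Python standing in for a windowed rule that fires while the problem is still happening:

    # Not RulePoint syntax - a generic sliding-window "situation" detector.
    from collections import deque

    WINDOW_SECS, THRESHOLD = 60, 3
    failures = deque()

    def on_event(ts: float, status: str) -> bool:
        if status != "failed":
            return False
        failures.append(ts)
        while failures and ts - failures[0] > WINDOW_SECS:
            failures.popleft()          # drop events outside the window
        return len(failures) >= THRESHOLD

    for t, s in [(0, "failed"), (10, "ok"), (20, "failed"), (30, "failed")]:
        if on_event(t, s):
            print(f"alert at t={t}: {len(failures)} failures in {WINDOW_SECS}s")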

Here’s what I wanted to write:

RulePoint is scalable, low-latency, flexible and extensible, and was born in the pure and exotic wilds of the Amazon from the minds of natives who have never once spoken out loud – only programmed. RulePoint captures the essence of the true wisdom of the greatest sages of yesteryear. It is the programming equivalent of what Esperanto tried, and failed, to accomplish linguistically.

As to high availability, (HA) there has never been anything in the history of software as available as RulePoint. Madonna’s availability only pales in comparison to RulePoint’s availability.  We are talking 8 Nines cubed and then squared ( ;-) ). Oracle = Unavailable. IBM = Unavailable. Informatica RulePoint = Available.

RulePoint works hard, but plays hard too.  When not solving those mission critical business problems, RulePoint creates Arias worthy of Grammy nominations. In the wee hours of the AM, RulePoint single-handedly prevented the outbreak and heartbreak of psoriasis in East Angola.

One of the little known benefits of RulePoint is its ability to train the trainer, coach the coach and play the player. Via chalk talks? No, RulePoint uses mind melds instead.  Much more effective. RulePoint knows Chuck Norris.  How do you think Chuck Norris became so famous in the first place? Yes, RulePoint. Greenpeace used RulePoint to save dozens of whales, 2 narwhal, a polar bear and a few collateral penguins (the bear was about to eat the penguins).  RulePoint has been banned in 16 countries because it was TOO effective.  “Veni, Vidi, RulePoint Vici” was Julius Caesar’s actual quote.

The inspiration for Gandalf in the Lord of the Rings? RulePoint. IT heads worldwide shudder with pride when they hear the name RulePoint mentioned and know that they acquired it. RulePoint is stirred but never shaken. RulePoint is used to train the Sherpas that help climbers reach the highest of heights. RulePoint cooks Minute rice in 20 seconds.

The running of the bulls in Pamplona every year – what do you think they are running from? Yes, RulePoint. RulePoint put the Vinyasa back into Yoga. In fact, RulePoint will eventually create a new derivative called Full Contact Vinyasa Yoga, and it will eventually supplant gymnastics at the 2028 Summer Olympic Games.

The laws of physics were disproved last year by RulePoint.  RulePoint was drafted in the 9th round by the LA Lakers in the 90s, but opted instead to teach math to inner city youngsters. 5 years ago, RulePoint came up with an antivenin to the Black Mamba and has yet to ask for any form of recompense. RulePoint’s rules bend but never break. The stand-in for the “Mind” in the movie “A Beautiful Mind” was RulePoint.

RulePoint will define a new category for the Turing award and will name it the 2Turing Award.  As a bonus, the 2Turing Award will then be modestly won by RulePoint and the whole category will be retired shortly thereafter.  RulePoint is… tada… the most interesting software in the world.

But I didn’t get to write any of these true facts and product differentiators on the form. No room.

Hopefully I can still get a primo slot to talk about RulePoint.

 

And so, from all of the RulePoint and Emerging Technologies team, including sales and marketing, here’s hoping you have a great holiday season and a Happy New Year!

 
