Category Archives: Business/IT Collaboration
Shhhh… RulePoint Programmer Hard at Work
End of year. Out with the old, in with the new. A time when everyone gets their ducks in a row, clears the pipe and gets ready for the New Year. For R&D, one of the gating events driving the New Year is the annual sales kickoff event, where we present the new features to Sales so they can better communicate a product's road map and value to potential buyers. All well and good. But part of the process is to fill out a Q and A that explains the product "Value Prop," and they only gave us 4 lines. I think the answer also helps determine speaking slots and priority.
So here’s the question I had to fill out -
FOR SALES TO UNDERSTAND THE PRODUCT BETTER, WE ASK THAT YOU ANSWER THE FOLLOWING QUESTION:
WHAT IS THE PRODUCT VALUE PROPOSITION AND ARE THERE ANY SIGNIFICANT DEPLOYMENTS OR OTHER CUSTOMER EXPERIENCES YOU HAVE HAD THAT HAVE HELPED TO DEFINE THE PRODUCT OFFERING?
Here’s what I wrote:
Informatica RULEPOINT is a real-time integration and event processing software product that is deployed very innovatively by many businesses and vertical industries. Its value proposition is that it helps large enterprises discover important situations from their droves of data and events and then enables users to take timely action on discovered business opportunities as well as stop problems while or before they happen.
Here’s what I wanted to write:
RulePoint is scalable, low latency, flexible and extensible, and was born in the pure and exotic wilds of the Amazon from the minds of natives that have never once spoken out loud – only programmed. RulePoint captures the essence of true wisdom of the greatest sages of yesteryear. It is the programming equivalent of what Esperanto tried, and failed, to accomplish linguistically.
As to high availability (HA), there has never been anything in the history of software as available as RulePoint. Madonna's availability pales in comparison to RulePoint's availability. We are talking 8 Nines cubed and then squared. Oracle = Unavailable. IBM = Unavailable. Informatica RulePoint = Available.
RulePoint works hard, but plays hard too. When not solving those mission critical business problems, RulePoint creates Arias worthy of Grammy nominations. In the wee hours of the AM, RulePoint single-handedly prevented the outbreak and heartbreak of psoriasis in East Angola.
One of the little known benefits of RulePoint is its ability to train the trainer, coach the coach and play the player. Via chalk talks? No, RulePoint uses mind melds instead. Much more effective. RulePoint knows Chuck Norris. How do you think Chuck Norris became so famous in the first place? Yes, RulePoint. Greenpeace used RulePoint to save dozens of whales, 2 narwhal, a polar bear and a few collateral penguins (the bear was about to eat the penguins). RulePoint has been banned in 16 countries because it was TOO effective. “Veni, Vidi, RulePoint Vici” was Julius Caesar’s actual quote.
The inspiration for Gandalf in the Lord of the Rings? RulePoint. IT heads worldwide shudder with pride when they hear the name RulePoint mentioned and know that they acquired it. RulePoint is stirred but never shaken. RulePoint is used to train the Sherpas that help climbers reach the highest of heights. RulePoint cooks Minute rice in 20 seconds.
The running of the bulls in Pamplona every year - What do you think they are running from? Yes, RulePoint. RulePoint put the Vinyasa back into Yoga. In fact, RulePoint will eventually create a new derivative called Full Contact Vinyasa Yoga and it will eventually supplant gymnastics in the 2028 Summer Olympic games.
The laws of physics were disproved last year by RulePoint. RulePoint was drafted in the 9th round by the LA Lakers in the 90s, but opted instead to teach math to inner city youngsters. 5 years ago, RulePoint came up with an antivenin to the Black Mamba and has yet to ask for any form of recompense. RulePoint’s rules bend but never break. The stand-in for the “Mind” in the movie “A Beautiful Mind” was RulePoint.
RulePoint will define a new category for the Turing award and will name it the 2Turing Award. As a bonus, the 2Turing Award will then be modestly won by RulePoint and the whole category will be retired shortly thereafter. RulePoint is… tada… the most interesting software in the world.
But I didn’t get to write any of these true facts and product differentiators on the form. No room.
Hopefully I can still get a primo slot to talk about RulePoint.
And so from all the RulePoint and Emerging Technologies team, including sales and marketing, here's hoping you have a great holiday season and a Happy New Year!
A front office as defined by Wikipedia is “a business term that refers to a company’s departments that come in contact with clients, including the marketing, sales, and service departments” while a back office is “tasks dedicated to running the company….without being seen by customers.” Wikipedia goes on to say that “Back office functions can be outsourced to consultants and contractors, including ones in other countries.” Data Management was once a back office activity but in recent years it has moved to the front office. What changed?
As I continue to counsel insurers about master data, they all agree immediately that it is something they need to get their hands around fast. If you ask participants in a workshop at any carrier, no matter if life, P&C, health or excess, they all raise their hands when I ask, “Do you have a broadband bundle at home for internet, voice and TV as well as wireless voice and data?”, followed by “Would you want your company to be the insurance version of this?”
Now let me be clear: while communication service providers offer very sophisticated bundles, they are also still grappling with a comprehensive view of a client across all services (data, voice, text, residential, business, international, TV, mobile, etc.) and each of their touch points (website, call center, local store). They are also miles away from including any sort of meaningful network data (jitter, dropped calls, failed call setups, etc.).
Similarly, my insurance investigations typically touch most of the frontline consumer (business and personal) contact points, including agencies, marketing (incl. CEM & VOC) and the service center. On all of these we typically see a significant lack of productivity, given that policy, billing, payments and claims systems are service line specific, while supporting functions from developing leads and underwriting to claims adjudication often handle more than one type of claim.
This lack of performance is worsened by the fact that campaigns have sub-optimal response and conversion rates. As touchpoint-enabling CRM applications also suffer from a lack of complete or consistent contact preference information, interactions may violate local privacy regulations. In addition, service centers may capture leads only to log them into a black box AS400 policy system, where they disappear.
Here again we often hear that the fix could just happen by scrubbing data before it goes into the data warehouse. However, the data typically does not sync back to the source systems so any interaction with a client via chat, phone or face-to-face will not have real time, accurate information to execute a flawless transaction.
On the insurance IT side we also see enormous overhead: from scrubbing every database from source via staging to the analytical reporting environment every month or quarter, to one-off clean up projects for the next acquired book-of-business. For a mid-sized, regional carrier (ca. $6B net premiums written) we find an average of $13.1 million in annual benefits from a central customer hub. This figure results in an ROI of between 600% and 900%, depending on requirement complexity, distribution model, IT infrastructure and service lines. This number includes some baseline revenue improvements, productivity gains and cost avoidance as well as reduction.
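As a rough illustration of the arithmetic behind such an ROI claim, the implied program cost can be backed out from the benefit and ROI figures. The benefit number comes from the post; everything else below is an invented assumption for illustration only:

```python
# Hypothetical sketch: back out the program cost implied by a benefit and an
# ROI percentage, where ROI = (benefit - cost) / cost. Not real client data.
annual_benefit = 13_100_000  # average annual benefit cited for a mid-sized carrier

def implied_cost(benefit, roi_pct):
    """Program cost implied by a given ROI percentage."""
    return benefit / (1 + roi_pct / 100)

for roi in (600, 900):
    print(f"ROI {roi}%: implied cost ~${implied_cost(annual_benefit, roi):,.0f}")
```

At a 900% ROI the implied cost is one tenth of the benefit; at 600% it is one seventh. The point of the sketch is only that the cited ROI range pins the cost to a narrow band relative to the benefit.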
On the health insurance side, my clients have complained about regional data sources contributing incomplete (often driven by local process & law) and incorrect data (name, address, etc.) to untrusted reports from membership, claims and sales data warehouses. This makes budgeting for items like medical advice lines staffed by nurses, sales compensation planning and even identifying high-risk members (now driven by the Affordable Care Act) a true mission impossible, which makes the life of pricing teams challenging.
Over in the life insurance category, whole and universal life plans now face a situation where high value clients first saw lower than expected yields, due to the low interest rate environment, on top of front-loaded fees and the front-loaded cost of the term component. Now, as bonds are forecast to decrease in value in the near future, publicly traded carriers will likely be forced to sell bonds before maturity to make good on term life commitments and whole life minimum yield commitments to keep policies in force.
This means that insurers need a full profile of clients as they experience life changes like a move, loss of job, a promotion or birth. Such changes require the proper mitigation strategy, which can be employed to protect a baseline of coverage in order to maintain or improve the premium. This can range from splitting term from whole life to using managed investment portfolio yields to temporarily pad premium shortfalls.
Overall, without a true, timely and complete picture of a client and his/her personal and professional relationships over time – and of what strategies were presented, considered appealing and ultimately put in force – how will margins improve? Surely, social media data can help here, but it should be a second step after mastering what is available in-house already. What are some of your experiences with how carriers have tried to collect and use core customer data?
Recommendations and illustrations contained in this post are estimates only and are based entirely upon information provided by the prospective customer and on our observations. While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer is dependent upon a variety of factors, many of which are not under Informatica’s control, and nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized, and no warranty or representation of success, either express or implied, is made.
That tag line got your attention – did it not? Last week I talked about how companies are trying to squeeze more value out of their asset data (e.g. equipment of any kind) and the systems that house it. I also highlighted the fact that IT departments in many companies with physical asset-heavy business models have tried (and often failed) to create a consistent view of asset data in a new ERP or data warehouse application. These environments are neither equipped to deal with all life cycle aspects of asset information, nor are they fixing the root of the data problem in the sources, i.e. where the stuff is and what it looks like. It is like a teenager whose parents have spent thousands of dollars buying him the latest garments, but he always wears the same three outfits because he cannot find the other ones in the pile he hoards under his bed. And now they bought him a smart phone to fix it. So before you buy him the next black designer shirt, maybe it would be good to find out how many of the same designer shirts he already has, what state they are in and where they are.
Recently, I had the chance to work on a similar problem with a large overseas oil & gas company and a North American utility. Both are by definition asset heavy, very conservative in their business practices, highly regulated, very much dependent on outside market forces such as the oil price, and geographically very dispersed; and thus, by default, a classic system integration spaghetti dish.
My challenge was to find out where the biggest opportunities were in terms of harnessing data for financial benefit.
The initial sense in oil & gas was that most of the financial opportunity hidden in asset data was in G&G (geophysical & geological) and the least on the retail side (lubricants and gas for sale at operated gas stations). On the utility side, the go-to area for opportunity appeared to be maintenance operations. Let’s say that I was about right with these assertions, but there were a lot more skeletons in the closet with diamond rings on their fingers than I anticipated.
After talking extensively with a number of department heads in the oil company; starting with the IT folks running half of the 400 G&G applications, the ERP instances (turns out there were 5, not 1) and the data warehouses (3), I queried the people in charge of lubricant and crude plant operations, hydrocarbon trading, finance (tax, insurance, treasury) as well as supply chain, production management, land management and HSE (health, safety, environmental).
The net-net was that the production management people said that there is no issue as they already cleaned up the ERP instance around customer and asset (well) information. The supply chain folks also indicated that they have used another vendor’s MDM application to clean up their vendor data, which funnily enough was not put back into the procurement system responsible for ordering parts. The data warehouse/BI team was comfortable that they cleaned up any information for supply chain, production and finance reports before dimension and fact tables were populated for any data marts.
All of this was pretty much a series of denial sessions on your 12-step road to recovery, as the IT folks had very little interaction with the business to get any sense of how relevant, correct, timely and useful these actions were for the end consumer of the information. They also had to run and adjust fixes every month or quarter as source systems changed, new legislation dictated adjustments and new executive guidelines were announced.
While every department tried to run semi-automated and monthly clean up jobs with scripts and some off-the-shelf software to fix their particular situation, the corporate (holding) company and any downstream consumers had no consistency to make sensible decisions on where and how to invest without throwing another legion of bodies (by now over 100 FTEs in total) at the same problem.
So at every stage of the data flow from sources to the ERP to the operational BI and lastly the finance BI environment, people repeated the same tasks: profile, understand, move, aggregate, enrich, format and load.
Despite the departmental clean-up efforts, areas like production operations did not know with certainty (even after their clean up) how many well heads and bores they had, where they were downhole and who changed a characteristic as mundane as the well name last and why (governance, location match).
Marketing (Trading) was surprisingly open about their issues. They could not process incoming, anchored crude shipments into inventory, or assess who owned the counterparty they sold to and what payment terms were appropriate given the associated credit or concentration risk (reference data, hierarchy mgmt.). As a consequence, operating cash accuracy was low despite ongoing improvements in the process, incurring opportunity cost.
Operational assets like rig equipment had excess insurance coverage (location, operational data linkage), and fines paid to local governments for incorrectly filing or not renewing work visas were not returned for up to two years, incurring opportunity cost (employee reference data).
A big chunk of savings was locked up in unplanned NPT (non-production time), because inconsistent, incorrect well data triggered incorrect maintenance intervals. Similarly, OEM-specific DCS (drill control system) component software was lacking a central reference data store, which meant no alerts were triggered before components failed. If you add on top of that the lack of linkage of data served by thousands of sensors via well logs and Pi historians, and their ever-changing roll-up for operations and finance, the resulting chaos is complete.
One approach we employed around NPT improvements was to take the revenue-from-production figure from their 10-K and combine it with the industry benchmark for the number of NPT days per 100 days of production (typically about 30% across average depths, on- and offshore). Then you overlay a benchmark (if they don’t know) for how many of these NPT days were due to bad data rather than equipment failure or the like; fix even a portion of that, and you are getting big numbers.
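The estimate above can be sketched in a few lines. Every input below is an illustrative placeholder, not a real client or benchmark figure:

```python
# Hypothetical sketch of the NPT savings estimate described above.
# All inputs are invented placeholders for illustration only.
revenue_from_production = 5_000_000_000  # e.g. taken from a 10-K (hypothetical)
npt_rate = 0.30        # benchmark: ~30% of production days lost to NPT
bad_data_share = 0.20  # assumed share of NPT attributable to bad data
fixable_share = 0.25   # assume only a portion of that can be fixed

revenue_at_risk = revenue_from_production * npt_rate   # revenue exposed to NPT
addressable = revenue_at_risk * bad_data_share         # the bad-data slice
savings = addressable * fixable_share                  # what fixing a portion yields
print(f"Estimated annual savings: ${savings:,.0f}")
```

Even with conservative fractions at each step, multiplying against a multi-billion dollar revenue base is what produces the "big numbers" the post refers to.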
When I sat back and looked at all the potential, it came to more than $200 million in savings over 5 years, and this before any sensor data from rig equipment, like the myriad of siloed applications running within a drill control system, is integrated and leveraged via a Hadoop cluster to influence operational decisions like drill string configuration or azimuth.
Next time I’ll share some insight into the results of my most recent utility engagement but I would love to hear from you what your experience is in these two or other similar industries.
Recommendations contained in this post are estimates only and are based entirely upon information provided by the prospective customer and on our observations. While we believe our recommendations and estimates to be sound, the degree of success achieved by the prospective customer is dependent upon a variety of factors, many of which are not under Informatica’s control, and nothing in this post shall be relied upon as representative of the degree of success that may, in fact, be realized, and no warranty or representation of success, either express or implied, is made.
I believe that most in the software business find it tough enough to calculate, and hence financially justify, the purchase or build of an application – especially middleware – to a business leader or even a CIO. Most business-centric IT initiatives involve improving processes (order, billing, service) and visualization (scorecarding, trending) for end users to be more efficient in engaging accounts. Some of these have actually migrated to targeting improvements towards customers rather than their logical placeholders like accounts. Similar strides have been made in the realm of other party types (vendor, employee) as well as product data. They also tackle analyzing larger or smaller data sets and providing a visual set of clues on how to interpret historical or predictive trends on orders, bills, usage, clicks, conversions, etc.
If you think this is a tough enough proposition in itself, imagine the challenge of quantifying the financial benefit derived from understanding where your “hardware” is physically located, how it is configured, who maintained it, when and how. Depending on the business model you may even have to figure out who built it or owns it. All of this has bottom-line effects on how, who and when expenses are paid and revenues get realized and recognized. And then there is the added complication that these dimensions of hardware are often fairly dynamic as they can also change ownership and/or physical location and hence, tax treatment, insurance risk, etc.
Such hardware could be a pump, a valve, a compressor, a substation, a cell tower, a truck or components within these assets. Over time, with new technologies and acquisitions coming about, the systems that plan for, install and maintain these assets become very departmentalized in terms of scope and specialized in terms of function. The same application that designs an asset for department A or region B, is not the same as the one accounting for its value, which is not the same as the one reading its operational status, which is not the one scheduling maintenance, which is not the same as the one billing for any repairs or replacement. The same folks who said the Data Warehouse is the “Golden Copy” now say the “new ERP system” is the new central source for everything. Practitioners know that this is either naiveté or maliciousness. And then there are manual adjustments….
Moreover, to truly squeeze value out of these assets being installed and upgraded, the massive amounts of data they generate, in a myriad of formats and intervals, need to be understood, moved, formatted, fixed, interpreted at the right time and stored for future use in a cost-sensitive, easy-to-access and contextually meaningful way.
I wish I could tell you one application does it all, but the unsurprising reality is that it takes a concoction of multiple applications. None or very few asset life cycle-supporting legacy applications will be retired, as they often house data in formats commensurate with the age of the assets they were built for. It makes little financial sense to shut down these systems in a big bang approach; rather, migrate region after region and process after process to the new system. After all, some of the assets have been in service for 50 or more years, and the institutional knowledge tied to them is becoming nearly as old. Also, it is probably easier to engage in often-required manual data fixes (hopefully only outliers) bit by bit, especially to accommodate imminent audits.
So what do you do in the meantime, until all the relevant data is in a single system, to get an enterprise-level way to fix your asset tower of Babel and leverage the data volume rather than treat it like an unwanted stepchild? Most companies that operate in asset-heavy, fixed-cost business models do not want to create a disruption but a steady tuning effect (squeezing the data orange), something rather unsexy in this internet day and age. This is especially true in “older” industries where data is still considered a necessary evil, not an opportunity ready to exploit. The fact is, though, that in order to improve the bottom line, we had better get going, even if it is with baby steps.
If you are aware of business models and their difficulties in leveraging data, write to me. If you know about an annoying, peculiar or esoteric data “domain” that does not lend itself to being easily leveraged, share your thoughts. Next time, I will share some examples of how certain industries try to work in this environment, what they envision and how they go about getting there.
“Business-IT alignment.” The words have been touted so much by vendors (including Informatica), that they have become a platitude. But that doesn’t mean the concept itself isn’t still critical. This recent article in the IT Leader Potential at Work community lists the three actions IT must take to make a real impact on the business, starting by “inciting a revolution” among IT staff and turning them into business thinkers. Check out the article and share your thoughts here.
When I talk to customers about dealing with poor data quality, I consistently hear something like, “We know we have data quality problems, but we can’t get the business to help take ownership and do something about it.” I think that this is taking the easy way out. Throwing your hands up in the air doesn’t make change happen – it only prolongs the pain. If you want to effect a positive change in data quality and are looking for ways to engage the business, then you should join Barbara Latulippe, Director of Enterprise Information Management for EMC, and Kristen Kokie, VP IT Enterprise Strategic Services for Informatica, for our webinar on Thursday, October 24th, to hear how they have dealt with data quality in their combined 40+ years in IT.
Now, understandably, tackling data quality problems is no small undertaking, and it isn’t easy. In many instances, the reason why organizations choose to do nothing about data quality is that bad data has been present for so long that manual workaround efforts have become ingrained in the business processes for consuming data. In these cases, changing the way people do things becomes the largest obstacle to dealing with the root cause of the issues. But that is also where you will be able to find the costs associated with bad data: lost productivity, ineffective decision making, missed opportunities, etc.
As discussed in this previous webinar (link to replay at the bottom of the page), successfully dealing with poor data quality takes initiative, and it takes communication. IT departments are the engineers of the business: they are the ones who understand processes and workflows; they are the ones who build the integration paths between applications and systems. Even if they don’t own the data, they do end up owning the data-driven business processes that consume data. As such, IT is uniquely positioned to provide customized suggestions based on the insight from multiple previous interactions with the data.
Bring facts to the table when talking to the business. As those who directly interact daily with data, IT is in a position to measure and monitor data quality and to identify key data quality metrics; data quality scorecards and dashboards can shine a light on bad data and directly relate it to the business via the downstream workflows and business processes. Armed with hard facts about impact on specific business processes, a business user has an easier time affixing a dollar value to the impact of that bad data. Here are some helpful resources where you can start to build your case for improved data quality. With these tools and insight, IT can start to effect change.
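As one concrete example of such a metric, a per-field completeness score is easy to compute and put on a scorecard. This is a minimal generic sketch, not any Informatica product's mechanism; the records and field names are invented:

```python
# Minimal data quality scorecard sketch: completeness per field,
# i.e. the share of records with a non-empty value. Sample data is invented.
records = [
    {"name": "Acme Corp", "address": "1 Main St", "phone": None},
    {"name": "Beta LLC",  "address": None,        "phone": "555-0100"},
    {"name": None,        "address": "9 Oak Ave", "phone": "555-0199"},
]

def completeness(rows, field):
    """Fraction of rows with a non-empty value for `field`."""
    filled = sum(1 for r in rows if r.get(field) not in (None, ""))
    return filled / len(rows)

for field in ("name", "address", "phone"):
    print(f"{field}: {completeness(records, field):.0%} complete")
```

Trending a handful of such scores per source system over time gives the business the "hard facts" the paragraph above argues for, without requiring them to inspect raw data.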
Data is becoming the lifeblood of organizations and IT organizations have a huge opportunity to get closer to the business by really knowing the data of the business. While data quality invariably involves technological intervention, it is more so a process and change management issue that ends up being critical to success. The easier it is to tie bad data to specific business processes, the more constructive the conversation can be with the Business.
Business and Technology Requirements that are Driving User Organizations Toward True Real-Time Analytics and Reporting
Can you imagine data being less important in the future? While organizations keep many data success stories out of the general press for competitive reasons, there are plenty of success stories out there and they make for interesting reading, for example the New York Times article How Companies Learn Your Secrets.
Data volumes are increasing, and the types of data we wish to analyze are becoming more varied. On the one hand, we need to process data faster. On the other hand, we have more data to process. What to do?
Ralph Kimball defined “real-time” as “faster than your current ETL architecture can deliver your data”. In the same way, I’d define Big Data as “more than your current analytic architecture can store and process”.
Smart meters, clinical trials, call centers, complex supply chain operations, logistics, changing risk exposure: the desire to be able to visualize important business processes in an up-to-the-minute fashion to make better decisions is becoming more and more important.
So, what great thing would you attempt if you knew you could not fail?
With improvements in technology and architectural approaches, most of us are held back by preconceived notions of historical limitations. New technologies and approaches are allowing us to solve some old problems, but everything has limitations. Where are the current limitations, and how are those limitations changing? Where are the current opportunities?
I find that in discussions with many organizations, the difficulty is in imagining the great things that one might try to attempt. If you were able to ask any business question, or have any business knowledge at all, what would that be? Within the realms where data in some form, anywhere, exists: what would you ask? I’ve found that the organizations making the most progress in operational intelligence are the companies using the most imagination, both in terms of the business questions they are asking, and the technically new architectural approaches they are taking.
Every layer of the traditional data warehouse architecture has been affected by improved approaches in the last 10-15 years, allowing us to tackle operational, tactical and strategic intelligence questions. After 20 years of decision support progress and new tools, real foundations have been laid for how to architect things differently, and how to collaborate differently between business and IT.
This is the first in a series where I’ll mostly be exploring what those technically new architectural approaches are. If I had an opportunity to re-architect my old data warehouses using newer tools and approaches, with the knowledge of successful patterns I’ve developed over the years, I’d approach decision support and operational intelligence very differently. For instance, I’d populate my ODS differently. I’d use CDC differently. I’d use checksums differently. I’d use metadata, parameterization, templates, etc., differently. The benefit would not ONLY be the ability to have operational reporting and intelligence: I would also be able to do much more: better, faster, and cheaper, without compromise.
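To make the checksum idea concrete: one common way to use checksums in a load pipeline is to detect changed rows by hashing each row's attributes, rather than comparing every column. This is a generic sketch under invented names, not the author's specific design:

```python
# Generic change-detection sketch: hash a row's attributes so that a single
# digest comparison replaces column-by-column diffs. Names are invented.
import hashlib

def row_checksum(row: dict) -> str:
    """Stable digest of a row's attributes, order-independent."""
    payload = "|".join(f"{k}={row[k]}" for k in sorted(row))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Digests stored from the previous load (here: one customer row).
previous = {"cust_1": row_checksum({"name": "Acme", "city": "Austin"})}
# Rows arriving in the current load.
incoming = {"cust_1": {"name": "Acme", "city": "Dallas"}}

for key, row in incoming.items():
    if row_checksum(row) != previous.get(key):
        print(f"{key} changed -> route to ODS / CDC stream")
```

Storing one digest per row keeps the comparison cheap even for wide tables, which is part of why checksums show up in CDC and ODS-population patterns.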
In the Information Age we live and work in, where it’s hard to go even one day without a Google search, where do you turn for insights that can help you solve work challenges and progress your career? This is a tough question. How can we deal with the challenges of information overload – which some have called information pollution?
We are excited to announce the new Potential at Work Community for Application Leaders.
As an application leader, you have a very demanding job. You have to successfully manage issues such as:
- Driving the maximum business value from your company’s enterprise application investments
- Keeping all of your enterprise applications current and meeting user requirements
- Delivering on your service agreements and managing all of the “ilities”
- Defining an enterprise application strategy that includes on-premise and cloud
- Delivering timely, authoritative and trustworthy data for your enterprise applications
This community is here to help you to do exactly that and to help you to excel in both your current job and your career ahead. Our goal is to provide tips, insights, best practices and information from experts to help you become more successful.
Our first edition is focused on the theme of managing an enterprise cloud application strategy. For those who are in the process of selecting cloud application vendors, I’ve included a very handy Vendor Selection Checklist that is used by Informatica’s Vice President of Applications.
Are we interested in your input to the community? Absolutely! If you have an idea or content to share with the community, please contact us and we will get you published.
Join the community and start unleashing your potential by clicking on this link:
Roger Nolan firstname.lastname@example.org
Julie Lockner email@example.com
Click here for more information on the Potential at Work communities.