Category Archives: Transportation

Death of the Data Scientist: Silver Screen Fiction?

Maybe the word “death” is a bit strong, so let’s say “demise” instead. Recently I read an article in the Harvard Business Review about how Big Data and Data Scientists will rule the world of the 21st-century corporation and how they will have to operate for maximum value. What I found rather disturbing was the claim that it takes a PhD – probably a few of them – in a variety of math areas to give executives the insight they need to make better decisions, ranging from what product to develop next to whom to sell it to and where.

Who will walk the next long walk…. (source: Wikipedia)

Don’t get me wrong – this is mixed news for any enterprise software firm helping businesses locate, acquire, contextually link, understand and distribute high-quality data. The existence of such a high-value role validates product development, but it also limits adoption. It is also great news that data has finally gathered the attention it deserves. But I am starting to ask myself why it always takes individuals with a “one-in-a-million” skill set to add value. What happened to the democratization of software? Why isn’t the design starting point for enterprise software more like that of B2C applications, such as an iPhone app – simpler is better? Why is it always a gradual “Cold War” evolution instead of a near-instant French Revolution?

Why do development environments for Big Data not accommodate limited or existing skills, instead of always catering to the most complex scenarios? Well, the answer could be that the first customers are very large, very complex organizations with super-complex problems they have been unable to solve so far. But if analytical apps have become a self-service proposition for business users, data integration should be as well. So why does access to large volumes of fast-moving, diverse data require scarce Pig or Cassandra developers to get the data into an analyzable shape, and a PhD to query and interpret the patterns?
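To make the skill-gap point concrete, here is a minimal sketch – in plain Python rather than Pig, and with entirely invented field names and data – of the kind of reshaping work that today falls to those scarce specialists: turning a raw, semi-structured event feed into something a business user could actually query.

```python
from collections import defaultdict

# Hypothetical raw event feed: pipe-delimited, with inconsistent
# casing in the account names.
RAW_EVENTS = """\
2014-03-01|ACME Corp|order|1200.50
2014-03-01|acme corp|order|300.00
2014-03-02|Beta Ltd|click|0
"""

def to_rows(raw):
    """Normalize each raw line into a clean, typed record."""
    for line in raw.strip().splitlines():
        day, account, event, amount = line.split("|")
        yield {
            "day": day,
            "account": account.strip().lower(),  # reconcile casing variants
            "event": event,
            "amount": float(amount),
        }

# Aggregate order revenue per account: the "analyzable shape" a
# self-service dashboard could consume directly.
revenue = defaultdict(float)
for row in to_rows(RAW_EVENTS):
    if row["event"] == "order":
        revenue[row["account"]] += row["amount"]

print(dict(revenue))  # {'acme corp': 1500.5}
```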

I realize new technologies start from a small foundation and that, as they spread, supply will attempt to catch up and create an equilibrium. However, this is a problem that has existed for decades in many industries, such as the oil & gas, telecommunications, public and retail sectors. Whenever I talk to architects and business leaders in these industries, they chuckle at “Big Data” and tell me, “Yes, we’ve got that – and by the way, we have been dealing with this reality for a long time.” By now I would have expected the skill (cost) side of turning data into meaningful insight to have been driven down more significantly.

Informatica has made a tremendous push in this regard with its “Map Once, Deploy Anywhere” paradigm. I cannot wait to see what’s next – and I just saw something recently that got me very excited. Why, you ask? Because at some point I would like to see at least a business super-user pummel terabytes of transaction and interaction data into an environment (a Hadoop cluster, an in-memory DB…) and massage it so that a self-created dashboard gets them where they need to go. This should cover questions like: “Where is the data I need for this insight?”, “What is missing, and how do I get to that piece in the best way?”, “How do I want it to look so I can share it?” All that should be required to get your hands on advanced Big Data analytics is a semi-experienced user’s command of Excel and PowerPoint – a toy sketch of the “what is missing?” step follows below. Don’t you think? Do you believe that this role will disappear as quickly as it has surfaced?
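As a thought experiment only, here is what that “what is missing?” step might look like if it really were spreadsheet-grade work: a completeness profile, computed over made-up records, that a self-service tool could surface before the user builds anything on top of the data.

```python
# Hypothetical transaction extract with gaps a business user should
# see before trusting any dashboard built on top of it.
records = [
    {"order_id": 1, "region": "EMEA", "amount": 120.0, "channel": None},
    {"order_id": 2, "region": None,   "amount": 80.0,  "channel": "web"},
    {"order_id": 3, "region": "APAC", "amount": None,  "channel": "store"},
]

# Report how complete each field is, the way a spreadsheet user
# would eyeball empty cells in a column.
for field in ["order_id", "region", "amount", "channel"]:
    filled = sum(1 for r in records if r.get(field) is not None)
    print(f"{field:<10} {100 * filled / len(records):5.1f}% complete")
```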


All Aboard! Engineer Your Mission-Critical Asset Information So It Doesn’t Go Off The Rails

Do you know what year the first steam locomotive was invented? 1804. It traveled 9 miles in two hours. Now, you and I would be pretty upset if we boarded a train and it took two hours to go 9 miles. But 200 years ago this was a huge innovation, one that led to the invention of the modern-day train and railway.

Tremendous Growth In Demand for Rail Travel Puts Pressure on Rail Infrastructure
Today, Britain is experiencing tremendous growth in demand for rail travel. One million more trains run and 500 million more passengers travel by rail than just five years ago. Over the next 30 years, passenger demand for rail will more than double, and freight demand is expected to rise by 140%. This puts tremendous pressure on the rail infrastructure.

Is it hard to make sense of your mission-critical asset information because it’s scattered across multiple applications?

Network Rail is in the modern-day rail business. Employees work day and night running, maintaining and updating Britain’s rail infrastructure, including millions of assets, such as 22,000 miles of track, 6,500 crossings, 43,000 bridges, viaducts and tunnels. Improving the rail network provides faster, more frequent and more reliable journeys between Britain’s towns and cities.

Network Rail is investing more in the rail infrastructure than in Victorian times. In the last six months, they spent about $25 million a day! In a recent news release, Patrick Butcher, group finance director, said, “We continue to invest record amounts to deliver a bigger, better railway for passengers and businesses across Britain. We are also driving down the cost of running Britain’s railway to help make it more affordable in the years ahead.”

Employees Need to Trust Asset Information to Pinpoint and Fix Problems Quickly
To pinpoint and fix problems quickly, keep their operating costs low and maintain a strong safety record, Network Rail’s employees need to be able to trust their mission-critical asset information to answer questions such as:

  1. What is the problem?
  2. Where is it?
  3. What equipment, tools and skills are needed to fix it?
  4. Who is closest to the problem and able to fix it?

Difficult to Make Sense of Asset Information Scattered across Applications
As at many companies their size, Network Rail’s mission-critical asset information was scattered across many applications, which made it difficult for employees to make sense of that information and of the interactions between assets.

The asset information team recognized the limitations of having employees depend on an application-centric view of the business. To operate more efficiently and effectively, they needed asset information that was clean, consistent and connected.

Investing in Rail Infrastructure AND the Information Infrastructure to Support It
Network Rail now uses a combination of data integration, data quality, and master data management (MDM) to manage their mission-critical asset information in a central location on an ongoing basis – a simplified sketch follows the list below – to:

  1. make sense of asset information,
  2. understand the relationships between assets, and
  3. track changes to asset information.
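As a rough illustration only – emphatically not Network Rail’s actual implementation – the three capabilities above map onto familiar MDM building blocks: a survivorship-style merge of duplicate records, explicit asset-to-asset relationships, and an audit trail of changes. A toy sketch, with hypothetical asset IDs and attributes:

```python
from datetime import date

class AssetRecord:
    """Toy 'golden record' for one asset, with relationships and history."""

    def __init__(self, asset_id, attrs):
        self.asset_id = asset_id
        self.attrs = dict(attrs)
        self.related = set()   # asset_ids of connected assets
        self.history = []      # (date, field, old, new) audit trail

    def update(self, field, new_value):
        old = self.attrs.get(field)
        if old != new_value:
            self.history.append((date.today(), field, old, new_value))
            self.attrs[field] = new_value

    def merge(self, source_attrs):
        """Simplified survivorship rule: fill gaps from another source."""
        for field, value in source_attrs.items():
            if self.attrs.get(field) in (None, "") and value is not None:
                self.update(field, value)

    def relate(self, other):
        """Record that two assets interact (e.g., a bridge carries a track)."""
        self.related.add(other.asset_id)
        other.related.add(self.asset_id)

# The same bridge as seen by two applications with different detail:
bridge = AssetRecord("BR-043", {"type": "bridge", "built": 1891, "span_m": None})
bridge.merge({"span_m": 42, "built": 1891})   # engineering system fills the gap
track = AssetRecord("TRK-7", {"type": "track", "miles": 12})
bridge.relate(track)

print(bridge.attrs)    # merged golden record
print(bridge.history)  # every change stays reviewable
```

The history list is the point of the third item above: every change to the central record stays visible over time instead of vanishing inside one application.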

In a news release, Patrick Bossert, Director of Network Rail’s Asset Information services business, said, “With more accurate and reliable information about assets and their condition, our team can make better business decisions, enable innovation in our asset management policy, planning and execution, and improve rail-system-wide investment decisions that benefit the rail industry as a whole.”

If you work for a company that revolves around mission-critical asset information, ask yourself these questions:

  1. Can our employees make sense of our asset information?
  2. Can they easily see relationships between assets and how they interact?
  3. Can they see the history of changes to asset information over time?

Or are they limited by an application-centric view of the business because asset information is scattered across multiple systems?

Have a similar story about how you are managing your mission-critical asset information? Please share it in the comments below.


Squeezing the Value out of the Old Annoying Orange

I believe most people in the software business would agree that it is tough enough to calculate – and hence financially justify – the purchase or build of an application, especially middleware, to a business leader or even a CIO. Most business-centric IT initiatives involve improving processes (order, billing, service) and visualization (scorecarding, trending) so end users can be more efficient in engaging accounts. Some of these have actually migrated to targeting improvements toward customers rather than their logical placeholders, like accounts. Similar strides have been made with other party types (vendor, employee) as well as product data. These initiatives also tackle analyzing larger or smaller data sets and providing a visual set of clues on how to interpret historical or predictive trends in orders, bills, usage, clicks, conversions, etc.

Squeeze that Orange

If you think this is a tough enough proposition in itself, imagine the challenge of quantifying the financial benefit derived from understanding where your “hardware” is physically located, how it is configured, and who maintained it, when and how. Depending on the business model, you may even have to figure out who built it or who owns it. All of this has bottom-line effects on how, by whom and when expenses are paid and revenues are realized and recognized. And then there is the added complication that these dimensions of hardware are often fairly dynamic: assets can change ownership and/or physical location, and with that their tax treatment, insurance risk, etc.
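One way to picture why those dynamic dimensions are hard: each attribute (owner, location, tax jurisdiction) has to be effective-dated, so that questions like “who owned this compressor on the date of that invoice?” remain answerable years later. A minimal sketch, with hypothetical dates and owners:

```python
from datetime import date

class EffectiveDated:
    """One asset attribute tracked as (effective_date, value) pairs."""

    def __init__(self):
        self.changes = []  # kept sorted by effective date

    def set(self, effective, value):
        self.changes.append((effective, value))
        self.changes.sort(key=lambda change: change[0])

    def as_of(self, when):
        """Return the value in force on a given date (None if none yet)."""
        current = None
        for effective, value in self.changes:
            if effective <= when:
                current = value
            else:
                break
        return current

# A hypothetical pump whose ownership, and hence tax treatment, changed:
owner = EffectiveDated()
owner.set(date(1995, 6, 1), "Region B Operations")
owner.set(date(2010, 1, 1), "Third-Party Leasing Co.")

print(owner.as_of(date(2005, 3, 15)))  # Region B Operations
print(owner.as_of(date(2012, 7, 1)))   # Third-Party Leasing Co.
```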

Such hardware could be a pump, a valve, a compressor, a substation, a cell tower, a truck, or components within these assets. Over time, as new technologies and acquisitions come about, the systems that plan for, install and maintain these assets become very departmentalized in scope and specialized in function. The application that designs an asset for department A or region B is not the one accounting for its value, which is not the one reading its operational status, which is not the one scheduling maintenance, which is not the one billing for repairs or replacements. The same folks who said the Data Warehouse was the “Golden Copy” now say the “new ERP system” is the new central source for everything. Practitioners know that this is either naiveté or maliciousness. And then there are manual adjustments….

Moreover, to truly squeeze value out of these assets as they are installed and upgraded, the massive amounts of data they generate, in a myriad of formats and intervals, need to be understood, moved, formatted, fixed, interpreted at the right time, and stored for future use in a cost-sensitive, easy-to-access and contextually meaningful way.
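To illustrate what “understood, moved, formatted, fixed” can mean in practice – with invented source names and payloads – here is a sketch that normalizes readings from two different feeds into one common (asset, timestamp, metric, value) shape, so data arriving in different formats and at different intervals becomes comparable and storable:

```python
from datetime import datetime

def normalize(source, payload):
    """Map each hypothetical source format into one common row shape."""
    if source == "scada_csv":
        # e.g. "P-101,2014-03-01 04:00,psi,87.2"
        asset, ts, metric, value = payload.split(",")
        return (asset, datetime.strptime(ts, "%Y-%m-%d %H:%M"),
                metric, float(value))
    if source == "telemetry_json":
        # e.g. {"unit": "P-101", "epoch": 1393650000, ...}
        return (payload["unit"], datetime.fromtimestamp(payload["epoch"]),
                payload["measure"], float(payload["val"]))
    raise ValueError(f"unknown source: {source}")

rows = [
    normalize("scada_csv", "P-101,2014-03-01 04:00,psi,87.2"),
    normalize("telemetry_json",
              {"unit": "P-101", "epoch": 1393650000,
               "measure": "psi", "val": 88.0}),
]
for row in rows:
    print(row)  # (asset, timestamp, metric, value)
```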

I wish I could tell you that one application does it all, but the unsurprising reality is that it takes a concoction of several. Few, if any, of the legacy applications supporting the asset life cycle will be retired, as they often house data in formats commensurate with the age of the assets they were built for. It makes little financial sense to shut these systems down in a big-bang approach; it is better to migrate region after region and process after process to the new system. After all, some of the assets have been in service for 50 or more years, and the institutional knowledge tied to them is becoming nearly as old. It is also probably easier to perform the often-required manual data fixes (hopefully only outliers) bit by bit, especially to accommodate imminent audits.

So what do you do in the meantime, until all the relevant data is in a single system? How do you get an enterprise-level way to fix your asset tower of Babel and leverage the data volume rather than treat it like an unwanted stepchild? Most companies that operate asset-heavy, fixed-cost business models do not want to create a disruption but a steady tuning effect (squeezing the data orange), something rather unsexy in this internet day and age. This is especially true in “older” industries where data is still considered a necessary evil, not an opportunity ready to be exploited. Fact is, though, that to improve the bottom line we had better get going, even if it is with baby steps.

If you are aware of business models and their difficulties in leveraging data, write to me. If you know of an annoying, peculiar or esoteric data “domain” that does not lend itself to being easily leveraged, share your thoughts. Next time, I will share some examples of how certain industries try to work in this environment, what they envision, and how they go about getting there.
