Category Archives: Telecommunications
Recently, my US-based job led me to a South African hotel room, where I watched Germany play Brazil in the World Cup. The global nature of the event was familiar to me. My work covers countries like Malaysia, Thailand, Singapore, South Africa and Costa Rica. And as I pondered the stunning score (Germany won, 7 to 1), my mind was drawn to emerging markets. What defines an emerging market? In particular, what are the data-related themes common to emerging markets? Because I work with global clients in the banking, oil and gas, telecommunications, and retail industries, I have learned a great deal about this. As a result, I wanted to share my top 5 observations about data in Emerging Markets.
1) Communication Infrastructure Matters
Many emerging markets, particularly in Africa, jumped from one or two generations of telco infrastructure directly into 3G and fiber within a decade. However, this leap applies only to large, cosmopolitan areas. International diversification of fiber connectivity is only starting to take shape. (For example, in Southern Africa, BRICS terrestrial fiber is coming online soon.) What does this mean for data management? First, global connectivity influences domestic last-mile fiber deployment to households and businesses. This, in turn, drives adoption of new devices, and that adoption creates critical mass for higher-productivity services such as eCommerce. As web-based transactions take off, better data management practices will follow. Second, European and South American data centers become viable legal and performance options for African organizations. This could be a game changer for software vendors selling cloud services for BI, CRM, HCM, BPM and ETL.
2) Competition in Telecommunication Matters
Compare basic wireless and broadband bundle prices in the US, the UK and South Africa, for example: where true competition is lacking, further coverage upgrades, like 4G and higher broadband bandwidths, are easy for operators to digest. These upgrades make telecommuting and constant social media engagement possible. Keeping prices low, as in the UK, is the flip side that achieves the same result. The worst case is high prices and low bandwidth from the last mile to the global nodes. That combination also leads to low infrastructure investment and thus fewer consumers online for fewer hours. This is often the case in geographically vast countries in Africa and Latin America with large rural areas. Here, data management is mostly an afterthought. Data is intentionally kept in application silos, as these are the value creators, and hand-coding is pervasive: data gets strung together piecemeal to make small moves toward a better view of a product, location, consumer or supplier.
3) A Nation’s Judicial System Matters
If you do business in nations with a long, often British, judicial tradition, chances are investment will happen. If that history is undermined by a parallel history of graft from the highest to the lowest levels, reinforced by the importance of tribal traditions, only natural resources will save your economy. Why does it matter if one of my regional markets is “linked up” when shipping logistics are burdened by that excess cost and delay? The impact on data management is a lack of use cases supporting an enterprise-wide strategy across all territories. Why invest if profits are unpredictable or too meager? This is why small Zambia and Botswana are ahead of Nigeria, the largest African economy.
4) Expertise Location Matters
Anybody can have the most advanced vision of a data-driven, event-based architecture supporting the fanciest data movement and persistence standards. Without the skill to make the case to the business, it is a lost cause – unless your local culture still has IT in charge of specifying requirements, running the evaluation, and selecting and implementing a new technology. It is also doomed if there are no leaders who have seen how other leading firms, in the same or a different sector, went about it successfully or unsuccessfully. Lastly, if you don’t pay for skill, your project failure risk just tripled. Duh!
5) Denial is Universal
It does not matter whether you are an Asian oil company, a regional North American bank, a Central American national bank or an African retail conglomerate: if finance or IT invested in technologies before and saw a lack of adoption, for whatever reason, they will deny data management challenges despite other departments complaining. Moreover, if system integrators or internal client staff (mis)understand data management as fixing processes (which it is not) instead of supporting transactional integrity (which it is), clients are on the wrong track. Here, data management undeservedly becomes a philosophical battleground.
This is definitely not a complete list or super-thorough analysis but I think it covers the most crucial observations from my engagements. I would love to hear about your findings in emerging markets.
Stay tuned for part 2 of this series where I will talk about the denial and embrace of corporate data challenges as it pertains to an organization’s location.
In response to the rapid growth of data, organizations seek new ways to unlock its value. Traditionally, data has been analyzed for two key reasons. First, data was analyzed to identify ways to improve operational efficiency. Second, data was analyzed to identify opportunities to increase revenue.
As data expands, companies have found new uses for these growing data sets. Of late, organizations have started providing data to partners, who then sell the ‘intelligence’ they glean from it. Consider a coffee shop owner whose store doesn’t open until 8 AM. This owner would be interested in learning how many target customers (perhaps people aged 25 to 45) walk past the closed shop between 6 AM and 8 AM. If this number is high enough, it may make sense to open the store earlier.
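To make that concrete, here is a minimal Python sketch of the kind of count the owner would want from a partner’s foot-traffic feed (the records and field names are made up for illustration):

```python
from datetime import time

# Hypothetical passerby records shared by a location partner:
# an age and the time of day the person walked past the shop.
passersby = [
    {"age": 31, "passed_at": time(6, 40)},
    {"age": 52, "passed_at": time(7, 15)},
    {"age": 27, "passed_at": time(7, 50)},
    {"age": 44, "passed_at": time(9, 5)},
]

# Count target customers (aged 25 to 45) passing between 6 AM and 8 AM.
window_start, window_end = time(6, 0), time(8, 0)
target_count = sum(
    1
    for p in passersby
    if 25 <= p["age"] <= 45 and window_start <= p["passed_at"] < window_end
)

print(f"Target-aged passersby before opening: {target_count}")  # 2 in this toy sample
```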
As much as organizations prioritize the value of data, customers prioritize the privacy of their data. If an organization loses a customer’s data, it incurs several costs. These costs include:
- Damage to the company’s reputation
- A reduction of customer trust
- Financial costs associated with the investigation of the loss
- Possible governmental fines
- Possible restitution costs
To guard against these risks, data that organizations provide to their partners must be obfuscated. This protects customer privacy. However, data that has been obfuscated is often of a lower value to the partner. For example, if the date of birth of those passing the coffee shop has been obfuscated, the store owner may not be able to determine if those passing by are potential customers. When data is obfuscated without consideration of the analysis that needs to be done, analysis results may not be correct.
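As an illustration of masking with the analysis in mind, here is a small, hypothetical Python sketch (not any particular masking product) that generalizes a date of birth into a coarse age band: the partner never sees the exact birth date, yet can still tell roughly whether a passerby falls into the target age group:

```python
from datetime import date

def dob_to_age_band(dob: date, as_of: date, band_width: int = 10) -> str:
    """Generalize a date of birth into a coarse age band such as '20-29'."""
    age = as_of.year - dob.year - ((as_of.month, as_of.day) < (dob.month, dob.day))
    low = (age // band_width) * band_width
    return f"{low}-{low + band_width - 1}"

# The released record carries only the band, not the birth date,
# which is still enough to approximate the "aged 25 to 45" question.
print(dob_to_age_band(date(1985, 7, 22), as_of=date(2014, 7, 1)))  # -> "20-29"
```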
There is a way to provide data privacy for the customer while simultaneously monetizing enterprise data. To do so, organizations must allow trusted partners to define masking generalizations. With sufficient data masking governance, it is indeed possible for data obfuscation and data value to coexist.
Currently, there is a great deal of research into ensuring that obfuscated data is both protected and useful. Techniques such as ‘k-Anonymity’ and ‘l-Diversity’ keep sensitive data safe while preserving its analytical usefulness. However, these techniques have not yet become mainstream. Once they do, the value of big data will be unlocked.
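For readers unfamiliar with the idea, k-Anonymity simply requires that every combination of quasi-identifiers (age band, partial ZIP code, and so on) in a released data set appears at least k times, so no individual can be singled out. A toy Python check, with made-up records, might look like this:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every combination of quasi-identifier values occurs at least k times."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in combos.values())

released = [
    {"age_band": "25-34", "zip3": "021", "purchase": "latte"},
    {"age_band": "25-34", "zip3": "021", "purchase": "espresso"},
    {"age_band": "35-44", "zip3": "021", "purchase": "tea"},
]

# The lone "35-44"/"021" record makes the release fail a k=2 check,
# so it would need further generalization or suppression before sharing.
print(is_k_anonymous(released, ["age_band", "zip3"], k=2))  # -> False
```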
Maybe the word “death” is a bit strong, so let’s say “demise” instead. Recently I read an article in the Harvard Business Review about how Big Data and Data Scientists will rule the world of the 21st-century corporation and how they will have to operate for maximum value. The thing I found rather disturbing was that it takes a PhD – probably a few of them – in a variety of math areas to give executives the insight needed to make better decisions, ranging from what product to develop next to whom to sell it and where.
Don’t get me wrong – this is mixed news for any enterprise software firm helping businesses locate, acquire, contextually link, understand and distribute high-quality data. The existence of such a high-value role validates product development, but it also limits adoption. It is great news that data has finally gathered the attention it deserves. But I am starting to ask myself why it always takes individuals with a “one-in-a-million” skill set to add value. What happened to the democratization of software? Why is the design starting point for enterprise software not more like B2C applications, such as an iPhone app – i.e. simpler is better? Why is it always such a gradual “Cold War” evolution instead of a near-instant French Revolution?
Why do development environments for Big Data not accommodate limited or existing skills, but instead always cater to the most complex scenarios? Well, the answer could be that the first customers are very large, very complex organizations with super-complex problems they have been unable to solve so far. If analytical apps have become a self-service proposition for business users, data integration should be as well. So why does access to a lot of fast-moving and diverse data require scarce Pig or Cassandra developers to get the data into an analyzable shape, and a PhD to query and interpret patterns?
I realize new technologies start from a narrow foundation and that, as they spread, supply attempts to catch up and create an equilibrium. However, this is a problem that has existed for decades in many industries, such as the oil & gas, telecommunications, public and retail sectors. Whenever I talk to architects and business leaders in these industries, they chuckle at “Big Data” and tell me “yes, we got that – and by the way, we have been dealing with this reality for a long time”. By now I would have expected the skill (cost) side of turning data into meaningful insight to have been driven down more significantly.
Informatica has made a tremendous push in this regard with its “Map Once, Deploy Anywhere” paradigm. I cannot wait to see what’s next – and I just saw something recently that got me very excited. Why, you ask? Because at some point I would like to see at least a business super-user pummel terabytes of transaction and interaction data into an environment (Hadoop cluster, in-memory DB…) and massage it so that a self-created dashboard gets him or her where they need to go. This should include questions like: “where is the data I need for this insight?”, “what is missing and how do I get that piece in the best way?”, “how do I want it to look so I can share it?” Semi-experienced knowledge of Excel and PowerPoint should be all that is required to get your hands on advanced Big Data analytics. Don’t you think? Or do you believe that this role will disappear as quickly as it has surfaced?
Murphy’s First Law of Bad Data – If You Make A Small Change Without Involving Your Client – You Will Waste Heaps Of Money
I have not used my personal encounter with bad data management for over a year, but a couple of weeks ago I was compelled to revive it. Why, you ask? Well, a complete stranger started to receive one of my friends’ text messages – including mine – and it took days for my friend to detect it; a week later nobody at this North American wireless operator had been able to fix it. This coincided with a meeting I had with a European telco’s enterprise architecture team. There was no better way to illustrate to them how a customer reacts, and the risk to their operations, when communication breaks down because just one tiny thing changes – say, his address (or, in the SMS case, some random SIM mapping – another type of address).
In my case, I moved about 250 miles within the United States a couple of years ago, and this seemingly common experience triggered a plethora of communication screw-ups across every merchant a residential household engages with frequently: your bank, your insurer, your wireless carrier, your average retail clothing store, etc.
For more than two full years after my move to a new state, the following things continued to pop up on a monthly basis due to my incorrect customer data:
- My old satellite TV provider reached me (correct person) at my correct, new address but with a misspelled last name.
- My bank put me in a bit of a pickle: they sent “important tax documentation” that I did not want to open, as my new tenants’ names (in the house I had just vacated) were on the letter, but with my new home’s address.
- My mortgage lender sent a refinancing offer to my new address (right person and right address) but with my wife’s and my names completely butchered.
- My wife’s airline, where she enjoys the highest level of frequent flyer status, continually mailed her offers with her last name duplicated as her first name.
- A high-end furniture retailer sent two 100-page glossy catalogs, probably costing $80 each, to our address – one for me, one for her.
- A national health insurer sent “sensitive health information” (disclosed on the envelope) to my new residence’s address but for the prior owner.
- My legacy operator turned on the wrong premium channels on half my set-top boxes.
- The same operator sent me an SMS the next day thanking me for switching to electronic billing as part of my move – which I had not signed up for – followed by payment notices (as I did not get my invoice in the mail). When I called this error out over the next three months through their contact center, pointing out how much revenue I generate for them across all services, they countered with “sorry, we don’t have access to the wireless account data”, “you will see it change on the next bill cycle” and “you show as paper billing in our system today”.
Ignoring the potential for data privacy lawsuits, you start wondering how long you have to be a customer, and how much money you need to spend with a merchant (and how much they need to waste), for them to take changes to your data more seriously. And these are not even merchants to whom I am brand new – these companies have known me and taken my money for years!
One thing I nearly forgot: these mailings happened at least once a month on average, sometimes twice, over two years. If I do some pigeon math here, I would estimate the postage and production cost alone ran into the hundreds of dollars.
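Spelling out that pigeon math with purely assumed figures (a handful of merchants, a dollar or two per mailing for postage and production), the order of magnitude checks out:

```python
# Rough, assumed figures only - not actual rates.
months = 24                 # two years
mailings_per_month = 1.5    # "at least once a month, sometimes twice"
merchants = 6               # bank, insurer, carrier, retailer, ...
cost_per_piece = 1.50       # postage plus production for an ordinary letter

wasted = months * mailings_per_month * merchants * cost_per_piece
print(f"~${wasted:.0f} in postage and production alone")  # ~$324, before glossy catalogs
```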
The most egregious trespass, though, belonged to my homeowner’s insurance carrier (HOI), which was also my mortgage broker. They had a double whammy in store for me. First, I received a cancellation notice from the HOI for my old residence, indicating they had cancelled my policy because the last payment was not received and that any claims would be denied as a consequence. Then, my new residence’s HOI advised that they had added my old home’s policy to my account.
After wondering what I could have possibly done to trigger this, I called all four parties (not three, as the mortgage firm did not share data with the insurance broker side – surprise, surprise) to find out what had happened.
It turned out that I had to explain and prove to all of them how one party’s data change during my move had erroneously exposed me to liability. It felt like the old days, when seedy telco salespeople needed only your name and phone number, tied to some promotion you never took part in (the back of a raffle card to win a new car), to switch your long-distance carrier and present you with a $400 bill the following month. Yes, that also happened to me… many years ago. Here again, the consumer had to do all the legwork when someone (not an automatic process!) switched some entry without any oversight or review, triggering hours of wasted effort on their side and mine.
We can argue all day long about whether these screw-ups are due to bad processes or bad data, but in reality even processes are triggered by some underlying event – something as mundane as a database field’s flag being updated when your last purchase puts you in a new marketing segment.
Now imagine you get married and your wife changes her name. With all the company-internal (CRM, billing, ERP), free public (property tax), commercial (credit bureaus, mailing lists) and social media data sources out there, you would think such everyday changes could be picked up more quickly and automatically. If not automatically, then should there not be some sort of trigger to kick off a “governance” process – something along the lines of “email/call the customer if attribute X has changed” or “please log into your account and update your information – we heard you moved”? If American Express was able to detect ten years ago that someone purchased $500 worth of product with your credit card at a gas station or some lingerie website known for fraudulent activity, why not your bank or insurer, who know even more about you? And yes, that happened to me as well.
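A governance trigger along those lines does not have to be rocket science. Here is a minimal, hypothetical Python sketch (field names invented) of the rule described above: watch a few sensitive attributes and kick off a confirmation step whenever one of them changes:

```python
WATCHED_ATTRIBUTES = {"address", "last_name", "billing_method"}

def changed_attributes(old: dict, new: dict) -> list:
    """Return the watched attributes whose values differ between record versions."""
    return sorted(a for a in WATCHED_ATTRIBUTES if old.get(a) != new.get(a))

def on_customer_update(old: dict, new: dict) -> None:
    changed = changed_attributes(old, new)
    if changed:
        # Stand-in for a real workflow: notify the customer and log for review.
        print(f"Governance check: please confirm your updated {', '.join(changed)}.")

on_customer_update(
    {"address": "12 Old Rd", "last_name": "Smith", "billing_method": "paper"},
    {"address": "98 New Ave", "last_name": "Smith", "billing_method": "electronic"},
)
# -> Governance check: please confirm your updated address, billing_method.
```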
Tell me about one of your “data-driven” horror scenarios.
I recently had a lengthy conversation with a business executive at a European telco. His biggest concern was not only to understand the motivations and related characteristics of consumers but to gain this insight much faster than before. Given available resources and current priorities, this is unattainable for many operators.
Unlike a few years ago – remember the time before the iPad – his organization today is awash with data points from millions of devices, hundreds of device types and many applications.
One way for him to understand consumer motivation, and therefore intentions, is to get a better view of a user’s network and all related interactions and transactions. This includes the family household, friends and business network (also a type of household). The purpose of householding is to capture social and commercial relationships in a grouping of individuals (or businesses, or both mixed together) in order to identify patterns (context) that can be exploited to better serve a customer: a new individual product or bundle upsell, or a push of relevant apps, audio and video content.
Let’s add another layer of complexity: understand not only who a subscriber is, whom he knows, how often he interacts with these contacts, and which services he has access to via one or more devices, but also where he physically is at the moment he interacts. You may also combine this with customer service and (summarized) network performance data to understand who is high-value, high-overhead and/or high in customer experience. Most importantly, you will also be able to assess who will do what next, and why.
Some of you may be thinking “Oh gosh, the next NSA program in the making”. Well, it may sound like it, but the reality is that this data is out there today, available and interpretable if cleaned up, structured, linked and served in real time. Data quality, ETL, analytical and master data systems provide the data backbone for this reality, while process-based systems dealing with the systematic real-time engagement of consumers are the tools to make it actionable. If you add some privacy rules using database- or application-level masking technologies, most of us would feel more comfortable about this proposition.
This may feel like a massive project, but as with many things in IT life, it depends on how you scope it. I am a big fan of incrementally mastering more and more attributes for certain customer segments, business units and geographies, where lessons learnt can be replicated over and over to scale. Moreover, I am a big fan of figuring out what you are trying to achieve before even attempting to tackle it.
The beauty of a “small” data backbone – more about “small data” in a future post – is that if a certain concept does not pan out in terms of effort or result, you have wasted only a small pile of cash instead of the $2 million for a complete throw-away. For example: if you initially decided that the central linchpin in your household hub and spoke is the person who owns the most contracts with you, rather than the person who pays the bills every month or who has the largest average monthly bill, moving to an alternative perspective does not impact all services, all departments and all clients. Nevertheless, the role of each user in the network must be defined over time to achieve context, i.e. who is a contract signee, who is a payer, who is a user, who is an influencer, who is an employer, etc.
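To illustrate how cheap it is to change that perspective when the household is modeled explicitly, here is a small, hypothetical Python sketch: the members and their roles stay the same, and only the rule that picks the hub changes:

```python
from dataclasses import dataclass, field

@dataclass
class Member:
    name: str
    roles: set = field(default_factory=set)   # e.g. {"contract signee", "payer", "user"}
    contracts: int = 0
    avg_monthly_bill: float = 0.0

@dataclass
class Household:
    members: list

    def hub(self, rule: str = "most_contracts") -> Member:
        """Pick the central member; swapping the rule does not touch the data."""
        if rule == "most_contracts":
            return max(self.members, key=lambda m: m.contracts)
        if rule == "highest_bill":
            return max(self.members, key=lambda m: m.avg_monthly_bill)
        raise ValueError(f"unknown rule: {rule}")

hh = Household([
    Member("Alex", {"contract signee", "payer"}, contracts=3, avg_monthly_bill=80.0),
    Member("Sam", {"user", "influencer"}, contracts=1, avg_monthly_bill=120.0),
])
print(hh.hub("most_contracts").name)  # Alex
print(hh.hub("highest_bill").name)    # Sam
```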
Why is this important to a business? Because without knowing who consumes, who pays for, and who influences the purchase or change of a service or product, how can one create the right offers and target them at the right individual?
However, in order to make this initial call about household definition and scope, or to look at the options that are available and sensible, you have to consider social and cultural conventions, what you are trying to accomplish commercially, and your current data set’s ability to achieve anything without a massive enrichment program. A couple of years ago, at a Middle Eastern operator, it was very clear that the local patriarchal society dictated that the center of this hub-and-spoke model was the oldest non-retired male in the household, as all contracts, down to those of children of cousins, would typically run under his name. The goal was to capture extended family relationships more accurately and completely in order to create and sell new family-type bundles for greater market penetration and to maximize usage of new bandwidth capacity.
As a parallel track, aside from further rollout to other departments, customer segments and geographies, you may also want to start thinking like another European operator I engaged a couple of years ago. They were outsourcing some data validation and enrichment to their subscribers, which allowed for more accurate and timely capture of changes, often lifestyle changes (moves, marriages, new jobs). The operator could then offer new bundles and roaming upsells. As a side effect, it also created a sense of empowerment and engagement in the client base.
I see bits and pieces of this being used when I switch on my home communication systems, running a broadband signal through my Xbox or set-top box into my TV for Netflix, Hulu and gaming. Moreover, a US cable operator actively promotes a “moving” package to help make sure you do not miss a single minute of entertainment when relocating.
Every time I switch on my TV now, I get content suggested to me. If telecommunication services in the US were a bit more competitive (an odd thing to say in every respect) and prices came down to European levels, I would actually take advantage of the offer. And then there is the log-on pop-up asking me to subscribe to (or troubleshoot) a channel I have already subscribed to. I wonder who, or what automated process, switched that flag.
Ultimately, there cannot be a good customer experience without understanding customer intentions. I would love to hear stories from other practitioners about what they have seen in this respect.
Most people in the software business would agree that it is tough enough to calculate, and hence financially justify, the purchase or build of an application – especially middleware – to a business leader or even a CIO. Most business-centric IT initiatives involve improving processes (order, billing, service) and visualization (scorecarding, trending) so that end users can engage accounts more efficiently. Some of these have actually migrated to targeting improvements at customers rather than their logical placeholders, like accounts. Similar strides have been made in the realm of other party types (vendor, employee) as well as product data. They also tackle analyzing larger or smaller data sets and providing a visual set of clues on how to interpret historical or predictive trends in orders, bills, usage, clicks, conversions, etc.
If you think this is a tough enough proposition in itself, imagine the challenge of quantifying the financial benefit derived from understanding where your “hardware” is physically located, how it is configured, and who maintained it, when and how. Depending on the business model, you may even have to figure out who built it or who owns it. All of this has bottom-line effects on how, by whom and when expenses are paid and revenues get realized and recognized. And then there is the added complication that these dimensions of hardware are often fairly dynamic, as assets can change ownership and/or physical location and hence tax treatment, insurance risk, etc.
Such hardware could be a pump, a valve, a compressor, a substation, a cell tower, a truck, or components within these assets. Over time, with new technologies and acquisitions coming about, the systems that plan for, install and maintain these assets become very departmentalized in scope and specialized in function. The application that designs an asset for department A or region B is not the one accounting for its value, which is not the one reading its operational status, which is not the one scheduling maintenance, which is not the one billing for any repairs or replacement. The same folks who said the Data Warehouse was the “Golden Copy” now say the “new ERP system” is the central source for everything. Practitioners know that this is either naiveté or maliciousness. And then there are manual adjustments….
Moreover, to truly squeeze value out of these assets as they are installed and upgraded, the massive amounts of data they generate in a myriad of formats and intervals need to be understood, moved, formatted, fixed, interpreted at the right time and stored for future use in a cost-sensitive, easy-to-access and contextually meaningful way.
I wish I could tell you one application does it all, but the unsurprising reality is that it takes a concoction of several. None or very few of the legacy applications supporting the asset life cycle will be retired, as they often house data in formats commensurate with the age of the assets they were built for. It makes little financial sense to shut down these systems in a big-bang approach; better to migrate region after region and process after process to the new system. After all, some of the assets have been in service for 50 or more years, and the institutional knowledge tied to them is becoming nearly as old. It is also probably easier to perform the often-required manual data fixes (hopefully only outliers) bit by bit, especially to accommodate imminent audits.
So what do you do in the meantime, until all the relevant data is in a single system, to get an enterprise-level way to fix your asset tower of Babel and leverage the data volume rather than treat it like an unwanted stepchild? Most companies operating asset-heavy, fixed-cost business models do not want a disruption but a steady tuning effect (squeezing the data orange) – something rather unsexy in this internet day and age. This is especially true in “older” industries where data is still considered a necessary evil, not an opportunity ready to be exploited. The fact is, though, that in order to improve the bottom line, we had better get going, even if it is with baby steps.
If you are aware of business models with similar difficulties in leveraging data, write to me. If you know of an annoying, peculiar or esoteric data “domain” that does not lend itself to being easily leveraged, share your thoughts. Next time, I will share some examples of how certain industries try to work in this environment, what they envision and how they go about getting there.
Most application owners know that as data volumes accumulate, application performance can take a major hit if the underlying infrastructure is not aligned to keep up with demand. The problem is that constantly adding hardware to manage data growth can get costly – stealing budgets away from needed innovation and modernization initiatives.
Join Julie Lockner as she reviews the Cox Communications case study on how they solved an application performance problem caused by too much data, using Informatica Data Archive with Smart Partitioning on the hardware they already had. Source: TechValidate. TVID: 3A9-97F-577
One major outcome of the Big Data debate, especially in the telco industry, is the sudden elevation of the focus on customer experience, with QoE (Quality of Experience) and CEM (Customer Experience Management). With new and emerging technologies such as Near Field Communication (NFC), Machine to Machine (M2M) and mobile social media apps hitting the news every day like a reality star’s socializing antics, we are all fascinated by how much organisations either know, can find out or can deduce about our lives: what we like and dislike, how much we may be worth to those organisations, what we already own, and even where we physically are or will be in the next few minutes. All the minutiae of our lives and personalities laid bare to be pawed over, analysed and used to control us and eventually sell us yet more ‘stuff’.
Remote Data Collection and Transformation – with Ultra Messaging Cache Option and B2B Data Transformation
Sometimes when I drive past an electronic tollway collection sensor, I wonder about the amount of data it must generate. I’m no expert on such technology, but at a minimum, the RFID sensor has to read the chip in your car and log the date and time plus your RFID info, and then a camera takes a picture to catch any potential violators. Now multiply that by the hundreds of thousands of cars that drive such roads every day, times the number of sensors they pass, and I’m quite sure this number exceeds several million messages per day.
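A quick back-of-envelope calculation, with purely assumed traffic figures, shows how fast that adds up:

```python
# Assumed figures for illustration only.
cars_per_day = 300_000       # vehicles on a busy tolled corridor
sensors_passed = 10          # gantries each vehicle passes on average
messages_per_read = 2        # one RFID read event plus one camera event

messages_per_day = cars_per_day * sensors_passed * messages_per_read
print(f"~{messages_per_day:,} messages per day")  # ~6,000,000
```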