Mars vs. Venus? Aligning the CMO / CIO Planets

Research firm Gartner, Inc., sent shockwaves across the technology landscape when it forecast that CMOs will spend more on IT than CIOs by 2017[i]. The rationale? "We frequently hear our technology and service provider clients tell us they are dealing with business buyers more, and need to 'speak the language.' Gartner itself has fueled this inferno with assertions such as, 'By 2017 the CMO will spend more on IT than the CIO' (see 'Webinar: By 2017 the CMO Will Spend More on IT Than the CIO')."[ii] In the two years since Gartner first made that prediction, analysts and pundits have talked about a CIO/CMO battle for data supremacy — describing the two roles as "foes" inhabiting "separate worlds"[iii] that don't even speak the same language.

But when CIOs are from Mars and CMOs are from Venus, their companies can end up with disjointed technologies that don’t play well together. The result? Security flaws, no single version of “truth,” and regulatory violations that can damage the business.  The trick, then, is aligning the CIO and CMO planets.

Informatica’s CMO Marge Breya and CIO Eric Johnson show how they do it.

Q: There’s been a lot of talk lately about how CMOs are now the biggest users of data. That represents a shift in how CMOs and CIOs traditionally have worked together. How do you think the roles of the CMO and CIO need to mesh?

Eric: As I look across the lines of business, and evaluate the level of complexity, the volume of data and the systems we're supporting, marketing is now by far the most complex part of the business we support. The systems that they have, the data that they have, have grown exponentially over the last four or five years. Now more than ever, [CMOs and CIOs are] very much attached at the hip. We have to be working in conjunction with one another.

Marge: Just to add to that, I'd say over the last five years, we've been attached to things like CRM systems, or partner relationship systems. From a marketing standpoint, it has really been about management: How do you have visibility into what's happening with the business? But over the last couple of years it's become increasingly more important to focus on the "R" word — the relationship: How do you look at a customer name and understand how it relates to their past buying behavior? As a result, you need to understand how information lives from system to system, all across a time series, in order to make really great decisions. The "relate" word is probably most important, at least in my team right now, and it's not possible for me to relate data across the organization without having a great relationship with IT.

Q: So how often do you find yourselves talking together?

Eric: We talk to each other probably weekly, and I think our teams work together daily. There's constant collaboration and making sure that we're in sync. You hear a lot about the CIO/CMO relationship. I think it should be an easy relationship: there's so much going on technology-wise and data-wise that CMOs are becoming much more technically knowledgeable, CIOs are starting to understand more and more of what's going on in their business, and the line between them should be all about how you work together.

Marge: Of all the business partners in the company, Eric … helps us in marketing reimagine how marketing can be done. If the two of us can go back and forth, understand what's working and what's not working, and reimagine how we can be far more effective, or productive, or know new things — to me that's the measure of a healthy relationship between a CIO and a CMO. And luckily, we have that.

Q: It seems as if 2013 was the year of "big data." But a Gartner survey[iv] found that adoption is still at the early stages, with fewer than 8% of all respondents indicating their organization has deployed big data solutions. What do you think are the issues that are making it so difficult for companies?

Eric: The concept of big data is something companies want to get involved in. They want to understand how they can leverage this fast-growing volume of data from various sources. But the challenge is being able to understand what you’re looking for, and to know what kind of questions you have.

Marge: There's a big focus on big data, almost for the sake of it in some cases. People get confused about whether it's about the haystack, or the needle. Having a haystack for the heck of it isn't usually what's done. It's for a purpose. It's important to understand what part of that haystack is important for what part of your business. How up-to-date is it? How much can you trust the data? How much can you make real decisions from it? And frankly, who should have access to it? So much of the data we have today is sensitive, affected by privacy laws and other kinds of regulations. I think big data is appropriately a great term right now, but more importantly, it's not just about big data, it's about great data. How are you going to use it? And how is it going to affect your business process?

Eric: You could go down into a rat hole if you’re chasing something and you’re not really sure what you’re going to do with it.

Marge: On the other hand, you can explore years of behavior and maybe come up with a great predictive model for what a new buying signal scoring engine could look like.

Q: One promise of big data is the ability to pull in data from so many sources. That would suggest a real need for you two to work together to ensure the quality and the integrity of the data. How do you collaborate on those issues?

Eric: There’s definitely a lot of work that has to be done working with the CMO and the marketing organization: To sit down and understand where’s this data coming from, what’s it going to be used for, and making sure you have the people and processing components. Especially with the level of complexity we have, with all the data coming in from so many sources, making sure that we really map that out, understand the data and what it looks like and what some of the challenges could be. So it’s partnering very closely with marketing to understand those processes, understand what they want to do with the data, and then putting the people, the processes and the technology in place so you can trust the data and have a single source of truth.

Marge: You hit the nail on the head with "people, process and technology." Often, folks think of data quality or accuracy as being an IT problem. It's a business problem. Most people know their business; they know what their data should look like. They know what revenue shapes should look like. What's the norm for the business. If the business people aren't there from a governance standpoint, from a stewardship standpoint — literally asking, "Does this data make sense?" — then without that partnership, forget it.

Gartner does a nice job of describing the digital landscape that marketers are facing today in its infographic below. In order to use technology as a differentiator, organizations need to get the most value from their data. The relationships between these technologies are going to make the difference between the organizations that gain a competitive advantage from their operations and the laggards.

[Infographic: Gartner digital marketing map]


[i] Gartner Research, December 20, 2013, "Market Trends: The Rising Importance of the Business Buyer – Fact or Fiction?", Derry N. Finkeldey

[ii] Gartner Research, December 20, 2013, "Market Trends: The Rising Importance of the Business Buyer – Fact or Fiction?", Derry N. Finkeldey

[iii] Gartner blog, January 25, 2013, “CMOs: Are You Cheating on Your CIO?”, Jennifer Beck, Vice President & Gartner Fellow

[iv] Gartner Research, September 12, 2013, “Survey Analysis: Big Data Adoption in 2013 Shows Substance Behind the Hype,” Lisa Kart, Nick Heudecker, Frank Buytendijk


Big data? No. Actionable Data!


Guest Post by Dale Denham, CIO at Geiger

The CFO and CEO want “Big Data”. However, everyone needs actionable data, including the CFO and CEO.

The problem lies in the little data: all the data floating around your company, unorganized, including your duplicate customer records, your unmatched product numbers, and so on.

At Crestline, we have more data coming in today than ever before. We are utilizing this data in exciting ways to better serve our customers. Yet we still have room to improve in efficiency and data quality, and to better understand the story behind the data.

Recently we implemented Informatica Product Information Management (PIM), which has been a huge success for our ability to deliver new products quickly to our customers through our web site and our print catalog. In the first three months of 2014 using our new PIM, we've exceeded the total number of products added and updated through all of 2013! We'll be sharing our story at Informatica World on May 12 in Las Vegas, along with some great giveaways from our fantastic selection of promotional products.

Beyond sharing our Informatica PIM success story, I’m most excited about the data governance track as we, like most people, can do a better job with our overall data governance. Having a chance to engage with and hear success stories on data governance from expert practitioners will help deliver more value to our organization.

It will also be fun to hear the legendary Ray Kurzweil paint a mind-boggling portrait of the future of humanity, but the real value is in the breakout sessions and networking. Join me and hundreds of others as we descend on Las Vegas to get more from our data and use data to support our strategic initiatives. If you do come, be sure to join me at my session "Best Practices for Product Information Management in E-Commerce" to get some great tips as well as perhaps a very cool promotional product.

  1. To learn more about the conference and keynotes, click here.
  2. To register for Informatica World, click here.

Views expressed are those of the author and do not necessarily represent those of Geiger.


INFAgraphic: The Healthcare Organization of the Future will be Data-Driven



INFAgraphic: Transforming Healthcare by Putting Information to Work



Open Source, Next Generation Data Encoding

Today is an exciting day for technology in high performance electronic trading. By the time you read this, the CME Group, Real Logic Ltd., and Informatica will have announced a new open source initiative. I've been collaborating on this work for a few months, and I feel it is great technology. I hope you will agree.

Simple Binary Encoding (SBE) is an encoding for FIX that is being developed by the FIX protocol community as part of their High Performance Working Group. The goal is to produce a binary encoding representation suitable for low-latency financial trading. The CME Group, Real Logic, and Informatica have sponsored the development of an open source implementation of an early version of the SBE specification, undertaken by Martin Thompson (of Real Logic, formerly of LMAX) and myself, Todd Montgomery (of Informatica). The result is a very high performance encoding/decoding mechanism for data layout that is tailored not only to the demands of high performance, low-latency trading applications, but also has implications for all manner of serialization and marshaling, in use cases from Big Data analytics to device data capture.

Financial institutions, and other businesses, need to serialize data structures for transmission over networks as well as for storage. SBE is a developing standard for how to encode/decode FIX data structures in a binary format at high speed with low latency. The SBE project is most similar to Google Protocol Buffers. However, looks are quite deceiving. SBE is an order of magnitude faster and immensely more efficient for encoding and decoding. This focus on performance means application developers can turn their attention to the application logic instead of the details of serialization. There are a number of advantages to SBE beyond speed, although speed is of primary concern.
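To make the flyweight, schema-driven style concrete, here is a minimal Java sketch of the pattern: fields are written to and read from a buffer at fixed offsets dictated by a schema, so decoding is just positioned reads with no intermediate object graph. The message layout, class, and method names below are illustrative assumptions for this post, not the actual code emitted by the SBE generators.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hypothetical flyweight codec for a tiny FIX-like "new order" message.
// A real SBE generator derives offsets like these from an XML schema;
// here they are hard-coded purely for illustration.
final class NewOrderFlyweight {
    static final int PRICE_OFFSET    = 0;   // int64 price in ticks
    static final int QUANTITY_OFFSET = 8;   // int32 quantity
    static final int SIDE_OFFSET     = 12;  // byte: 1 = buy, 2 = sell

    private ByteBuffer buffer;
    private int offset;

    NewOrderFlyweight wrap(ByteBuffer buffer, int offset) {
        this.buffer = buffer;
        this.offset = offset;
        return this;
    }

    // Encoding: positioned writes straight into the buffer, no temporary objects.
    NewOrderFlyweight price(long ticks) { buffer.putLong(offset + PRICE_OFFSET, ticks); return this; }
    NewOrderFlyweight quantity(int qty) { buffer.putInt(offset + QUANTITY_OFFSET, qty); return this; }
    NewOrderFlyweight side(byte side)   { buffer.put(offset + SIDE_OFFSET, side); return this; }

    // Decoding: positioned reads, zero allocation.
    long price()   { return buffer.getLong(offset + PRICE_OFFSET); }
    int quantity() { return buffer.getInt(offset + QUANTITY_OFFSET); }
    byte side()    { return buffer.get(offset + SIDE_OFFSET); }
}

public class FlyweightDemo {
    public static void main(String[] args) {
        ByteBuffer buffer = ByteBuffer.allocateDirect(64).order(ByteOrder.LITTLE_ENDIAN);
        NewOrderFlyweight order = new NewOrderFlyweight();

        order.wrap(buffer, 0).price(101_250).quantity(500).side((byte) 1);          // encode
        System.out.printf("price=%d qty=%d side=%d%n",
                order.wrap(buffer, 0).price(), order.quantity(), order.side());     // decode
    }
}
```

The point of the sketch is that the decode path allocates nothing and touches only the bytes it needs; a generated codec layers schema versioning and bounds handling on top of the same pattern.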

  • SBE provides a strong typing mechanism in the form of schemas for data objects
  • SBE only generates the overhead of versioning if the schema needs to handle versioning and if so, only on decode
  • SBE uses an Intermediate Representation (IR) for decoupling schema specification, optimization, and code generation
  • SBE's use of IR will allow it to provide various data layout optimizations in the near future
  • SBE initially provides Java, C++98, and C# code generators with more on the way

What breakthrough has led to SBE being so fast?

It isn't new or a breakthrough. SBE has been designed and implemented with the concepts and tenets of Mechanical Sympathy. Most software is developed with abstractions that mask away the details of CPU architecture, disk access, OS concepts, etc. Not so for SBE. Martin and I designed it using everything we know about how CPUs, memory, compilers, managed runtimes, etc., work, making it very fast by working _with_ the hardware instead of against it.
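As a rough, assumption-laden illustration of that philosophy (a sketch, not SBE's actual implementation): the code below streams fixed-layout records through one preallocated direct buffer in the platform's native byte order, sequentially and with no per-message allocation, which is the kind of access pattern CPUs, caches, and JIT compilers reward.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class MechanicalSympathySketch {
    private static final int RECORD_SIZE = 16; // 8-byte timestamp + 8-byte price, fixed layout

    public static void main(String[] args) {
        // One direct buffer, allocated once, in the platform's native byte order:
        // no per-message allocation, no byte swapping, predictable sequential access.
        ByteBuffer buffer = ByteBuffer.allocateDirect(1_000 * RECORD_SIZE)
                                      .order(ByteOrder.nativeOrder());

        // Hot encode loop: absolute positioned writes at computed offsets.
        for (int i = 0; i < 1_000; i++) {
            int base = i * RECORD_SIZE;
            buffer.putLong(base, System.nanoTime()); // timestamp
            buffer.putLong(base + 8, 100_000 + i);   // price in ticks
        }

        // Hot decode loop: sequential reads the prefetcher can follow; no garbage created.
        long checksum = 0;
        for (int i = 0; i < 1_000; i++) {
            int base = i * RECORD_SIZE;
            checksum += buffer.getLong(base) ^ buffer.getLong(base + 8);
        }
        System.out.println("checksum=" + checksum);
    }
}
```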

Martin's blog will have a more detailed technical discussion of SBE sometime later. But I encourage you to look at it and try it out. The work is open to the public under the Apache License.

Find out more about the FIX/SBE specification and SBE on GitHub.

———————————————–

Todd Montgomery

Todd L. Montgomery is a Vice President of Architecture for Informatica and the chief designer and implementer of the 29West low latency messaging products. The Ultra Messaging product family (formerly known as LBM) has over 190 production deployments within electronic trading across many asset classes and pioneered the broker-less messaging paradigm. In the past, Todd has held architecture positions at TIBCO and Talarian as well as lecture positions at West Virginia University, contributed to the IETF, and performed research for NASA in various software fields. With a deep background in messaging systems, high performance systems, reliable multicast, network security, congestion control, and software assurance, Todd brings a unique perspective tempered by over 20 years of practical development experience.


INFAgraphic: Data Integration Software Is Revolutionizing Airline Travel



CIO to CIO: A New Community to Help IT Leaders Succeed in a Data-Centric World

IT is evolving like never before. Get the best up-to-date career insights and best practices to help you succeed in a data-centric world. Check out my exciting new Potential at Work Community for IT Leaders.

How to Improve Your Application Performance with the Hardware You Already Have

Most application owners know that as data volumes accumulate, application performance can take a major hit if the underlying infrastructure is not aligned to keep up with demand. The problem is that constantly adding hardware to manage data growth can get costly – stealing budgets away from needed innovation and modernization initiatives.

Join Julie Lockner as she reviews the Cox Communications case study on how they solved an application performance problem caused by too much data, with the hardware they already had, by using Informatica Data Archive with Smart Partitioning. Source: TechValidate. TVID: 3A9-97F-577


Our Big Clean-up of Our Big Data


Informatica, the company for which I work, deals in big data challenges every day. It's what we DO: help customers leverage their data into actionable business insights. When I took the helm as V.P. of Global Talent Acquisition, I was surprised to learn that the data within the talent acquisition function was not up to the standards Informatica lives by. Clearly, talent acquisition was not seeing the huge competitive advantage that data could bring – at least not the way sales, marketing and research were viewing it. And that, to me, seemed like a major problem, but also a terrific opportunity! This is the story of how Informatica Talent Acquisition became data-centric and used that centricity to our advantage to fix the problem.

Go to the Source

No matter how big or small your company, the data related to talent comes from varied and diverse roles within the talent acquisition function. The role may be named Researcher, Sourcer, Talent Lead Generator or even Recruiter. Putting the name aside, the data comes from the first person to connect with a potential candidate. Usually that person, or in Informatica's case, that team, is the one who finds the data and captures it. Because talent acquisition in the past was largely about making a single hire, our data was captured haphazardly and stored with… let's say, less than best practices. In addition, we didn't know big data was about to hit us square in the face with more social data points than yesteryear's Talent Sourcer could believe. I went to our sourcing team as well as our research department to begin assessing how we were acquiring, storing and accessing our data.

Get Help

Data is at the heart of so many recruiting conversations today, but it's not just about the data; it's about access to the right data at the right time by the right person, which is paramount to making good business and hiring decisions. This led me to Dave Mendoza, a Talent Acquisition Strategy consultant, who had developed a process called Talent Mapping, which we applied to help us identify, retrieve and categorize our talent data. From that point he was able to create our Talent Knowledge Library. This library allows us to store and access our data, and finally to develop a talent data methodology aptly named Future-casting. This methodology defines a process wherein Informatica can use its talent acquisition data for competitive intelligence, workforce planning and candidate nurturing.

Get Centralized

The most valuable part of our transformation process was the implementation of our Talent Knowledge Library. It was apparent that the weakest point with this new solution was not the capturing or categorizing of our data; it was that we had no central repository that would allow unstructured data to be housed, amended and retrieved by multiple Talent Sourcers. To solve this issue we implemented a Candidate Relationship Management (CRM) application named Avature. This tool allowed us to build a talent library – a single-source repository of our global talent pools, which could then be accessed by all the roles within the talent acquisition organization. Having a centralized database has improved our hiring efficiency, decreasing the time and cost to fill requisitions.

Take Ownership

Because Informatica is a global company, it doesn't make sense for us to house all of our data in someone else's proprietary system. While the new social sourcing platforms are fast and powerful, the data doesn't belong to the company once entered, and that didn't work for us, especially given we had teams all over the world working with different tools. With a practical approach to data capture and retrieval, we now have a central databank of very specific competitive intelligence that has the ability to withstand time, because the tool can capture social and mobile data and thus is built for future-proofing. Because the data is ours, we retain our competitive advantage, even during talent acquisition transition periods.

Set Standards

One truth became very clear as we took on this data-centric approach to talent acquisition: if you don't set standards for processes and protocols around your data, you may as well use a bucket, as no repository will be of much use without accurate and usable data that can be accessed consistently by everyone. Being able to search the data according to company-wide standards was both obvious and mind-blowing. These four standards are what we put into place when creating our talent library:

1) Data must be usable and searchable,

2) Extraction and leverage of data must be easy,

3) Data can be migrated from multiple lead generation platforms with a "single source of truth",

4) Data can be categorized, tagged and mapped to talent for ease of segmentation.

The goal of these standards is to match the data to each of our primary hiring verticals and to multiple social channels so that we can both attract and identify talent in a self-sustaining manner.

Embrace Social

In today's globalized world, people frequently change their physical address, their employer and their email addresses, but they rarely change their Twitter handle or Facebook name. This is why 'people' data quickly turns outdated and social data is the new commodity within the enterprise. People who use social networks are leaving a living, always-fresh data shadow, making it easy for us to capture their most relevant contact data. It sounds a bit like we've become online stalkers, but marketers and business development professionals have been doing it for years. And just as we move toward predictive modeling on these pieces of personal data, so too are our competitors for talent. By configuring our CRM systems to accurately capture and search these social data points, our sourcing team is more efficient and effective. It has also reduced duplicate entries, which had caused candidate fatigue in our recruiting processes.

I think Dave says it perfectly in his recent white paper “Future-casting: How the rise of Big Social Data API is set to Transform the Business of Recruiting”: “Future-casting has the ability to review the career progression of both internal employees and external candidates. This stems directly from the ability to track candidates more accurately via their social data. Now, more than ever before, corporations and the talent acquisition professionals within them can keep fresh data on every candidate in their system, with a few simple tweaks. This new philosophy of future-casting puts dynamic data into the hands of the organization, reducing dependency on job boards and even social platforms so they can create their own convergent model that combines all three.”

Results Will Come

At Informatica we saw results very quickly because we had an expert dedicated to addressing the challenges, and we were committed to making our data work for us. But if you don't have a global sourcing team or a full-time consultant, you can still begin at the top of this list. Talk to your CRM or ATS vendors about how you can tweak your tracking systems. Assess and map your current talent process. Begin using products that allow you to own your OWN data. Finally, set standards such as the ones I mentioned previously and make sure everyone adheres to them.

 This is original content published to ERE.net on May 8, 2013, and written by Brad Cook, Vice President, Global Talent Acquisition at Informatica.
