Tag Archives: Big Data
I’ll get to the secret in just a minute, but first an observation about the cost of IT. Forrester has been conducting a cost-of-IT study for many years, with the most recent results published in the 2013 IT Budget Planning Guide for CIOs. The report includes a chart of total IT spending as a percent of revenue by industry and company size. Cost as a percentage of revenue is a key performance indicator for IT efficiency as organizations increase in size. I first noticed a peculiarity in the data in the 2007 study and wondered whether it had changed over the years – it hasn’t. The observation is this: for many industries, the cost of IT as a percent of revenue increases as organizations get larger. What is going on here? Whatever happened to “economies of scale?” Instead we seem to have “diseconomies of scale!” (more…)
Recently, the Informatica Marketplace reached a major milestone: we exceeded 1,000 Blocks (Apps). Looking back to three years ago when we started with 70 Blocks from a handful of partners, it’s an amazing achievement to have reached this volume of solutions in such a short time. For me, it speaks to the tremendous value that the Marketplace brings not only to our customers who download more than 10,000 Blocks per month, but also to our partners who have found in the Marketplace a viable route to market and a great awareness and monetization vehicle for their solutions.
There has been a lot of discussion around the explosion of data and what it means to companies trying to leverage this extremely valuable resource. Informatica has a huge part to play in helping customers solve those problems, not only through the technologies we provide directly, but through the tremendous ecosystem that we have built through our partners. The Marketplace has grown to more than 165 unique partner companies, and we’re adding more every day. Blocks such as BI & Analytics using Social Media Data from Deloitte, and Interstage XWand – XBRL Processor from Fujitsu represent offerings from large, established software companies, while Blocks such as Skybot Enterprise Job Scheduler and Undraleu Code Review Tool from Coeurdata are solutions contributed by earlier-stage companies that have experienced significant success and growth. It has been a pleasure helping these companies grow and reach new customers through the Marketplace.
One of the most exciting things about reaching the 1K Block milestone is not just the number of companies that are on the Marketplace, but the number of solutions that have been contributed by our developer community. Blocks such as Autotype Excel Macro, Execute Workflow, and iExportNormalizer are all solutions that Informatica developers built because they help in their daily activities, and through the Marketplace they have found a way to share these valuable assets with the community. In fact, over half of our solutions are free to use, which is a ringing endorsement of the power of the community and a great way to try out any number of useful solutions at no risk. By leveraging enabling technologies such as Informatica’s Cloud Platform as a Service, developers can create and share solutions more quickly and easily than ever before.
Overall, it has been an exciting ride as the Marketplace has rocketed to 1,000 Blocks in under three years, and I look forward to what the next three years have in store!
Our Big Clean-up of Our Big Data
Informatica, the company for which I work, deals in big data challenges every day. It’s what we DO: help customers turn their data into actionable business insights. When I took the helm as V.P. Global Talent Acquisition, I was surprised to learn that the data within the talent acquisition function was not up to the standards Informatica lives by. Clearly, talent acquisition was not seeing the huge competitive advantage that data could bring – at least not the way sales, marketing and research were viewing it. And that, to me, seemed like a major problem, but also a terrific opportunity! This is the story of how Informatica Talent Acquisition became data-centric and used that centricity to our advantage to fix the problem.
Go to the Source
No matter how big or small your company, the data related to talent comes from varied and diverse roles within the talent acquisition function. The role may be named Researcher, Sourcer, Talent Lead Generator or even Recruiter. Putting the name aside, the data comes from the first person to connect with a potential candidate. Usually that person, or in Informatica’s case, that team, is the one who finds the data and captures it. Because talent acquisition in the past was largely about making a single hire, our data was captured haphazardly and stored with… let’s say, less than best practices. In addition, we didn’t know big data was about to hit us square in the face with more social data points than yesteryear’s Talent Sourcer could believe. I went to our sourcing team as well as our research department to begin assessing how we were acquiring, storing and accessing our data.
Data is at the heart of so many recruiting conversations today, but it’s not just about the data: it’s access to the right data at the right time by the right person that is paramount to making good business or hiring decisions. This led me to Dave Mendoza, a talent acquisition strategy consultant, who had developed a process called Talent Mapping, which we applied to help us identify, retrieve and categorize our talent data. From that point he was able to create our Talent Knowledge Library. This library allows us to store, access and finally develop a talent data methodology aptly named Future-casting. This methodology defines a process wherein Informatica can use its talent acquisition data for competitive intelligence, workforce planning and candidate nurturing.
The most valuable part of our transformation process was the implementation of our Talent Knowledge Library. It was apparent that the weakest point was not the capturing or categorizing of our data; it was that we had no central repository that would allow unstructured data to be housed, amended and retrieved by multiple Talent Sourcers. To solve this issue we implemented a Candidate Relationship Management (CRM) application named Avature. This tool allowed us to build a talent library – a single-source repository of our global talent pools, which could then be accessed by all the roles within the talent acquisition organization. Having a centralized database has improved our hiring efficiencies, such as decreasing the time and cost to fill requisitions.
Because Informatica is a global company, it doesn’t make sense for us to house all of our data in a proprietary system. While the new social sourcing platforms are fast and powerful, the data doesn’t belong to the company once entered, and that didn’t work for us, especially given that we had teams all over the world working with different tools. With a practical approach to data capture and retrieval, we now have a central databank of very specific competitive intelligence that can stand the test of time, because the tool captures social and mobile data and is thus built for future-proofing. Because the data is ours, we retain our competitive advantage, even during talent acquisition transition periods.
One truth became very clear as we took on this data-centric approach to talent acquisition: if you don’t set standards for processes and protocols around your data, you may as well use a bucket, because no repository is of much use without accurate, usable data that everyone can access consistently. Being able to search the data according to company-wide standards was both obvious and mind-blowing. These are the four standards we put into place when creating our talent library:
1) Data must be usable and searchable,
2) Extraction and leverage of data must be easy,
3) Data can be migrated from multiple lead generation platforms into a “single source of truth”,
4) Data can be categorized, tagged and mapped to talent for ease of segmentation.
The goal of these standards is to match the data to each of our primary hiring verticals and to multiple social channels so that we can both attract and identify talent in a self-sustaining manner.
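As a rough illustration of how these standards might look in practice, here is a minimal sketch of a tagged, searchable candidate record. All names, fields and values are hypothetical for illustration; this is not Informatica’s actual schema or tooling:

```python
from dataclasses import dataclass, field

# Hypothetical candidate record: searchable fields (standard 1),
# tags and a hiring-vertical mapping for segmentation (standard 4).
@dataclass
class CandidateRecord:
    name: str
    vertical: str                          # primary hiring vertical, e.g. "Engineering"
    tags: list = field(default_factory=list)
    social_handles: dict = field(default_factory=dict)

def search_by_tag(records, tag):
    """Standard 1 in action: the library is usable and searchable by tag."""
    return [r for r in records if tag in r.tags]

talent_library = [
    CandidateRecord("A. Candidate", "Engineering",
                    tags=["java", "big-data"],
                    social_handles={"twitter": "@a_candidate"}),
    CandidateRecord("B. Candidate", "Sales",
                    tags=["enterprise", "emea"]),
]

print([r.name for r in search_by_tag(talent_library, "big-data")])  # prints ['A. Candidate']
```

The point of the sketch is simply that once every record carries consistent tags and a vertical, segmentation becomes a query rather than a manual hunt.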
In today’s globalized world, people frequently change their physical address, their employer and their email addresses, but they rarely change their Twitter handle or Facebook name. This is why ‘people’ data quickly turns outdated and social data is the new commodity within the enterprise. People who use social networks leave a living, always-fresh data shadow, making it easy for us to capture their most relevant contact data. It sounds a bit like we’ve become on-line stalkers, but marketers and business development professionals have been doing it for years. And just as we move toward predictive modeling on these pieces of personal data, so too do our competitors for talent. By configuring our CRM systems to accurately capture and search these social data points, our sourcing team is more efficient and effective. It has also reduced duplicate entries, which had caused candidate fatigue in our recruiting processes.
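A minimal sketch of the deduplication idea described above, using a stable social handle as the merge key. The field names and merge rule here are illustrative assumptions, not Avature’s actual behavior:

```python
# Deduplicate candidate entries by stable social handles rather than
# volatile fields like email or employer.
def dedupe_by_handle(entries):
    seen = {}
    for entry in entries:
        # A Twitter handle rarely changes, so it makes a stable key;
        # fall back to email only when no handle is on file.
        key = entry.get("twitter") or entry.get("email")
        if key not in seen:
            seen[key] = dict(entry)
        else:
            # Merge: keep the freshest non-empty fields from later entries.
            seen[key].update({k: v for k, v in entry.items() if v})
    return list(seen.values())

entries = [
    {"name": "A. Candidate", "twitter": "@a_candidate", "email": "old@example.com"},
    {"name": "A. Candidate", "twitter": "@a_candidate", "email": "new@example.com"},
]
print(len(dedupe_by_handle(entries)))  # prints 1
```

Keying on the handle means the two records above collapse into a single entry carrying the newer email address, which is exactly the candidate-fatigue fix the paragraph describes.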
I think Dave says it perfectly in his recent white paper “Future-casting: How the rise of Big Social Data API is set to Transform the Business of Recruiting”: “Future-casting has the ability to review the career progression of both internal employees and external candidates. This stems directly from the ability to track candidates more accurately via their social data. Now, more than ever before, corporations and the talent acquisition professionals within them can keep fresh data on every candidate in their system, with a few simple tweaks. This new philosophy of future-casting puts dynamic data into the hands of the organization, reducing dependency on job boards and even social platforms so they can create their own convergent model that combines all three.”
Results Will Come
At Informatica we saw results very quickly because we had an expert dedicated to addressing the challenges, and we were committed to making our data work for us. But if you don’t have a global sourcing team or a full time consultant, you can still begin at the top of this list. Talk to your CRM or ATS vendors about how you can tweak your tracking systems. Assess and map your current talent process. Begin using products that allow you to own your OWN data. Finally, set standards such as the ones I mentioned previously and make sure everyone adheres to them.
This is original content published to ERE.net on May 8, 2013, and written by Brad Cook, Vice President, Global Talent Acquisition at Informatica.
Following up on the discussion I started on GovernYourData.com (thanks to all who provided great feedback), here’s my full proposal on this topic:
We all know about the “Garbage In/Garbage Out” reality that data quality and data governance practitioners have been fighting against for decades. If you don’t trust data when it’s initially captured, how can you trust it when it’s time to consume or analyze it? But I’m also looking at the tougher problem of data degradation. The data comes into your environment just fine, but any number of actions, events – or inactions – turns that “good” data “bad”.
So far I’ve been able to hypothesize eight root causes of data degradation. I’d really love your feedback on both the validity and completeness of these categories. I’ve used similar examples across a number of these to simplify. (more…)
In my previous blog I explored the importance of a firm understanding of commercial packaged applications on data quality success. In this final post, I will examine the benefits of having operational experience as a key enabler of effective data quality delivery. (more…)
Integration technologies have been around for 20 years (as long as Informatica has been in business) and have proliferated in corporate IT. We are now at an inflection point in the business needs and maturity of integration best practices which we can call Next Generation Data Integration (DI). If we’re going to talk about the next generation, then first we need to put a stake in the ground to describe the current, or prior generation. Furthermore, for it to be a “generational” change, it needs to be a significant step-function improvement in how the work is done and in the business value generated by data assets. Or as Jim Collins said in Built to Last: Successful Habits of Visionary Companies, we need a Big Hairy Audacious Goal. (more…)
This year marks the 20th anniversary for Informatica. Twenty years of solving the problem of getting data from point A to point B, improving its quality, establishing a single view and managing it over its life-cycle. Yet after 20 years of innovation and leadership in the data integration market, when one would think the problem had been solved, all data had been extracted, transformed, cleansed and managed, it actually hasn’t — companies still need data integration. Why? Data is complicated business. And with data increasingly becoming central to business survival, organizations are constantly looking for ways to unlock new sources of it, use it as an unforeseen source of insight and do it all with greater agility and at lower cost. (more…)
What is big data? Simply put, it’s data that is big – it’s your data when it gets big. And for most of you, that has already happened or soon will. Regardless of how you define big data, a more important question is “Are you ready for big data?” Without some careful consideration of your data architecture, when faced with big data challenges you may find yourself writing some big checks as you scramble to address the new demands on your business. Fortunately, not all is lost. As many Informatica customers have already learned, next generation data integration can help arm you to handle big data without ripping and replacing your existing data integration architecture. (more…)
Personally Identifiable Information is under attack like never before. Two prominent institutions were recently in the news after being attacked. What happened:
- A data breach at a major U.S. insurance company exposed over a million of its policyholders to identity fraud. The stolen data included Personally Identifiable Information such as names, Social Security numbers, driver’s license numbers and birth dates. In addition to Nationwide paying millions of dollars for identity fraud protection for policyholders, this breach is creating fears that class action lawsuits will follow. (more…)