Most application owners know that as data volumes accumulate, application performance can take a major hit if the underlying infrastructure is not aligned to keep up with demand. The problem is that constantly adding hardware to manage data growth can get costly – stealing budgets away from needed innovation and modernization initiatives.
Join Julie Lockner as she reviews the Cox Communications case study: how, with the hardware they already had, Cox solved an application performance problem caused by too much data by using Informatica Data Archive with Smart Partitioning. Source: TechValidate. TVID: 3A9-97F-577
Our Big Clean-up of Our Big Data
Informatica, the company for which I work, deals in big data challenges every day. It’s what we DO: help customers turn their data into actionable business insights. When I took the helm as VP of Global Talent Acquisition, I was surprised to learn that the data within the talent acquisition function was not up to the standards Informatica lives by. Clearly, talent acquisition was not seeing the huge competitive advantage that data could bring – at least not the way sales, marketing, and research were seeing it. That, to me, was a major problem, but also a terrific opportunity! This is the story of how Informatica Talent Acquisition became data-centric and used that centricity to our advantage to fix the problem.
Go to the Source
No matter how big or small your company, talent-related data comes from varied and diverse roles within the talent acquisition function. The role may be named Researcher, Sourcer, Talent Lead Generator, or even Recruiter. Names aside, the data comes from the first person to connect with a potential candidate. Usually that person, or in Informatica’s case, that team, is the one who finds the data and captures it. Because talent acquisition in the past was largely about making a single hire, our data was captured haphazardly and stored with, let’s say, less than best practices. In addition, we didn’t know big data was about to hit us square in the face with more social data points than yesteryear’s Talent Sourcer could believe. I went to our sourcing team as well as our research department to begin assessing how we were acquiring, storing, and accessing our data.
Data is at the heart of so many recruiting conversations today, but it’s not just about the data: access to the right data at the right time by the right person is what’s paramount to making good business or hiring decisions. This led me to Dave Mendoza, a Talent Acquisition Strategy consultant who had developed a process called Talent Mapping, which we applied to help us identify, retrieve, and categorize our talent data. From that point he was able to create our Talent Knowledge Library. This library allows us to store and access our data and, finally, to develop a talent data methodology aptly named Future-casting. This methodology defines a process wherein Informatica can use its talent acquisition data for competitive intelligence, workforce planning, and candidate nurturing.
The most valuable part of our transformation was the implementation of our Talent Knowledge Library. It was apparent that the weakest point of this new solution was not the capturing or categorizing of our data; it was that we had no central repository that would allow unstructured data to be housed, amended, and retrieved by multiple Talent Sourcers. To solve this we implemented a Candidate Relationship Management (CRM) application named Avature. This tool allowed us to build a talent library – a single-source repository of our global talent pools that can be accessed by every role within the talent acquisition organization. Having a centralized database has improved our hiring efficiency, decreasing the time and cost to fill requisitions.
Because Informatica is a global company, it doesn’t make sense for us to house all of our data in a proprietary system. While the new social sourcing platforms are fast and powerful, the data no longer belongs to the company once it is entered, and that didn’t work for us, especially given we had teams all over the world working with different tools. With a practical approach to data capture and retrieval, we now have a central databank of very specific competitive intelligence that can withstand time, because the tool captures social and mobile data and is thus built for future-proofing. Because the data is ours, we retain our competitive advantage, even during talent acquisition transition periods.
One truth became very clear as we took on this data-centric approach to talent acquisition: if you don’t set standards for processes and protocols around your data, you may as well use a bucket. No repository is of much use without accurate, usable data that everyone can access consistently. Being able to search the data according to company-wide standards was both obvious and mind-blowing. These four standards are what we put into place when creating our talent library:
1) Data must be usable and searchable,
2) Extraction and leverage of data must be easy,
3) Data can be migrated from multiple lead generation platforms into a “single source of truth”,
4) Data can be categorized, tagged and mapped to talent for ease of segmentation.
The goal of these standards is to match the data to each of our primary hiring verticals and to multiple social channels so that we can both attract and identify talent in a self-sustaining manner.
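To make the four standards concrete, here is a minimal sketch of what a tag-based talent library might look like. This is purely illustrative: the `CandidateRecord` and `TalentLibrary` names and fields are hypothetical, not Avature's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class CandidateRecord:
    """One entry in the talent library (hypothetical schema)."""
    name: str
    vertical: str                       # primary hiring vertical, e.g. "Engineering"
    source_platform: str                # lead-generation platform the record came from
    tags: set = field(default_factory=set)

class TalentLibrary:
    """Central repository reflecting the four standards: searchable data,
    easy extraction, records merged from many platforms, tag-based segmentation."""
    def __init__(self):
        self.records = []

    def add(self, record):
        # Standard 3: records from any source platform land in one repository
        self.records.append(record)

    def search(self, tag):
        # Standards 1 and 4: segment the talent pool by tag
        return [r for r in self.records if tag in r.tags]

lib = TalentLibrary()
lib.add(CandidateRecord("A. Engineer", "Engineering", "LinkedIn", {"java", "emea"}))
lib.add(CandidateRecord("B. Marketer", "Marketing", "Twitter", {"emea"}))
print([r.name for r in lib.search("emea")])  # both records match the "emea" tag
```

In practice a CRM provides this tagging and search out of the box; the point of the sketch is that every record, whatever platform it came from, lands in one repository and is segmented the same way.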
In today’s globalized world, people frequently change their physical address, their employer, and their email addresses, but they rarely change their Twitter handle or Facebook name. This is why ‘people’ data quickly goes stale and social data is the new commodity within the enterprise. People who use social networks leave a living, always-fresh data shadow, making it easy for us to capture their most relevant contact data. It sounds a bit like we’ve become online stalkers, but marketers and business development professionals have been doing it for years. And just as we move toward predictive modeling on these pieces of personal data, so too are our competitors for talent. By configuring our CRM system to accurately capture and search these social data points, our sourcing team has become more efficient and effective, reducing the duplicate entries that caused candidate fatigue in our recruiting process.
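The idea of keying records on a stable social handle can be sketched as follows. This is an assumption-laden illustration, not how our CRM actually works: records are plain dicts, and the `twitter` field stands in for whatever stable social identifier the system uses.

```python
def dedupe_by_handle(records):
    """Merge candidate records that share a stable social handle.

    Hypothetical sketch: each record is a dict whose 'twitter' key is
    treated as the stable identity, since handles change far less often
    than email or employer. Later records refresh earlier ones' fields.
    """
    merged = {}
    for rec in records:
        key = rec.get("twitter")
        if key is None:
            # No stable handle: keep the record as-is under a unique key
            merged[id(rec)] = dict(rec)
        elif key in merged:
            # Same person seen again: refresh any non-empty contact fields
            merged[key].update({k: v for k, v in rec.items() if v})
        else:
            merged[key] = dict(rec)
    return list(merged.values())

pool = [
    {"name": "J. Doe", "twitter": "@jdoe", "email": "j@oldjob.com"},
    {"name": "J. Doe", "twitter": "@jdoe", "email": "j@newjob.com"},
]
print(dedupe_by_handle(pool))  # one record, carrying the refreshed email
```

The design choice here is the key: an email address would merge nothing once the candidate changes jobs, whereas the social handle survives the move and lets the newer record refresh the stale one.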
I think Dave says it perfectly in his recent white paper “Future-casting: How the rise of Big Social Data API is set to Transform the Business of Recruiting”: “Future-casting has the ability to review the career progression of both internal employees and external candidates. This stems directly from the ability to track candidates more accurately via their social data. Now, more than ever before, corporations and the talent acquisition professionals within them can keep fresh data on every candidate in their system, with a few simple tweaks. This new philosophy of future-casting puts dynamic data into the hands of the organization, reducing dependency on job boards and even social platforms so they can create their own convergent model that combines all three.”
Results Will Come
At Informatica we saw results very quickly because we had an expert dedicated to addressing the challenges, and we were committed to making our data work for us. But even if you don’t have a global sourcing team or a full-time consultant, you can still start small. Talk to your CRM or ATS vendors about how you can tweak your tracking systems. Assess and map your current talent process. Begin using products that allow you to own your OWN data. Finally, set standards such as the ones I mentioned previously, and make sure everyone adheres to them.
This is original content published to ERE.net on May 8, 2013, and written by Brad Cook, Vice President, Global Talent Acquisition at Informatica.
In Ashwin Viswanath’s previous video blog, he spoke about why it is important to have a cloud integration solution with purpose-built integration applications. In this video, he delves deeper into the security aspects of cloud integration and how to rapidly provision integration environments for distributed business units, subsidiaries, and departments.
In Ashwin Viswanath’s previous blog post, SaaS Data Integration for SaaS Applications, he explained how SaaS applications are much more dynamic than on-premises business applications with new fields and objects added with just a few clicks. This same agility is required when it comes to integrating SaaS applications, which is why it is important to have a hybrid IT strategy for your data integration architecture. Informatica PowerCenter together with Informatica Cloud can help you get started with such a strategy.
There are many reasons why you can’t afford to miss Informatica World 2013, and below we have selected five to highlight. Let us know which is your top reason for joining us at Informatica World 2013 in the comments below:
Get in the “data integration” know. Hear about the latest solutions and product updates, including the Informatica Roadmap, and see live demos of our latest offerings before anyone else!
Gain face time 1.0. Informatica experts will be standing by in our “Hands-on Lab” offering free advice and consulting.
Make your ‘mark.’ 90% of past attendees said that what they learned at Informatica World had an immediate and positive impact on their job and their organization’s success.
Networking. Network with your peers. Informatica World has attendees from around the world.
Rick Smolan. Learn from the author of The Human Face of Big Data how big data makes a difference in your everyday life.
Next week we are giving away a copy of Rick Smolan’s The Human Face of Big Data on the Informatica World Facebook Page. Check back daily for your chance to win!
Have you registered yet? Only a few hours remain for Informatica World Early Bird pricing: $1,495, a savings of $400!
Ravi Shankar, vice president of Product Marketing for Master Data Management at Informatica, explains why retailers need to focus on mastering their customer data in this short clip, which goes into detail based on an article in Forbes: Why Retailers Need To Focus On Mastering Customer Data.
“When retailers can deepen their knowledge of the customer across all of today’s proliferating retail channels, they can strengthen customer loyalty, increase share of wallet, minimize operational costs and maximize profits. It sounds so simple – so why is it so hard?“