Category Archives: Business Impact / Benefits
As reported by the Economic Times, “In the coming years, enormous volumes of machine-generated data from the Internet of Things (IoT) will emerge. If exploited properly, this data – often dubbed machine or sensor data, and often seen as the next evolution in Big Data – can fuel a wide range of data-driven business process improvements across numerous industries.”
We can all see this happening in our personal lives. Our thermostats are connected now, our cars have been for years, even my toothbrush has a Bluetooth connection with my phone. On the industrial side, devices have also been connected for years, throwing off megabytes of data per day that have typically been used for monitoring, with the data discarded as quickly as it appears.
So, what changed? With the advent of big data, cheap cloud, and on-premise storage, we now have the ability to store machine or sensor data spinning out of industrial machines, airliners, health diagnostic devices, etc., and leverage that data for new and valuable uses.
For example, consider the ability to determine the likelihood that a jet engine will fail, based upon the sensor data gathered and how that data compares with known patterns of failure. Instead of getting an engine failure light on the flight deck, the pilots can see that the engine has a 20 percent likelihood of failure, and get the engine serviced before it fails completely.
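The idea above can be sketched in a few lines. This is a hypothetical illustration, not a real predictive-maintenance model: the sensor names, readings, and tolerance are all invented, and a production system would use trained statistical models rather than a simple signature match.

```python
# Hypothetical sketch: score an engine's failure risk by comparing current
# sensor readings against a known failure signature. All sensor names,
# values, and the tolerance are illustrative, not real engine data.

def failure_likelihood(readings, failure_pattern, tolerance=0.15):
    """Return the fraction of signature sensors whose current readings
    fall within `tolerance` (relative) of the known failure pattern."""
    matches = sum(
        1 for sensor, expected in failure_pattern.items()
        if sensor in readings
        and abs(readings[sensor] - expected) <= tolerance * expected
    )
    return matches / len(failure_pattern)

known_failure = {"egt_c": 950.0, "vibration_mm_s": 12.0, "oil_pressure_psi": 28.0}
current = {"egt_c": 930.0, "vibration_mm_s": 11.5, "oil_pressure_psi": 55.0}

risk = failure_likelihood(current, known_failure)  # 2 of 3 sensors match
```

Here two of the three readings resemble the failure signature, so the engine would be flagged for service well before any warning light.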
The problem with all of this very cool stuff is that we need to once again rethink data integration. Indeed, if the data can’t get from the machine sensors to a persistent data store for analysis, then none of this has a chance of working.
That’s why those who are moving to IoT-based systems need to do two things. First, they must create a strategy for extracting data from devices, such as industrial robots or an Audi A8. Second, they need a strategy to take all of this disparate data that’s firing out of devices at megabytes per second, and put it where it needs to go, and in the right native structure (or in an unstructured data lake), so it can be leveraged in useful ways, and in real time.
The challenge is that machines and devices are not traditional IT systems. I’ve built connectors for industrial applications in my career. The fact is, you need to adapt to the way that the machines and devices produce data, and not the other way around. Data integration technology needs to adapt as well, making sure that it can deal with streaming and unstructured data, including many instances where the data needs to be processed in flight as it moves from the device, to the database.
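One way to picture "in flight" processing is a pipeline stage that parses, filters, and normalizes raw device records as they stream toward the data store, rather than after landing. This is an illustrative sketch only; the record format and field names are made up for the example.

```python
# Illustrative sketch of in-flight processing: raw CSV-style device records
# are validated and normalized as they stream by. Malformed or implausible
# readings are dropped before they ever reach the database.

def process_in_flight(raw_stream):
    """Yield cleaned records; silently drop malformed or out-of-range ones."""
    for line in raw_stream:
        try:
            device_id, metric, value = line.strip().split(",")
            value = float(value)
        except ValueError:
            continue  # malformed record: discard rather than persist
        if not (-50.0 <= value <= 500.0):
            continue  # implausible reading, likely a faulty sensor
        yield {"device": device_id, "metric": metric, "value": value}

raw = ["robot-7,temp_c,81.4", "robot-7,temp_c,9999", "garbage-line"]
cleaned = list(process_in_flight(raw))  # only the first record survives
```

A generator like this processes each record as it arrives, which is the essential property when data streams faster than it can be batch-cleaned later.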
This becomes a huge opportunity for data integration providers who understand the special needs of IoT, as well as the technology that those who build IoT-based systems can leverage. However, the larger value is for those businesses that learn how to leverage IoT to provide better services to their customers by offering insights that have previously been impossible. Be it jet engine reliability, the fuel efficiency of my car, or feedback to my physician from sensors on my body, this is game changing stuff. At the heart of its ability to succeed is the ability to move data from place-to-place.
Despite more than $30 billion in annual spending on Big Data, successful big data implementations elude most organizations. That’s the sobering assessment of a recent Capgemini study of 226 senior executives, which found that only 13 percent feel they have truly made any headway with their big data efforts.
The reasons for Big Data’s lackluster performance include the following:
- Data is in silos or legacy systems, scattered across the enterprise
- No convincing business case
- Ineffective alignment of Big Data and analytics teams across the organization
- Most data locked up in petrified, difficult-to-access legacy systems
- Lack of Big Data and analytics skills
Actually, there is nothing new about any of these issues – in fact, the perceived issues with Big Data initiatives so far map closely with the failed expectations of many other technology-driven initiatives. First, there’s the hype that tends to get way ahead of any actual well-functioning case studies. Second, there’s the notion that managers can simply take a solution of impressive magnitude and drop it on top of their organizations, expecting overnight delivery of profits and enhanced competitiveness.
Technology, and Big Data itself, is but a tool that supports the vision, well-designed plans and hard work of forward-looking organizations. Those managers seeking transformative effects need to look deep inside their organizations, at how deeply innovation is allowed to flourish, and in turn, how their employees are allowed to flourish. Think about it: if line employees suddenly have access to alternative ways of doing things, would they be allowed to run with it? If someone discovers through Big Data that customers are using a product differently than intended, do they have the latitude to promote that new use? Or do they have to go through chains of approval?
Big Data may be what everybody is after, but Big Culture is the ultimate key to success.
For its part, Capgemini provides some high-level recommendations for better baking in transformative values as part of Big Data initiatives, based on their observations of best-in-class enterprises:
The vision thing: “It all starts with vision,” says Capgemini’s Ron Tolido. “If the company executive leadership does not actively, demonstrably embrace the power of technology and data as the driver of change and future performance, nothing digitally convincing will happen. We have not even found one single exception to this rule. The CIO may live and breathe Big Data and there may even be a separate Chief Data Officer appointed – expect more of these soon – if they fail to commit their board of executives to data as the engine of success, there will be a dark void beyond the proof of concept.”
Establish a well-defined organizational structure: “Big Data initiatives are rarely, if ever, division-centric,” the Capgemini report states. “They often cut across various departments in an organization. Organizations that have clear organizational structures for managing rollout can minimize the problems of having to engage multiple stakeholders.”
Adopt a systematic implementation approach: Surprisingly, even the largest and most sophisticated organizations that do everything on process don’t necessarily approach Big Data this way, the report states. “Intuitively, it would seem that a systematic and structured approach should be the way to go in large-scale implementations. However, our survey shows that this philosophy and approach are rare. Seventy-four percent of organizations did not have well-defined criteria to identify, qualify and select Big Data use-cases. Sixty-seven percent of companies did not have clearly defined KPIs to assess initiatives. The lack of a systematic approach affects success rates.”
Adopt a “venture capitalist” approach to securing buy-in and funding: “The returns from investments in emerging digital technologies such as Big Data are often highly speculative, given the lack of historical benchmarks,” the Capgemini report points out. “Consequently, in many organizations, Big Data initiatives get stuck due to the lack of a clear and attributable business case.” To address this challenge, the report urges that Big Data leaders manage investments “by using a similar approach to venture capitalists. This involves making multiple small investments in a variety of proofs of concept, allowing rapid iteration, and then identifying PoCs that have potential and discarding those that do not.”
Leverage multiple channels to secure skills and capabilities: “The Big Data talent gap is something that organizations are increasingly coming face-to-face with,” the report states. “Closing this gap is a larger societal challenge. However, smart organizations realize that they need to adopt a multi-pronged strategy. They not only invest more on hiring and training, but also explore unconventional channels to source talent.” Capgemini advises reaching out to partner organizations for the skills needed to develop Big Data initiatives. These can be employee exchanges, or “setting up innovation labs in high-tech hubs such as Silicon Valley.” Startups may also be another source of Big Data talent.
Informatica’s Redshift connector is a state-of-the-art Bulk-Load type connector which allows users to perform all CRUD operations on Amazon Redshift. It makes use of AWS best practices to load data at high throughput in a safe and secure manner and is available on Informatica Cloud and PowerCenter.
Today we are excited to announce the support of Amazon’s newly launched custom JDBC and ODBC drivers for Redshift. Both the drivers are certified for Linux and Windows environments.
Informatica’s Redshift connector will package the JDBC 4.1 driver, which further enhances our metadata fetch capabilities for tables and views in Redshift and improves our overall design-time responsiveness by over 25%. It also allows us to query multiple tables/views and retrieve the result set using primary and foreign key relationships.
Amazon’s ODBC driver enhances our full Push Down Optimization capabilities on Redshift. Key differentiating factors include support for the SYSDATE variable and for functions such as ADD_TO_DATE(), ASCII(), CONCAT(), LENGTH(), TO_DATE() and VARIANCE(), which weren’t possible before.
Amazon’s ODBC driver is not pre-packaged but can be directly downloaded from Amazon’s S3 store.
Once installed, the user can change the default ODBC System DSN in ODBC Data Source Administrator.
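Once a System DSN is in place, any ODBC client can reach the cluster through it. As a hypothetical sketch (the DSN name and credentials below are placeholders, not values from the product), a DSN-based ODBC connection string looks like this:

```python
# Hypothetical sketch: building a DSN-based ODBC connection string for a
# Redshift cluster. "RedshiftDSN", "analyst", and "secret" are placeholders.

def redshift_conn_str(dsn, user, password):
    """Assemble the standard ODBC key-value connection string."""
    return f"DSN={dsn};UID={user};PWD={password}"

conn_str = redshift_conn_str("RedshiftDSN", "analyst", "secret")

# With the driver installed and pyodbc available, a client could then
# connect with:
#   import pyodbc
#   conn = pyodbc.connect(conn_str)
```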
The original article can be found at scmagazine.com.
On Jan. 13 the White House announced President Barack Obama’s proposal for new data privacy legislation, the Personal Data Notification and Protection Act. Many states have laws today that require corporations and government agencies to notify consumers in the event of a breach – but it is not enough. This new proposal aims to improve cybersecurity standards nationwide with the following tactics:
Enable cyber-security information sharing between private and public sectors.
Government agencies and corporations with a vested interest in protecting our information assets need a streamlined way to communicate and share threat information. This component of the proposed legislation incents organizations that participate in knowledge-sharing with targeted liability protection, as long as they are responsible in how they share, manage and retain private data.
Modernize the tools law enforcement has to combat cybercrime.
Existing laws, such as the Computer Fraud and Abuse Act, need to be updated to incorporate the latest cyber-crime classifications while giving prosecutors the ability to target insiders with privileged access to sensitive and private data. The proposal also specifically calls out pursuing prosecution of those who sell personal data nationally and internationally.
Standardize breach notification policies nationwide.
Many states have some sort of policy that requires notification of customers that their data has been compromised. Three leading examples include California’s SB 1386, Florida’s Information Protection Act (FIPA) and Massachusetts’ Standards for the Protection of Personal Information of Residents of the Commonwealth. New Mexico, Alabama and South Dakota have no data breach protection legislation. Enforcing standardization and simplifying the requirement for companies to notify customers and employees when a breach occurs will ensure consistent protection no matter where you live or transact.
Invest in increasing cyber-security skill sets.
For a number of years, security professionals have reported an ever-increasing skills gap in the cybersecurity profession. In fact, in a recent Ponemon Institute report, 57 percent of respondents said a data breach incident could have been avoided if the organization had more skilled personnel with data security responsibilities. Increasingly, colleges and universities are adding cybersecurity curriculum and degrees to meet the demand. In support of this need, the proposed legislation mentions that the Department of Energy will provide $25 million in educational grants to Historically Black Colleges and Universities (HBCU) and two national labs to support a cybersecurity education consortium.
This proposal is clearly comprehensive, but it also raises the critical question: How can organizations prepare themselves for this privacy legislation?
The International Association of Privacy Professionals conducted a study of Federal Trade Commission (FTC) enforcement actions. From the report, organizations can infer best practices implied by FTC enforcement and ensure these are covered by their organization’s security architecture, policies and practices:
- Perform assessments to identify reasonably foreseeable risks to the security, integrity, and confidentiality of personal information collected and stored on the network, online or in paper files.
- Adopt limited access policies to curb unnecessary security risks and minimize the number and type of network access points that an information security team must monitor for potential violations.
- Limit employee access to (and copying of) personal information, based on the employee’s role.
- Implement and monitor compliance with policies and procedures for rendering information unreadable or otherwise secure in the course of disposal. Securely disposed information must not be practicably readable or reconstructable.
- Restrict third party access to personal information based on business need, for example, by restricting access based on IP address, granting temporary access privileges, or similar procedures.
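The last practice above, restricting third-party access by IP address, can be sketched with Python's standard `ipaddress` module. The networks and addresses below are placeholders for illustration.

```python
# Illustrative sketch: allow a third party's request only if its source IP
# falls inside an approved network range. Addresses here are placeholders.
import ipaddress

def ip_allowed(source_ip, allowed_networks):
    """Return True if source_ip belongs to any approved CIDR block."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in ipaddress.ip_network(net) for net in allowed_networks)

partner_networks = ["10.0.0.0/16"]  # hypothetical approved partner range
print(ip_allowed("10.0.1.5", partner_networks))  # inside the range
print(ip_allowed("8.8.8.8", partner_networks))   # outside the range
```

In practice this check would sit at a firewall or API gateway, alongside temporary access grants and audit logging, rather than in application code.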
The Personal Data Notification and Protection Act fills a void at the national level; most states have privacy laws, with California pioneering the movement with SB 1386. However, enforcement at the state AG level has been uneven at best and absent at worst.
In preparing for this national legislation, organizations need to heed the policies derived from the FTC’s enforcement practices. They can also track the progress of this legislation and look for agencies such as the National Institute of Standards and Technology to issue guidance. Furthermore, organizations can encourage employees to take advantage of cybersecurity internship programs at nearby colleges and universities to avoid critical skills shortages.
With online security a clear priority for President Obama’s administration, it’s essential for organizations and consumers to understand upcoming legislation and learn the benefits/risks of sharing data. We’re looking forward to celebrating safeguarding data and enabling trust on Data Privacy Day, held annually on January 28, and hope that these tips will make 2015 your safest year yet.
Lately I have been thinking a lot about what is real and what is just marketing fluff with the Internet of Things (IoT). From all the stories written and the people I talk to, it seems I am not alone. One day there is news of what is, at best, a communications company receiving more than $100M in funding, and the next there is what amounts to a re-skinned mobile app claiming to be the real IoT.
This is the first part in a series of posts where I am going to define a framework for identifying real IoT solutions and the value that they provide. In addition I will provide actual examples of companies and solutions that fit this solution definition framework.
My main issue with the entire IoT universe is that a lot of the focus is on things that do not exist, or that have been around a long time and have just been re-branded. Neither does justice to the genuinely interesting concept behind IoT: using distributed data and events to deliver totally new or dramatically better solutions (think 10x or more) compared to what exists today. We are talking revolutionary, not evolutionary.
From my point of view real IoT solutions need to address one or more of the following solution areas and I will be using these and additional criteria to build out the framework.
- Personal productivity
- Business productivity
- Business critical
- Life critical
Have another point of view? Feel free to share. My next post will focus on the segment of personal productivity.
Strata 2015 – Making Data Work for Everyone with Cloud Integration, Cloud Data Management and Cloud Machine Learning
Are you ready to answer “Yes” to the questions:
a) “Are you Cloud Ready?”
b) “Are you Machine Learning Ready?”
I meet with hundreds of Informatica Cloud customers and prospects every year. While they are investing in Cloud, and seeing the benefits, they also know that there is more innovation out there. They’re asking me, what’s next for Cloud? And specifically, what’s next for Informatica in regards to Cloud Data Integration and Cloud Data Management? I’ll share more about my response throughout this blog post.
The spotlight will be on Big Data and Cloud at the Strata + Hadoop World conference taking place in Silicon Valley from February 17-20 with the theme “Make Data Work”. I want to focus this blog post on two topics related to making data work and business insights:
- How existing cloud technologies, innovations and partnerships can help you get ready for the new era in cloud analytics.
- How you can make data work in new and advanced ways for every user in your company.
Today, Informatica is announcing the availability of its Cloud Integration Secure Agent on Microsoft Azure and Linux Virtual Machines as well as an Informatica Cloud Connector for Microsoft Azure Storage. Users of Azure data services such as Azure HDInsight, Azure Machine Learning and Azure Data Factory can make their data work with access to the broadest set of data sources including on-premises applications, databases, cloud applications and social data. Read more from Microsoft about their news at Strata, including their relationship with Informatica, here.
“Informatica, a leader in data integration, provides a key solution with its Cloud Integration Secure Agent on Azure,” said Joseph Sirosh, Corporate Vice President, Machine Learning, Microsoft. “Today’s companies are looking to gain a competitive advantage by deriving key business insights from their largest and most complex data sets. With this collaboration, Microsoft Azure and Informatica Cloud provide a comprehensive portfolio of data services that deliver a broad set of advanced cloud analytics use cases for businesses in every industry.”
Even more exciting is how quickly any user can deploy a broad spectrum of data services for cloud analytics projects. The fully-managed cloud service for building predictive analytics solutions from Azure and the wizard-based, self-service cloud integration and data management user experience of Informatica Cloud help overcome the challenges most users have in making their data work effectively and efficiently for analytics use cases.
The new solution enables companies to bring in data from multiple sources for use in Azure data services including Azure HDInsight, Azure Machine Learning, Azure Data Factory and others – for advanced analytics.
The broad availability of Azure data services, and Azure Machine Learning in particular, is a game changer for startups and large enterprises. Startups can now access cloud-based advanced analytics with minimal cost and complexity and large businesses can use scalable cloud analytics and machine learning models to generate faster and more accurate insights from their Big Data sources.
Success in using machine learning requires not only great analytics models, but also an end-to-end cloud integration and data management capability that brings in a wide breadth of data sources, ensures that data quality and data views match the requirements for machine learning modeling, and an ease of use that facilitates speed of iteration while providing high-performance and scalable data processing.
For example, the Informatica Cloud solution on Azure is designed to deliver on these critical requirements in a complementary approach and support advanced analytics and machine learning use cases that provide customers with key business insights from their largest and most complex data sets.
Using the Informatica Cloud Connector for Microsoft Azure Storage with Informatica Cloud Data Integration enables optimized read-write capabilities for data blobs in Azure Storage. Customers can use Azure Storage objects as sources, lookups, and targets in data synchronization tasks and advanced mapping configuration tasks for efficient data management using Informatica’s industry-leading cloud integration solution.
As Informatica fulfills the promise of “making great data ready to use” to our 5,500 customers globally, we continue to form strategic partnerships and develop next-generation solutions to stay one step ahead of the market with our Cloud offerings.
My goal in 2015 is to help each of our customers say that they are Cloud Ready! And collaborating with solutions such as Azure ensures that our joint customers are also Machine Learning Ready!
To learn more, try our free Informatica Cloud trial for Microsoft Azure data services.
How do you know if you have found ‘true love’?
Biologists and psychologists tell us that when we are struck by cupid’s arrow, our body is reacting to a set of chemicals that are released in the brain that evoke emotions and feelings of lust, attraction and attachment. When those chemicals are released, our bodies respond. Our hearts race, blood pumps through our veins, faces flush, body temperatures rise. Some say it feels like electricity is conducting all over the skin. It releases a flood of emotions that may cloud our judgment and may even cause us to make a choice considered unreasonable to others. Sound familiar?
But what causes our brains to react to one person and not another? Are we predisposed to how certain people look or smell? Do our genes play a role in determining an affinity toward a body type or shape?
Pheromone research has shown how sensors in our nose can smell whether or not someone’s immune system complements our own based on the scent of urine and sweat. Meaning, if someone has a similar immune deficiency, that individual won’t smell good to us. We are more likely to prefer the smell of someone who has an immune system that is different. Is our genetic code programming our instincts to preselect who we should mate with so our offspring has a higher chance of surviving?
It is probably not surprising that most men are attracted to women with symmetrical faces and hourglass figures. Genetic research hints that men’s predispositions are also based on a genetic code. There is a correlation between asymmetric facial characteristics and genetic disorders as well as between waist to hip ratios and fertility. Depending on where you are in your stage in life, these characteristics could have a weighting factor in how your brain responds to the smell of the perfect pheromone and how someone appears. And, some argue it is all influenced by body language, voice tone and actual words used in dialogue.
Psychologists report it takes only two to four minutes to decide if you are falling in love with someone. Even if you dismiss some or accept all of the possibilities I am presenting, experiencing love is impacted by a variety and intensity of senses, interpretations and emotions combined together in a short period of time. If you are a data nerd like me, the variety, volume and velocity of ‘signals’ begins to sound like a Big Data marketing pitch. This really is an application of predictive analytics using different data types, large volumes of data and real-time decision making algorithms. But, I’m actually more interested in how affective computing, wearable devices and analytics could help determine whether or not what you feel is actually ‘true love’ or just a bad case of indigestion.
Affective computing, according to researcher Rosalind Picard, gives a computer the ability to recognize and express emotions, develop that ability and enable it to regulate and utilize emotions. When applied to wearable devices that can listen to how you talk, measure blood pressure, detect changes in heart and respiration rate and even measure electro-dermal responses, is it possible that technology could sense when your body is responding to the chemicals of love?
What about mood rings, you may ask? Mood rings, the original affective wearable device that grew popular in the 1970s, changed color based on your mood. Unfortunately, mood rings only change based on body temperature. Through data collection and research, researchers have shown that physiology patterns cannot be determined by body temperature alone. In order to truly differentiate an emotion such as, let’s say, ‘true love,’ you need to be able to collect multiple physiological signals and detect a pattern using multivariate pattern recognition algorithms. And, if you only have 2-4 minutes, it pretty much needs to calculate chances of ‘true love’ in real-time to prevent making a life-altering mistake.
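The mood-ring limitation can be illustrated with a toy rule: temperature alone cannot separate states that a combination of signals can. This is a playful sketch; the thresholds and signal names are invented, and a real system would use a trained multivariate model.

```python
# Playful sketch of multi-signal vs. single-signal classification.
# All thresholds are invented for illustration, not physiological fact.

def classify_state(temp_c, heart_rate_bpm, skin_conductance_us):
    """Toy rule combining three signals; a mood ring sees only temp_c."""
    aroused = heart_rate_bpm > 95 and skin_conductance_us > 8.0
    warm = temp_c > 37.0
    if aroused and warm:
        return "possible true love (or a brisk jog)"
    if warm:
        return "just warm"  # all a mood ring could ever tell you
    return "calm"

state = classify_state(37.4, 102, 9.5)
```

Note that the first two cases are indistinguishable by temperature alone, which is exactly why a single-sensor wearable cannot recognize emotion.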
The evolution of wearables technology has reached medical grade, allowing parents to detect when their children are about to have an epileptic seizure or are experiencing acute levels of stress. When tuned to a love-seeker’s cues, is it possible that this same technology could send an audio or visual signal to your smart phone alerting you as to whether or not this person is a ‘true love’ candidate? Or glow red if you are in the proximity of someone who is experiencing similar physiological changes? Maybe this is the next application for match-making companies such as eHarmony or Match.com?
The reality is this. Assuming that the data is clean and accurate, safe from violating any data privacy concerns and truly connected to your physiological signals, wearable device technology that could detect close proximity of ‘true love’ is probably five years out. It is more likely to show up in a popular science fiction film than at an Apple store in the near term. But, when it does, think about how the signal on your smart phone device tells you the proximity of a potential candidate, where a local flower shop is, integrated with facial recognition and Facebook photos and ‘status’ (assuming it is true), with an iTunes recommendation of ‘Love Is In The Air’ by John Paul Young, ‘True Love’ is only 2-4 minutes away.
 R. Picard. Affective Computing. Pages 227-239, MIT Press, 2000
 Cacioppo and Tassinary (1990)
Valentine’s Day is such a strange holiday. It always seems to bring up more questions than answers. And the internet always seems to have a quiz to find out the answer! There’s the “Does he have a crush on you too – 10 simple ways to find out” quiz. There’s the “What special gift should I get her this Valentine’s Day?” quiz. And the ever popular “Why am I still single on Valentine’s Day?” quiz.
Well Marketers, it’s your lucky Valentine’s Day! We have a quiz for you too! It’s about your relationship with data. Where do you stand? Are you ready to take the next step?
Question 1: Do you connect – I mean, really connect – with your data?
□ (A) Not really. We just can’t seem to get it together and really connect.
□ (B) Sometimes. We connect on some levels, but there are big gaps.
□ (C) Most of the time. We usually connect, but we miss out on some things.
□ (D) We are a perfect match! We connect about everything, no matter where, no matter when.
Translation: Data ready marketers have access to the best possible data, no matter what form it is in, no matter what system it is in. They are able to make decisions based on everything the entire organization “knows” about their customer/partner/product – with a complete 360-degree view. And they are also able to connect to and integrate with data outside the bounds of their organization to achieve the sought-after 720-degree view. They can integrate and react to social media comments, trends, and feedback – in real time – and match it with an existing record whenever possible. And they can quickly and easily bring together any third party data sources they may need.
Question 2: How good looking & clean is your data?
□ (A) Yikes, not very. But it’s what’s on the inside that counts right?
□ (B) It’s ok. We’ve both let ourselves go a bit.
□ (C) It’s pretty cute. Not supermodel hot, but definitely girl or boy next door cute.
□ (D) My data is HOT! It’s perfect in every way!
Translation: Marketers need data that is reliable and clean. According to a recent Experian study, American companies believe that 25% of their data is inaccurate, the rest of the world isn’t much more confident. 90% of respondents said they suffer from common data errors, and 78% have problems with the quality of the data they gather from disparate channels. Making marketing decisions based upon data that is inaccurate leads to poor decisions. And what’s worse, many marketers have no idea how good or bad their data is, so they have no idea what impact it is having on their marketing programs and analysis. The data ready marketer understands this and has a top tier data quality solution in place to make sure their data is in the best shape possible.
Question 3: Do you feel safe when you’re with your data?
□ (A) No, my data is pretty scary. 911 is on speed dial.
□ (B) I’m not sure actually. I think so?
□ (C) My data is mostly safe, but it’s got a little “bad boy” or “bad girl” streak.
□ (D) I protect my data, and it protects me back. We keep each other safe and secure.
Translation: Marketers need to be able to trust the quality of their data, but they also need to trust the security of their data. Is it protected or is it susceptible to theft and nefarious attacks like the ones that have been all over the news lately? Nothing keeps a CMO and their PR team up at night like worrying they are going to be the next brand on the cover of a magazine for losing millions of personal customer records. But beyond a high profile data breach, marketers need to be concerned over data privacy. Are you treating customer data in the way that is expected and demanded? Are you using protected data in your marketing practices that you really shouldn’t be? Are you marketing to people on excluded lists?
Question 4: Is your data adventurous and well-traveled, or is it more of a “home-body”?
□ (A) My data is all over the place and it’s impossible to find.
□ (B) My data is all in one place. I know we’re missing out on fun and exciting options, but it’s just easier this way.
□ (C) My data is in a few places and I keep fairly good tabs on it. We can find each other when we need to, but it takes some effort.
□ (D) My data is everywhere, but I have complete faith that I can get ahold of any source I might need, when and where I need it.
Translation: Marketing data is everywhere. Your marketing data warehouse, your CRM system, your marketing automation system. It’s throughout your organization in finance, customer support, and sales systems. It’s in third party systems like social media and data aggregators. That means it’s in the cloud, it’s on premise, and everywhere in between. Marketers need to be able to get to and integrate data no matter where it “lives”.
Question 5: Does your data take forever to get ready when it’s time to go do something together?
□ (A) It takes forever to prepare my data for each new outing. It’s definitely not “ready to go”.
□ (B) My data takes its time to get ready, but it’s worth the wait… usually!
□ (C) My data is fairly quick to get ready, but it does take a little time and effort.
□ (D) My data is always ready to go, whenever we need to go somewhere or do something.
Translation: One of the reasons many marketers end up in marketing is because it is fast paced and every day is different. Nothing is the same from day-to-day, so you need to be ready to act at a moment’s notice, and change course on a dime. Data ready marketers have a foundation of great data that they can point at any given problem, at any given time, without a lot of work to prepare it. If it is taking you weeks or even days to pull data together to analyze something new or test out a new hunch, it’s too late – your competitors have already done it!
Question 6: Can you believe the stories your data is telling you?
□ (A) My data is wrong a lot. It stretches the truth, and I cannot rely on it.
□ (B) I really don’t know. I question these stories – dare I say excuses – but haven’t been able to prove it one way or the other.
□ (C) I believe what my data says most of the time. It rarely lets me down.
□ (D) My data is very trustworthy. I believe it implicitly because we’ve earned each other’s trust.
Translation: If your data is dirty, inaccurate, and/or incomplete, it is essentially “lying” to you. And if you cannot get to all of the data sources you need, your data is telling you “white lies”! All of the work you’re putting into analysis and optimization is based on questionable data, and is giving you questionable results. Data ready marketers understand this and ensure their data is clean, safe, and connected at all times.
Question 7: Does your data help you around the house with your daily chores?
□ (A) My data just sits around on the couch watching TV.
□ (B) When I nag, my data will help out occasionally.
□ (C) My data is pretty good about helping out. It doesn’t take initiative, but it helps out whenever I ask.
□ (D) My data is amazing. It helps out whenever it can, however it can, even without being asked.
Translation: Your marketing data can do so much. It should enable you to be “customer ready” – helping you to understand everything there is to know about your customers so you can design amazing personalized campaigns that speak directly to them. It should enable you to be “decision ready” – powering your analytics capabilities with great data so you can make great decisions and optimize your processes. But it should also enable you to be “showcase ready” – giving you the proof points to demonstrate marketing’s actual impact on the bottom line.
Now for the fun part… It’s time to rate your data relationship status
If you answered mostly (A): You have a rocky relationship with your data. You may need some data counseling!
If you answered mostly (B): It’s time to decide if you want this data relationship to work. There’s hope, but you’ve got some work to do.
If you answered mostly (C): You and your data are at the beginning of a beautiful love affair. Keep working at it because you’re getting close!
If you answered mostly (D): Congratulations, you have a strong data marriage that is based on clean, safe, and connected data. You are making great business decisions because you are a data ready marketer!
Do You Love Your Data?
No matter what your data relationship status, we’d love to hear from you. Please take our survey about your use of data and technology. The results are coming out soon so don’t miss your chance to be a part. https://www.surveymonkey.com/s/DataMktg
Also, follow me on Twitter – The Data Ready Marketer – for some of the latest and greatest news and insights on the world of data ready marketing. And stay tuned, because we have several new Data Ready Marketing pieces coming out soon – infographics, eBooks, SlideShares, and more!
This blog post was originally featured on Business.com here: Lovenomics: The Price of Love This Valentine’s Day.
After the Blue Cross sales that dominate January, Valentine’s Day offers welcome relief to the high street. It marks the end of the Christmas sales period and is the first of the year’s seasonal hooks, providing retailers with an opportunity to upsell. According to the National Retail Federation’s Valentine’s Day Consumer Spending Survey, American consumers plan to spend a total of $4.8 billion on jewelry and a survey high of nearly $2 billion on clothing this year. However, to successfully capture customers, retailers need to develop an omni-channel strategy designed to sell the right product.
Target the indecisive
The majority of Valentine’s Day shoppers will be undecided when they begin their purchasing journey. Based on this assumption, a targeted sales approach at the point of interest (POI) and point of sale (POS) will be increasingly important. Not only do retailers need to track and understand the purchasing decisions of every customer as they move between channels, but they also need a real-time view of the product lines, pricing, and content that the competition is using. Once armed with this information, retailers can concentrate on delivering personalized ads or timely product placements that drive consumers to the checkout as they move across different channels.
Related Article: 11 Cheeky Business Valentine’s Day Cards for the BFF In Your Office
Start with search
Consumers will start their shopping journey with a search engine and will rarely scroll past the first page of results, so brands need to be prepared by turning Valentine’s Day product lines into searchable content. To capture a greater share of online traffic, retailers should concentrate on making relevant products easy to find: managing meta-information, optimizing media assets with the keywords consumers are actually using, deploying rich text, and automatically submitting products to search engines.
Next generation loyalty
Retailers and restaurants can now integrate loyalty schemes into specialized smartphone apps, or integrate customer communications to automatically deliver personalized ads (e.g., offers of last-minute gifts for those who forget). However, to ensure success, brands need to know as much about their customers as consumers know about their products. By monitoring customers’ behavior, the information they are looking at, and the channels they are using to interact with brands, loyalty programs can deliver timely special offers or information at the right moment.
Valentine’s Day represents an opportunity to reinvent the in-store experience. By introducing digital signage for special product promotions, retailers can showcase a wide range of eclectic merchandise to showrooming consumers. One way to do this is to target smartphone users (who have allowed geo-located ads on their phones) with a personalized text message when they enter the store. Use this message to direct them to the most relevant areas for Valentine’s Day gifts, or present them with a customized offer based on their previous buying history.
Related Article: Small Business Marketing Tips for Valentine’s Day
Supermarkets have become established as the one-stop shop for lovers in a rush. Last year, Tesco, a British multinational grocery and general merchandise retailer, revealed that 85 percent of all Valentine’s Day bouquets were bought on the day itself, with three-quarters of all Valentine’s Day chocolates sold on February 14.
To tap into the last-minute attitude of panicked couples searching for a gift, retailers should have a dedicated Valentine’s Day section online and provide timely offers that come with the promise of delivery in time for Valentine’s Day. For example, BCBGMAXAZRIA is using data quality services to ensure its email list is clean and updated, keeping its sender reputation high so that when it needs to reach customers during critical times like Valentine’s Day, it has confidence in its data.
Alternatively, retailers can help customers by closely managing local inventory levels to offer same-day click-and-collect initiatives or showing consumers the number of items that are currently in-stock and in-store across all channels.
Valentine’s Day may seem like a minor holiday after Christmas, but for retailers it generates billions of dollars in annual spending and presents a tremendous opportunity to boost their customer base. With these tips, retailers will hopefully be able to sweeten their sales by effectively targeting customers looking for the perfect gift for their special someone.
It’s no secret that the explosion of software-as-a-service (SaaS) apps has revolutionized the way businesses operate. From humble beginnings, the titans of SaaS today include companies such as Salesforce.com, NetSuite, Marketo, and Workday that have gone public and attained multi-billion dollar valuations. The success of these SaaS leaders has had a domino effect in adjacent areas of the cloud – infrastructure, databases, and analytics.
Amazon Web Services (AWS), which launched in 2006 with just six services alongside Amazon EC2, now offers more than 30, spanning storage, relational databases, data warehousing, Big Data, and more. Salesforce.com’s Wave platform, Tableau Software, and Qlik have made great advances in the cloud analytics arena, giving better visibility to line-of-business users. And as SaaS applications embrace new software design paradigms that extend their functionality, application performance monitoring (APM) analytics has emerged as a specialized field, led by vendors such as New Relic and AppDynamics.
So, how exactly did the growth of SaaS contribute to these adjacent sectors taking off?
The growth of SaaS coincided with the growth of powerful smartphones and tablets. Seeing this form factor as important to the end user, SaaS companies rushed to produce mobile apps that offered core functionality on these devices. Measuring adoption of these mobile apps was necessary to ensure that future releases met the needs of the end user, and the apps generate a wealth of information, such as app responsiveness, features utilized, and data consumed. As always, there were several types of users, with some preferring a laptop form factor over a smartphone or tablet. With the ever-increasing number of data points to measure within a SaaS app, the area of application performance monitoring analytics really took off.
Simultaneously, the growth of the SaaS titans cemented their reputation not just as applications for a certain line of business, but as full-fledged platforms. This growth emboldened a number of SaaS startups to develop apps that solved specialized or even vertical business problems in healthcare, warranty-and-repair, quote-to-cash, and banking. To get started quickly and scale rapidly, these startups leveraged AWS and its plethora of services.
The final sector that has taken off thanks to the growth of SaaS is cloud analytics. SaaS grew by leaps and bounds because of its ease of use and the rapid deployment that business users could achieve on their own. Cloud analytics aims to provide that same ease of use for business users, delivering deep insights into data in an interactive manner.
Across all these sectors, the common thread is that SaaS growth has driven an uptick in the volume of data, and in the technologies that make that data easier to understand. At Informatica’s Data Mania event (March 4, San Francisco), you’ll hear executives from Salesforce, Amazon, Adobe, Microsoft, Dun & Bradstreet, Qlik, Marketo, and AppDynamics talk about the importance of data in the world of SaaS.