Tag Archives: B2B Data Exchange
Security professionals are in dire need of a solution that provides visibility into where sensitive and confidential data resides, as well as visibility into the data’s risk. This knowledge would allow those responsible to take an effective, proactive approach to combating cybercrime. By focusing on the data, Informatica and our customers, partners and market ecosystem are collaborating to make data-centric security with Data Security Intelligence the next line of defense.
Security technologies that focus on securing the network and perimeter require additional safeguards when sensitive and confidential data traverse beyond these protective controls. Data proliferates to cloud-based applications and mobile devices. Application security and identity access management tools may lack visibility and granular control when data is replicated to Big Data and advanced analytics platforms.
Informatica is filling this need with its data-centric security portfolio, which now includes Secure@Source. Informatica Secure@Source is the industry’s first data security intelligence solution that delivers insight into where sensitive and confidential data reside, as well as the data’s risk profile.
Join us at our online launch event on April 8th, where we will showcase Secure@Source and share reactions from an amazing panel including:
- Security industry leader Anil Chakravarthy, CPO and EVP, Informatica, and myself, Amit Walia, GM and SVP, Informatica
- Luminaries Larry Ponemon, Founder Ponemon Institute and Jeff Northrop, CTO IAPP
- CISOs Bill Burns, Informatica, and Arnold Federbaum, former CISO and Cybersecurity Professor, NYU
- Enterprise Security Architect, Linda Hewlett, Santander Holdings USA.
The opportunity for Data Security Intelligence is extensive. In a recently published report, Neuralytix defined Data-Centric Security as “an approach to security that focuses on the data itself; to cover the gaps of traditional network, host and application security solutions.” A critical element for successful data security is collecting intelligence required to prioritize where to focus security controls and efforts that mitigate risk. This is precisely what Informatica Secure@Source was designed to achieve.
Having emerged from a predominantly manual practice, the data security intelligence software market is expected to reach $800M by 2018, growing at a 27.8% CAGR. We are excited about this opportunity! As a leader in data management software, we are uniquely qualified to take an active role in shaping this emerging market category.
Informatica Secure@Source addresses the need to get smarter about where our sensitive and private data reside and who is accessing it, to prioritize which controls to implement, and to work harmoniously with existing security architectures, policies and procedures. Our customers are asking us for data security intelligence, and the industry deserves it. With more than 60% of security professionals stating that their biggest challenge is not knowing where their sensitive and confidential data reside, the need for Data Security Intelligence has never been greater.
Neuralytix says “data security is about protecting individual data objects that traverse across networks, in and out of a public or private cloud, from source applications to targets such as partner systems, to back office SaaS applications to data warehouses and analytics platforms”. We couldn’t agree more. We believe that the best way to incorporate a data-centric security approach is to begin with data security intelligence.
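To make the core idea of data security intelligence, discovering where sensitive data resides, a little more concrete, here is a deliberately simplified sketch. The regex classifiers and the in-memory table layout are illustrative assumptions on my part; they say nothing about how Secure@Source actually works.

```python
import re

# Illustrative regex classifiers for two common sensitive data types.
# Real discovery tools use far richer detection than simple patterns.
CLASSIFIERS = {
    "ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "credit_card": re.compile(r"^\d{4}([ -]?\d{4}){3}$"),
}

def classify_column(values):
    """Return the set of sensitive data types detected in a column."""
    found = set()
    for value in values:
        for label, pattern in CLASSIFIERS.items():
            if pattern.match(str(value)):
                found.add(label)
    return found

def scan_tables(tables):
    """Map each (table, column) pair to the sensitive types it appears to hold.

    `tables` is a dict of {table_name: {column_name: [values]}}.
    """
    report = {}
    for table, columns in tables.items():
        for column, values in columns.items():
            types = classify_column(values)
            if types:
                report[(table, column)] = types
    return report
```

A report like this is the starting point for prioritizing controls: columns flagged with sensitive types are where masking, encryption or access restrictions matter most.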
JOIN US at the online launch event on April 8th for the security industry’s most exciting new Data Security Intelligence solution, Informatica Secure@Source.
 “The State of Data Centric Security,” Ponemon Institute, sponsored by Informatica, June 2014
This is a guest author post by Philip Howard, Research Director, Bloor Research.
I recently posted a blog about an interview style webcast I was doing with Informatica on the uses and costs associated with data integration tools.
I’m not sure that the poet John Donne was right when he said that it was strange, let alone fatal. Somewhat surprisingly, I have had a significant amount of feedback following this webinar. I say “surprisingly” because the truth is that I very rarely get direct feedback. Most of it, I assume, goes to the vendor. So, when a number of people commented to me that the research we conducted was both unique and valuable, it was a bit of a thrill. (Yes, I know, I’m easily pleased).
There were a number of questions that arose as a result of our discussions. Probably the most interesting was whether moving data into Hadoop (or some other NoSQL database) should be treated as a separate use case. We certainly didn’t include it as such in our original research. In hindsight, I’m not sure that the answer I gave at the time was fully correct. I acknowledged that you certainly need some different functionality to integrate with a Hadoop environment, and that some vendors have more comprehensive capabilities than others when it comes to Hadoop; the same also applies, but with different suppliers, when it comes to integrating with, say, MongoDB or Cassandra or graph databases. However, as I pointed out in my previous blog, functionality is ephemeral. And just because a particular capability isn’t supported today doesn’t mean it won’t be supported tomorrow. So that doesn’t really affect use cases.
However, where I was inadequate in my reply was that I only referenced Hadoop as a platform for data warehousing, stating that moving data into Hadoop was not essentially different from moving it into Oracle Exadata or Teradata or HP Vertica. And that’s true. What I forgot was the use of Hadoop as an archiving platform. As it happens we didn’t have an archiving use case in our survey either. Why not? Because archiving is essentially a form of data migration – you have some information lifecycle management and access and security issues that are relevant to archiving once it is in place but that is after the fact: the process of discovering and moving the data is exactly the same as with data migration. So: my bad.
Aside from that little caveat, I quite enjoyed the whole event. Somebody or other (there’s always one!) didn’t quite get how quantifying the number of end points in a data integration scenario was a surrogate measure for complexity (something we took into account), so I had to explain that. Of course, it’s not perfect as a metric, but the only alternative is to ask eye-of-the-beholder questions, which aren’t very satisfactory.
Anyway, if you want to listen to the whole thing you can find it HERE:
If you haven’t updated your B2B integration capabilities in the past five years, are you at risk of being left behind? This is the age of superior customer experience and rapid time-to-value so speedy customer on-boarding and support of specialized integration services means the difference between winning and losing business. A health check starts with asking some simple questions: (more…)
In recent Aberdeen research, 95% of respondents (out of 122 responses) reported relying on some level of manual processing to integrate external data sources. Manual processing to integrate external data is time-consuming, expensive and error-prone, so why do so many do it? Well, they often have little choice. If you look deeper, most of the time these data exchanges are with small partners, and small partner enablement is a significant challenge for most organizations. For the most part, (more…)
The other week we had Ayman Taha, Director of IT for Enterprise Solutions Integration at Avnet, in to talk about how he automated the processing of unstructured, non-traditional B2B exchanges with external partners. Avnet is a large distributor of electronic components with a diverse ecosystem of customers and suppliers. Their B2B infrastructure reflects the complexity of their environment, but despite a sophisticated and mature EDI infrastructure, they were still manually re-keying invoices and product updates from hundreds of spreadsheets and PDFs received from partners. This was because many of their smaller customers and partners could not send or receive EDI messages. (more…)
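A tiny example of the kind of automation involved: turning a partner’s spreadsheet export into structured invoice records. The column names here are hypothetical; real partner files vary wildly from one sender to the next, which is exactly why hand re-keying is so slow and error-prone.

```python
import csv
import io

def parse_invoice_rows(csv_text):
    """Parse invoice line items from a CSV export into normalized dicts.

    Assumes columns named 'invoice_no', 'sku', 'qty' and 'unit_price'.
    In practice each partner would need its own column mapping.
    """
    rows = []
    reader = csv.DictReader(io.StringIO(csv_text))
    for row in reader:
        qty = int(row["qty"])
        rows.append({
            "invoice_no": row["invoice_no"].strip(),
            "sku": row["sku"].strip().upper(),
            "qty": qty,
            "total": qty * float(row["unit_price"]),
        })
    return rows
```

Once records are normalized like this, they can be fed into the same downstream processing as EDI messages, instead of being re-keyed by hand.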
For once, hype could be a good thing. Well, it is if you’re reading the latest Gartner Hype Cycle for Application Infrastructure, published last month in July, because in it you will see how two important technology trends, B2B Gateway Software (BGS, sorry, another TLA for you to learn, and a nested one at that) and Managed File Transfer (MFT), have now made it out of the Trough of Disillusionment and up onto the Slope of Enlightenment. Why do I suddenly feel like John Bunyan’s Pilgrim?
Anyway, the key points that Gartner identifies are that centrally managing B2B interactions provides:
- Economies of scale and deeper insight into the technical aspects of data integration, transaction delivery, process integration and SOA interoperability, such as consolidating, tracking, storing and auditing files, messages, process events, acknowledgments, receipts, and errors and exceptions.
The B2B communications process with your external business partners, suppliers, etc. is not static, and you need visibility into these communications not only for regulatory compliance and auditability but also to manage the process as it changes.
- A single point through which to troubleshoot B2B integration issues.
B2B gateways are now a mature technology and, like standards, most organisations use a number of them. Integrating them provides significant benefit: it gives the organisation visibility of its business relationships and transactions, and a single place to go when things go wrong and need managing, as they definitely will.
- A central, reusable repository for external business partner profiles and Web services APIs. This is particularly valuable when dealing with a large number of external business partners and cloud APIs, and when multiple business units interact with the same partners or cloud services.
The number of business partners we all have to deal with is increasing rapidly as we outsource, subcontract, farm out and generally rely more on external specialist organisations. Having visibility of these relationships, and making the most of new integration methodologies and processes, can generate great savings and also give visibility of our business exposure to these suppliers.
- Support for the myriad data formats, transport and communication protocols, and security standards.
As the old saying goes, “I love standards; there are so many to choose from.” Well, our business processes are not getting any simpler; data standards are under constant change and revision, and data formats are becoming increasingly complex. So being able to handle not just a few but all key formats, to reuse previous transformation work, to utilise already developed libraries and to leverage new complex hierarchical data structures makes the difference between a stove-piped, soon-to-be-redundant system and one that is flexible and supports new and ever-changing business requirements.
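To make the “central, reusable repository for partner profiles” and the multi-format support points above a little more tangible, here is a minimal sketch of what such a registry might look like. All class names, fields and values are illustrative assumptions, not taken from any B2B gateway product, which would track far more (certificates, SLAs, retry policies, per-document-type settings).

```python
from dataclasses import dataclass, field

@dataclass
class PartnerProfile:
    """Illustrative record describing one external business partner."""
    partner_id: str
    name: str
    transport: str                                 # e.g. "AS2", "SFTP", "HTTPS"
    formats: list = field(default_factory=list)    # e.g. ["X12", "EDIFACT", "CSV"]
    endpoints: dict = field(default_factory=dict)  # logical name -> URL

class PartnerRegistry:
    """A single, reusable store of partner profiles that multiple
    business units can share, instead of each keeping its own copy."""

    def __init__(self):
        self._profiles = {}

    def register(self, profile: PartnerProfile) -> None:
        # A central registry makes duplicates visible instead of silent.
        if profile.partner_id in self._profiles:
            raise ValueError(f"duplicate partner: {profile.partner_id}")
        self._profiles[profile.partner_id] = profile

    def lookup(self, partner_id: str) -> PartnerProfile:
        return self._profiles[partner_id]

    def supporting(self, fmt: str):
        """All partners that can exchange documents in a given format."""
        return [p for p in self._profiles.values() if fmt in p.formats]
```

The payoff of centralising this is exactly the Gartner point above: when several business units deal with the same partner, they all see one profile, one set of endpoints and one list of supported formats.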
As one of the vendors identified as able to actively compete with offerings positioned to address the broader set of usage scenarios, Informatica, with its B2B Data Exchange solution, not only supports the B2B functional requirements an organisation will have, but also integrates this process and data into the wider internal data integration platform and management process. (See this presentation for the key new features in version 9.5.)
So now the reality, not the hype, of B2B solutions can be delivered.
Sources: Gartner Hype Cycle for Application Infrastructure, 2012. Published: 24 July 2012
Analyst: Jess Thompson
Gartner Disclaimer re the Hype Cycle
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
As I discussed before, it’s not enough to walk through a functional checklist for a data integration platform. It’s important to make sure that it works in the right way. In my last posting, I discussed the concept of a “unified” platform and its implications for the user experience.
The second key aspect of how a data integration platform works is its openness—how much it is designed to work with the broader IT environment. Data integration, by definition, touches a large portion of the IT environment, which could mean thousands of different applications and data sources in large organizations. Moreover, it’s not just the systems inside the firewall you need to be concerned with.
In most cases, it’s important to also support integration with the systems of B2B partners such as customers, suppliers, distributors, etc., as well as any SaaS partners. And the platform has to support whatever technology standards, such as those for operating systems or databases, have been instituted. Frankly, there’s not much use in a data integration platform that isn’t designed to work with as broad a range of applications and systems as possible. (more…)
As I have been discussing in my previous blogs, a data integration platform has to cover a lot of ground if it’s going to be the backbone for sharing data across a company or organization. Below is a checklist of key capabilities that you should evaluate, but of course no checklist should be used blindly. So, a couple of comments first.
First, are all of these capabilities relevant to you? Try to think beyond the specific project you may have at hand to the different types of data integration projects others in your organization may be pursuing. After all, the point of a platform, as opposed to a tool, is that you are trying to promote reuse and standardization across the organization and across different projects or initiatives. Also, try to think of what is needed now vs. what is likely to be needed in the future.
Here are the capabilities we feel that any data integration platform worth its salt should support in order to claim that it is comprehensive: (more…)
If you say the words “data integration“, different people may think of different things. Some think of ETL (extract-transform-load) tools. Others think of enterprise application integration (EAI) technologies or message brokers. But in most cases, regardless of which tool leaps to mind, people think about how data is integrated inside of an organization, or enterprise data integration.
In other words, how data is shared between different applications and systems inside the firewall. But this is just one aspect or realm within the broader data integration discipline, albeit an important one and generally the one most people start with.
Technology vendors like to talk about platforms, because platforms imply a broader footprint both in terms of functional capabilities and in terms of implementation usage. Platforms also sound more “strategic,” even if the practical implications are vague. But the term “platform” can also be simple marketing hype. How do you know when a software “platform” is really a platform? More specifically, do data integration platforms exist now?