Tag Archives: Oracle

Who’s the Top MDM Vendor – IBM, SAP, Oracle, or Informatica?

The MDM space is filled with a number of well-known players. In addition to Informatica, you’ll find IBM, Oracle, SAP, and a host of other vendors. But who leads the space?

The Information Difference, an analyst firm that specializes in MDM, has been watching this space for years. Recently, the firm compared 12 MDM vendors, ranked them against 200 criteria, and published its findings in a report on the MDM landscape for Q2 2013. The vendors’ MDM offerings were compared across six categories: data governance, business rules, data quality, data storage, data provision, and data movement.

(more…)

Posted in Master Data Management, PiM

What is In-Database Archiving in Oracle 12c and Why You Still Need a Database Archiving Solution to Complement It (Part 2)

In my last blog on this topic, I discussed several areas where a database archiving solution can complement, or help you better leverage, the Oracle In-Database Archiving feature. For an introduction to the new In-Database Archiving feature in Oracle 12c, refer to Part 1 of this blog series.

Here, I will discuss additional areas where a database archiving solution can complement the new Oracle In-Database Archiving feature:

  • Graphical UI for ease of administration – In-Database Archiving is currently a technical feature of the Oracle database, and it is not easily visible or manageable outside of the DBA persona. This is where a database archiving solution helps, providing a comprehensive set of graphical user interfaces (GUIs) that make the feature easier to monitor and manage.
  • Enabling application of In-Database Archiving to packaged applications and complex data models – A database archiving solution adds the concept of a business entity: a transactional record composed of related tables that is archived, moved, purged, and retained as a unit, so that data and referential integrity are preserved, together with business rules that determine when data has become inactive and can safely be archived. These concepts let DBAs apply the new Oracle feature to more complex data models. In addition, application accelerators (prebuilt metadata of business entities and business rules for packaged applications) enable the application of In-Database Archiving to packaged applications such as Oracle E-Business Suite, PeopleSoft, Siebel, and JD Edwards.
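To make the business-entity point concrete, here is a minimal, hypothetical sketch of the coordination a DBA would otherwise have to script by hand: ORA_ARCHIVE_STATE is set row by row and table by table, so nothing in the database itself keeps a parent order and its child rows in the same archived state. The schema, connection details, and helper function below are invented for illustration, and the tables are assumed to have ROW ARCHIVAL enabled; this is not Informatica product code.

    # Hypothetical sketch - not Informatica product code. Marks an order and its
    # dependent rows inactive together so the business entity stays consistent
    # under Oracle 12c In-Database Archiving.
    import oracledb

    conn = oracledb.connect(user="appowner", password="secret", dsn="dbhost/pdborcl")
    cur = conn.cursor()

    def archive_order_entity(order_id: int) -> None:
        """Set ORA_ARCHIVE_STATE = '1' (archived) across all tables that make up
        one order, children first, in a single transaction."""
        for table in ("order_lines", "order_payments", "orders"):
            cur.execute(
                f"UPDATE {table} SET ora_archive_state = '1' WHERE order_id = :oid",
                oid=order_id,
            )
        conn.commit()

    archive_order_entity(1001)

A database archiving solution with prebuilt business entities and business rules takes care of exactly this kind of cross-table coordination, so it does not have to be hand-scripted and maintained per application.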

(more…)

Posted in Application ILM, Data Archiving, Data Governance, Governance, Risk and Compliance

What is In-Database Archiving in Oracle 12c and Why You Still Need a Database Archiving Solution to Complement It (Part 1)

What is the new In-Database Archiving in the latest Oracle 12c release?

On June 25, 2013, Oracle introduced a new feature called In-Database Archiving with its new Oracle Database 12c release. “In-Database Archiving enables you to archive rows within a table by marking them as inactive. These inactive rows are in the database and can be optimized using compression, but are not visible to an application. The data in these rows is available for compliance purposes if needed by setting a session parameter. With In-Database Archiving you can store more data for a longer period of time within a single database, without compromising application performance. Archived data can be compressed to help improve backup performance, and updates to archived data can be deferred during application upgrades to improve the performance of upgrades.”

This is an Oracle-specific feature and does not apply to other databases.
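For readers who want to see what this looks like at the SQL level, below is a minimal sketch of the statements behind the feature, driven from Python with the python-oracledb client. The connection details and the ORDERS table are placeholders; the ROW ARCHIVAL clause, the hidden ORA_ARCHIVE_STATE column, and the session-level visibility setting are the documented Oracle 12c mechanics behind the description above.

    # Minimal sketch of Oracle 12c In-Database Archiving; the table and
    # connection details are hypothetical placeholders.
    import oracledb

    conn = oracledb.connect(user="scott", password="tiger", dsn="dbhost/pdborcl")
    cur = conn.cursor()

    # Enable row-level archiving; Oracle adds a hidden ORA_ARCHIVE_STATE
    # column that defaults to '0' (active).
    cur.execute("ALTER TABLE orders ROW ARCHIVAL")

    # Mark old rows inactive instead of deleting them.
    cur.execute("""
        UPDATE orders
           SET ora_archive_state = '1'
         WHERE order_date < ADD_MONTHS(SYSDATE, -84)""")
    conn.commit()

    # Archived rows are invisible to normal queries; flip the session
    # setting when they are needed, e.g. for a compliance request.
    cur.execute("ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL")
    cur.execute("SELECT COUNT(*) FROM orders")
    print("rows including archived:", cur.fetchone()[0])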

(more…)

Posted in Application ILM, Data Archiving, Data Governance, Governance, Risk and Compliance

Next Generation Database Archiving for Oracle Applications at Collaborate13

The OAUG hosted its annual convention, Collaborate13, this week in Denver, Colorado. The week started out with beautiful spring weather and quickly turned to frigid temperatures, with a snow flurry as a bonus. The rapid change in weather didn’t stop 4,000 attendees from elevating their application knowledge in the Mile High City. One topic that drew a particularly strong audience, from our perspective, was the evolution of database archiving. (more…)

Posted in Application ILM, Data Archiving

Informatica Educates Oracle OpenWorld 2012 Attendees On How To Balance The Big Data Balancing Act

Thousands of Oracle OpenWorld 2012 attendees visited the Informatica booth to learn how to leverage their combined investments in Oracle and Informatica technology. Informatica delivered over 40 presentations on topics ranging from cloud to data security to smart partitioning. Key Informatica executives and experts from product engineering and product management spoke with hundreds of users and answered questions on how Informatica can help them improve Oracle application performance, lower risk and costs, and reduce project timelines. (more…)

Posted in Application ILM, Big Data, Cloud Computing, Data Migration, Data Privacy, Data Quality, data replication, Master Data Management, Partners

Informatica 9.5 for Big Data Challenge #1: Cloud

Just five years ago, there was a perception held by many in our industry that the world of data for enterprises was simplifying. This was in large part due to the wave of consolidation among application vendors. With SAP and Oracle gobbling up the competition to build massive, monolithic application stacks, the story was that this consolidation would simplify data integration and data management. (more…)

Posted in Big Data, Cloud Computing, Informatica 9.5

Is Building an App as Easy as Building a Bear?

Two recent events inspired me to write this blog entry. The first was an Informatica Users’ Group meeting where I was invited to speak about Informatica’s new offerings in the data replication space. As in many of my presentations, I began by asking the audience to share their exposure to replication technologies – how they are using them, how they are working for them, and so on. After quizzing this particular audience, I was astounded by the number of customers writing their own extraction routines to pull data from various data sources: over 80% of the audience. I pondered this as I delivered my presentation and tried to point out specific areas where building might not be as effective as buying.

The second event came the Saturday after the Users’ Group meeting. I had taken the family to Disneyland, and my daughter wanted to visit Build-a-Bear. Now I ask you, how can any doting father refuse a 10-year-old’s plea for “the most special bear in the whole world – I’ll name him Daddy Bear”? Yeah, I know, I should have “Sucker” plastered on my forehead. As I went through the process of building this special bear with my daughter, though, I started to consider how the experience might pertain to building a replication technology versus buying one. The amount of staff is what first caught my attention: someone to help pick through the inventory of choices, someone to help pick a heart and a customized sound, someone at the stuffing station, and someone to assist in picking out accessories and clothes – after all, you can’t have a naked bear. Of course, no bear is complete without being a member of the “hug club.” After all of this specialization was complete, I ended up with a great memory and a bill in excess of $90.00.

I started to consider the issues the customer group faced when attempting to build their own extraction routines. Most had a variety of database sources – Oracle, DB2, MS SQL, and so on – and each required a different person with different skill sets to write the extraction routines. Depending on the resources available, this could vary from database to database and even department to department. I suspect the hidden cost of staffing this exercise is overlooked by upper management.

There also didn’t seem to be a common approach to how the extraction process is maintained. On further analysis, most had elected to extract the data through triggers or SQL SELECT routines. Ouch – that is a pretty intrusive approach to pulling data out of any source environment. I’m thinking a membership in the “hug club” might be in order once the overhead requirements are analyzed. But there is a distinct reason for this choice: it is straightforward and easier to troubleshoot.

Why does that matter? Because Build-a-Bear can quickly train new staff members to work each station, but specialized IT personnel are harder to come by, and they come and go from organizations all the time. Writing a customized routine for extracting data might provide job security, but it might also paralyze an organization if errors are encountered after an upgrade or a change to the environment. Delays are exacerbated if the author of the code has moved on and no longer works for the organization.
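To picture what those hand-built, trigger-based extraction routines tend to look like – and why each one becomes something the team must keep maintaining – here is a rough, invented sketch. The table names, the assumed ID column, and the change-log layout are all hypothetical; this is not any customer’s actual code.

    # Rough sketch of home-grown, trigger-based extraction: one change-log table
    # and one hand-written trigger per source table, each assuming an ID column.
    # Every application transaction now pays for the extra insert, and every
    # schema change means revisiting this code.
    SOURCE_TABLES = ["customers", "orders", "invoices"]

    def capture_ddl(table: str) -> str:
        """Return the change-log table and trigger DDL a team would have to
        hand-maintain for a single source table."""
        return f"""
        CREATE TABLE {table}_chg_log (
            row_id     NUMBER,
            change_op  VARCHAR2(1),
            changed_at TIMESTAMP DEFAULT SYSTIMESTAMP
        );

        CREATE OR REPLACE TRIGGER trg_{table}_capture
        AFTER INSERT OR UPDATE OR DELETE ON {table}
        FOR EACH ROW
        BEGIN
            INSERT INTO {table}_chg_log (row_id, change_op)
            VALUES (NVL(:NEW.id, :OLD.id),
                    CASE WHEN INSERTING THEN 'I' WHEN UPDATING THEN 'U' ELSE 'D' END);
        END;
        /"""

    for table in SOURCE_TABLES:
        print(capture_ddl(table))

Multiply that by every source table, every database platform, and every schema change, and the hidden staffing and maintenance cost described above starts to add up quickly.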

This topic has intrigued me, so before I closed the presentation I asked whether anyone would be willing to participate in an ROI study to validate whether building versus buying makes sense in their organization. I had several willing candidates. Over the next several months, I invite you to follow along with my blog series on this subject. I intend to document my findings and share them with the wider audience that might be considering an investment in replication technology versus building their own.

For those of you who will be attending Informatica World, I’d like to invite you to join me at the Demo Booths and Hands-On Labs. I’ll be there all week and would love the opportunity to meet with you in person.

Posted in Data Integration Platform, Uncategorized

Dynamic Data Masking in a Nutshell

I’ve been asked numerous times how Dynamic Data Masking works, so here it is – the Dynamic Data Masking process. Believe me, it’s simple…

The use case: IT personnel, developers, consultants, and outsourced support teams have access to production business applications (SAP, PeopleSoft, Oracle) or to clones/backups that contain sensitive customer and credit card information.

We cannot block their access, as they are required to ensure application performance, but we need to secure the data they are accessing.

These are the initial installation steps required:

  1. Install Informatica Dynamic Data Masking on the database server or on a dedicated server, since it acts as a proxy.
  2. Import one of our predefined rule sets prepared for the application and its data, or create your own custom rules.
  3. Define the roles/responsibilities whose data needs to be anonymized, using predefined hooks to Active Directory/LDAP and application responsibilities.

Now how does Dynamic Data Masking work?

  1. User requests are intercepted in real time by the Dynamic Data Masking server software.
  2. User roles and responsibilities are evaluated, and if the rules specify that masking is required, Dynamic Data Masking rewrites the requests to return masked/scrambled personal information. No application changes, no database changes – completely transparent.
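To make the rewrite step a little more tangible, here is a rough, product-agnostic sketch of the idea – this is not Informatica Dynamic Data Masking code, and the roles, rules, and columns are invented. A proxy that knows the user’s role swaps sensitive columns for masked expressions before the query ever reaches the database.

    # Conceptual, product-agnostic sketch of role-based query rewriting.
    # Roles, rules, and column names are invented; a real product uses far more
    # robust parsing than this naive regular-expression substitution.
    import re

    MASKING_RULES = {
        # role that triggers masking: {sensitive column: masked SQL expression}
        "OUTSOURCED_SUPPORT": {
            "credit_card": "'XXXX-XXXX-XXXX-' || SUBSTR(credit_card, -4)",
            "ssn": "'***-**-' || SUBSTR(ssn, -4)",
        }
    }

    def rewrite(sql: str, role: str) -> str:
        """Return the SQL untouched for trusted roles; otherwise substitute
        masked expressions for the sensitive columns in the statement."""
        rules = MASKING_RULES.get(role)
        if not rules:
            return sql
        for column, masked_expr in rules.items():
            sql = re.sub(rf"\b{column}\b", f"{masked_expr} AS {column}",
                         sql, count=1, flags=re.IGNORECASE)
        return sql

    query = "SELECT name, credit_card FROM customers WHERE region = 'WEST'"
    print(rewrite(query, "OUTSOURCED_SUPPORT"))   # credit_card comes back masked
    print(rewrite(query, "DBA_TRUSTED"))          # unchanged for a trusted role

The application and the database never see anything unusual – the privileged user simply gets masked values back – which is what makes the approach transparent.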

Sounds simple – yes it is.

Other common use cases include protecting development and reporting-tool access to production databases, anonymizing data warehouse reports and design tools, and securing production clones and training environments.

See? Simple!

Posted in Data masking, Data Privacy

2012 Cloud Integration Predictions – Data, MDM, BI, Platform and IT as a Service

I spent last weekend reading Geoffrey Moore’s new book, Escape Velocity: Free Your Company’s Future from the Pull of the Past. Then on Sunday, the New York Times published this article about salesforce.com: A Leader in the Cloud Gains Rivals. Clearly “The Big Switch” is on. With this as a backdrop, the need for a comprehensive cloud data management strategy has surfaced as a top IT imperative heading into the New Year – How and when do you plan to move data to the cloud? How will you prevent SaaS silos? How will you ensure your cloud data is trustworthy, relevant and complete? What is your plan for longer-term cloud governance and control?

These are just a few of the questions you need to think through as you develop your short, medium and long-term cloud strategy. Here are my predictions for what else should be on your 2012 cloud integration radar.  (more…)

Posted in Big Data, Business/IT Collaboration, CIO, Cloud Computing, Data Integration, Data Migration, Data Quality, Data Synchronization, Master Data Management, PaaS, SaaS

Putting Master Data In The Hands Of Business Users

As companies increasingly explore master data management (MDM), we often hear inquiries about the usability of master data by business users.

Common questions include: Do business users need to learn and use a separate MDM application? Do they need support from IT to access master data? Can master data fit into the everyday business applications they use for CRM, SFA, ERP, supply chain management, and so forth?

If your organization has ever asked these questions, you should take a look at our new white paper, “Drive Business User Adoption of Master Data.” (more…)

Posted in Business Impact / Benefits, Business/IT Collaboration, Customers, Data Governance, Data Quality, Enterprise Data Management, Master Data Management, Operational Efficiency