MDM Architecture Styles – Do you have the right mix?

(Updated October 2018) What is the right MDM architecture style for your organization? It’s definitely an open-ended question that deserves an informed answer, especially before making new investment decisions. But before tackling the question, it is useful to define and understand the MDM styles themselves. Dr. Dave Waddington of The Information Difference defined four broad styles in an article published in Information Management. If you follow the MDM space, you’ll see similar descriptions from most of the MDM industry analysts.

However, most descriptions stop short of describing their applicability to specific master data domains in various industries. (Perhaps you have to pay for that?) In this blog I will summarize those styles and offer some general pros and cons for each. Subsequent blogs will cover the domains and organizational considerations in greater detail. (Hint: there likely won’t be a one-size-fits-all answer for most organizations.)

So what are the MDM architecture styles you should be considering?

Consolidated – Master data is consolidated from multiple source systems into a physical golden record for downstream consumption; however, any updates made to the master data are not returned to the original sources. Consolidated MDM hubs are quick and inexpensive to set up and offer a big return by enabling reliable enterprise-wide reporting. Data movement is typically tolerant of inter-day latency and managed through inexpensive batch processes, but it’s a one-way street from source systems.
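
To make the one-way data flow concrete, here is a minimal Python sketch of a consolidated-style batch merge. The source extracts, field names, and the simple "most recently updated wins" survivorship rule are illustrative assumptions only; real hubs apply far richer matching and survivorship logic, and nothing is ever written back to the sources.

```python
# Minimal consolidated-style merge: multiple source extracts flow one way
# into a golden record; source systems are never updated from the hub.
from datetime import date

crm_extract = [
    {"customer_id": "C100", "name": "Acme Corp", "city": "Dallas", "updated": date(2018, 9, 1)},
]
erp_extract = [
    {"customer_id": "C100", "name": "ACME Corporation", "city": "Dallas", "updated": date(2018, 9, 15)},
]

def consolidate(*extracts):
    """Build golden records keyed on customer_id; latest-updated source survives."""
    golden = {}
    for extract in extracts:
        for record in extract:
            key = record["customer_id"]
            current = golden.get(key)
            if current is None or record["updated"] > current["updated"]:
                golden[key] = dict(record)  # simple survivorship rule for illustration
    return golden

golden_records = consolidate(crm_extract, erp_extract)
print(golden_records["C100"]["name"])  # -> "ACME Corporation"
```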

Registry – Here the master data isn’t really consolidated; instead it is maintained as a set of “stub” records mapped to attributes stored in the source systems. There is little data movement beyond the requests to create or delete global stub records in the registry. The golden record is assembled dynamically using complex queries against the sources. The upside is that a real-time central reference can be created with little or no infrastructure investment. The downside is that, without central governance of the data, the golden record isn’t highly reliable.
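
The sketch below illustrates the registry idea with hypothetical in-memory "source systems": only stubs and cross-references live in the hub, and the golden record is assembled on the fly at read time. The identifiers and attribute names are assumptions for illustration.

```python
# Registry style: the hub stores only stub records and pointers to sources;
# attribute values stay in the source systems and are fetched at query time.
crm = {"CRM-7": {"name": "Acme Corp", "phone": "555-0100"}}
erp = {"ERP-42": {"name": "ACME Corporation", "credit_limit": 50000}}

# The registry maps one global id to its local ids in each source.
registry = {
    "GLOBAL-1": [("crm", "CRM-7"), ("erp", "ERP-42")],
}
sources = {"crm": crm, "erp": erp}

def assemble_golden_record(global_id):
    """Federated read: merge attributes from each mapped source dynamically."""
    assembled = {}
    for source_name, local_id in registry[global_id]:
        assembled.update(sources[source_name][local_id])
    return assembled

print(assemble_golden_record("GLOBAL-1"))
```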

Coexistence – Just like the consolidated style, coexistence harmonizes multiple sources into a physical golden record for downstream consumption. Coexistence adds the important step of updating the source systems. These updates typically tolerate high latency and can usually be handled at an acceptable time and cost through bi-directional batch processes. Coexistence is a natural evolution of the consolidated architecture, with the added benefit of linking centrally governed data back to the source systems. Interfacing with complex data sources, such as ERP systems, can become a costly drawback.
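
A minimal sketch of the coexistence write-back step follows, building on the consolidated merge above. The source stores and the "overwrite anything that drifted from the golden record" rule are simplifying assumptions; real implementations deal with change tracking, conflict handling, and ERP-specific interfaces.

```python
# Coexistence style: after the hub produces golden records, a second batch
# step pushes the governed values back to each source store.
golden = {"C100": {"name": "ACME Corporation", "city": "Dallas"}}

crm_store = {"C100": {"name": "Acme Corp", "city": "Dalas"}}  # drifted / misspelled value
erp_store = {"C100": {"name": "ACME Corporation", "city": "Dallas"}}

def sync_back(golden_records, *source_stores):
    """Bi-directional batch: correct source attributes that differ from the golden record."""
    for store in source_stores:
        for key, golden_record in golden_records.items():
            if key in store and store[key] != golden_record:
                store[key].update(golden_record)

sync_back(golden, crm_store, erp_store)
print(crm_store["C100"]["city"])  # -> "Dallas"
```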

Centralized (Transactional) – While this approach also creates a physical golden record, the key differentiator from the consolidated and coexistence styles is that the MDM hub extends enterprise governance over the source systems. Governing the MDM hub and the source systems together introduces shorter latency requirements, which are typically addressed with a combination of web-services integration and an authoring application that bridges centralized and application-specific governance needs. In most cases, this is a big step up in cost, complexity and implementation time. The pay-off is more comprehensive governance in real time; however, too many complex sources can push cost and complexity beyond the value created by the architecture.
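
Here is a minimal sketch of the centralized (transactional) pattern. The hub class, the vendor attributes, and the single tax-id governance rule are hypothetical; the point is that master data is authored in the hub first, validated centrally, and then published to subscribing systems in near real time rather than reconciled later in batch.

```python
# Centralized style: author in the hub, enforce governance before anything
# is published, then push synchronously to subscribing systems.
class GovernanceError(Exception):
    pass

class MasterDataHub:
    def __init__(self, subscribers):
        self.records = {}
        self.subscribers = subscribers  # callables standing in for web-service endpoints

    def author_vendor(self, vendor_id, attributes):
        # Central governance rule: reject the change before it reaches any system.
        if not attributes.get("tax_id"):
            raise GovernanceError("tax_id is required for all vendors")
        self.records[vendor_id] = attributes
        for publish in self.subscribers:  # near-real-time push to sources
            publish(vendor_id, attributes)

sap_vendors = {}
hub = MasterDataHub(subscribers=[lambda vid, attrs: sap_vendors.update({vid: attrs})])
hub.author_vendor("V-001", {"name": "Globex Supplies", "tax_id": "12-3456789"})
print(sap_vendors["V-001"]["name"])  # -> "Globex Supplies"
```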

Most MDM vendors support at least one, and often several, of these architectures. Magnitude MDM, for example, supports three (all except Registry). We often see organizations start with a consolidated style and evolve to coexistence. The advent of new products that allow simpler integration with ERPs has made Coexistence architectures far more cost-effective. For example, our Innowera division can automate the update of data in SAP through its Process Runner technology.

We are now seeing more and more customers move to the Centralized model for certain data domains. For example, we have customers using our MDM Workbench (which facilitates the use of centralized master data in day-to-day data entry and management) to gather and process vendor data for ultimate use in SAP. For this implementation, Workbench was used to create custom forms, backed by the built-in workflow, to collect, review, and approve data for hundreds of vendors and post that data back to SAP.

This reflects the increasing value of master data to organizations. With a Consolidated model, the business gets a clean and complete store of vetted reference information, which is valuable for analysis and downstream operations. As an organization moves to Coexistence, the value of the master data goes up, since it is now reflected in the operational systems, improving accuracy and efficiency. The Centralized approach is the ultimate pay-off: it not only makes the data available to all phases of operations, it is also self-maintaining through live integration into all aspects of data-based operations.

We like to think of this as matching the master data process to the business process. Making the best data available to your business processes, and allowing it to be updated immediately while still enforcing all data quality and governance rules, finally delivers on the promise of master data management.

If you’re interested in learning how master data management solutions can help you deliver business value faster, request a demo today!
