Is Your Data Model Truly Change-Proof?

September 16, 2025

Change isn’t the problem. Inefficient data models are.

A data model is a representation of how data is defined and structured in a database. How efficiently you can change your product documentation depends heavily on the data models of the tools you use.

📄 If you are still working with a document-based approach, you need to create a new revision of the entire document and release it accordingly: you take a copy of the latest released revision of your Word document and process the update.

🧩 On the other hand, if you’ve adopted a model-based approach and broken the document into smaller, manageable elements, such as making each requirement its own object, you can now potentially release individual requirements. 
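To make that concrete, here is a minimal Python sketch of the model-based idea, assuming a simple in-memory representation. The Requirement class, its fields, and the release/revise methods are illustrative names I am introducing here, not the API of any particular PLM or requirements tool.

```python
from dataclasses import dataclass


@dataclass
class Requirement:
    """Each requirement is its own object with its own revision stream."""
    req_id: str
    text: str
    revision: str = "A"
    released: bool = False

    def release(self) -> None:
        # Releasing this requirement does not touch any other requirement.
        self.released = True

    def revise(self, new_text: str) -> None:
        # A change creates a new revision of this requirement only
        # (single-letter revision bump, purely for illustration).
        self.text = new_text
        self.revision = chr(ord(self.revision) + 1)
        self.released = False


# Updating one requirement leaves the rest of the specification untouched.
reqs = [
    Requirement("REQ-001", "The pump shall deliver 5 L/min."),
    Requirement("REQ-002", "The housing shall be IP67 rated."),
]
reqs[0].revise("The pump shall deliver 6 L/min.")
reqs[0].release()
```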

But can you always release an individual data element, such as a requirement or a single line in the bill of materials?

👉 A dataset is a set of information that must be released as a whole and can be released separately from any other dataset.

⚙️ So, replacing one part in a bill of materials with another means a new revision of the BoM. Even if each BoM line is a separate data element, you cannot release it independently from the other BoM lines in the same BoM.

🏭 Or suppose you have Product Manufacturing Information (PMI) embedded in your 3D model, and the information is stored in the same revision as the geometry. In that case, you cannot revise the PMI without also revising the 3D model.

But you can change the BoM of a part without having to change its attributes. That means the attributes of a part are not the same dataset as the BoM. However, in many PLM systems, the revision of a part or item is used to manage both the BoM and the attributes, and that data model makes changes less efficient.
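As a thought experiment, here is a minimal Python sketch of the decoupled alternative, assuming a simple in-memory representation. AttributeSet, BillOfMaterials, and Part are hypothetical names of my own, not objects from any specific PLM system; the point is only that each dataset carries its own revision.

```python
from dataclasses import dataclass, field


@dataclass
class AttributeSet:
    """The part attributes form one dataset with its own revision."""
    revision: str = "A"
    values: dict = field(default_factory=dict)


@dataclass
class BillOfMaterials:
    """The BoM is a separate dataset: its lines are individual elements,
    but the BoM is revised and released as a whole."""
    revision: str = "A"
    lines: list = field(default_factory=list)  # e.g. (child part number, qty)

    def replace_part(self, old: str, new: str) -> None:
        # Swapping one line still means a new revision of the whole BoM...
        self.lines = [(new if pn == old else pn, qty) for pn, qty in self.lines]
        self.revision = chr(ord(self.revision) + 1)


@dataclass
class Part:
    """The part links the two datasets but does not force a shared revision."""
    part_number: str
    attributes: AttributeSet = field(default_factory=AttributeSet)
    bom: BillOfMaterials = field(default_factory=BillOfMaterials)


# ...yet it revises the BoM dataset only; the attributes stay at revision "A".
p = Part("P-100")
p.bom.lines = [("P-200", 2), ("P-300", 1)]
p.bom.replace_part("P-300", "P-301")
assert p.attributes.revision == "A" and p.bom.revision == "B"
```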

It is essential to consider data models in the context of change. Creating an initial version is relatively easy, but changing data is always more challenging. Design your data models in a way that allows them to support changes and the release of new revisions efficiently. 

💡 How CM2 helps:
The CM2 framework helps standardize your data by providing the blueprint for designing change-ready data models. It teaches us to separate datasets logically, so attributes, BoMs, and requirements can evolve at the pace they need, linked but without unnecessary hard-coupling. The result? Faster, cleaner changes with less disruption.

So I’ll leave you with this:

👉 Is your data model truly supporting change, or making it harder?

Check out the other How Do YOU CM2? posts.

Ready to go deeper?

Use code Martijn10 for 10% off training—and don’t forget to tell them Martijn sent you 😉.

Copyright by the Institute for Process Excellence

This article was originally published on ipxhq.com & mdux.net.

