Why did Johnston and Weis pursue this effort?
Both are experienced enterprise data architects with extensive backgrounds in data modeling. Weis also has experience as a database administrator (DBA) and an SQL application developer. They have seen the time and money organizations spend trying to design, implement and retrofit temporal dimensions, with varying degrees of success.
The first problem is modeling time itself. Modeling tools do not have temporal extensions built into them, and the relational model was not designed to handle time. As a result, there are many jury-rigs that try to force time into models, and modelers and DBAs often argue over proposed solutions for managing time.
Some solutions add an effective date, or a surrogate representing time, to the primary keys and consequently to the foreign keys. This pattern exponentially increases the volume of dependent rows (children, grandchildren and great-grandchildren) whenever a simple update to a parent results in a new version: because the foreign keys embed the effective date, every dependent must be duplicated and tied to the new parent version to maintain referential integrity.
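The cascade this pattern causes can be illustrated with a small sketch. The table names, columns and dates below are hypothetical, chosen only to show how one parent update forces copies of every dependent row:

```python
# Illustration of the cascade caused by putting an effective date in the
# primary key: every new parent version forces copies of all dependents
# so their foreign keys point at the new (id, eff_date) pair.
from datetime import date

# parent rows keyed by (client_id, eff_date)
clients = {(1, date(2020, 1, 1)): {"name": "Acme"}}
# child rows carry the full parent key as their foreign key
policies = [{"policy_no": "P1", "client_fk": (1, date(2020, 1, 1))},
            {"policy_no": "P2", "client_fk": (1, date(2020, 1, 1))}]

def update_client(client_id, new_eff_date, changes):
    """Create a new client version and duplicate every dependent row."""
    # find the latest existing version of this client
    old_key = max(k for k in clients if k[0] == client_id)
    new_key = (client_id, new_eff_date)
    clients[new_key] = {**clients[old_key], **changes}
    # referential integrity forces a copy of each child for the new version
    for child in [p for p in policies if p["client_fk"] == old_key]:
        policies.append({**child, "client_fk": new_key})

update_client(1, date(2021, 1, 1), {"name": "Acme Corp"})
print(len(policies))  # 4: both policies copied for the new version
```

One simple rename of the client doubled the policy rows; with grandchildren and great-grandchildren the copies multiply again at each level.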
The second problem is that this pattern, while handling effective periods, does not handle assertion periods. Adding assertion periods to this pattern complicates it even more, because an effective period might have multiple assertion periods, causing overlapping periods.
Next, because the relational model does not include a Period datatype, the DBMS does not prevent overlapping time periods. Custom code is therefore sometimes developed to try to prevent overlapping effective and assertion periods. Yet in many cases the application, either knowingly or unknowingly, lives without temporal data integrity. This causes reporting, logic, accounting, customer service and financial problems, which can result in loss of customers, SEC sanctions, and countless hours spent trying to debug the data integrity problems.
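The overlap check that such custom code must implement by hand can be sketched as follows. This is a minimal, illustrative version (the row shape and column names are assumptions, not from any product), treating periods as half-open intervals:

```python
# Minimal sketch of the overlap check a DBMS with a native Period
# datatype would enforce automatically; here it must be hand-coded.
from datetime import date

def overlaps(begin1, end1, begin2, end2):
    """True if the half-open periods [begin, end) intersect."""
    return begin1 < end2 and begin2 < end1

def find_overlapping_versions(rows):
    """Return pairs of rows for the same key whose periods overlap."""
    bad = []
    for i, a in enumerate(rows):
        for b in rows[i + 1:]:
            if a["id"] == b["id"] and overlaps(
                    a["eff_begin"], a["eff_end"],
                    b["eff_begin"], b["eff_end"]):
                bad.append((a, b))
    return bad

versions = [
    {"id": 1, "eff_begin": date(2020, 1, 1), "eff_end": date(2021, 1, 1)},
    {"id": 1, "eff_begin": date(2020, 6, 1), "eff_end": date(2022, 1, 1)},
    {"id": 2, "eff_begin": date(2020, 1, 1), "eff_end": date(2021, 1, 1)},
]
print(len(find_overlapping_versions(versions)))  # 1 conflicting pair
```

Without this kind of check running on every insert and update, overlapping versions slip in silently, which is exactly how the integrity problems described above arise.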
Sometimes a solution uses logical keys rather than physical keys to tie related tables together, so that dependent table versions do not have to be created exponentially. Temporal referential integrity is often overlooked in these hodgepodge solutions, allowing a dependent to reference a time period that falls outside the parent's valid time period. For example, a policy might be tied to a client whose time period does not fully contain the policy's. This is often caused by a logical delete of a parent that is not properly propagated to the dependents and the dependents' dependents.
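A temporal referential integrity check amounts to a containment test: the dependent's effective period must lie inside the parent's. A hedged sketch, using hypothetical client and policy rows, shows the kind of violation that a foreign key on logical or surrogate keys alone never catches:

```python
# Sketch of a temporal referential integrity (TRI) check: a dependent's
# effective period must fall entirely inside its parent's.
from datetime import date

def contains(parent_begin, parent_end, child_begin, child_end):
    """True if the child's [begin, end) period lies inside the parent's."""
    return parent_begin <= child_begin and child_end <= parent_end

client = {"eff_begin": date(2020, 1, 1), "eff_end": date(2021, 1, 1)}
policy = {"eff_begin": date(2020, 6, 1), "eff_end": date(2021, 6, 1)}

# The policy runs six months past the client's period: a TRI violation,
# even though the logical foreign key from policy to client is "valid".
print(contains(client["eff_begin"], client["eff_end"],
               policy["eff_begin"], policy["eff_end"]))  # False
```

An ordinary foreign key would accept this pair of rows, because it only checks that the referenced client exists, not when.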
In addition, when an effective date, assertion date or surrogate is added to a key, it must also be added to the keys of indexes that would otherwise be unique. This allows duplicates of an alternate (supposedly unique) key with overlapping periods, unless special code is written to maintain temporal entity integrity.
Staging/Pending Tables and Batch Transaction Files
Many applications require a playground/sandbox area where systems doing data entry can prepare and stage data before moving it to the live production environment. There are many ways architects and application designers try to solve this problem. 1) They might use sequential flat files where the pending data is stored while waiting to be moved; this is not a relational solution, and the data is generally not edited until the batch is run. 2) They might create pending or staging tables that are near duplicates of the production tables. These staging tables not only require the schemas to be maintained in duplicate, but data usually needs to be marshaled in and out to build the playground area and to maintain some semblance of referential integrity. Oftentimes very complex processes are developed to maintain and marshal this staging data, and duplicate applications are used to access and edit it. Any application that needs both the staging and production data must then UNION the data across environments or cloned tables.
There are a few so-called best practices for maintaining history in relational databases. One way is to add a flag or date to indicate when a version is no longer active; a second is to create a clone history table and update it with a trigger. This is fine, except that, first, you need to maintain a redundant schema, and second, you do not get any of the temporal benefits such as retroactive updates, proactive updates and the distinction between corrections and new effective versions. Plus, when you query, you need to write redundant SQL to query both the active and the history tables, or you need to UNION the results together.
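The redundant querying that the history-table pattern forces can be sketched with an in-memory database. The table and column names below are illustrative, not from any particular system; the point is that any query needing the full timeline must span both tables:

```python
# Sketch of the double maintenance the history-table pattern creates:
# the schema exists twice, and timeline queries must UNION both copies.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE policy      (policy_no TEXT, amount INTEGER, eff_date TEXT);
    CREATE TABLE policy_hist (policy_no TEXT, amount INTEGER, eff_date TEXT);
    INSERT INTO policy      VALUES ('P1', 500, '2021-01-01');
    INSERT INTO policy_hist VALUES ('P1', 400, '2020-01-01');
""")

# Any query that needs the full timeline must span both tables.
rows = conn.execute("""
    SELECT policy_no, amount, eff_date FROM policy
    UNION ALL
    SELECT policy_no, amount, eff_date FROM policy_hist
    ORDER BY eff_date
""").fetchall()
print(rows)  # both versions of P1, oldest first
```

Every such query, and every schema change, has to be written twice: once for the active table and once for its history clone.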
When application designers and developers attempt to implement a robust temporal solution, they often do it one table at a time. This approach is burdensome, prone to error and produces inconsistent results.
There are many legal requirements for a bi-temporal solution, such as SEC and HIPAA regulations. There are also customer service requirements for retroactive updates, proactive staging and historical inquiry.
The requirement that started the quest was one in addition to those above: the customer wanted to display any screen, view any page and reprint any report exactly as it appeared at any date and time in the past. We did not want to create redundant programs or data structures to support this requirement, so we generalized it to support the update requirements in addition to the inquiry and reporting requirements. Budget constraints for personnel and hardware drove the need for a common framework and the OID logical key concept. Then we needed to maintain temporal referential integrity and temporal entity integrity to preserve data quality and accuracy.