PostgreSQL: Dead data tells all tales!
Time: 09:00 - 09:50
Room: Cyril Magnin
Change tracking within the database is often implemented using table-level triggers to populate "history tables". This approach is straightforward to implement and captures the history of every change made to data within the database. Recording these changes is often much more expensive (in terms of operational overhead) than most people realize. However, it has one significant advantage: both the parent and history data can be queried, filtered, aggregated and generally processed with all the awesome power of PostgreSQL.
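As a rough illustration of the trigger-based approach described above, here is a minimal sketch for a single hypothetical table named `accounts` (the table, function and trigger names are illustrative, not taken from the talk):

```sql
-- History table: a copy of the parent's columns plus audit metadata.
CREATE TABLE accounts_history (
    LIKE accounts,
    changed_at timestamptz NOT NULL DEFAULT now(),
    operation  text        NOT NULL  -- 'INSERT', 'UPDATE' or 'DELETE'
);

-- Trigger function: copy the affected row into the history table.
CREATE FUNCTION accounts_track_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO accounts_history SELECT OLD.*, now(), TG_OP;
    ELSE
        INSERT INTO accounts_history SELECT NEW.*, now(), TG_OP;
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_history_trg
    AFTER INSERT OR UPDATE OR DELETE ON accounts
    FOR EACH ROW EXECUTE FUNCTION accounts_track_change();
```

Scaling this pattern to every table in the database is exactly where the overheads discussed next come into play: one history table, one function and one trigger per parent table, each of which must be kept in sync as schemas evolve.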
But what if we wanted to do this for our entire database? What if we had a shadow of our database within itself that tracked changes made against data in every table? Operational overhead, disk cost and maintenance due to schema evolution on parent tables become prime considerations. In this talk we inspect the design of history tables, their trigger functions and their indexing costs to present a zero-maintenance, low-disk-cost, high-throughput (pgbench'd) change tracking system within your database.