Data integrity and accountability are critical aspects of modern database systems, especially for applications like Odoo that manage complex business workflows. An undo-redo mechanism provides a structured way to track, revert, and reapply changes, ensuring data consistency and facilitating error recovery. This guide delves into the implementation of a robust undo-redo system in PostgreSQL tailored for Odoo. By leveraging PostgreSQL’s advanced features, such as triggers and JSONB data types, you can seamlessly capture updates and manage undo-redo operations for any table in your database.
1. Capturing Updates
A trigger function is invoked whenever a row in any table is updated.
Actions Performed by the Trigger:
1. Extracts the state of the row before the update.
2. Captures the data changed during the update.
3. Stores the captured data, along with metadata such as the table name and user ID, in the undo_redo table.
2. Undo Operation
Revert the update and restore the original state of the data.
1. Retrieve the old state (updated_data) from the undo_redo table.
2. Apply the old state to the corresponding row in the original table.
3. Redo Operation
Reapply the update that was undone.
1. Reverse the undo operation.
2. Reapply the changes stored in the undo_redo table to the original row.
Creating the backup table
We create a backup table, named undo_redo, to store the pre-update state of rows from every other table in the database.
The undo_redo table acts as a central repository for storing the details of updated rows.
CREATE TABLE undo_redo (
id SERIAL PRIMARY KEY,
table_name VARCHAR(255),
record_id INT,
updated_data JSONB,
mode VARCHAR(4) CHECK (mode IN ('undo', 'redo')),
user_id INT REFERENCES res_users(id), -- foreign key reference
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
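For illustration, once the logging trigger described below is in place, the change history can be inspected directly. The sample row in the comment is an assumption: it imagines a phone-number update on a res_partner record with id 42.

```sql
-- Inspect the most recent change logs (illustrative query).
SELECT table_name, record_id, updated_data, mode
FROM undo_redo
ORDER BY created_at DESC
LIMIT 5;
-- e.g. res_partner | 42 | {"phone": "555-0100"} | undo
```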
log_update_data()
Create a function that compares each column's value before and after an update, stores the changed columns and their previous values as key-value pairs in a JSONB object (changed_fields), and inserts the result into the backup table undo_redo.
The function log_update_data() is responsible for:
1. Comparing the old and new row values for each updated column.
2. Storing the differences (changed columns) in a JSONB format.
3. Inserting the change log into the undo_redo table, which holds the backup of the previous state of the updated rows.
CREATE OR REPLACE FUNCTION log_update_data()
RETURNS TRIGGER AS $$
DECLARE
    changed_fields JSONB := '{}'::JSONB;
    old_row JSONB;
    new_row JSONB;
    column_name TEXT;
BEGIN
    old_row := to_jsonb(OLD);
    new_row := to_jsonb(NEW);
    -- Collect the previous value of every column that changed.
    FOR column_name IN SELECT jsonb_object_keys(old_row)
    LOOP
        IF old_row ->> column_name IS DISTINCT FROM new_row ->> column_name THEN
            changed_fields := jsonb_set(
                changed_fields,
                ARRAY[column_name],
                old_row -> column_name
            );
        END IF;
    END LOOP;
    -- Log only when at least one column actually changed.
    IF changed_fields != '{}'::JSONB THEN
        BEGIN
            INSERT INTO undo_redo (
                user_id,
                table_name,
                record_id,
                updated_data,
                mode
            )
            VALUES (
                OLD.write_uid,
                TG_TABLE_NAME,
                OLD.id,
                changed_fields,
                'undo'
            );
        EXCEPTION WHEN OTHERS THEN
            -- Never block the original update if logging fails.
            RETURN NEW;
        END;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
* changed_fields JSONB:
A JSONB object initialized as an empty JSON ('{}'::JSONB). It stores the previous values of the columns that were updated.
* old_row JSONB:
Holds the entire row's data before the update. This allows tracking the previous state of the data.
* new_row JSONB:
Stores the entire row's data after the update, capturing the current state.
* column_name TEXT:
Used in a FOR loop to iterate over the column names extracted from the row.
Key Functions Used:
to_jsonb():
Converts data (like rows, records, or structures) into the JSONB format. JSONB is a binary format of JSON, optimized for efficient storage, indexing, and manipulation.
Example:
* old_row = to_jsonb(OLD): Converts the row before the update into JSONB format and assigns it to old_row.
* new_row = to_jsonb(NEW): Converts the row after the update into JSONB format and assigns it to new_row.
jsonb_object_keys():
Extracts the keys (column names) from a JSONB object, enabling iteration over all columns in the row.
->> operator:
Retrieves the value associated with a key as text. This is used to compare column values between old_row and new_row.
IS DISTINCT FROM:
Compares two values and identifies if they are different, handling NULL values properly.
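The NULL handling is the reason IS DISTINCT FROM is used here instead of a plain comparison, which yields NULL (neither true nor false) whenever one side is NULL:

```sql
SELECT NULL = NULL;                  -- NULL, not true
SELECT NULL IS DISTINCT FROM NULL;   -- false: treated as "equal"
SELECT 'x' IS DISTINCT FROM NULL;    -- true: a real change to/from NULL
SELECT 'x' IS DISTINCT FROM 'y';     -- true: ordinary inequality
```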
Workflow
* Convert the rows before (OLD) and after (NEW) the update into JSONB format
* Iterate over the column names extracted from old_row
* Compare the column values from old_row and new_row
* If a difference is detected, store the previous value in changed_fields
Here, jsonb_set() updates the changed_fields object by inserting the previous value of the column. The process systematically compares old and new data for each column in a row, identifies changes, and logs the previous values in JSONB format. This data is then stored in the undo_redo table for recovery or rollback purposes. Functions like to_jsonb(), jsonb_object_keys(), and jsonb_set() make the process efficient and maintainable, leveraging PostgreSQL's robust JSONB capabilities.
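As a standalone illustration of the jsonb_set() call used in the function (the key and value are made up):

```sql
-- Starting from an empty object, record that "phone" previously held '123'.
SELECT jsonb_set('{}'::jsonb, ARRAY['phone'], to_jsonb('123'::text));
-- → {"phone": "123"}
```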
There are two situations to handle:
* Set the function for current tables in the database for storing the pre-updated rows in the backup table.
* Set the function for newly created tables for storing the pre-updated rows into a backup table.
Set the function for current tables in the database for storing the pre-updated rows in the backup table
DO $$
DECLARE
rec RECORD;
BEGIN
FOR rec IN
SELECT n.nspname, c.relname
FROM pg_class c
JOIN pg_namespace n ON c.relnamespace = n.oid
WHERE c.relkind = 'r'
AND n.nspname NOT IN ('pg_catalog', 'information_schema') AND c.relname != 'undo_redo'
LOOP
EXECUTE format('
CREATE TRIGGER capture_delete_dynamic
AFTER UPDATE ON %I.%I
FOR EACH ROW
EXECUTE FUNCTION log_update_data();', rec.nspname, rec.relname);
EXECUTE format('
CREATE TRIGGER log_delete_trigger
AFTER DELETE ON %I.%I
FOR EACH ROW
EXECUTE FUNCTION log_delete_data();',
rec.nspname, rec.relname);
END LOOP;
END $$;
NOTE: pg_class system catalog in PostgreSQL contains metadata about all relations in the database.
1. Identifies Tables:
* Selects all regular tables (relkind = 'r') from schemas not part of pg_catalog or information_schema.
2. Creates Triggers:
* For each table, it:
* Adds an AFTER UPDATE trigger (capture_delete_dynamic) to call the log_update_data() function, which logs updated row data.
* Adds an AFTER DELETE trigger (log_delete_trigger) to call the log_delete_data() function, which logs deleted row data.
Here we loop through all existing tables in every schema except the system schemas pg_catalog and information_schema, attaching the capture_delete_dynamic and log_delete_trigger triggers to each table.
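After running the DO block, the installed triggers can be verified through information_schema:

```sql
-- One row per (table, event); both triggers should appear for every
-- user table except undo_redo.
SELECT event_object_table, trigger_name, event_manipulation
FROM information_schema.triggers
WHERE trigger_name IN ('capture_delete_dynamic', 'log_delete_trigger')
ORDER BY event_object_table, trigger_name;
```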
Set the function for newly created tables for storing the pre-updated rows in backup table
CREATE OR REPLACE FUNCTION add_update_trigger_to_new_tables()
RETURNS VOID AS $$
DECLARE
rec RECORD;
BEGIN
FOR rec IN
SELECT n.nspname, c.relname
FROM pg_class c
JOIN pg_namespace n ON c.relnamespace = n.oid
WHERE c.relkind = 'r'
AND n.nspname NOT IN ('pg_catalog', 'information_schema')
LOOP
IF NOT EXISTS (
SELECT 1
FROM pg_trigger
WHERE tgrelid = (SELECT oid FROM pg_class WHERE relname = rec.relname AND relnamespace = (SELECT oid FROM pg_namespace WHERE nspname = rec.nspname))
AND tgname = 'capture_delete_dynamic'
) THEN
IF rec.relname != 'undo_redo' THEN
EXECUTE format('
CREATE TRIGGER capture_delete_dynamic
AFTER UPDATE ON %I.%I
FOR EACH ROW
EXECUTE FUNCTION log_update_data();', rec.nspname, rec.relname);
EXECUTE format('
CREATE TRIGGER log_delete_trigger
AFTER DELETE ON %I.%I
FOR EACH ROW
EXECUTE FUNCTION log_delete_data();',
rec.nspname, rec.relname);
END IF;
END IF;
END LOOP;
END;
$$ LANGUAGE plpgsql;
This PostgreSQL function, add_update_trigger_to_new_tables, dynamically adds triggers to user-defined tables if they do not already exist. Here's how it works:
1. Identifies Tables:
* Scans all regular tables (relkind = 'r') in non-system schemas (pg_catalog, information_schema are excluded).
2. Checks for Existing Triggers:
* For each table, it verifies whether the capture_delete_dynamic trigger is already present.
3. Creates Triggers:
* If the trigger does not exist and the table is not undo_redo:
a. Adds an AFTER UPDATE trigger (capture_delete_dynamic) to invoke the log_update_data() function for logging updated rows.
b. Adds an AFTER DELETE trigger (log_delete_trigger) to invoke the log_delete_data() function for logging deleted rows.
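Because the function only adds triggers that are missing, it is idempotent and safe to run manually at any time:

```sql
-- Tables that already carry capture_delete_dynamic are skipped.
SELECT add_update_trigger_to_new_tables();
```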
log_delete_data()
Designed to maintain data integrity in the undo_redo table, this function deletes backup entries corresponding to a row that has been removed from the target table. By doing so, it ensures that redundant entries in the undo_redo table are cleaned up efficiently.
CREATE OR REPLACE FUNCTION log_delete_data()
RETURNS TRIGGER AS $$
BEGIN
DELETE FROM undo_redo
WHERE record_id = OLD.id
AND table_name = TG_TABLE_NAME;
RETURN OLD;
END;
$$ LANGUAGE plpgsql;
Here we create the function log_delete_data(). When a row is deleted from any table other than the backup table undo_redo, this function removes any entries in undo_redo that reference the deleted row's id and table, keeping the backup table free of stale entries.
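A quick sketch of the cleanup behavior (res_partner is a standard Odoo table, but the record id is hypothetical):

```sql
-- Deleting a base-table row fires log_delete_trigger, which purges
-- its history from undo_redo.
DELETE FROM res_partner WHERE id = 42;

SELECT count(*)
FROM undo_redo
WHERE table_name = 'res_partner'
  AND record_id = 42;
-- returns 0: the stale backup entries were removed
```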
CREATE OR REPLACE FUNCTION trigger_on_table_creation()
RETURNS EVENT_TRIGGER AS $$
BEGIN
PERFORM add_update_trigger_to_new_tables();
END;
$$ LANGUAGE plpgsql;
The function trigger_on_table_creation is an event trigger designed to automatically invoke the add_update_trigger_to_new_tables function whenever a table is created in the database.
CREATE EVENT TRIGGER auto_add_delete_triggers
ON ddl_command_end
WHEN TAG IN ('CREATE TABLE')
EXECUTE FUNCTION trigger_on_table_creation();
* This event trigger listens for ddl_command_end events (DDL statements).
* Specifically, it reacts only to CREATE TABLE operations, as specified by WHEN TAG IN ('CREATE TABLE').
* When a CREATE TABLE command is detected, the event trigger executes the trigger_on_table_creation function.
* The trigger_on_table_creation function calls add_update_trigger_to_new_tables, which checks for the newly created table and adds the required triggers to it if they are not already present.
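To verify the event trigger end to end, one can create a throwaway table (demo_notes is an assumed name) and list its row-level triggers:

```sql
CREATE TABLE demo_notes (id serial PRIMARY KEY, body text);

-- List the non-internal triggers now attached to the new table.
SELECT tgname
FROM pg_trigger
WHERE tgrelid = 'demo_notes'::regclass
  AND NOT tgisinternal;
-- expected: capture_delete_dynamic and log_delete_trigger
```

Note that while the triggers attach to any table, log_update_data() itself assumes the table has id and write_uid columns, as Odoo tables do.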
handle_log_reinsert()
This function implements the core undo-redo logic. It swaps column values between the target table and the undo_redo table while toggling the mode between undo and redo. This ensures a seamless rollback or reapplication of changes, enabling precise control over the data recovery process.
CREATE OR REPLACE FUNCTION handle_log_reinsert()
RETURNS TRIGGER AS $$
DECLARE
    target_table TEXT;
    column_list TEXT;
    value_list TEXT;
    column_count INT;
    new_mode TEXT;
    latest_id INT;
BEGIN
    IF TG_TABLE_NAME = 'undo_redo' THEN
        target_table := OLD.table_name;
        -- Comma-separated list of the columns stored in updated_data.
        column_list := array_to_string(
            array(SELECT jsonb_object_keys(OLD.updated_data)),
            ', '
        );
        -- Matching list of the stored (previous) values, safely quoted.
        value_list := array_to_string(
            array(
                SELECT format('%L', OLD.updated_data->>key)
                FROM jsonb_object_keys(OLD.updated_data) AS key
            ),
            ', '
        );
        new_mode := CASE OLD.mode
            WHEN 'undo' THEN 'redo'
            ELSE 'undo'
        END;
        column_count := array_length(string_to_array(column_list, ','), 1);
        -- Multi-column updates use the row syntax SET (a, b) = (x, y).
        IF column_count > 1 THEN
            EXECUTE format(
                'UPDATE %I SET (%s) = (%s) WHERE id = %L',
                target_table,
                column_list,
                value_list,
                OLD.record_id
            );
        ELSIF column_count = 1 THEN
            EXECUTE format(
                'UPDATE %I SET %s = %s WHERE id = %L',
                target_table,
                column_list,
                value_list,
                OLD.record_id
            );
        END IF;
        -- The UPDATE above fires log_update_data(), which logs the values
        -- just replaced as a fresh 'undo' row; flip its mode accordingly.
        SELECT id INTO latest_id
        FROM undo_redo
        WHERE table_name = OLD.table_name
          AND record_id = OLD.record_id
        ORDER BY id DESC
        LIMIT 1;
        UPDATE undo_redo
        SET mode = new_mode
        WHERE id = latest_id;
    END IF;
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;
This function manages the undo-redo operations for a backup table, ensuring that updates to the base table and the backup table are synchronized. Here's how it works:
Undo-Redo Mechanism:
* When a delete operation is performed on the backup table (undo_redo):
1. The updated column values from the base table and the backup table are swapped.
2. The mode column in the backup table toggles between undo and redo.
For example:
1. If the mode is undo, it changes to redo after the delete operation.
2. Similarly, if the mode is redo, it changes back to undo.
Key Variables
* target_table (TEXT): Stores the name of the base table whose row is being restored, taken from the table_name column of undo_redo.
* column_list (TEXT): Holds the names of the updated columns from the updated_data field.
1. updated_data is a JSONB type field that contains the column names and their values before the update in the base table.
* value_list (TEXT): Contains the stored previous values of those columns, extracted from the updated_data field.
* column_count (INT): Stores the number of updated columns in column_list.
* new_mode (TEXT): Toggles the mode column value from undo to redo (or vice versa).
* latest_id (INT): Stores the latest row ID of the affected record.
Workflow
1. Table Name Assignment:
* The name of the base table (target_table) is read from the table_name field of the deleted undo_redo row via the OLD keyword.
2. Extract Column Data:
* Extracts the list of updated column names (column_list) and their corresponding values (value_list) from the updated_data field.
* Computes the number of updated columns and stores it in column_count.
3. Toggle the Mode Column:
* Determines the new value for the mode column:
* If mode is undo, it is changed to redo.
* If mode is redo, it is changed back to undo.
4. Perform Update Query:
* Executes an UPDATE query to synchronize the base table and the backup table, swapping the updated column values.
* The update logic depends on the value of column_count, iterating through all updated columns.
5. Outcome:
* When an update occurs on the base table, the previous column values are saved in the backup table.
* When a delete operation is performed on the undo_redo table:
* The updated column values are swapped between the base and backup tables.
* The mode value is toggled, enabling smooth undo-redo functionality.
CREATE TRIGGER after_delete_trigger
AFTER DELETE ON undo_redo
FOR EACH ROW
EXECUTE FUNCTION handle_log_reinsert();
This trigger, after_delete_trigger, is designed to seamlessly handle undo-redo operations for the undo_redo table. It is invoked whenever a row is deleted from the undo_redo table, enabling a dynamic and efficient mechanism to synchronize changes between the target table and the undo_redo table.
Undo and Redo Logic
Undo
The undo operation reverts a change made to the target table by restoring the previous state of the affected row. It retrieves the updated_data from the undo_redo table and applies it back to the corresponding row in the target table. This ensures that the data is reverted to its original state prior to the update.
Redo
The redo operation reverses an undo action by reapplying the changes stored in the undo_redo table to the target table. It effectively restores the updated state, ensuring that any reverted modifications can be reapplied seamlessly when required.
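Putting it together, an undo (or a redo, if the row's mode is already 'redo') is triggered simply by deleting the relevant log row; the table name and record id below are hypothetical:

```sql
-- Undo the most recent logged change to res_partner row 42.
-- The after_delete_trigger runs handle_log_reinsert(), which restores the
-- stored values and writes a fresh log row with the opposite mode.
DELETE FROM undo_redo
WHERE id = (
    SELECT id
    FROM undo_redo
    WHERE table_name = 'res_partner'
      AND record_id = 42
    ORDER BY id DESC
    LIMIT 1
);
```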
Implementing a robust undo-redo mechanism in PostgreSQL for Odoo enhances data integrity, accountability, and error recovery. By leveraging triggers, event triggers, and advanced JSONB functionalities, this system ensures seamless tracking and management of changes across database tables. The modular approach of capturing updates, reverting changes (undo), and reapplying updates (redo) enables efficient handling of complex workflows while maintaining auditability. With the ability to dynamically apply triggers to existing and newly created tables, this solution adapts to evolving database schemas without additional overhead.
This framework not only safeguards business-critical data but also empowers users and administrators with a powerful tool to navigate data modifications confidently, ensuring consistency and reliability in Odoo's operational workflows.