This is a simple implementation that involves adding a single column to each database table you wish to protect from lost updates. This column is generally either a NUMBER or DATE/TIMESTAMP column. It is typically maintained via a row trigger on the table, which is responsible for incrementing the NUMBER column or updating the DATE/TIMESTAMP column every time a row is modified.
Note that I said it is typically maintained via a row trigger. I did not, however, say that is the best or right way to maintain it. I would personally prefer that this column be maintained by the UPDATE statement itself, not via a trigger, because triggers that are not absolutely necessary (and this one is not) should be avoided. For background on why I avoid triggers, refer to my “Trouble with Triggers” article from Oracle Magazine, found on the Oracle Technology Network at https://blogs.oracle.com/oraclemagazine/post/the-trouble-with-triggers.
The application in which you want to implement optimistic concurrency control would need only to save the value of this additional column, not the before images of all the other columns. At the point when the update is requested, the application need only verify that the value of this column in the database matches the value that was initially read out. If the two values are the same, the row has not been updated since we read it, and our update may proceed.
Let’s look at an implementation of optimistic locking using a copy of the SCOTT.DEPT table. We could use the following Data Definition Language (DDL) to create the table:
$ sqlplus eoda/foo@PDB1
SQL> create table dept
     ( deptno    number(2),
       dname     varchar2(14),
       loc       varchar2(13),
       last_mod  timestamp with time zone
                 default systimestamp
                 not null,
       constraint dept_pk primary key(deptno)
     );
Table created.
Then we INSERT a copy of the DEPT data into this table:
SQL> insert into dept( deptno, dname, loc )
     select deptno, dname, loc
       from scott.dept;
4 rows created.
SQL> commit;
Commit complete.
That code re-creates the DEPT table, but with an additional LAST_MOD column that uses the TIMESTAMP WITH TIME ZONE datatype. We have defined this column to be NOT NULL so that it must be populated, and its default value is the current system time.
This TIMESTAMP datatype has the highest precision available in Oracle, typically going down to the microsecond (millionth of a second). For an application that involves user think time, this level of precision on the TIMESTAMP is more than sufficient, as it is highly unlikely that the process of the database retrieving a row and a human looking at it, modifying it, and issuing the update back to the database could take place within a fraction of a second. The odds of two people reading and modifying the same row in the same fraction of a second are very small indeed.
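If you would like to see this precision for yourself, you can simply select SYSTIMESTAMP using the same format mask we'll use throughout this example (the value returned will of course vary, and the number of meaningful fractional digits depends on your platform's clock):

SQL> select to_char( systimestamp, 'DD-MON-YYYY HH.MI.SSXFF AM TZR' )
       from dual;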
Next, we need a way of maintaining this value. We have two choices: either the application can maintain the LAST_MOD column by setting its value to SYSTIMESTAMP when it updates a record, or a trigger/stored procedure can maintain it. Having the application maintain LAST_MOD is definitely more performant than a trigger-based approach, since a trigger will add additional processing on top of that already done by Oracle.
However, this does mean that you are relying on every application that modifies this table to maintain LAST_MOD consistently, in every place it does so. So, if each application is responsible for maintaining this field, each must consistently verify that the LAST_MOD column was not changed and set the LAST_MOD column to the current SYSTIMESTAMP.
For example, if an application queries the row where DEPTNO=10:
SQL> variable deptno number
SQL> variable dname varchar2(14)
SQL> variable loc varchar2(13)
SQL> variable last_mod varchar2(50)
SQL> begin
        :deptno := 10;
        select dname, loc,
               to_char( last_mod, 'DD-MON-YYYY HH.MI.SSXFF AM TZR' )
          into :dname, :loc, :last_mod
          from dept
         where deptno = :deptno;
     end;
     /
We would use this next UPDATE statement to modify the information. The last line performs the very important check to make sure the timestamp has not changed, using the built-in function TO_TIMESTAMP_TZ (TZ is short for time zone) to convert the string we saved from the SELECT statement back into the proper datatype. Additionally, the SET clause advances the LAST_MOD column to the current time if the row is found and updated:
SQL> update dept
        set dname = initcap(:dname),
            last_mod = systimestamp
      where deptno = :deptno
        and last_mod = to_timestamp_tz(:last_mod, 'DD-MON-YYYY HH.MI.SSXFF AM TZR');
1 row updated.
As you can see, one row was updated, the row of interest. We updated the row by primary key (DEPTNO) and verified that the LAST_MOD column had not been modified by any other session between the time we read it first and the time we did the update. If we were to try to update that same record again, using the same logic but without retrieving the new LAST_MOD value, we would observe the following:
SQL> update dept
        set dname = upper(:dname),
            last_mod = systimestamp
      where deptno = :deptno
        and last_mod = to_timestamp_tz(:last_mod, 'DD-MON-YYYY HH.MI.SSXFF AM TZR');
0 rows updated.
Notice how 0 rows updated is reported this time because the predicate on LAST_MOD was not satisfied. While DEPTNO 10 still exists, the value at the moment we wish to update no longer matches the timestamp value at the moment we queried the row. So, the application knows that the data has been changed in the database, based on the fact that no rows were modified—and it must now figure out what it wants to do about that.
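What it does next is a design decision; one common approach, sketched here using the same bind variables as before, is simply to re-query the row to pick up the current column values and the new LAST_MOD, present them to the end user, and let the user decide whether to retry:

SQL> begin
        select dname, loc,
               to_char( last_mod, 'DD-MON-YYYY HH.MI.SSXFF AM TZR' )
          into :dname, :loc, :last_mod
          from dept
         where deptno = :deptno;
     end;
     /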
You would not want to rely on each application to maintain this field, for a number of reasons. For one, it adds code to an application, and it is code that must be repeated and correctly implemented anywhere this table is modified. In a large application, that could be in many places. Furthermore, every application developed in the future must also conform to these rules. There are many chances to miss a spot in the application code and thus not have this field properly used. So, if the application code itself isn’t responsible for maintaining this LAST_MOD field, then I believe that the application shouldn’t be responsible for checking this LAST_MOD field either (if it can do the check, it can certainly do the update).
So, in this case, I suggest encapsulating the update logic in a stored procedure and not allowing the application to update the table directly at all. If it cannot be trusted to maintain the value in this field, then it cannot be trusted to check it properly either. So, the stored procedure would take as inputs the bind variables we used in the previous updates and do exactly the same update. Upon detecting that zero rows were updated, the stored procedure could raise an exception back to the client to let the client know the update had, in effect, failed.
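A minimal sketch of such a procedure follows; the procedure name UPDATE_DEPT and the error number -20001 are mine, chosen purely for illustration:

SQL> create or replace procedure update_dept
     ( p_deptno   in dept.deptno%type,
       p_dname    in dept.dname%type,
       p_loc      in dept.loc%type,
       p_last_mod in dept.last_mod%type )
     as
     begin
        -- the same optimistic check as before: the row is updated only
        -- if LAST_MOD is unchanged since it was read out
        update dept
           set dname = p_dname,
               loc = p_loc,
               last_mod = systimestamp
         where deptno = p_deptno
           and last_mod = p_last_mod;

        -- zero rows updated means another session changed the row;
        -- raise an exception so the client knows the update failed
        if ( sql%rowcount = 0 )
        then
           raise_application_error( -20001,
              'Row was changed by another session; requery and retry' );
        end if;
     end;
     /

The client would pass in the TIMESTAMP it originally read (converting the saved string via TO_TIMESTAMP_TZ, as before) and would catch the exception to learn that its update had failed.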
An alternate implementation uses a trigger to maintain this LAST_MOD field, but for something as simple as this, my recommendation is to avoid the trigger and let the DML take care of it. Triggers introduce a measurable amount of overhead, and in this case they would be unnecessary. Furthermore, the trigger would not be able to confirm that the row has not been modified (it would only be able to supply the value for LAST_MOD, not check it during the update); hence, the application has to be made painfully aware of this column and how to properly use it. So the trigger is not by itself sufficient.
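For completeness, the trigger itself would be just a few lines; a sketch follows, and note again that it can only supply the value, not perform the check:

SQL> create or replace trigger dept_last_mod
     before update on dept
     for each row
     begin
        -- supply the new LAST_MOD value; the optimistic check must
        -- still be performed by the UPDATE's WHERE clause
        :new.last_mod := systimestamp;
     end;
     /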