We have a stored procedure that generates primary keys by base36-incrementing a field/row. This normally works fine, but under high load we sometimes get duplicate primary keys.
This is related to the fact that we don't lock the counter field/row, because we sometimes have long-running transactions that would otherwise block the whole system.
Our code currently looks like this:
    FUNCTION "DBA"."getNewID"(in tableName char(20))
    BEGIN
        SELECT value, IncrementProcedure
          INTO oValue, incProc
          FROM gd
         WHERE keyname = tableName
           AND RemoteName = current_publisher;

        -- ..... calculate new id based on value ......

        UPDATE gd
           SET value = rNewID
         WHERE keyname = tableName
           AND RemoteName = current_publisher;

        RETURN rNewID;
    END;
This code runs in the transaction context of the application, and sometimes we get a race condition when getNewID is executed concurrently. Is there a way to prevent this from inside SQL Anywhere? In Java we would use synchronized on the function...
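One way to serialize access inside the function itself is to acquire an intent lock on the counter row before reading it. A minimal sketch, assuming SQL Anywhere's SELECT ... FOR UPDATE BY LOCK syntax and the gd table from the question:

    -- Read the counter row and place a long-term intent lock on it,
    -- so a second concurrent caller blocks here until this
    -- transaction commits or rolls back.
    SELECT value, IncrementProcedure
      INTO oValue, incProc
      FROM gd
     WHERE keyname = tableName
       AND RemoteName = current_publisher
       FOR UPDATE BY LOCK;

Because the lock is held until the end of the enclosing transaction, long-running transactions that call getNewID will still hold up other callers for their full duration.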
However, while this will prevent duplicates (intent locks block other intent locks, but NOT read locks), it still has the behaviour that all insertions are effectively serialized via the updates to the gd table. This can be prevented, as per Volker's comment above, by using a SEQUENCE or an AUTOINCREMENT primary key. Alternatively, one could develop a key pool and manage the keys in the pool using a second connection (one implementation would be to use an EVENT) and a row-level update trigger.
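A minimal sketch of the SEQUENCE approach, assuming SQL Anywhere 12 or later (where CREATE SEQUENCE is available); the sequence name pkSeq is illustrative:

    CREATE SEQUENCE pkSeq START WITH 1 INCREMENT BY 1;

    -- Fetch the next key without touching the gd counter table.
    -- Sequence increments happen outside transactional rollback,
    -- so concurrent callers neither block each other nor see
    -- duplicate values.
    SELECT pkSeq.NEXTVAL INTO rNewID FROM dummy;

The sequence yields plain integers, so the existing base36 conversion step from getNewID could still be applied to the value before it is used as a key.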
answered 20 Dec '11, 13:53