We have complex single transactions in our PowerBuilder client-server application that can involve 20 or more subordinate tables in a complex hierarchy. We would like to control access to the entire set via some lock-management scheme that allows inquiry and reporting access but prevents more than one authorized user from updating or inserting records in the set.

We would also like to be able to see all such locks system-wide (across tables) and release them in case of a system restart or user lockup.

Right now we are looking at implementing our own separate record-locking table which records the lock by table and key value, along with the userid, time locked, application, and connection.
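
A minimal sketch of what such a lock table might look like (all names here are illustrative, not an existing schema):

    -- Hypothetical lock table: one row per held unit-of-work lock.
    -- The UNIQUE constraint makes lock acquisition atomic: a second
    -- INSERT for the same (table_name, key_value) pair fails outright.
    CREATE TABLE uow_lock (
        lock_id     INTEGER      NOT NULL DEFAULT AUTOINCREMENT PRIMARY KEY,
        table_name  VARCHAR(128) NOT NULL,
        key_value   VARCHAR(64)  NOT NULL,
        userid      VARCHAR(128) NOT NULL,
        time_locked TIMESTAMP    NOT NULL DEFAULT CURRENT TIMESTAMP,
        application VARCHAR(128) NOT NULL,
        conn_id     INTEGER      NOT NULL,
        UNIQUE ( table_name, key_value )
    );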

Are there already well-worked-out schemes for such management, or are there built-in features of SQL Anywhere which could facilitate this?

asked 27 Apr '12, 16:07 by Glenn Barber

How often do you expect collisions between overlapping units-of-work initiated by different users? Is it really "more than once in a blue moon"?... do you really have different users regularly pounding away at the same data at the same time, and if so, what does that say about how the workflow is designed?

In my experience as an application architect, optimistic concurrency control is almost always adequate, and designing "transactions" that span multiple commits is just asking for trouble (bugs AND performance problems).
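
For context, under optimistic concurrency control no locks are held between the read and the write; the UPDATE itself verifies that the row is unchanged. A generic sketch (the version column and host-variable names are hypothetical):

    -- The UPDATE succeeds only if the row still carries the version the
    -- user originally read; zero rows affected means a concurrent update
    -- happened, and the application must re-read and retry.
    UPDATE contract
       SET terms       = :new_terms,
           row_version = row_version + 1
     WHERE contract_id = :id
       AND row_version = :version_read_at_fetch;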

...of course, it is a fascinating design challenge, no end of intricacies to deal with; just make sure it's really necessary.

(30 Apr '12, 15:52) Breck Carter

We had gone for several years happily with optimistic concurrency control and assumed we could continue that way, but once the volume of activity picked up (50+ users) we found unexpected conflicts beginning to occur. Our Licensing system is the hub of a lot of activity in a Studio, and multiple departments were starting to get in each other's way. We have started to partition the application along departmental lines - but there's always the issue that we need to control overlap by checking the set out for change by a single user. For example - we don't want Legal revising the terms of an agreement while Finance is invoicing or changing the designated bank account or billing address.

(30 Apr '12, 16:07) Glenn Barber

Glenn - do you also have "batch" processing where you run long-running reports over a significant portion of the database? If so, do you have a "batch window" or do you try to do this while concurrent updates can potentially occur?

(01 May '12, 09:52) Glenn Paulley

We do have the situation of long-running reports - however most of these run on the kind of "Posted" data that's less likely to change. We don't attempt to window this. We do set a status flag on the core Editable Datasets to advise of possible "dirty" situations and raise that as an alert in the reports. For some reports we warn of Work in Progress before the report query is launched.

(01 May '12, 13:46) Glenn Barber

I would argue that's another instance requiring concurrency control management, then. We had identical issues at Great-West, and went to a range-locking scheme to support both batch updates (valuation) and reporting while permitting concurrent policy administration on other objects outside the affected range. By going to ranges, the technique permitted parallelization of the reporting processes, which could be scaled up (or down) on the fly to correspond to system load or to the need to complete the cycle quickly in case of a prior system outage.
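
A hedged sketch of how such range locks might be represented as data (table and column names are assumptions), using the standard interval-overlap test to detect conflicts:

    -- Hypothetical range-lock table: a batch or reporting process locks a
    -- whole key range within one dimension (e.g. 'Contract').
    CREATE TABLE range_lock (
        dimension VARCHAR(32) NOT NULL,
        low_key   VARCHAR(64) NOT NULL,
        high_key  VARCHAR(64) NOT NULL,
        conn_id   INTEGER     NOT NULL
    );

    -- Ranges [low1, high1] and [low2, high2] overlap iff
    -- low1 <= high2 AND low2 <= high1; a nonzero count means conflict.
    SELECT COUNT(*)
      FROM range_lock
     WHERE dimension = 'Contract'
       AND low_key  <= :requested_high
       AND high_key >= :requested_low;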

(01 May '12, 14:28) Glenn Paulley

That's interesting, and there may be a strong parallel for that feature in our system, where Title, Contract and Customer are broad dimensional ranges that affect reporting and are the usual focuses of administration.

(01 May '12, 15:03) Glenn Barber

Expanding on this - there might be a case for different kinds of locks in the lock table: ones that prevent concurrent update, and others that flag the entire range of related transactions as a work-in-progress.

(01 May '12, 15:06) Glenn Barber

Possibly - though as I said, the more complex the locking scheme, the greater the potential for errors in the implementation. You can catch potential mistakes with the trigger approach, as long as the update frequency is low enough that the trigger invocation overhead is tolerable.

(01 May '12, 15:47) Glenn Paulley

Thanks Glenn

We will be implementing this carefully and slowly - with a configuration control that allows us to turn it off entirely if we run into unexpected logjams. I will update here after we have more experience.

(01 May '12, 15:52) Glenn Barber

Cool! I'll be interested to hear how you proceed.

One thing that might help you for specific concurrency control queries over the lock table is the use of the READPAST table hint (which only applies to connections executing at isolation level 1). This ensures that the SQL SELECT statement won't block on rows in the lock table that are already write-locked by other connections.
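
For example, against the hypothetical uow_lock table sketched in the question:

    -- At isolation level 1, the READPAST hint skips rows that are
    -- write-locked by other connections instead of blocking on them.
    SET TEMPORARY OPTION isolation_level = 1;

    SELECT table_name, key_value, userid, time_locked
      FROM uow_lock WITH ( READPAST );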

(01 May '12, 17:13) Glenn Paulley

For the trigger: let's assume that your lock table will contain a set of rows, with each row constituting a lock. In that row you'll store the connection id, user name, timestamp, and possibly some other metadata. When the lock is acquired (the row is updated for that user/connection), a COMMIT makes the lock durable.

Then, whenever an INSERT, UPDATE, or DELETE (IUD) is performed, an IUD trigger queries the lock table, ensuring the connection ID of the current connection is identical to that held in the lock table. If not, the trigger raises an exception, as the update transaction doesn't hold the required lock to perform the update. You'll want to give yourself an "out" by having the trigger disregard lock checking for a sysadmin connection, which will permit you to modify the database as necessary to correct problems without issuing a bunch of DDL to disable or modify the trigger.
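
A minimal sketch of such a trigger, reusing the hypothetical uow_lock table from the question (the protected table, its key column, and the DBA escape hatch are all assumptions; INSERT and DELETE triggers would follow the same pattern):

    CREATE TRIGGER trg_contract_uow_check
    BEFORE UPDATE ON contract
    REFERENCING OLD AS old_row
    FOR EACH ROW
    BEGIN
        -- Sysadmin "out": DBA connections bypass the lock check entirely.
        IF CURRENT USER <> 'DBA'
           AND NOT EXISTS (
               SELECT 1
                 FROM uow_lock
                WHERE table_name = 'contract'
                  AND key_value  = CAST( old_row.contract_id AS VARCHAR(64) )
                  AND conn_id    = CONNECTION_PROPERTY( 'Number' ) )
        THEN
            -- User-defined error: reject the update outright.
            RAISERROR 99001 'No unit-of-work lock held for this contract';
        END IF;
    END;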

answered 29 Apr '12, 12:07 by Glenn Paulley

What we are currently experimenting with is somewhat application-driven: ensuring that the user cannot get update access without successfully inserting a lock record for the table and key (enforced via an alternate-key uniqueness constraint). We commit the lock record and test for success before the user's application can access the entire transaction. We also create a lock on insertion of new transactions (while the user remains in the edit process), and remove the lock on deletion (which could be managed by a trigger) and when the user completes their work. We have system events that clean up a connection's locks on disconnect, and another that truncates the lock table on startup. Since the implementation allows an administrator to query the locks, the administrator can also override them. A nice feature is that a user encountering an unexpected conflict is advised who is working on the transaction set they have requested.
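
In outline, the acquire step might look like the following sketch, against the hypothetical uow_lock table above; the duplicate-key failure on the alternate key is what signals a conflict:

    -- Sets got_lock to 1 if the lock was acquired, 0 if already held.
    CREATE PROCEDURE acquire_uow_lock(
        IN  arg_table VARCHAR(128),
        IN  arg_key   VARCHAR(64),
        IN  arg_app   VARCHAR(128),
        OUT got_lock  INTEGER )
    BEGIN
        INSERT INTO uow_lock( table_name, key_value, userid, application, conn_id )
        VALUES( arg_table, arg_key, CURRENT USER, arg_app,
                CONNECTION_PROPERTY( 'Number' ) );
        COMMIT;              -- make the lock durable and visible to others
        SET got_lock = 1;
    EXCEPTION
        WHEN OTHERS THEN     -- unique-constraint violation: lock already held
            SET got_lock = 0;
    END;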

I can see how the implementation of the IUD trigger on any of the tables in the transaction set might alert us to uncontrolled access via rogue processes; however, coordinating the trigger on the table and row involved may require a rather more complicated query to determine whether the row is in the set controlled by the lock.

(30 Apr '12, 11:32) Glenn Barber

I would agree with you, Glenn, that the process could get complex. In particular, the more fine-grained the lock implementation (or the intent for it to be fine-grained), the greater the possibilities of (1) a transaction blocking on a lock query or update for a different lock, which could be mitigated through the blocking_timeout option, and (2) deadlock caused by concurrent update transactions attempting to acquire the same locks in different orders.

(30 Apr '12, 13:29) Glenn Paulley

Answers can vary depending on precisely what you want the semantics to be.

Your desire for only one updater, thus serializing update transactions, can be accomplished in a couple of ways. You could create a new table expressly for the purpose of controlling the serialization of update transactions, and use the LOCK TABLE statement to lock that table (all other attempts will fail until the owner issues a COMMIT or ROLLBACK). Or you could go finer-grained and do so at a row level using UPDATE statements, again causing other updaters to block. Keep in mind that in SQL Anywhere each blocked transaction will tie up a worker for the duration.
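
Concretely, the two options might look like this (the update_token control table is hypothetical):

    -- Coarse-grained: serialize all update transactions on one control
    -- table; the exclusive lock is released at COMMIT or ROLLBACK.
    LOCK TABLE update_token IN EXCLUSIVE MODE;

    -- Finer-grained: write-lock a single control row instead; other
    -- updaters of that row block until this transaction ends.
    UPDATE update_token
       SET last_touched = CURRENT TIMESTAMP
     WHERE token_id = 1;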

With that - what semantics do you want read transactions to have? Do you want them to have a consistent view of the entire hierarchy? Do you want to permit dirty reads? What kind of blocking are you comfortable with?

answered 27 Apr '12, 16:34 by Glenn Paulley

Thank you Glenn

As the owner might issue one or several commits while working on the set of records, and others may want unrestricted access to other records in the same table, it doesn't look like LOCK TABLE is a candidate.

Using a row-level lock on the top record in each set might prevent others from maintaining the set of records, if we require that they acquire a lock before any maintenance activity. However, this doesn't in any way protect the entire transaction set, nor prevent insertions into the set from other applications, unless they too acquire the row lock.

If I do use the row level locking feature, is there an easy way for other applications to know what locks are being held, by whom and for how long?

(27 Apr '12, 18:05) Glenn Barber

You wrote:

Using a row-level lock on the top record in each set might prevent others from maintaining the set of records, if we require that they acquire a lock before any maintenance activity. However, this doesn't in any way protect the entire transaction set, nor prevent insertions into the set from other applications, unless they too acquire the row lock.

Exactly; if you want to implement your own application, unit-of-work locking scheme that spans COMMITs, then all of your applications have to "play ball" and work in concert.

If you want to rely on the server's concurrency control model, then the limitation you have to deal with is that the server releases all long-term locks on either COMMIT or ROLLBACK; so each system transaction is atomic, but the server doesn't help you with business transactions that span system transactions.

If the intent is to ensure that your application sees a "consistent" view of the data before it starts updating anything, then snapshot isolation might be worth investigating.
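
For reference, enabling snapshot isolation looks roughly like this:

    -- Snapshot isolation must be enabled database-wide before use.
    SET OPTION PUBLIC.allow_snapshot_isolation = 'On';

    -- A reader then sees a consistent view of the data as of the start of
    -- its snapshot, without blocking or being blocked by updaters.
    SET TEMPORARY OPTION isolation_level = 'snapshot';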

answered 27 Apr '12, 19:13 by Glenn Paulley

I've had to deal with the same scenario in a PB app. I don't have the problem of 20 tables (4 tables with up to 1200 rows per table). My solution was to do a single commit at the end of all of the updates.

answered 28 Apr '12, 13:13 by Tom Mangano

We also implemented a unit-of-work update with a commit at the end - however the user can decide to Save (and therefore commit) without leaving the maintenance function of that application. At one point we thought we would require the user to reinitiate the edit (and reacquire a lock), but that proved to be unpopular behavior.

Also, being an MDI application, the user could open another window and issue a commit on the same connection without leaving the first window.

There are also issues in complex applications where there must be controlled behavior in related apps. For example you might want to prevent a Contract from being edited at the same time another user is generating invoices for that contract.

No matter how we slice it, there always seem to be unworkable limitations when depending on record locking to control access in unit-of-work situations - especially because of the inability to limit the impact of a COMMIT.

We are revisiting the old physical lock scheme for these complicated unit-of-work cases - where the existence of a row in a specialized table (identified by user, application, table, key, and connection id) represents the hard lock on the unit of work. This can be messy because of hotspots in the locking table; then there is the cleanup that needs to be done when connections drop or on database initialization; and finally, some mechanism is needed for managing contention.

While we are working this out we were interested in what other approaches others had taken to deal with locking schemes in complex unit of work situations.

(28 Apr '12, 13:49) Glenn Barber

Right. This is a fairly common requirement of complex transactions, and I'm certainly familiar with them from my background in both individual and group insurance (seems like a lifetime ago now).

One technique you can use to ensure that other applications "play ball" with your UOW locking scheme is to create INSERT-UPDATE-DELETE statement triggers that verify the existence of a row in the "lock" table. You'll have to code that check carefully to ensure the trigger doesn't block on any other (valid) connection. You can ensure that only the appropriate UOW logic manages the lock table by using a stored procedure that executes as a different user - the only user with permission to modify the table.
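
A sketch of that permission arrangement (the lock_owner user and all object names are assumptions): the lock table is owned by a dedicated user, nobody else gets direct DML rights on it, and applications get EXECUTE on procedures that run with the owner's permissions.

    -- Create a dedicated owner for the lock machinery.
    GRANT CONNECT TO lock_owner IDENTIFIED BY 'a_strong_password';

    -- The uow_lock table and the acquire/release procedures are then
    -- created as lock_owner; no DML on the table is granted to anyone.

    -- Applications may call the procedure but cannot touch the table;
    -- the procedure executes with its owner's (lock_owner's) permissions.
    GRANT EXECUTE ON lock_owner.acquire_uow_lock TO PUBLIC;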

You still have to conjure a way to get yourself out of difficulty if any update transaction "hangs" or waits, because you'll need to commit the update to the lock table independently of other UOW sub-transactions. You could implement that using an event that looks for any updating business transaction taking a significant amount of time; if that time is exceeded, kill the connections for that user and remove the lock.
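
That watchdog might be sketched as a scheduled event; the 30-minute threshold and all names are assumptions:

    -- Every 5 minutes, drop connections whose unit-of-work locks are more
    -- than 30 minutes old, then free the stale locks themselves.
    CREATE EVENT stale_uow_lock_reaper
    SCHEDULE sched_reaper EVERY 5 MINUTES
    HANDLER
    BEGIN
        FOR each_lock AS lock_cur CURSOR FOR
            SELECT lock_id AS stale_lock, conn_id AS stale_conn
              FROM uow_lock
             WHERE DATEDIFF( MINUTE, time_locked, CURRENT TIMESTAMP ) > 30
        DO
            EXECUTE IMMEDIATE STRING( 'DROP CONNECTION ', stale_conn );
            DELETE FROM uow_lock WHERE lock_id = stale_lock;
        END FOR;
        COMMIT;
    END;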

One thing we did at Great-West Life was use "staging" or "scratchpad" databases that roughly mirrored production but permitted lengthy UOW transactions. Only when a tx was complete did the user "submit" the complete business transaction, and a background process copied the scratchpad to production. Perhaps this is what you're already doing.

(28 Apr '12, 14:28) Glenn Paulley

Thanks Glenn

Thank you for your ideas on the IUD triggers on the lock table - I would certainly appreciate more detail on how that mechanism worked.

The scratchpad approach harkens back to the old mainframe transaction-posting concept, where transactions were fed in from an external interface (sometimes via tape) and posted in batch. This approach still has value when one considers the many complications of interactive applications that it avoids.

The batch concept fit the idea of posted financial transactions which were never subsequently edited, but with complex sets that need to undergo regular revision, one needs to check the transaction set out of the system back to the scratchpad for editing. This allows for extensive workflow review of the changes before they arrive back in the system to update the "posted" version. Unfortunately, while the transaction set is being edited in the scratchpad area, one still has to provide protection of both the posted version and the scratchpad version to prevent others from also editing.

One benefit of this approach is that it allows you to keep a log of the transactions which update the posted version - providing a historical record of change, which is particularly helpful with SOX compliance. Another is that it allows for unimpeded reporting based on the "posted" version while edits in the scratchpad are ongoing.

This scheme may also be useful when refactoring client-server applications as web apps, as the resultant unit-of-work transaction can be pushed to the remote server as an XML package.

(28 Apr '12, 16:23) Glenn Barber