Hello,

I need to lock a table row for a "long" period of time. I have a row containing a BLOB from which a user is presented with a screen of information that they can interact with for a long (perhaps hours) period of time. I want the first user that accesses this row to have update ability and all others that access the row to be prevented from altering any row information. Once that first user is done then another user can obtain "write" permissions.

The way this could be done is to "set a flag" in the row (or elsewhere) when the row is first accessed and in code test to see if the flag is set and if so block attempts to alter the row data. However, this has the vulnerability that if that first user crashes or is disconnected in some other way we are left with a permanently locked row (requiring an Admin to detect the problem and then take action to reset the row). What I have implemented instead uses Level 0 Isolation along with:

SET TEMPORARY OPTION "blocking" = 'ON';
SET TEMPORARY OPTION "blocking_timeout" = '10000';

This allows the first user to get a write lock, and other users will get an error so the client code can act as needed. It takes advantage of the fact that the server (as I understand it) constantly monitors connections, and if it detects that a connection has died it frees all associated locks. That takes care of the connection-dying problem and works well, except that we are leaving the lock in place for perhaps several hours. As I understand it that ties up a db connection for all of that time and, at a minimum, makes db connection pooling ineffective, not to mention we are probably going to hit some server limit on connections. This is for a SQL Anywhere 17 High Availability system.
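A minimal sketch of what this looks like in practice (the table and column names here are illustrative, not my actual schema):

```sql
-- First connection: acquire and hold a write lock on the row.
SET TEMPORARY OPTION "blocking" = 'ON';
SET TEMPORARY OPTION "blocking_timeout" = '10000';  -- wait up to 10 seconds

UPDATE doc_table
   SET lock_marker = lock_marker + 1   -- any small column will do
 WHERE pkey = 1;
-- No COMMIT here: the write lock stays until COMMIT, ROLLBACK, or disconnect.

-- A second connection issuing the same UPDATE waits up to 10 seconds and
-- then receives SQLCODE -210 ("User '...' has the row in '...' locked"),
-- which the client code catches and reports as "row is in use".
```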

What can you comment on/recommend for a better way to accomplish this:

1) Am I correct in my understanding that the server will free the lock if the connection dies?

2) Are my fears about exhausting the available connections correct?

3) Can you recommend a better way to keep a "lock" of some kind on a row in place?

Thank you.

asked 06 Aug, 15:21

AlK

As I understand it that ties up a db connection for all of that time

AFAIK, a blocked transaction also ties up a worker, which is a more important resource than a connection (although with v12 and above, the number of workers (aka "multiprogramming level") will be increased automatically by default), so that might be a problem or not.

How exactly does the first transaction get its write lock? By really doing an UPDATE on the row and leaving that transaction open until the user finishes work, or do you issue a blocking SELECT, say via a cursor?


FWIW, here's a thread dealing with a different situation ("unit of work") but also with "long transactions", with several solutions discussed. With just a few superficial thoughts, I'd think a "non-blocking" solution might be to modify a flag in the table when a user starts to "do work" (say, by storing his connection number within the row and then committing) and to disallow other users from modifying the row while that flag is set. A disconnect event could make sure that the row is "unlocked" when the connection unexpectedly dies. That would not block other users.

However, you would have to implement your own "notification system" to tell those users that the other user's work is done, if that is a requirement, too. On a database level, you could use either "MESSAGE TO CLIENT FOR CONNECTION" / "WAITFOR ... AFTER MESSAGE BREAK" or v17's new mutex objects; however, those "waits" would again block the workers servicing the corresponding connections.

(07 Aug, 04:12) Volker Barth

However, this has the vulnerability that if that first user crashes or is disconnected in some other way we are left with a permanently locked row (requiring an Admin to detect the problem and then take action to reset the row).

Just to comment on that: If you have to "clean up" data for lost connections/crashed applications, a Disconnect system event can be used to do so automatically, such as the sample from this FAQ:

SET OPTION PUBLIC.disconnect_procedure


That is, if you really need to "flag" the "start to edit" by the first updater and have to commit that (so other users see the modified flag), a Disconnect event should solve the problem of having to manually "clean up" that flag, particularly when you store the connection number of the corresponding user, since the event can easily identify the connection number.

(07 Aug, 07:40) Volker Barth

Hi Volker,

Thank you very much for your reply. You have made me aware of capabilities that I did not know were in the server, and they look very powerful. Your first suggestion makes sense to me, and it solves the problem of leaving the connection open for a long period of time. I have studied the example in the link you gave me at SET OPTION PUBLIC.disconnect_procedure and I believe I understand it (I am new to setting up events and stored procedures, so please bear with me), but one item is not clear. Would you be so kind as to give me the syntax I would use in the handler to update the row to clear the connection ID that was initially set by the first user?

Also, I find the MESSAGE TO CLIENT to be exciting because I never thought there would be a capability available to notify other users that the lock has been removed. However, I don't understand how the client code actually receives the message (I am using embedded SQL) so could you clarify that please?

Finally, your suggestion of using a mutex is interesting so I went to learn about that and I wonder if you would recommend that over defining the event? Do you see any significant advantages or disadvantages for one approach or the other?

Again, thank you very much for your help.

(08 Aug, 16:44) AlK

Would you be so kind as to give me the syntax I would use in the handler to update the row to clear the connection ID that was initially set by the first user?

Well, it would be the opposite of the UPDATE the user does to mark the row as "blocked". Enhancing Breck's sample with a "locked_by_userid" column, say the user would do the following:

update t
   set locked_by_userid = connection_property('Number')
   where pkey = 1 and locked_by_userid is null;
commit;
-- Now the row is marked as locked for other users, but no DB lock is used to do so, so the application needs to behave accordingly...
... 
-- loooooong work...
-- finally mark row as available
update t
   set locked_by_userid = null
   where pkey = 1 AND locked_by_userid = connection_property('Number');
commit;

If the connection would die between marking the row as locked and as free, a disconnect event could handle this such as:

create event EV_Disconnect
type "Disconnect"
handler
begin
   -- NOTE: Omitting all reasonable debug messages and error handling here!
   if event_parameter('DisconnectReason') in ('abnormal', 'inactive') then
      update t
         set locked_by_userid = null
         where locked_by_userid = event_parameter('ConnectionID');
   end if;
end;
(09 Aug, 04:11) Volker Barth

Also, I find the MESSAGE TO CLIENT to be exciting because I never thought there would be a capability available to notify other users that the lock has been removed. However, I don't understand how the client code actually receives the message (I am using embedded SQL) so could you clarify that please?

This is just a basic "inter-connection communication" facility: One connection must use a WAITFOR ... AFTER MESSAGE BREAK statement and will wait until a message is received (or the maximum wait time has been reached), and the other connection must use a "MESSAGE TO CLIENT FOR ..." statement to notify the other(s) (and must usually know the other connection id beforehand). The exchanged information is just the message text, so it is not at all related to a particular database object/row/whatever; you need to use the message text to specify what is meant.
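A minimal sketch of the two sides (the connection id and message text are illustrative):

```sql
-- Waiting connection: suspend for up to 10 minutes, but resume as soon as
-- a message arrives for this connection.
WAITFOR DELAY '00:10:00' AFTER MESSAGE BREAK;

-- Notifying connection (which must know the waiter's connection id, say 5):
MESSAGE 'row 1 is available again' TYPE INFO TO CLIENT FOR CONNECTION 5;
```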

For a basic understanding, see the last sample in the v17 docs.

With v17, in case you have to use some kind of IPC between two connections, I would highly recommend the use of mutexes/semaphores over the MESSAGE TO CLIENT facility, as they are easier to handle. (FWIW: While "mutexes" and "events" are both terms for OS-level IPC facilities, an event in SQL Anywhere is something totally different, so a SQL Anywhere "mutex" is not meant to replace the event I have spoken about here!...)
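A rough sketch of the mutex variant (object name illustrative; check the v17 docs for the exact syntax):

```sql
CREATE MUTEX row_1_editor;

-- Writer: hold the mutex for the duration of the edit.
LOCK MUTEX row_1_editor;
-- ... long edit ...
RELEASE MUTEX row_1_editor;

-- Waiter: this LOCK MUTEX blocks until the writer releases it - and, as
-- with WAITFOR, the wait occupies the worker servicing this connection.
LOCK MUTEX row_1_editor;
```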

However, as stated in a new comment under Breck's answer, I still think it's easiest to use the default blocking mechanism.

(09 Aug, 04:33) Volker Barth

Hi Volker,

Thank you for continuing to provide helpful information in this thread! I understood the various column values I wanted to set to lock the row, but I simply had no prior experience with events and handlers, so I didn't know what the actual syntax looks like for the handler block. You have shown that it is as straightforward as normal SQL! Also, thanks for giving more information on how to set up messaging between the owner of the lock and those that would like to obtain it. As I'm reading things, though, it would appear that the MESSAGE TO CLIENT approach ties up a worker, and per your earlier comments I understand that is not desirable. I see that you are recommending Breck's approach, but I find his approach confusing because of the fact that we tie up these resources. If you would, please see my comments and questions to him below, and of course feel free to comment. Thanks

(11 Aug, 18:52) AlK

I need to lock a table row for a "long" period of time.

Since you are dealing with a single row, and seem to be looking for a simple solution, you may not need the techniques discussed in What are best Practices for Unit-of-Work Lock Management?.

One way to lock a row against updates is to update the row yourself, right away, before the "long period of time" begins... and let SQL Anywhere's default locking and blocking mechanisms do all the work for you.

Volker mentioned this technique:

By really doing an UPDATE on the row and leaving that transaction open until the user finishes work.

One way to do that is to (a) add an "update count" or similar small column to the row, and (b) have all connections "signal intent" to update the row by immediately updating the small column without committing.

This technique does NOT require any of the following:

  • changes to isolation level
  • fiddling with database options
  • special code to release locks when Bad Things Happen

This technique also does not fill up the rollback log with blob data for a long period of time, because although the whole row is locked, the whole row is not necessarily written to the rollback log (e.g., when only a small column is actually updated).

As Volker noted, for each connection that is blocked and waiting, you will be using up one extra server-level ActiveReq (the number of server workers handling client-side requests) out of the MultiProgrammingLevel pool (the maximum number of requests that can be processed at one time)...

...so the question is, how many users are you expecting to update the same blob at the same time? ( on average, one, right? )

The following demo assumes you have control over the application architecture, and you can force every connection to attempt the intent-signalling UPDATE before proceeding with their work on the blob.

Set up for testing

CREATE TABLE t ( 
   pkey         INTEGER NOT NULL DEFAULT AUTOINCREMENT PRIMARY KEY,
   update_count INTEGER NOT NULL DEFAULT 1,
   blob         LONG BINARY NOT NULL );

INSERT t ( blob ) 
VALUES ( xp_read_file ( 'C:\\projects\\foxhound_benchmark_inventory_db\\inventory17.db' ) );

-- Show that an INSERT doesn't have to put anything in the rollback log.

SELECT DB_PROPERTY ( 'RollbackLogPages' );

DB_PROPERTY('RollbackLogPages')
'0'

-- Show that an UPDATE of the blob DOES put a lot of data in the rollback log.

UPDATE t
   SET blob = xp_read_file ( 'C:\\projects\\foxhound_benchmark_inventory_db\\inventory16.db' )
 WHERE pkey = 1;

SELECT DB_PROPERTY ( 'RollbackLogPages' );

DB_PROPERTY('RollbackLogPages')
'11423'

-- Show what's in the table, ready for concurrency testing.

COMMIT;

SELECT @@SPID, pkey, update_count, LENGTH ( blob ), @@VERSION FROM t ORDER BY pkey;

@@SPID,pkey,update_count,LENGTH(t.blob),@@VERSION
1,1,1,72130560,'17.0.7.3399'

Connection 1 signals intent to update the blob

UPDATE t
   SET update_count = update_count + 1
 WHERE pkey = 1;

SELECT @@SPID, pkey, update_count, LENGTH ( blob ), @@VERSION FROM t ORDER BY pkey;

@@SPID,pkey,update_count,LENGTH(t.blob),@@VERSION
1,1,2,72130560,'17.0.7.3399'

-- Show that this UPDATE didn't put anything in the rollback log.

SELECT DB_PROPERTY ( 'RollbackLogPages' );

DB_PROPERTY('RollbackLogPages')
'0'

Connection 2 blocked from signalling intent

UPDATE t
   SET update_count = update_count + 1
 WHERE pkey = 1;

-- The block details as shown by Foxhound...

       Blocked By:  1 / dba / Breck / - / ddd17-1 
     Block Reason:  Row Transaction Intent, Row Transaction WriteNoPK lock on dba.t 
 Locked Row Query:  SELECT * FROM dba.t WHERE ROWID ( t ) = 40370176;
   Last Statement:  update  
                      "t"
                      set "update_count" = "update_count"+1
                      where "pkey" = 1
   Last Plan Text:  ( Update [B]
                    ( IndexScan t t )
                    )

answered 07 Aug, 06:57

Breck Carter

edited 07 Aug, 07:15

nice :). I particularly like the way it doesn't need any clearing up if the connection is lost (or the db crashes).

(07 Aug, 08:36) Justin Willey

(or the db crashes)

OK, clean-up for that one is quite difficult to implement even with a "Disconnect" event...:) (Reminds me of the question about a missing "DatabaseShutdown" event, as opposite of the existing DatabaseStart event...)

(07 Aug, 09:56) Volker Barth

Yes - clean-up code for this kind of thing needs to go in a DatabaseStart event as well as in Disconnect events. That's part of what makes Breck's solution so neat.

(07 Aug, 10:27) Justin Willey

Well, I'd still wonder if a user connection should really be blocked that long - what will the UI tell the user? "Please wait until user XY is finally done with that thing...", even if that may take several hours?

For longer transactions, I guess it might be more reasonable to set option "blocking_timeout" to a small timespan (e.g. some seconds) and return the default blocking timeout error to the application, so a user might do other work and try again later on (aka "polling" whether the row is still blocked), without blocking a connection for a long time... (All that would still work with Breck's scenario, without additional clean-up overhead.)

But of course, that behaviour is application-specific, so only AlK can tell :)

(07 Aug, 10:48) Volker Barth

what will the UI tell the user?

I know what the user will say...

(07 Aug, 14:59) Breck Carter

"DatabaseShutdown" event

If you could have that, could you also have a "Disembark" event?

(07 Aug, 15:12) Breck Carter

Well... a DatabaseShutdown system event would be helpful for cleaning up a controlled shutdown, say, for saving the current cache contents in v17 or for other clean-up stuff... Of course it would possibly not be callable when the database server crashes or when a database assertion appears... But I'm quite sure in most cases databases are shutdown intentionally and controlled.

Of course part of such clean-up can be delayed for the next DatabaseStart event, however, properties as the former cache contents cannot.

(08 Aug, 03:33) Volker Barth

Instead of the DatabaseShutdown event handler, I'd propose to design the application in the way that it can recover its data/state/what_should_be_cleaned when the DB starts.

If the shutdown process is controlled, this event is not needed. If a crash happens, this event is not fired :)

(08 Aug, 03:37) Vlad

I'm probably the last person who should post a flippant Julie Andrews gif... Foxhound desperately needs what Volker suggests (a DatabaseShutdown event) but Vlad's reality intrudes: Foxhound must (try to) do everything in the DatabaseStart event.

The point "in most cases databases are shutdown intentionally" certainly applies to embedded databases.

(08 Aug, 05:32) Breck Carter

Hi Volker,

The design I'm working towards allows the first user to access the row to have write permissions. Subsequent users may still open the row and view the data, but they are notified that they will not be able to make any changes and save back to the database.

(08 Aug, 16:51) AlK

Hi Breck,

Thank you for your extensive reply with the tests showing impact on the log. However, I'm confused about a couple of points. It seems you are suggesting that I use the default server locking when adding to the row the "signal intent to update" count without committing, but that was my original problem, in that it would leave the transaction open for a long period of time. It's obvious to me that there is something here that I do not understand about your suggestion. Also, I don't understand what I would do with this count showing the number of clients that are interested in being able to update the row. If you could elaborate a bit further I would appreciate that very much.

Thanks.

(08 Aug, 16:58) AlK

Well, I still do not understand how your users are informed about a writing conflict: Are they just informed that the desired row is currently "under construction" and therefore they cannot themselves modify it and then

  1. they are told to "try later" (leaving it to them how they find out about when the row is available for modification) or
  2. they will get a notification as soon as the row is available again (if so, when several users wait for a row, who is the next who can lock the row?)?

The default locking mechanism Breck has shown offers both ways:

  1. When the "blocking_timeout" is set to a small timespan (e.g. some seconds) and the row is still blocked, the request will return the default blocking timeout error, so you know your modification is currently not allowed, and you should try later. Your connection is not blocked, so you can do other work.

  2. When the "blocking_timeout" is set to 0, your request will wait until the current writer has committed his change, so your work (at least the current transaction) is suspended. (Of course you could use another connection to do separate work.)

In my understanding, in case the user should be informed automatically when the row is available again, that means any kind of DB-internal wait mechanism must be established (a blocked connection as with option 2, a WAITFOR MESSAGE, a wait for a mutex), and all those will require a worker to do the wait. There's no free lunch here. And therefore it seems that the default locking mechanism might be much easier to use than the other alternatives.
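Option 1 as a minimal sketch (using Breck's table t from above; the timeout value is illustrative):

```sql
-- Fail fast instead of waiting, then let the application poll later.
SET TEMPORARY OPTION "blocking_timeout" = '2000';  -- milliseconds

UPDATE t
   SET update_count = update_count + 1
 WHERE pkey = 1;
-- If another writer holds the row, this returns SQLCODE -210 after about
-- 2 seconds; the application reports "busy" and the user can retry later.
```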

(09 Aug, 03:56) Volker Barth

For the record, the update_count column is just a column that can be updated in order to obtain a write lock on the row. It has no other purpose.

Yes, the technique presented does start a long-running transaction; that is the point: use the standard locking mechanism to prevent other people from starting an update dialog if one is already in progress.

A long-running transaction isn't necessarily evil, only if it has undesirable (and often unexpected) effects on other connections which is not the case here.

Nothing prevents other users from selecting and displaying the row if the default isolation level 0 (dirty reads) is in effect.

The technique can be enhanced to warn other users the row is being updated, instead of silently blocking them, as follows (using ISQL as the "application"):

BEGIN -- simulated application code block

DECLARE @sqlcode   INTEGER;
DECLARE @errormsg  VARCHAR ( 32767 );

SET TEMPORARY OPTION BLOCKING = 'OFF';

BEGIN 
   UPDATE t
      SET update_count = update_count + 1
    WHERE pkey = 1;
EXCEPTION
   WHEN OTHERS THEN
      SELECT SQLCODE, ERRORMSG() 
        INTO @sqlcode, @errormsg;
      SET TEMPORARY OPTION BLOCKING = 'ON';
      IF @sqlcode = -210 THEN
         MESSAGE STRING ( 'Connection ', @@SPID, ' - Try again later: ', @errormsg ) TO CONSOLE;
         RETURN; -- return to caller
      ELSE
         RESIGNAL; -- return to caller with error
      END IF;
END;

-- Carry on...

SET TEMPORARY OPTION BLOCKING = 'ON';
MESSAGE STRING ( 'Connection ', @@SPID, ' - OK to continue with update' ) TO CONSOLE; 

END; -- simulated application code block

-- MESSAGE output written to dbsrv17.exe -o file:

Connection 1 - OK to continue with update
Connection 2 - Try again later: User 'dba' has the row in 't' locked
(09 Aug, 13:05) Breck Carter

Nothing prevents other users from selecting and displaying the row if the default isolation level 0 (dirty reads) is in effect.

As Captain Obvious would state, the same is true for the snapshot isolation modes.

(10 Aug, 03:04) Volker Barth

snapshot isolation

Yes, indeed, that works too.

Being old and out of date myself, I love deja vu isolation, er, snapshot isolation... you pay extra in performance for the privilege of seeing data which no longer exists :)

(10 Aug, 06:34) Breck Carter

Captain Obvious

Pardon my cultural ignorance, but surely you don't have those hotels.com commercials in Germany, do you? Showing a guy in a uniform popping up unannounced in hotel rooms? :)

(10 Aug, 06:40) Breck Carter

Well, not my area of expertise, apparently, so I'm unsure about that. At least if they are shown and I have seen them, I can't remember them, but that doesn't say much:)

(10 Aug, 08:41) Volker Barth

Hello Breck,

Thank you for your ongoing suggestions; they're very helpful. I had originally implemented pretty much what you have been suggesting: updating a row to obtain a lock and leaving the transaction open by not committing. However, my original reason for asking the question is that I was concerned about leaving transactions open for long periods of time, since those are resources that must have some limits in the server. One of the first comments that Volker made was that he felt this approach was tying up workers, which could be an even more important resource. Neither he nor you have said that doing so should be avoided at all costs, and in fact your suggestion makes the point that we should just use the native locking and therefore leave the transaction open for a long time, so I'm still left with this confusion. Please forgive me for appearing to continue to be so "dense".

To try to be as clear as I can about what I'm trying to do: I want to be able to let some user obtain a lock on a row, keep that lock open for a long period of time (perhaps a couple of hours), while also allowing other users to access the row and view its contents without being able to make any changes until the first user releases the lock. None of the users has any special privileges; it is simply a matter of who opens the row first. As a first pass I was not worrying about notifying the other users that the lock had been removed, but Volker has made me aware of a possible way I could do that. To keep things simple for the moment, though, I'm just trying to get an understanding of the best way to keep the lock in place for a long time. So I suppose at this point my question is simply: how dangerous is it to use the native locking over perhaps several hours? This question is related to server use for a SaaS where we could potentially have hundreds of users on the same server, each of whom obtains locks on multiple (different) rows. I don't have any idea what the capacity of the server is, so what is your reaction to the possibility that we have hundreds of transactions left open for perhaps hours at a time? (I can't believe that would be considered a "good thing", and that is why I am concerned about your suggestion - if I'm not missing something!) Thank you.

(11 Aug, 19:07) AlK

I see this question hasn't been answered yet.

I do not know what kind of application you are developing. Is it web, native, or something built with SQL only (no compiled languages)? How do you want to scale it, or the underlying database? Does it use multiple schemas, or a single one... Well - more questions than answers.

In my small comment, I'd like to give you an interesting chapter that describes different types of locks and thus you can choose whatever you want for your implementation.

I am actually fond of the in-memory concept for small but important things such as the one being discussed. Who knows, maybe this solution will be preferable for you rather than "pausing" the worker thread with the trick that Breck suggested.

SAP servers use a similar concept - an in-memory locking table plus an API that manages it. It serves well for many cases: quite cheap, reliable, easier to understand, but sometimes difficult to debug/fix.

I haven't given you any ready-to-use implementation, because I do not know your target and the answers to my questions above. I'd rather think in the direction of how to use a Redis-like solution. It requires coding + concept + testing, but I think it is worth it :)

(13 Aug, 10:23) Vlad

One of the first comments that Volker made was that he felt this approach was tying up Workers which could be an even more important resource. So while neither he nor you have said that doing so should be avoided at all costs and in fact your suggestion is making the point that we should just use the native locking and therefore tying up the transaction for long for the time I'm still left with this confusion.

As to "tying up workers": As discussed in several comments here, blocking only occurs when you ask the server to do so, i.e. when you want to wait for a lock to be released. If you do not need to wait but just want to test whether the row is available or not, you can use the "non blocking" test from Breck's comment from 09 Aug, 13:05. That will not tie up additional workers.

Workers are a limited resource, but SQL Anywhere by default will increase their number if needed, so that is not a general problem. It's just something to be aware of. (The other component is the stack size used by each worker: many workers with a huge stack size basically require a large part of the database cache - see the -gnh and -gss database server options - and therefore you will need to provide enough cache.)
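To get a feel for how close a server is to its limits, the relevant server properties can be queried directly (property names as documented for SQL Anywhere; the values are of course server-dependent):

```sql
SELECT PROPERTY ( 'MultiProgrammingLevel' ) AS max_workers,
       PROPERTY ( 'ActiveReq' )            AS busy_workers,
       PROPERTY ( 'UnschReq' )             AS queued_requests;
```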


To clarify: "Long transactions" are not necessarily a bad thing - if users do need a long time to modify data, then this is simply a fact of life. (Though I'm not aware of what your users are doing with a single row for hours...? Will they do several updates on a row and then a final commit, or will they start modifying a row (say, with a BLOB) and then - after some hours - do a single update and then commit?)

A drawback of long transactions for readers is that they might be blocked for a long time, based on your isolation level, or they might have dirty reads, or when using snapshot isolation, they might require that row versions are stored for several hours. - Not necessarily a bad thing either, but something to be aware of.

(14 Aug, 03:48) Volker Barth

I apologize to Volker, Breck and now Vlad for failing to provide the information you are requesting so please allow me to attempt to do that now. Here are what I think are the important points to help answer some of your questions about what I’m trying to accomplish:

  • A row from a specific table in our database contains a BLOB that our application converts into a great deal of graphically rendered information. When a user accesses this row and the graphical information is displayed in the app the user can often manipulate that information over long periods of time. (The user is encouraged to do a periodic save of this updated information back to the database.) A user that first accesses a particular row needs to be able to “lock” that row so that no other user is able to make changes to that row as long as the first user still has the row in use.

  • We allow other users to open the same row after it has been “locked” by the first user so that they can view the initial starting row information but we need to prevent them from being able to update the row.

  • My initial attempt to implement these controls involved having a user attempt an UPDATE of a special column value. If the user is the first to attempt such an UPDATE then the server automatically provides a row lock on that row. That user then has UPDATE privileges until a final COMMIT is done.

  • Any subsequent access by another user who tries to do a (the same type of) UPDATE results in that user’s client app receiving a server error code which we interpret as “the row is already locked”. The user is then notified of the situation and told that they may view the row’s data but changes will not be allowed. The context of the work and the messaging we provide to the subsequent users is such that they understand what is going on and in this particular context it would make sense to them so “dirty reads” and other typical problems are not an issue here.

  • Once the first user is done with their work they execute an "app close" of the data which triggers a COMMIT, and the server removes the row lock. Now this row is available for another user to attempt an UPDATE and obtain a lock for their purposes.

  • It would of course be very useful to be able to notify users who are “waiting” for update access to the row but let’s keep things simple and ignore that feature for the moment.

  • The mechanism as described is working in that the first user is able to have read/write access and all other subsequent users have only read access.

  • As I’ve described previously, my concern is that this approach requires that we leave the lock on the row that the first user has for what can be a several hour period. No subsequent user access attempts result in obtaining a lock so there is no issue for them.

  • Volker has informed me that what may be a more critical concern is that in addition to tying up a transaction for a very long period of time I'm tying up one or more "workers". I'm not fully informed as to what a worker actually is, but clearly it is a critical server resource that I'm sure is not unlimited. In particular, it has been mentioned that we are chewing up heap space with each worker.

  • This is all being done in the context of providing a SaaS so, if things work out the way we hope they will, we could have hundreds of users each with a lock on let’s say 2 to 3 rows resulting in what would seem to be a quite large number of open transactions/workers.

  • I hope this information allows somebody to make a somewhat definitive statement about whether or not this type of “long-term locking” strategy will be okay or if it is setting us up for a very serious problem due to exhaustion of server resources. I am confused because it seems that some comments are suggesting I should be concerned about running out of these resources but other comments suggest that my best approach is to use the “native server locking” which of course is consuming the various server resources.

Can somebody help me get a sense of confidence about which way we need to go?

Thanks to everyone very much for your help.

(17 Aug, 17:22) AlK

Sorry for the "collapsed" text above. Each "-" is on a separate line but the web page is collapsing all of them into one long paragraph!

(17 Aug, 17:24) AlK

FWIW, see the help on the MarkDown syntax used here (in case you can find it...:(), I just used new lines and the "*" to generate a bullet list, in the hope that that was what you're looking for.

(18 Aug, 03:28) Volker Barth

Well, thanks for that detailed explanation. My 2 cents:

It would of course be very useful to be able to notify users who are “waiting” for update access to the row but let’s keep things simple and ignore that feature for the moment.

IMHO, when you do not need the other users to be informed that the previously locked row is now unlocked, then my previous comment from Aug 14th should still hold:

As discussed in several comments here, blocking only occurs when you ask the server to do so, i.e. when you want to wait for a lock to be released. If you do not need to wait but just want to test whether the row is available or not, you can use the "non blocking" test from Breck's comment from 09 Aug, 13:05. That will not tie up additional workers.
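A minimal sketch of such a non-blocking test, assuming a hypothetical Documents table keyed by DocID (table and column names are made up for illustration):

```sql
SET TEMPORARY OPTION "blocking" = 'Off';
-- Try to take the row's write lock; with blocking off, the statement
-- returns immediately instead of waiting on another user's lock:
UPDATE Documents SET LockHint = LockHint WHERE DocID = 42;
-- Success: this connection now holds the write lock (until COMMIT/ROLLBACK).
-- Failure (SQLE_LOCKED, SQLCODE -210): somebody else holds it, and we
-- found that out without tying up a worker waiting for the lock.
```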

So given that, you should not have to worry about tied-up workers.

From your first paragraph:

(The user is encouraged to do a periodic save of this updated information back to the database.)

What does that mean? How does a user store modified information without "committing" that change? Or does it mean a user might not do only one long transaction on that row but several ones? (In my understanding, an intermittent commit would allow other users to "take over" at that moment...)


One further hint: In case you really do need to use a "logical long transaction" that spans several database commits (i.e. a user can store several steps of his work until he has to give up the "update ownership"), then again a mutex might be helpful here, as a mutex (defined with SCOPE CONNECTION) can be locked independently of any transaction scope. As with a row lock, a user can just test whether a mutex is locked, and if so, just go on without tying up a worker. - If that is a possible requirement, I would strongly recommend raising a new question on that topic, because the current FAQ is going to be very difficult to follow...
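If I remember the SQL Anywhere 17 syntax correctly (please verify against the CREATE MUTEX and LOCK MUTEX documentation before relying on this), the idea might be sketched like this; the mutex name is invented for illustration:

```sql
-- Created once, like any other schema object:
CREATE MUTEX DocEditMutex SCOPE CONNECTION;

-- Each would-be editor tries to take it without waiting:
LOCK MUTEX DocEditMutex WITH TIMEOUT 0;
-- On success, this connection "owns" the document across any number of
-- COMMITs, until it issues RELEASE MUTEX DocEditMutex or disconnects
-- (disconnecting releases it automatically). On failure, an error is
-- returned immediately, so no worker sits blocked waiting.
```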

(18 Aug, 03:42) Volker Barth

Hm, after I have read your comment twice, I understood that those fancy locking things on the DB side are not necessary. My main concerns:
1. If you want to save data as often as possible, just create a table, something like AutoSaveGraphicalData(UserID, timestamp/version, blob), and store your temporary work there. If necessary, the user can go back and forth in case he wants to check an old version of the document.
2. When the user saves the work, he can update the "main" table where those files are, copying the blob field from the temporary table (by the way, you can even use a temporary table for the intermediate results) to the main one.
3. With this approach, other users can still access the data in the main table.
4. You will probably ask me: how will you implement the locking?! Simple. You can either add a field, or add a (possibly temporary) table with locks, where the documents are associated with the users who are editing them. Every time another user wants to edit the document, you can check the table and decline his request politely.
5. If you add a field "lastModifiedDate", two users can even edit the same document simultaneously, but you can always check this field's value before saving the document. If it was changed, you can display a message to the user who is currently saving the document and tell him "the document has been modified, please reload it and apply your changes". Something like this.
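Point 5 is the classic optimistic-concurrency check; a sketch, again with invented table and column names:

```sql
-- Save only if nobody else changed the row since we loaded it:
UPDATE Documents
   SET Blob = @newBlob,
       lastModifiedDate = CURRENT TIMESTAMP
 WHERE DocID = @doc
   AND lastModifiedDate = @loadedTimestamp;
-- If no row was updated (@@ROWCOUNT = 0), somebody else saved first:
-- tell the user to reload the document and re-apply his changes.
```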
To be honest, I do not think that all those isolation things with transactions and row locks are necessary. Everything can be resolved on the application side. I think this is simpler, easier to support and debug: you know the queries and the tables (persistent or temporary), and you store the data there.
If you work with Git, you know that it has similar behavior: the repository is shared between users, but all changes include timestamps, so you can always detect conflicts and tell whether the data has been modified or not.

Those were my two cents.

(23 Aug, 09:42) Vlad

Thank you to everyone who has contributed to this very long thread. A lot of very useful information has been contributed and I've learned quite a bit from your various contributions.

First, let me respond to Vlad's suggestion. Unfortunately, this is not a good fit for our use case. If we allowed multiple users to each make long-running changes to the row, working off of copies in a "working" table, we would not be able to merge the changes into a new row. Further, telling all but one of the users that they need to refresh their view and redo all their work would not be acceptable to the users, since it would represent the loss of perhaps hours of work. So unfortunately we aren't able to go with that suggestion.

I will take Volker's advice and start a new thread that follows on from our discussions here, because this has gotten very long and a bit convoluted. For those who are interested, here is what I have concluded and decided.

As I described earlier, a user who first gets the row lock can interact with the row over perhaps several hours. It was asked how intermittent saves would be handled in such a scenario. What I have designed is for the intermittent saves to, of course, do a COMMIT, which causes the lock to be released. The user that had the lock immediately tries to reacquire it, but there's no guarantee the lock will be acquired. To deal with this, I have code so that when anyone obtains a lock, they take extra steps to look at certain items in the row to determine whether that lock was being held by somebody else and they have "slipped in" between that other user's COMMIT and their attempt to reacquire the lock. In that case the user "backs off" for a small amount of time and tries again. With very high probability the original user will be able to quickly reacquire the lock.

I had also mentioned that, at least in its first phase, I wasn't going to worry about notifying other users that the row has become available again. So other users are notified that they cannot obtain a lock, and they simply have to try again at a later time to see if they can then acquire it. However, even in this scenario (if I understand everything that's been discussed in this thread), I still have a long-running (single) transaction/worker that has to maintain this very long lock on the row. Since it is possible for a single user to have multiple rows locked, and since we could have perhaps 100 users running on the server, it appears that a worst-case scenario would require several hundred transactions/workers to be tied up.

Volker suggested using a mutex, and I have finally been able to make the time to research this; it appears to be a simple and clean way to accomplish what I need without tying up any of these transactions/workers. So, assuming I don't discover something very bad, I'm going to take that route, and if anyone is interested in how that develops, I have started a new thread labeled "Need help with MUTEXes". Again, thanks to everyone for your contributions and help; this has been very useful.

(23 Aug, 19:50) AlK

question asked: 06 Aug, 15:21


last updated: 23 Aug, 19:50