
I have a question about an SQL procedure. I'm replacing some code in the front-end application with an SQL procedure so that other languages can use the same logic (shocking behaviour, I know). The procedure checks for the existence of a record; if the record exists, it increments the Id and returns the incremented value; otherwise it inserts a new record and returns 1. Anyway, we have a couple of hundred users all potentially accessing this code, so I want to be absolutely certain that it will work the way it's supposed to. My question is this: at the comment marked "HERE", is there any possibility that another connection could alter that value before the SELECT sets the variable? Is the row locked by the BEGIN TRANSACTION command in a way that will avoid concurrency issues? From the documentation I gather that it is, but I wanted to ask a person/people.

Also, is there a better/faster way for this code to work? Any tips would be appreciated.

The second/inner BEGIN/COMMIT section is probably unnecessary but I'm new to SQL procedures, so I'm covering all bases.


   BEGIN TRANSACTION
   IF EXISTS(SELECT * FROM "DBA"."UniqueIds" WHERE "IdName" = @IdName)
      BEGIN
         BEGIN TRANSACTION
         UPDATE "DBA"."UniqueIds" SET "Id" = "Id" + 1 WHERE "IdName" = @IdName
         -- HERE
         SELECT @IDValue = "Id" FROM "DBA"."UniqueIds" WHERE "IdName" = @IdName
         COMMIT
      END
   ELSE
      BEGIN
         INSERT INTO "DBA"."UniqueIds" ("IdName", "Id") VALUES(@IdName, 1)
         SET @IDValue = 1
      END
   COMMIT

   RETURN @IDValue

asked 16 Oct '13, 07:55 by samrae

To atone for the harshness of my previous reply, here are some answers to the questions you ACTUALLY asked...

"My question is this: on the comment marked "HERE", is there any possibility that another connection could alter that value before the SELECT sets the variable?"

No. The UPDATE statement acquires a lock on the row that prevents all other connections from reaching the "HERE" point until the update is committed. Since the UPDATE is issued from inside a stored procedure, the client application doesn't know anything about it, so even if client-side autocommit is turned on the client software won't send an immediate COMMIT to the server together with the UPDATE. Since a BEGIN TRANSACTION has been executed, even if server-side autocommit is turned on at this point (CHAINED = 'OFF') SQL Anywhere itself will not execute an immediate COMMIT together with the UPDATE.

Those immediate COMMITs, either sent by the client or executed by the server, are what constitute "autocommit", and autocommit is evil, but that's off-topic.

"Is the row locked by the BEGIN TRANSACTION command in a way that will avoid concurrency issues?"

No. The UPDATE gets a lock, locks can cause blocks, and blocks cause concurrency problems. If the second explicit COMMIT statement in this procedure actually does a commit (no guarantee!), there won't be any locks held after the procedure terminates, and life is good. However, if a transaction is already in progress when the procedure is called, the lock acquired by the UPDATE will not be released by the second COMMIT... see the discussion of @@TRANCOUNT below (and have the Tylenol handy :)

So, yeah, the inner BEGIN TRANSACTION / COMMIT is unnecessary... it has no effect.

"Is there a better/faster way for this code to work?"

Yes. If the system is very busy, it is possible for two or more connections to more-or-less simultaneously execute this statement

IF EXISTS(SELECT * FROM "DBA"."UniqueIds" WHERE "IdName" = @IdName)

in which case they will all try to execute this statement

INSERT INTO "DBA"."UniqueIds" ("IdName", "Id") VALUES(@IdName, 1)

and all but one of those connections will fail. This will probably never happen, however, since it can only happen for the first new entry for each @IdName.

The real problem is concurrency, when this procedure is called as part of a string of updates being executed in one transaction. All connections attempting to get a new value for the same @IdName will execute one after the other, single-threaded... this procedure will be a bottleneck on a busy system.

Let's assume you can't change the underlying columns to use DEFAULT AUTOINCREMENT, and since you're using Version 11 you can't use CREATE SEQUENCE.

You might consider a series of empty shadow tables, each with a single DEFAULT AUTOINCREMENT column that corresponds to a column of interest in a real table. Replace calls to your procedure with SET statements like SET @pkey = GET_IDENTITY ( 'shadow-table-name', 1 ) to reserve and return the next autoincrement value, then use @pkey in the INSERT. The shadow tables remain empty, and have no effect whatsoever on the rest of the application... especially no locks, blocks or concurrency issues.
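For illustration only, here is a sketch of that shadow-table pattern in Python with sqlite3 (a stand-in for SQL Anywhere, not its actual API). GET_IDENTITY() reserves a value without inserting a row; sqlite has no direct equivalent, so this sketch inserts and immediately deletes instead, leaving the table empty afterwards. The table and function names are made up.

```python
import sqlite3

# The shadow table has one autoincrementing column and stays empty.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shadow_order (pk INTEGER PRIMARY KEY AUTOINCREMENT)")

def next_id(conn, shadow_table):
    cur = conn.execute(f"INSERT INTO {shadow_table} DEFAULT VALUES")
    reserved = cur.lastrowid   # the freshly reserved value
    # AUTOINCREMENT never reuses a deleted value, so the counter keeps
    # advancing even though the table is emptied again.
    conn.execute(f"DELETE FROM {shadow_table} WHERE pk = ?", (reserved,))
    return reserved

first = next_id(conn, "shadow_order")
second = next_id(conn, "shadow_order")
```

The point of the pattern survives the translation: the generated keys come from a table nobody else reads or writes, so no application data is ever locked while reserving one.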

For the record, here is a chapter from my book on SQL Anywhere 9; the facts are the same for SQL Anywhere 11. Note that nested SAVEPOINT and ROLLBACK statements are not discussed here.

9.3 Transactions

A database transaction is a sequence of SQL statements that are treated as a single unit for the purposes of correctly satisfying a request while ensuring database integrity. Transaction design is an important part of application design, and it has profound effects on database performance as well as correctness and integrity.

A transaction must satisfy four requirements: it must be atomic, consistent, isolated and durable, or "ACID" for short.

Atomic: All of the database changes made by the transaction must be completed when the transaction finishes successfully; if the transaction fails none of the changes are to be made to the database. A failure at any point during the transaction causes the entire transaction to fail; a failure may be explicitly forced by a SQL statement such as ROLLBACK, or caused implicitly by an external event such as a system crash. For example, if a transaction inserts a sales_order row in the ASADEMO database, together with one or more corresponding sales_order_item rows, a failure during the transaction means that none of those rows exist when the transaction is completed.

Consistent: All constraints on the data must be satisfied when the transaction is completed. These include constraints maintained by application programs as well as constraints defined within the database schema.

Isolated: Separate transactions must not change the same data at the same time; all changes must be isolated from one another. The smallest unit of isolation with SQL Anywhere is the row.

Isolation may be taken further when considering whether different transactions may even see the effects of changes made by other transactions; the ISOLATION_LEVEL option applies to this aspect, and is discussed later in this chapter. As far as updates are concerned, however, isolation is not optional; once a transaction has changed a row in a table, SQL Anywhere does not permit any other transaction to change the same row until the first transaction is completed.

Durable: All database changes made by a transaction that finishes successfully must be persistent; subsequent failures must not affect the changes made by this transaction. In other words, a COMMIT must be permanent.

For example, a transaction may insert a sales_order row in the ASADEMO database, together with one or more corresponding sales_order_item rows. This transaction is atomic because all of the inserted rows will exist in the database if the transaction succeeds, and none of them will remain if the transaction fails. This transaction is consistent because it satisfies the application constraint that for every sales_order row at least one corresponding sales_order_item row must also exist; it also satisfies the database foreign key constraint that every sales_order_item row must have a single corresponding sales_order row. This transaction is isolated because no other database connection can delete or update the inserted rows until the transaction is successfully completed. This transaction is durable because, once it is successfully completed, the inserted rows remain in the database.

Each transaction has a beginning and an end, and is run within the context of a single connection. One connection may execute more than one transaction, but only one after another in a serial fashion. Different transactions may run at the same time on different connections, and they can affect one another, but a single transaction does not span more than one connection.

Note: Individual non-compound SQL statements are atomic in nature, which means that if the statement fails any changes it has already made to the database are automatically undone. This applies to single INSERT, UPDATE and DELETE statements that operate on more than one row; if one of these statements fails after affecting one or more rows, all of its effects are automatically undone. This activity is separate from transaction control; the failure of one statement does not automatically cause a transaction to roll back. The atomic nature of SQL statements is implemented internally via savepoints which are discussed in Section 9.4 Savepoints and Subtransactions.
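The distinction between statement atomicity and transaction atomicity can be demonstrated with any transactional database; here is a sketch using Python's sqlite3 (not SQL Anywhere, but the principle is the same):

```python
import sqlite3

# One transaction, two INSERT statements; the second statement fails.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None    # we issue BEGIN/COMMIT ourselves
conn.execute("CREATE TABLE t (k INTEGER PRIMARY KEY)")
conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")
try:
    # A single multi-row INSERT that hits a duplicate key: the whole
    # statement is undone (statement atomicity), so neither 2 nor the
    # duplicate 1 is stored...
    conn.execute("INSERT INTO t VALUES (2), (1)")
except sqlite3.IntegrityError:
    pass
conn.execute("COMMIT")
# ...but the transaction itself survived, and the first INSERT committed.
rows = [r[0] for r in conn.execute("SELECT k FROM t ORDER BY k")]
```

The failed multi-row INSERT is backed out as a unit, yet the surrounding transaction keeps going and commits normally, exactly as the note describes.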

Three different SQL statements may be used to explicitly control when transactions begin and end: BEGIN TRANSACTION, COMMIT and ROLLBACK. This control is not absolute in all cases; a ROLLBACK statement always ends a transaction, but a BEGIN TRANSACTION doesn't necessarily start a transaction, nor does a COMMIT necessarily end one.

<begin_transaction> ::= BEGIN TRAN        -- all forms are equivalent
                      | BEGIN TRANSACTION
                      | BEGIN TRAN <transaction_name>
                      | BEGIN TRANSACTION <transaction_name>

<transaction_name>  ::= <identifier> -- not used for any purpose

<commit>            ::= COMMIT      -- all forms are equivalent
                      | COMMIT WORK
                      | COMMIT TRAN
                      | COMMIT TRANSACTION
                      | COMMIT TRAN <transaction_name>
                      | COMMIT TRANSACTION <transaction_name>

<rollback>          ::= ROLLBACK      -- all forms are equivalent
                      | ROLLBACK WORK
                      | ROLLBACK TRAN
                      | ROLLBACK TRANSACTION
                      | ROLLBACK TRAN <transaction_name>
                      | ROLLBACK TRANSACTION <transaction_name>

Tip: Don't bother with the transaction name parameters on the BEGIN TRANSACTION, COMMIT or ROLLBACK statements; they have no effect in SQL Anywhere. Also, the different formats for each statement are equivalent. The full syntax is shown here because these different formats sometimes appear in documentation and utility program output, and they often lead to unnecessary confusion.

SQL Anywhere has two modes of transaction control, called "chained mode" and "unchained mode".

In chained mode a transaction is implicitly started by any INSERT, UPDATE or DELETE statement, or any SELECT statement that acquires locks. This transaction ends when an explicit COMMIT or ROLLBACK statement is executed, or when the transaction fails.

In unchained mode a transaction may be explicitly started by a BEGIN TRANSACTION statement; such a transaction ends when an explicit COMMIT or ROLLBACK statement is executed, or when the transaction fails. If no BEGIN TRANSACTION statement is executed, each statement is run as its own transaction, with an implicit commit if it works and an implicit rollback if it fails. Here is a simple example of two UPDATE statements run in chained mode; the SET TEMPORARY OPTION CHAINED = 'ON' statement is used to clearly document that chained mode is in effect:

SET TEMPORARY OPTION CHAINED = 'ON';
UPDATE t1 SET non_key_1 = 'xxx' WHERE key_1 = 1;
UPDATE t1 SET non_key_1 = 'yyy' WHERE key_1 = 2;
COMMIT;
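A rough analogue of unchained vs. chained mode can be seen in Python's sqlite3 module (an assumption: sqlite3's autocommit switch, not SQL Anywhere's CHAINED option, but the transaction behaviour is comparable):

```python
import sqlite3

# "Unchained mode": every statement is committed by itself.
unchained = sqlite3.connect(":memory:")
unchained.isolation_level = None                 # per-statement autocommit
unchained.execute("CREATE TABLE t (k INTEGER)")
unchained.execute("INSERT INTO t VALUES (1)")    # committed immediately
assert not unchained.in_transaction

# "Chained mode": DML implicitly starts a transaction that stays open
# until an explicit commit or rollback.
chained = sqlite3.connect(":memory:")
chained.isolation_level = ""                     # implicit BEGIN before DML
chained.execute("CREATE TABLE t (k INTEGER)")
chained.commit()
chained.execute("INSERT INTO t VALUES (1)")      # a transaction is now open
assert chained.in_transaction                    # nothing committed yet
chained.rollback()                               # the INSERT disappears
count = chained.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

In the chained-style connection the INSERT can still be rolled back; in the autocommitting one it was already permanent by the time the statement returned.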

The transaction log file may be examined to determine when transactions begin and end; the dbtran.exe utility program can be used to translate the log file into readable SQL statements. Here is a command line that executes dbtran.exe, using the options -a to include all operations including uncommitted ones, -c to specify the connection parameters, -s to produce UPDATE statements in the ANSI format, and -y to overwrite the output file without confirmation. The final parameter is the file specification for the output text file, test9_log.sql:

"%ASANY9%\win32\dbtran.exe" -a -c "DSN=test9" -s -y test9_log.sql

Here's what the output looks like for the UPDATE and COMMIT statements shown above; the BEGIN TRANSACTION entry shows that a transaction was started before the first UPDATE:

--BEGIN TRANSACTION-1001-0000402114
UPDATE t1
   SET non_key_1='xxx'
 WHERE key_1=1
UPDATE t1
   SET non_key_1='yyy'
 WHERE key_1=2
COMMIT

Here are the same two UPDATE statements, run in unchained mode with no explicit COMMIT:

SET TEMPORARY OPTION CHAINED = 'OFF';
UPDATE t1 SET non_key_1 = 'xxx' WHERE key_1 = 1;
UPDATE t1 SET non_key_1 = 'yyy' WHERE key_1 = 2;

This time the dbtran.exe output shows two separate BEGIN TRANSACTION and COMMIT operations were performed:

--BEGIN TRANSACTION-1001-0000402237
UPDATE t1
   SET non_key_1='xxx'
 WHERE key_1=1
COMMIT
--BEGIN TRANSACTION-1001-0000402250
UPDATE t1
   SET non_key_1='yyy'
 WHERE key_1=2
COMMIT

Applications using unchained mode have no control over the design or scope of transactions unless they issue explicit BEGIN TRANSACTION statements. Here is the previous example, modified to take control and force both UPDATE statements to be included in one transaction:

SET TEMPORARY OPTION CHAINED = 'OFF';
BEGIN TRANSACTION;
UPDATE t1 SET non_key_1 = 'xxx' WHERE key_1 = 1;
UPDATE t1 SET non_key_1 = 'yyy' WHERE key_1 = 2;
COMMIT;

The dbtran.exe output for unchained mode using an explicit BEGIN TRANSACTION looks exactly the same as it did for first example using chained mode above; one BEGIN TRANSACTION, two UPDATE statements and a single COMMIT:

--BEGIN TRANSACTION-1001-0000402314
UPDATE t1
   SET non_key_1='xxx'
 WHERE key_1=1
UPDATE t1
   SET non_key_1='yyy'
 WHERE key_1=2
COMMIT

Note: The BEGIN TRANSACTION entry in the dbtran.exe output serves to mark the point when a transaction was actually started, not the point when an explicit BEGIN TRANSACTION statement was executed. That might sound pedantic, but it's important when reading the output from dbtran.exe: an explicit BEGIN TRANSACTION statement in the application code may or may not correspond to a BEGIN TRANSACTION entry in the transaction log, regardless of the transaction mode.

Chained mode is sometimes called manual mode because it requires explicit COMMIT and ROLLBACK statements to mark the end of transactions. Unchained mode is sometimes called autocommit mode because each successful statement is automatically committed when no explicit BEGIN TRANSACTION statement has been executed. However, there are two kinds of autocommit mode.

Server-side autocommit mode is the kind implemented by the SQL Anywhere database engine when you set the CHAINED option to 'OFF' to get unchained mode as described above, and you don't execute explicit BEGIN TRANSACTION statements. Transactions are started and ended automatically, inside the database engine, and there are no BEGIN TRANSACTION, COMMIT or ROLLBACK statements coming from the client side.

Client-side autocommit mode is implemented by the client database interface software such as ODBC and JDBC. When the interface "AutoCommit" flag is set, an explicit COMMIT statement is sent to SQL Anywhere after each INSERT, UPDATE and DELETE statement. If server-side autocommit is also in force, these extra COMMIT statements have no effect because by the time they arrive, SQL Anywhere has already done a commit. If server-side autocommit is not in force, then the COMMIT statements sent by client-side autocommit will have an effect.

Here's a table that shows what happens when a single UPDATE statement is issued by a PowerBuilder application using ODBC, under the four combinations of server-side and client-side autocommit settings. The first column shows the client-side SQLCA.AutoCommit setting used by the PowerBuilder application when connecting via ODBC to the database. The second column shows the server-side setting of the CHAINED option used by SQL Anywhere. The third column shows what SQL statements were actually sent across the client server interface from ODBC to SQL Anywhere. The fourth column shows what internal operations were performed by SQL Anywhere, as recorded in the transaction log:

Client-side      Server-side
ODBC AutoCommit  CHAINED Mode  Statements Sent  Operations Performed
===============  ============  ===============  =====================
     FALSE           OFF       UPDATE           Begin, update, commit
     TRUE            OFF       UPDATE, COMMIT   Begin, update, commit
     TRUE            ON        UPDATE, COMMIT   Begin, update, commit
     FALSE           ON        UPDATE           Update

Autocommit mode is in effect for the first three combinations shown above; it doesn't matter much if it's client-side or server-side autocommit, the important fact is that each database change is treated as a transaction by itself. With autocommit in effect it is impossible for an application transaction to span more than one INSERT, UPDATE or DELETE statement.

Tip: Never use autocommit. Always take explicit control of transaction design in your applications. Use the settings shown on the last line of the table above: always set the CHAINED option 'ON', set any client-side AutoCommit flag to FALSE, and explicitly execute a COMMIT or ROLLBACK statement when it is time to finish a transaction.

SQL Anywhere may or may not actually perform a commit operation when it executes a COMMIT statement; this depends on the current value of a builtin connection-level variable called @@TRANCOUNT. If @@TRANCOUNT is zero or 1 when a COMMIT statement is executed, SQL Anywhere will perform a commit; if @@TRANCOUNT is 2 or higher the COMMIT statement will be ignored. Here are the details of how @@TRANCOUNT gets changed and used:

@@TRANCOUNT is set to zero when a connection is started, and is set back to zero whenever a transaction is finished.

In unchained mode, each explicit BEGIN TRANSACTION statement increases @@TRANCOUNT by 1.

In chained mode, if @@TRANCOUNT is zero when an implicit transaction has been started, it isn't immediately changed, but the next explicit BEGIN TRANSACTION statement will set @@TRANCOUNT to 2. Subsequent BEGIN TRANSACTION statements increase @@TRANCOUNT by 1.

In chained mode, if an explicit BEGIN TRANSACTION statement is executed before a transaction is implicitly started, @@TRANCOUNT is set to 1; subsequent BEGIN TRANSACTION statements increase @@TRANCOUNT by 1.

In both chained and unchained modes, each COMMIT statement decreases @@TRANCOUNT by 1 until it reaches zero. If @@TRANCOUNT reaches zero when a COMMIT statement is executed, an actual commit operation is performed: all the transaction's database changes are made permanent, all the locks held by the transaction are released, and the transaction is ended. If @@TRANCOUNT does not reach zero when a COMMIT statement is executed, nothing more is done; as far as the outstanding database changes and locks are concerned, the COMMIT is ignored, and the transaction is still in progress.

In both chained and unchained modes, a ROLLBACK statement sets @@TRANCOUNT to zero, rolls back all the transaction's database changes, releases all the locks, and ends the transaction. This happens regardless of the current value of @@TRANCOUNT.
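As a reading aid only, the @@TRANCOUNT bookkeeping described above can be modeled in a few lines of Python (a toy model of the rules as stated, not SQL Anywhere itself):

```python
class TranCount:
    """Toy model of the @@TRANCOUNT rules described above."""

    def __init__(self, chained=True):
        self.chained = chained
        self.count = 0             # @@TRANCOUNT starts at zero
        self.implicit_open = False

    def dml(self):
        # In chained mode, DML implicitly starts a transaction but does
        # not immediately change @@TRANCOUNT.
        if self.chained and self.count == 0:
            self.implicit_open = True

    def begin_transaction(self):
        if self.chained and self.implicit_open and self.count == 0:
            self.count = 2         # jumps straight to 2 in this case
        else:
            self.count += 1

    def commit(self):
        if self.count > 0:
            self.count -= 1
        if self.count == 0:        # actual commit: changes made permanent
            self.implicit_open = False
            return True
        return False               # COMMIT ignored, transaction continues

    def rollback(self):
        # Always ends the transaction, regardless of @@TRANCOUNT.
        self.count = 0
        self.implicit_open = False

t = TranCount(chained=True)
t.dml()                  # implicit transaction starts; @@TRANCOUNT still 0
t.begin_transaction()    # @@TRANCOUNT jumps to 2
t.commit()               # 2 -> 1: this COMMIT is ignored
t.commit()               # 1 -> 0: an actual commit is performed
```

Running the trace at the bottom shows exactly why the "inner" COMMIT in the original procedure does nothing: it only lowers the counter.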

The term "nested transaction" is sometimes used when @@TRANCOUNT rises to 2 or higher. That term is misleading, however, because only the outermost transaction has any meaning as far as database changes and locks are concerned. When @@TRANCOUNT rises to 2 or higher, a COMMIT statement does absolutely nothing except lower the @@TRANCOUNT value. A nested transaction implies that changes made in an inner transaction may be made permanent while changes made in the outer transaction are rolled back, and that simply is not possible in SQL Anywhere; there is no such thing as a nested transaction.

Tip: Don't use the BEGIN TRANSACTION statement at all. Use chained mode: let SQL Anywhere start each transaction implicitly, use explicit COMMIT and ROLLBACK statements to end each transaction, and don't disable COMMIT statements with extra BEGIN TRANSACTION statements.

Here is an example of a simple transaction; a parent row is inserted in the sales_order table in the ASADEMO database, and a corresponding child row is inserted in the sales_order_items table. The first INSERT starts the transaction. If both INSERT statements work OK, the transaction ends with a COMMIT; if a foreign key violation is detected, the SQLSTATE will be set to '23503' and the exception handler will end the transaction with a ROLLBACK:

BEGIN
   DECLARE @errormsg  VARCHAR ( 32767 );
   DECLARE error_23503 EXCEPTION FOR SQLSTATE '23503';

   INSERT sales_order ( id, cust_id, order_date, fin_code_id, region, sales_rep )
      VALUES ( 1, 101, CURRENT DATE, 'r1', 'Eastern', 299 );

   INSERT sales_order_items ( id , line_id, prod_id, quantity, ship_date )
      VALUES ( 1, 1, 999, 12, DATEADD ( DAY, 1, CURRENT DATE ) );

   COMMIT;

EXCEPTION
   WHEN error_23503 THEN
      SET @errormsg = ERRORMSG();
      ROLLBACK;
      MESSAGE 'ROLLBACK after error.' TO CONSOLE;
      MESSAGE STRING ( 'ERRORMSG() = ', @errormsg ) TO CONSOLE;
END;
In this particular case, the prod_id value of 999 causes a foreign key violation and the transaction ends with a ROLLBACK; here's the output from the MESSAGE statements:

ROLLBACK after error.
ERRORMSG() = No primary key value for foreign key 'ky_prod_id' in
   table 'sales_order_items'

For more information about the tables in the ASADEMO database, see Section 3.10 in Chapter 3, Selecting. For more information about the BEGIN block and exception handlers, see Sections 8.2 and 8.3 in Chapter 8, Packaging.

Tip: When designing transactions, watch out for any SQL statement that performs a commit as a side effect. Any statement that updates the SQL Anywhere system catalog tables falls into this category, and if a transaction is in progress it will be ended. For example, SET OPTION, GRANT and CREATE SYNCHRONIZATION USER all perform commits, and so do CREATE TABLE and CREATE INDEX when used on local permanent and global temporary tables. The LOAD TABLE statement also performs a commit, and so does TRUNCATE TABLE if the fast form is used; for more information about TRUNCATE TABLE see Section 5.7 in Chapter 5, Deleting. Some statements that affect schema don't perform commits, however: CREATE DATABASE doesn't, probably because it doesn't affect the current database, and neither does any statement involving a local temporary table. Also, the SET TEMPORARY OPTION statement doesn't do a commit, so it's possible to change connection options while a transaction is in progress.

answered 17 Oct '13, 10:40 by Breck Carter


Oops, that's a very long answer - it makes it somewhat difficult to reach the other ones...

FWIW: Breck himself won't share the link - he seems to be shy in that respect:) - but he has blogged on that exact question here:


(18 Oct '13, 03:33) Volker Barth

That's an excellent article. Thanks.

(25 Oct '13, 09:36) samrae

Excellent answer. Very thorough. I'm taking your advice and trying to avoid using these kinds of manually generated Ids. Everywhere I can I'll be converting to autoinc and possibly using shadow tables for the others, so I can take advantage of the concurrency handling without having to rewrite large parts of the application. Also, several people have recommended your book to me so, I guess you're the guy to ask.

(25 Oct '13, 09:39) samrae

Just a few remarks:

As you are basically trying to build a primary key generator, there are several different approaches, and personally, I would recommend DEFAULT AUTOINCREMENT (or SEQUENCEs for v12 and above) over a separate "key value table" like the one you are using currently, because the former should behave better with many concurrent transactions - see this help topic, which gives an overview:

That being said, I guess your construct is safe even on the lower default isolation levels (0 or 1, depending on the API used): SQL Anywhere prevents two concurrent transactions from modifying the same row by using exclusive write locks - writers always block other writers on the same row. So in my understanding, when transaction A has reached the UPDATE statement, it will block all other transactions that are trying to update the same row until A has been committed (or rolled back). As the SELECT follows the UPDATE, it should be safe. - As I stated, that's a guess.
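The writers-block-writers behaviour described above can be observed in miniature with Python's sqlite3 (a hedge: sqlite locks the whole database file rather than individual rows, so this only illustrates the blocking idea, not SQL Anywhere's row-level granularity):

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file; connection "a" plays
# transaction A, connection "b" plays transaction B.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path, timeout=0)   # timeout=0: fail instead of waiting
b = sqlite3.connect(path, timeout=0)
a.execute("CREATE TABLE t (k INTEGER PRIMARY KEY, v INTEGER)")
a.execute("INSERT INTO t VALUES (1, 0)")
a.commit()

a.execute("UPDATE t SET v = v + 1 WHERE k = 1")  # A now holds a write lock
blocked = False
try:
    b.execute("UPDATE t SET v = v + 1 WHERE k = 1")  # B is blocked...
except sqlite3.OperationalError:                     # ..."database is locked"
    blocked = True
    b.rollback()               # clear B's failed attempt before retrying
a.commit()                                           # A commits, lock released
b.execute("UPDATE t SET v = v + 1 WHERE k = 1")      # ...and B succeeds
b.commit()
```

With a nonzero timeout, B would simply wait for A's commit instead of failing, which is exactly the single-threaded bottleneck the answers warn about.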

As to the nested transactions: Note that only the outermost COMMIT will make the changes to the database permanent - the inner COMMIT will only decrease the nesting level, so it will have no influence in your sample. And note that the stored procedure may be called within an outer transaction itself, so even the COMMIT at the end of the SP's body might not make the changes permanent... That may mean you will prevent other transactions from getting a new ID value for a potentially long timespan, thus reducing concurrency - or even introducing deadlocks, in case transaction A (after it has got a new ID) tries to update a second row that has already been modified by transaction B before B tried to gather a new ID...

It should also be mentioned that the database server can use "server-side autocommit", i.e. a mode in which each DML statement (even within stored procedures) is treated as a separate transaction... EDIT Correction based on Breck's answers: ...unless it uses explicit BEGIN TRANSACTION statements, as in your case. So that would not lead to problems with your current implementation. (In that mode, without the explicit BEGIN TRANSACTION, the STP would not be safe.)

Did anybody tell you transactions are easy?

answered 16 Oct '13, 09:44 by Volker Barth

edited 18 Oct '13, 03:42

Thanks, I use autoinc in all my new code. This was only to get round some legacy stuff.

(16 Oct '13, 11:24) samrae

How about this? The suggestion came up a couple of times to use shadow tables and GET_IDENTITY to solve the problem.

ALTER FUNCTION "DBA"."GetNextIdNew"( @IdName CHAR(20) )
RETURNS BIGINT
AS
BEGIN
  DECLARE @ReturnValue BIGINT
  DECLARE @OldIdValue BIGINT
  DECLARE @TableExists INT
  DECLARE @TableName CHAR(40)
  SET @OldIdValue = 0
  SET @TableName = 'Shadow_'||@IdName

  SELECT @TableExists = COUNT(1) FROM sysObjects WHERE "Name" = @TableName

  IF @TableExists = 0
  BEGIN
      //MESSAGE 'Table Does Not Exist' TO CLIENT
      EXECUTE CreateShadowTable @IdName

      SELECT @OldIdValue = "Id" FROM UniqueIds WHERE "IdName" = @IdName
      IF @OldIdValue > 0
        EXECUTE SA_RESET_IDENTITY @TableName, 'DBA', @OldIdValue
      SET @ReturnValue = GET_IDENTITY(@TableName, 1)
  END
  ELSE
  BEGIN
      //MESSAGE 'Table Exists' TO CLIENT
      SET @ReturnValue = GET_IDENTITY(@TableName, 1)
  END

  RETURN @ReturnValue
END

The entire @TableExists = 0 section will only ever happen once for each Id and I'll go through each one in the old table and call this proc to create the shadow tables. It will also take care of any new Ids added this way, although that shouldn't be happening any more.

The create-table proc is a Watcom-SQL procedure because I couldn't get it to work with T-SQL, because I'm dumb. The IF NOT EXISTS is probably unnecessary, too.

ALTER PROCEDURE "DBA"."CreateShadowTable"( IN tablename CHAR(20) )
BEGIN
   EXECUTE IMMEDIATE
      'CREATE TABLE IF NOT EXISTS Shadow_' || tablename ||
      ' ( "Id" BIGINT DEFAULT AUTOINCREMENT PRIMARY KEY )';
END;
answered 24 Oct '13, 09:34 by samrae

You cannot be sure.

What I use is the following. (Probably this is also not safe :-) but it has worked for me for many years.)


    select ...
    into nKey, tsDate
    from "AFAdmin"."ORDERKEY" with (holdlock);

    if nKey is null then
        -- insert
    else
        -- update
    end if;


What you also have to be aware of is that building your own PK generator is #38 on the list How To Make SQL Anywhere Slow.

In newer versions of SQLA you should try to create a sequence. See "Creating a sequence".

answered 16 Oct '13, 08:37 by Thomas Duemesnil

edited 17 Oct '13, 02:21

I'd need to create a sequence for each new Id I wanted to store, though, wouldn't I? The advantage of the hack I've got at the moment is that it grows dynamically.

(16 Oct '13, 11:26) samrae

Yes. But it is defined one time and works much faster and more efficiently than any approach you can come up with! Depends on your needs.

(17 Oct '13, 02:22) Thomas Dueme...

You did not mention which version of SQLA you are using.

If you are using version 11 or above then I would suggest using a MERGE statement.

Example: I believe that something like this will do everything you need (note: I am using Watcom SQL syntax):

    set @IDValue = 1;  -- assume we are going to insert
    merge
     into "DBA"."UniqueIds" as ov( "IdName", "ID" )
    using ( select @idName as "IdName",
                   1       as "ID" ) nv
       on ov."IdName" = nv."IdName"
     when matched then update
      set ov."ID" = ov."ID" + 1,
          @IDValue = ov."ID"   -- extract the ID value from the table
     when not matched then insert;
    return @IDValue;

The way that the above MERGE statement works is this: We construct a row "nv" (for "New Value") and then merge that row into the existing "ov" (for "Old Value") that is your UniqueIds table. If the new value's IdName matches a row in the old value (aka UniqueIds), we update the row, setting ov."ID" = ov."ID" + 1; when there is no match, we simply insert the new row into the UniqueIds table.
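For comparison, the same insert-or-increment counter can be sketched with sqlite3's upsert syntax, INSERT ... ON CONFLICT DO UPDATE (an analogue of MERGE, not SQL Anywhere syntax; the function name is illustrative):

```python
import sqlite3

# "UniqueIds" mirrors the questioner's table.
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE UniqueIds ("IdName" TEXT PRIMARY KEY, "Id" INTEGER)')

def get_next_id(conn, id_name):
    # Insert-or-increment in a single atomic statement, like the MERGE.
    conn.execute(
        'INSERT INTO UniqueIds ("IdName", "Id") VALUES (?, 1) '
        'ON CONFLICT ("IdName") DO UPDATE SET "Id" = "Id" + 1',
        (id_name,),
    )
    # sqlite's upsert cannot assign to a variable mid-statement the way
    # the MERGE above does, so the value is read back on the same
    # connection, inside the same transaction.
    (value,) = conn.execute(
        'SELECT "Id" FROM UniqueIds WHERE "IdName" = ?', (id_name,)
    ).fetchone()
    return value

v1 = get_next_id(conn, "orders")   # inserts the row
v2 = get_next_id(conn, "orders")   # increments it
```

The key property carries over: the existence check, the insert, and the increment happen in one statement, so there is no window for another connection to slip in between them.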

Regarding your question about locking: A row is not locked within a transaction until it is selected (depending on your isolation level) or updated. For example, if you are using isolation level 0 then no read locks are taken.


answered 16 Oct '13, 09:02 by Mark Culp

edited 16 Oct '13, 11:29

I'll have a look at MERGE. Looks interesting.

(16 Oct '13, 11:24) samrae

I'm using version by the way

(16 Oct '13, 11:24) samrae

Note: AFAIK, the important point is that selecting the ID value and possibly incrementing the next ID value are done in one single (and therefore atomic) statement here. That should prevent "dirty reads" here, as only one transaction can execute that statement for a given "IDName" - the next transaction trying to do so would be blocked by a write (or insert) lock.

(16 Oct '13, 16:11) Volker Barth

The code you show is extremely dangerous. Its operation depends on whether the connection is operating in CHAINED = 'ON' or CHAINED = 'OFF' mode. Its operation also depends on whether a transaction is already in progress when the block is entered or not. There is no way to inspect just this code, and determine what it does.

It is highly recommended you avoid using the BEGIN TRANSACTION statement altogether, and use the CHAINED = 'ON' mode (which is the default except for Open Client and jConnect connections, where you have to remove or modify or override the effect of the evil sp_tsql_environment system procedure).

Avoiding BEGIN TRANSACTION implies you will also be avoiding use of nested COMMIT statements which are misleading to the reader. In fact, nested COMMIT statements don't do anything at all and there is no such thing as a nested transaction.

If this code is used to generate unique auto-incrementing primary key values, then switch to using DEFAULT AUTOINCREMENT on the column (like Volker said), and do not specify an explicit value when doing the INSERT. SQL Anywhere will generate unique values with no locks that cause blocks, and if you need to know what value got generated just SELECT @@IDENTITY after the INSERT.
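The advice translates to most databases; here is a sketch in Python with sqlite3, where INTEGER PRIMARY KEY stands in for DEFAULT AUTOINCREMENT and cursor.lastrowid stands in for SELECT @@IDENTITY (table and column names are made up):

```python
import sqlite3

# Let the database generate the key; read the generated value back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")

cur = conn.execute("INSERT INTO orders (item) VALUES ('widget')")
first_id = cur.lastrowid    # the database generated this key for us

cur = conn.execute("INSERT INTO orders (item) VALUES ('gadget')")
second_id = cur.lastrowid
```

No explicit key value is ever supplied by the application, so there is no counter row to lock and no two connections ever contend over key generation.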

answered 16 Oct '13, 11:27 by Breck Carter

edited 16 Oct '13, 14:07

I'm very aware that this is not the right way to do things. As I said, I'm pretty new to writing stored procedures, but I'm a fairly careful programmer and I wanted to be sure about something. Apologies. I'll keep my questions to myself in future.

(16 Oct '13, 11:51) samrae

I have edited my reply to make it sound less like a personal attack. You did say it was legacy code so it was an easy assumption that you did not have a personal stake in it.

(16 Oct '13, 14:12) Breck Carter

there is no such thing as a nested transaction

I agree that this aspect should be pointed out in the current discussion to prevent a common misunderstanding (and your original posting w.r.t. @@TRANCOUNT may have sounded harsh but was very true - and made me smile:)).

Nevertheless, it should also be stated that nested transactions (like savepoints) can be used to roll back parts of a transaction without having to roll back the whole transaction - say, if you try to store a bunch of objects in one transaction, and one object fails for some unexpected reason, and you want to store the rest of them - we have such business rules...

(16 Oct '13, 15:57) Volker Barth

I hope the wealth of answers here in a short time (and from volunteers, not to forget) is of help to you, and I hope you don't feel that we are disrespectful to your questions - please keep asking...

(16 Oct '13, 16:02) Volker Barth

To be 100% clear, a nested savepoint and rollback is not the same as "nested transaction". When folks hear "nested transaction" they think "nested commit"... which is a dangerous fantasy :)

(17 Oct '13, 08:49) Breck Carter