
Dear All,

I have a primary-key auto-increment field (PID). How do I get its value as soon as a new row is inserted?

private void test(string mycode, string myname)
{
    string sql = "insert into Test (mycode, myname) values (@mycode, @myname)";
    using (SAConnection conn = new SAConnection("DSN=MySA12_Test1"))
    using (SACommand cmd = new SACommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@mycode", mycode);
        cmd.Parameters.AddWithValue("@myname", myname);
        try {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
        catch (SAException) { /* log or rethrow as needed */ }
        finally {
            if (conn.State == ConnectionState.Open) conn.Close();
        }
    }
}

asked 18 May '13, 00:23

KayKay


select @@identity from sys.dummy ;
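In ADO.NET terms, this answer amounts to running a second command with ExecuteScalar() right after the insert. A minimal sketch, assuming the DSN, table, and SACommand/SAConnection classes from the question (mycode/myname are the method's parameters); @@identity is connection-specific, so both commands must share the same open connection:

```csharp
using (SAConnection conn = new SAConnection("DSN=MySA12_Test1"))
{
    conn.Open();
    // 1) insert the row
    using (SACommand ins = new SACommand(
        "insert into Test (mycode, myname) values (@mycode, @myname)", conn))
    {
        ins.Parameters.AddWithValue("@mycode", mycode);
        ins.Parameters.AddWithValue("@myname", myname);
        ins.ExecuteNonQuery();
    }
    // 2) fetch the generated PID on the SAME connection
    using (SACommand sel = new SACommand("select @@identity from sys.dummy", conn))
    {
        int pid = Convert.ToInt32(sel.ExecuteScalar());
    }
}
```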

answered 18 May '13, 05:33

Dmitri


select @@identity; -- tl;dr :)

(21 May '13, 05:57) Breck Carter

If you are using v12 or above, you can also combine the INSERT and SELECT in one statement by using the DML derived table feature (a.k.a. "SELECT over a DML statement"), as Breck has discussed in this FAQ. (And I think the current question was Breck's inspiration for that FAQ :))

In your case, you could simply use

select New_Test.PID from
    (insert into Test (mycode, myname) values (@mycode,@myname))
    referencing (final as New_Test);

Note that this is a query returning a result set, so you would need to execute it with ExecuteReader() or ExecuteScalar() instead of ExecuteNonQuery().
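Called from the .NET provider, that could look like the following sketch (DSN, table, and parameter names taken from the question; untested against a live server):

```csharp
// The INSERT...REFERENCING statement returns a one-row result set
// containing the new PID, so ExecuteScalar() reads it directly.
string sql =
    "select New_Test.PID from " +
    "(insert into Test (mycode, myname) values (@mycode, @myname)) " +
    "referencing (final as New_Test)";
using (SAConnection conn = new SAConnection("DSN=MySA12_Test1"))
using (SACommand cmd = new SACommand(sql, conn))
{
    cmd.Parameters.AddWithValue("@mycode", mycode);
    cmd.Parameters.AddWithValue("@myname", myname);
    conn.Open();
    int pid = Convert.ToInt32(cmd.ExecuteScalar());
}
```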

EDIT: Thanks to Reimer: "New" is a keyword and would need to be quoted, so New_Test seems easier to use as an alias.


answered 20 May '13, 07:20

Volker Barth


"I think Breck has had the current question as inspiration" ...clever lad, you are! Sadly, my approach didn't work; yours apparently does (you tested it, right? :)

(20 May '13, 09:11) Breck Carter

While the statement looks pretty convincing, it wouldn't run in my test (syntax error at "." in line 1).

(21 May '13, 03:23) Reimer Pods

Dmitri is correct; however, I would suggest creating a function and calling the function from your application.


-- Reconstructed: the CREATE FUNCTION header was missing from the original
-- post, so the function name and parameter types here are illustrative.
CREATE FUNCTION xxx.InsertTest(@MyCode VARCHAR(20), @MyName VARCHAR(50))
RETURNS INTEGER
BEGIN
    DECLARE @Result INTEGER;
    INSERT INTO xxx.Test(MyCode, MyName) VALUES(@MyCode, @MyName);
    SELECT @@Identity INTO @Result;
    RETURN (@Result);
END;
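Assuming a wrapper function declared roughly like the above (the name xxx.InsertTest and the literal argument values below are hypothetical), the application would then invoke it as a scalar expression instead of issuing the INSERT itself:

```sql
-- returns the new PID generated by the insert
SELECT xxx.InsertTest('C001', 'some name') FROM sys.dummy;
```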


answered 20 May '13, 06:56

J Diaz

What reason(s) do you have for creating a function?

In the real world, tables have lots of columns, so the CALL would have many arguments, and the CREATE FUNCTION would have many parameters, and the INSERT would have many names and many values... about 2x the amount of code to write and debug and maintain.

Information hiding is great... when it provides a benefit.

(20 May '13, 09:14) Breck Carter

OK, I'm not sure why a function would have more parameters than an INSERT statement; for example, I believe the function above may be easier to call than the INSERT statement.

But more importantly, it enforces standards, and provides security in that your application (or application user) doesn't need to have insert permission; only the function or procedure does. It also provides a degree of separation between your application and the data structures, and a single place in which all inserts into xxx.Test are performed.

Finally, well designed table structures don't have lots of columns :-).

(20 May '13, 19:26) J Diaz

If a table has 10 columns, an INSERT statement has 20 items (10 in the name list, 10 in the VALUES list).

With a function call, you have 40 items: 10 function arguments in the call, 10 function parameters, 10 names in the INSERT and 10 parameter references in the VALUES list... that's where the "about 2x the amount of code" comes from.

The "enforces standards" argument presumes a hierarchy of developers: careful knowledgeable developers write the function, while sloppy or ignorant developers write the application code, and the sloppy ignorant ones can't be trusted to write an actual INSERT (or they don't even know HOW to code an INSERT because all they know are the basics of the application programming language). It is the Mayor Bloomberg theory of application development.

The "provides security" argument is a powerful one, when appropriate; for example, in a multi-user enterprise application. In the case of, say, a single-user embedded system, database-level permissions and security and user ids and passwords are just not necessary at all... they are a COMPLETE waste of developer effort, UTTERLY without benefit. There are more such databases out there than one might think, scuttling about in the dark, taking over the world :)

The "degree of separation" argument presumes that separation is a good thing per se... but it's not necessarily a good thing, especially when the introduction of extra interfaces makes the whole application more complex. Interfaces need to be debugged, too... and a function call is an extra interface. An INSERT is not ... that ... difficult ... that it needs to be hidden from view, is it?

The "single place" argument presumes that it is actually possible for a single INSERT to satisfy all requirements for inserting in the table. Sometimes it does, but often in those cases, there is only one call so the savings are zero. In other cases, different contexts have different needs, and those needs are reflected in extra logic in the function... and extra testing whenever a change is made to the function (MORE testing, because the function is MORE dangerous to change now that it serves many masters).

The "well designed table structures don't have lots of columns" argument does not apply. 10 is not a lot of columns. Neither is 20. Not in the real world. However "2x", as in "two times the work", is a lot of work. Two times the lines of code, two times the number of functional units, two times the debugging effort, one-half the clarity (because a function call doesn't actually tell the reader what's really going on, you MUST delve into the function to determine that)... all those are bad things.

Besides, simple, easy-to-understand, well-designed tables with a large number of columns DO exist, and would be less well designed if they were arbitrarily fractured into separate tables to satisfy some arbitrary limit on the number of columns. The "too large" argument is usually accompanied by examples that aren't normalized, so it's not the number of columns that is the problem... the number of columns is the RESULT of a design error, not a design error in itself.

Having said all that, SQL functions and procedures are wonderful things, for all the reasons you mention: enforces standards, degree of separation, single place, all of them. I write functions and procedures all the time, for all those reasons... in fact, I spend MOST of my time writing them, almost zero time up in the application layer.

Just not in this case, not in the case of just ... about ... the simplest verb in all of SQL, the INSERT.

(21 May '13, 05:49) Breck Carter