We are running into this issue with a remote site on which we are trying to "reset" synchronization. We have found that when this involves a large enough number of records (due to re-synchronizing all data from some point in the past), synchronization fails during the upload stage due to "too many temporary tables". We have found that when this happens, we can get around it by excluding the offending tables (or some large subset of their records) from synchronization. But as you may imagine, that is less than an ideal solution.

Both the consolidated and the remotes are running SQL Anywhere / MobiLink on Windows Server 2008 (64-bit). In the current case, the table the upload is failing on has over 326,000 inserts in the upload. The error logged on the client side is (translated from French):

E. 2013-02-08 11:35:30. Server error: Message: ODBC: [Sybase][ODBC Driver][SQL Anywhere] Too many temporary tables in connection (ODBC state = HY000, Native error code = -817). Table name: cs_archive_narratives. Primary key(s): 4295677467

The only triggers on the table on the consolidated are those generated by Mobilink on creating the synchronization model.

Any insight or suggestions would be appreciated.

Thanks, Bob Leviton

asked 20 Feb '13, 09:34


Bob Leviton
accept rate: 0%

FYI "Number of temporary tables per connection 2**20 = 1048576"


(20 Feb '13, 10:19) Breck Carter

There may be more or different information in the MobiLink server-side log file.

(20 Feb '13, 10:21) Breck Carter

Thanks, Breck. FYI, what I see in the ML server log from the consolidated is pretty much the same (translated from French):

[-10002] Consolidated database server or ODBC error: ODBC: [Sybase][ODBC Driver][SQL Anywhere] Too many temporary tables in connection (ODBC state = HY000, Native error code = -817)
[-10072] Unable to insert into table 'cs_archive_narratives' using upload_insert
Error context:
Remote ID: 4201c21c-0165-4051-9291-7c449181ede2
User name: GN_GRISNEZ
Modified user name: GN_GRISNEZ
Transaction: upload
Table name: cs_archive_narratives
Insert row: 4295677467 ...
Script version: sm3_replication_v3
Script:
/* Insert the row into the consolidated database. */
INSERT INTO "cs"."cs_archive_narratives" ( "narrative_ID", "dssi_message_number", "narrative_type", "transmit_dtg", "receipt_dtg", "display_dtg", "ack_dtg", "ack_user", "ack_note", "narrative_text", "flag_for_delete", "cm_incident_number", "ACCEPT_REQUIRED", "DISPLAY_OVERRIDE", "MCC_SITE_ID", "EMS_COMPOSITE_ID", "EMS_ELEMENTAL_ID", "SUPPLEMENTAL_TEXT", "ready_for_processing", "source_type", "ack_station", "ack_band", "system_source_type", "system_uid", "mlink_forceresetcode", "routed_to_siteid", "externalsystem_ack_code" )
ON EXISTING UPDATE VALUES ( {ml r."narrative_ID"}, {ml r."dssi_message_number"}, {ml r."narrative_type"}, {ml r."transmit_dtg"}, {ml r."receipt_dtg"}, {ml r."display_dtg"}, {ml r."ack_dtg"}, {ml r."ack_user"}, {ml r."ack_note"}, {ml r."narrative_text"}, {ml r."flag_for_delete"}, {ml r."cm_incident_number"}, {ml r."ACCEPT_REQUIRED"}, {ml r."DISPLAY_OVERRIDE"}, {ml r."MCC_SITE_ID"}, {ml r."EMS_COMPOSITE_ID"}, {ml r."EMS_ELEMENTAL_ID"}, {ml r."SUPPLEMENTAL_TEXT"}, {ml r."ready_for_processing"}, {ml r."source_type"}, {ml r."ack_station"}, {ml r."ack_band"}, {ml r."system_source_type"}, {ml r."system_uid"}, {ml r."mlink_forceresetcode"}, {ml r."routed_to_siteid"}, {ml r."externalsystem_ack_code"} )

End of error context

(20 Feb '13, 10:42) Bob Leviton

I notice that the INSERT statement in the log includes ON EXISTING UPDATE but synchronization models don't generate that clause. Also, temporary tables are only created by synchronization models if you have enabled conflict resolution for uploaded updates (not inserts), and only one or two are created per synchronized table when the synchronization model is deployed (not during synchronizations). So the problem likely isn't from what the sync model generates.

Perhaps some Overridden or User-defined script is causing the problem, by either directly or indirectly (eg. in a called stored procedure) creating temporary tables.


A colleague reminded me that statement-level triggers (such as those created by synchronization models) create temporary tables, one for REFERENCING OLD and one for REFERENCING NEW, though those should be dropped at the end of the trigger. In case the sync model triggers are causing the problem, there are some workarounds (assuming you are not using shadow tables for timestamp downloads):

  1. The insert triggers are only needed if deleted rows can be re-added. Otherwise you can drop those triggers (the ones named <table>_ins).
  2. The update triggers (named <table>_upd) can be dropped if you change the last_modified column from DEFAULT CURRENT TIMESTAMP to DEFAULT TIMESTAMP. (FYI, this became the default for sync models in version 12.0.1.)
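
As a minimal sketch of those two workarounds (the table, trigger, and column names here are placeholders, not taken from the actual synchronization model):

```sql
-- 1. Drop the model-generated insert trigger (safe only if deleted
--    rows can never be re-added).
DROP TRIGGER "my_table_ins";

-- 2. Let the database maintain the last-modified column itself,
--    then drop the model-generated update trigger.
ALTER TABLE "my_table"
    ALTER "last_modified" SET DEFAULT TIMESTAMP;
DROP TRIGGER "my_table_upd";
```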

Note that since your upload_insert scripts have ON EXISTING UPDATE, uploaded inserts would fire update triggers instead of insert triggers for existing rows.

Unfortunately, if you are using shadow tables to hold the timestamp column, you need the triggers.

---2nd update---

Updating to save digging through the comments: The problem seems to arise because, although the temporary tables created by statement-level triggers are dropped when each trigger completes, the temporary table IDs are not released until the transaction commits, so the limit can be reached by large uploads that fire statement-level triggers.

Avoiding triggers is the best solution, since it should also improve performance. For example, with a SQL Anywhere consolidated database, you can use DEFAULT TIMESTAMP for your last-modified columns instead of needing an update trigger. (As of 12.0.1 that is the default for synchronization models.) Also, you don't need an insert trigger if you never insert rows with the same primary key as previously deleted rows, or if you don't download deletes.

If you have to use triggers, there are two less-performant options:

  1. Use incremental uploads (for dbmlsync, the inc extended option) so that commits happen more often during the upload and temporary table IDs are released sooner.
  2. Remove the unused REFERENCING OLD clause from the generated update triggers, which halves the number of temporary tables they create.

permanent link

answered 20 Feb '13, 12:04


Graham Hurst
accept rate: 29%

edited 22 Feb '13, 16:09

Thanks for this info, Graham. So does ON EXISTING UPDATE of itself cause temporary tables to be created? We do not have any conflict resolution enabled, and we haven't done much in the way of customization other than the ON EXISTING UPDATE, which we need to avoid conflicts.

(20 Feb '13, 13:05) Bob Leviton

So does ON EXISTING UPDATE of itself cause temporary tables to be created?

I don't think so.

You might use dbtran on a consolidated database log file to help track down what is creating the temporary tables.

(20 Feb '13, 14:25) Graham Hurst

FYI, dbtran would only show any CREATE TEMPORARY TABLE statements from your scripts, stored procedures or triggers. Declared local temporary tables are not recorded in the transaction log.

(20 Feb '13, 16:07) Graham Hurst

Graham, your update above regarding the model-generated triggers could explain the source of the temporary tables, except that from the numbers we are seeing, the tables must not actually be dropped after each execution of each trigger. They seem to persist through the entire upload, or at least for all rows (inserts) of a given table. Is that possible?

(21 Feb '13, 10:53) Bob Leviton

What do the triggers look like? They should be visible in Sybase Central.

(21 Feb '13, 11:00) Breck Carter

FWIW, in case DBTRAN doesn't give enough clues, Request Logging might give more insight: it can also log the operations inside procedures and triggers.

(21 Feb '13, 11:17) Volker Barth

The temp tables are dropped, but I've just found out that the temp table ids cannot be re-used until after a commit. So the problem happens if there are more than 1048576 temp tables created between commits. This could cause the problem if you are uploading more than a million inserts or half a million updates with the model-generated triggers. (Update - see clarification in later comment.)

If you need to use the triggers, then a solution for dbmlsync would be to use incremental uploads so that commits are more frequent.
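
For dbmlsync, that could be sketched like this (the publication and MobiLink user names are placeholders; the inc extended option sets the incremental upload size, and each increment is committed separately):

```sql
-- Store the 'inc' extended option with the subscription so dbmlsync
-- sends the upload in roughly 100 KB increments, committing each one.
ALTER SYNCHRONIZATION SUBSCRIPTION
    TO "my_publication" FOR "ml_user"
    ADD OPTION inc='100k';
```

The same option can also be passed on the dbmlsync command line via -e "inc=100k".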

BTW, the generated UPDATE trigger includes REFERENCING OLD but that is never used. If you remove that from the trigger definition it should halve the number of temp tables created by that trigger.
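
A sketch of that change, with placeholder names (the point is only that the REFERENCING clause keeps just the NEW row set; the original model-generated trigger body stays as it was):

```sql
-- Drop the existing trigger first, then recreate it without the
-- unused REFERENCING OLD row set, so each invocation creates one
-- temporary table instead of two.
CREATE TRIGGER "my_table_upd" AFTER UPDATE ON "my_table"
REFERENCING NEW AS inserted   -- was: OLD AS deleted NEW AS inserted
FOR EACH STATEMENT
BEGIN
    -- ...original model-generated trigger body, unchanged...
END;
```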

(21 Feb '13, 11:50) Graham Hurst

You are saying a separate temporary table (or even two) is generated for each uploaded row?

(21 Feb '13, 12:23) Volker Barth

They are statement-level triggers, so it would probably be for every 10 rows (assuming the default rowset size). It would be per row if -s 1 was used for the MobiLink server.

(21 Feb '13, 12:35) Graham Hurst

Graham, thanks, it looks like incremental uploads should solve this problem for us. We are testing it now, will update this thread after we have results from the customer systems.

(22 Feb '13, 15:28) Bob Leviton

Any results?

(06 Mar '13, 18:00) Graham Hurst

ON EXISTING UPDATE may use work tables, which would also use up temp table ids.

(06 Mar '13, 18:12) Graham Hurst

Graham, we are still working on this with the customer. Latest update was that on one remote we were still getting failures due to "too many temporary tables", even when using incremental upload with increment of 100K. On other remotes that seems to have solved the problem. Since we can't get our hands on a copy of the problem remote database, we are now looking at doing a "clean" reset on it (restart replication as of now, without trying to upload or download the massive backlog).

(11 Mar '13, 11:04) Bob Leviton

last updated: 11 Mar '13, 11:04