Hi,

We are running into this issue with a remote site on which we are trying to "reset" synchronization. We have found that when this involves a large enough number of records (because we are re-synchronizing all data from some point in the past), synchronization fails during the upload stage with "too many temporary tables". When this happens, we can work around it by excluding the offending tables (or some large subset of their records) from synchronization, but as you may imagine, that is less than an ideal solution.

Both the consolidated and the remotes are running SQL Anywhere / MobiLink 11.0.1.2506 on Windows Server 2008 (64-bit). In the current case, the table the upload is failing on has over 326,000 inserts in the upload. The error logged on the client side is (translated from French):

E. 2013-02-08 11:35:30. Server error: Message: ODBC: [Sybase][ODBC Driver][SQL Anywhere] Too many temporary tables in connection (ODBC State = HY000, Native error code = -817). Table name: cs_archive_narratives. Primary key(s): 4295677467

The only triggers on the table on the consolidated are those generated by MobiLink when creating the synchronization model. Any insight or suggestions would be appreciated.

Thanks, Bob Leviton
Perhaps some overridden or user-defined script is causing the problem by creating temporary tables, either directly or indirectly (e.g. in a called stored procedure).

---Update--- A colleague reminded me that statement-level triggers (such as those created by synchronization models) create temporary tables, one for each row set (old and/or new) referenced by the trigger.
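To illustrate, a model-generated statement-level shadow-table trigger might look roughly like this (a hedged sketch only: the trigger, shadow-table, and column names here are hypothetical, modeled on the cs_archive_narratives table from this thread):

```sql
-- Sketch of a statement-level shadow-table trigger like those the
-- synchronization model generates (names are hypothetical).
-- The REFERENCING row set ("inserted_rows") is materialized by the
-- server as a temporary table, which is what consumes temp-table ids
-- during a large upload.
CREATE TRIGGER "mlm_cs_archive_narratives_ins"
AFTER INSERT ON "cs"."cs_archive_narratives"
REFERENCING NEW AS inserted_rows
FOR EACH STATEMENT
BEGIN
    -- Record the modification time of each uploaded row in the shadow table.
    INSERT INTO "cs"."cs_archive_narratives_mod"( "narrative_ID", "last_modified" )
    ON EXISTING UPDATE
    SELECT "narrative_ID", CURRENT TIMESTAMP
    FROM inserted_rows;
END;
```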
Unfortunately, if you are using shadow tables to hold the timestamp column, you need the triggers.

---2nd update--- Updating to save digging through the comments: The problem seems to arise because temporary tables created by statement-level triggers are dropped after the trigger runs, but temporary-table ids are held until the transaction commits, so the limit can be reached by large uploads that fire statement-level triggers. Avoiding triggers is the best solution, since that should also improve performance. For example, with a SQL Anywhere consolidated database, you can use a DEFAULT TIMESTAMP column on the synchronized table itself for timestamp-based downloads instead of trigger-maintained shadow tables. If you have to use triggers, there are two less-performant options: use dbmlsync incremental uploads so that commits occur more often during the upload, or split the backlog across multiple smaller synchronizations.
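The trigger-free approach could be sketched as follows (hedged: this assumes you are able to alter the synchronized table, and the "last_modified" column name is hypothetical):

```sql
-- Sketch: keep the modification timestamp in the synchronized table
-- itself, so no trigger (and no trigger-created temporary table) is
-- needed. SQL Anywhere maintains a DEFAULT TIMESTAMP column
-- automatically on INSERT and UPDATE.
ALTER TABLE "cs"."cs_archive_narratives"
    ADD "last_modified" TIMESTAMP DEFAULT TIMESTAMP;

-- The timestamp-based download_cursor script can then read the column
-- directly, along the lines of:
--   SELECT "narrative_ID", ...
--     FROM "cs"."cs_archive_narratives"
--    WHERE "last_modified" >= {ml s.last_table_download}
```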
Thanks for this info, Graham. So does ON EXISTING UPDATE by itself cause temporary tables to be created? We do not have any conflict resolution enabled, and we haven't done much customization beyond the ON EXISTING UPDATE, which we need to avoid conflicts.
(20 Feb '13, 13:05)
Bob Leviton
I don't think so. You might use dbtran on a consolidated database log file to help track down what is creating the temporary tables.
(20 Feb '13, 14:25)
Graham Hurst
FYI,
(20 Feb '13, 16:07)
Graham Hurst
Graham, your update above regarding the model-generated triggers could explain the source of the temporary tables, except that, from the numbers we are seeing, the tables must not be getting dropped after each firing of each trigger. They seem to persist through the entire upload, or at least for all rows (inserts) of a given table. Is that possible?
(21 Feb '13, 10:53)
Bob Leviton
What do the triggers look like? They should be visible in Sybase Central.
(21 Feb '13, 11:00)
Breck Carter
FWIW, in case DBTRAN doesn't give enough clues, using Request Logging might give more insight - it can also log the operations inside procedures and triggers... - Note: Just in case...
(21 Feb '13, 11:17)
Volker Barth
The temp tables are dropped, but I've just found out that the temp-table ids cannot be re-used until after a commit. So the problem happens if more than 1,048,576 temp tables are created between commits. This could occur if you are uploading more than a million inserts, or half a million updates, with the model-generated triggers. (Update - see clarification in a later comment.) If you need to use the triggers, then a solution for dbmlsync would be to use incremental uploads so that commits are more frequent.
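For reference, dbmlsync incremental uploads can be requested on the command line (e.g. with the extended option string "inc=100k") or persisted on the remote database. A hedged sketch of the latter; the publication name here is a placeholder, and the MobiLink user name is taken from the log quoted later in this thread:

```sql
-- Sketch: store the incremental-upload extended option on the remote
-- subscription so each synchronization splits its upload into ~100 KB
-- pieces, each committed separately on the consolidated (which
-- releases the temp-table ids). "my_pub" is a placeholder name.
ALTER SYNCHRONIZATION SUBSCRIPTION TO "my_pub"
    FOR "GN_GRISNEZ"
    ADD OPTION inc='100k';
```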
(21 Feb '13, 11:50)
Graham Hurst
You are saying a separate temporary table (or even two) is created for each uploaded row?
(21 Feb '13, 12:23)
Volker Barth
1
They are statement-level triggers, so it would probably be one for every 10 rows (assuming the default rowset size). It would be per row if the rowset size were 1.
(21 Feb '13, 12:35)
Graham Hurst
Graham, thanks, it looks like incremental uploads should solve this problem for us. We are testing it now, will update this thread after we have results from the customer systems.
(22 Feb '13, 15:28)
Bob Leviton
Any results?
(06 Mar '13, 18:00)
Graham Hurst
ON EXISTING UPDATE may use work tables, which would also use up temp table ids.
(06 Mar '13, 18:12)
Graham Hurst
Graham, we are still working on this with the customer. Latest update was that on one remote we were still getting failures due to "too many temporary tables", even when using incremental upload with increment of 100K. On other remotes that seems to have solved the problem. Since we can't get our hands on a copy of the problem remote database, we are now looking at doing a "clean" reset on it (restart replication as of now, without trying to upload or download the massive backlog).
(11 Mar '13, 11:04)
Bob Leviton
FYI "Number of temporary tables per connection 2**20 = 1048576"
http://dcx.sybase.com/1101/en/dbadmin_en11/lt-size.html
There may be more or different information in the MobiLink server-side log file.
Thanks, Breck. FYI, what I see in the ML server log on the consolidated is pretty much the same (translated from French):

[-10002] Consolidated database server or ODBC error: ODBC: [Sybase][ODBC Driver][SQL Anywhere] Too many temporary tables in connection (ODBC State = HY000, Native error code = -817)
[-10072] Unable to insert into table 'cs_archive_narratives' using upload_insert
Error context:
  Remote ID: 4201c21c-0165-4051-9291-7c449181ede2
  User name: GN_GRISNEZ
  Modified user name: GN_GRISNEZ
  Transaction: upload
  Table name: cs_archive_narratives
  Insert row: 4295677467 ...
  Script version: sm3_replication_v3
  Script:
    /* Insert the row into the consolidated database. */
    INSERT INTO "cs"."cs_archive_narratives" (
      "narrative_ID", "dssi_message_number", "narrative_type",
      "transmit_dtg", "receipt_dtg", "display_dtg", "ack_dtg",
      "ack_user", "ack_note", "narrative_text", "flag_for_delete",
      "cm_incident_number", "ACCEPT_REQUIRED", "DISPLAY_OVERRIDE",
      "MCC_SITE_ID", "EMS_COMPOSITE_ID", "EMS_ELEMENTAL_ID",
      "SUPPLEMENTAL_TEXT", "ready_for_processing", "source_type",
      "ack_station", "ack_band", "system_source_type", "system_uid",
      "mlink_forceresetcode", "routed_to_siteid", "externalsystem_ack_code"
    )
    ON EXISTING UPDATE
    VALUES (
      {ml r."narrative_ID"}, {ml r."dssi_message_number"}, {ml r."narrative_type"},
      {ml r."transmit_dtg"}, {ml r."receipt_dtg"}, {ml r."display_dtg"}, {ml r."ack_dtg"},
      {ml r."ack_user"}, {ml r."ack_note"}, {ml r."narrative_text"}, {ml r."flag_for_delete"},
      {ml r."cm_incident_number"}, {ml r."ACCEPT_REQUIRED"}, {ml r."DISPLAY_OVERRIDE"},
      {ml r."MCC_SITE_ID"}, {ml r."EMS_COMPOSITE_ID"}, {ml r."EMS_ELEMENTAL_ID"},
      {ml r."SUPPLEMENTAL_TEXT"}, {ml r."ready_for_processing"}, {ml r."source_type"},
      {ml r."ack_station"}, {ml r."ack_band"}, {ml r."system_source_type"}, {ml r."system_uid"},
      {ml r."mlink_forceresetcode"}, {ml r."routed_to_siteid"}, {ml r."externalsystem_ack_code"}
    )
End of error context