
Recently I was given the task of running queries on cross-referenced, consolidated data from multiple tables across multiple clients' databases. The information flows in only one direction, client -> central database, and we do not need everything, just a few selected tables. All clients run Sybase Adaptive Server Anywhere.

The "centralizing" database is brand new and can be built however we wish, so it could use a current version! What would you suggest for feeding this central database? MobiLink?

Many thanks!


asked 20 Apr '18, 10:48

DRauber

accept rate: 0%

For your purposes the Remote Data Access "proxy table" feature might be a bit easier to work with than MobiLink.

Proxy tables created on (for example) SQL Anywhere 16 can be used to represent local copies of SQL Anywhere 9 tables or views. An INSERT ... SELECT FROM a proxy table will "pull" the data across from SQL Anywhere 9 to SQL Anywhere 16 and store it in a local real table.

The following example shows some code run on a SQL Anywhere 16 database to "pull" data across from a SQL Anywhere 9 database. The ODBC "DRIVER=SQL Anywhere 16" is used because it works fine with SQL Anywhere 9 databases, AND because "DRIVER=Adaptive Server Anywhere 9.0" is a PITA on modern computers :)

-- Simple SQL Anywhere 16 and 9 setup

DROP TABLE local_table;

CREATE TABLE local_table (
   local_column SMALLINT NOT NULL );

DROP TABLE proxy_RowGenerator;

DROP SERVER ddd9;

-- Note: the CREATE SERVER statement was missing from the post as archived;
-- this is a reconstruction, with the engine name, database name and
-- credentials as placeholders to be replaced with your own.
CREATE SERVER ddd9 CLASS 'SAODBC'
   USING 'DRIVER=SQL Anywhere 16;ENG=ddd9;DBN=ddd9;UID=dba;PWD=sql';

CREATE EXISTING TABLE proxy_RowGenerator AT 'ddd9..dbo.RowGenerator';

FORWARD TO ddd9 { SELECT @@VERSION AS remote_version };

SELECT @@VERSION AS local_version;

INSERT local_table ( local_column )
SELECT row_num
  FROM proxy_RowGenerator
 WHERE row_num <= 10;

SELECT local_column
  FROM local_table
 ORDER BY local_column;




answered 20 Apr '18, 16:47

Breck Carter
accept rate: 20%

Breck's idea is excellent (and simpler) if you can access all the remote databases you want to query from a single location.

My suggestion assumed that you did not have access to all the databases.

(20 Apr '18, 17:19) Reg Domaratzki

Unfortunately, I do not have access to all the servers running the databases! Few of the distributed databases allow external access, and in some cases the access that does exist is precarious. Thank you!


(26 Apr '18, 13:36) DRauber

MobiLink is an excellent solution to consolidate data from multiple SQL Anywhere databases to a single centralized database. That centralized database could be another SQL Anywhere database, or any number of RDBMS supported by MobiLink.

The trickiest part of the project is getting all the data in the distributed databases to the centralized database. My suggestion is as follows:

1) Create the centralized database with exactly the same schema as the remotes, except that each table gets an extra column identifying which database the row came from, and that column becomes part of the primary key at the centralized database.
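As a sketch of step 1 (the table and column names here are invented for illustration), the remote table stays as-is while the consolidated copy gains the identifying column:

```sql
-- Remote schema (unchanged):
CREATE TABLE t (
   pk_col   INTEGER NOT NULL,
   data_col VARCHAR ( 100 ),
   PRIMARY KEY ( pk_col ) );

-- Consolidated version: an extra column records which remote database
-- the row came from, and it becomes part of the primary key so rows
-- with the same pk_col from different remotes cannot collide.
CREATE TABLE t (
   remote_id INTEGER NOT NULL,
   pk_col    INTEGER NOT NULL,
   data_col  VARCHAR ( 100 ),
   PRIMARY KEY ( remote_id, pk_col ) );
```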

2) Use a MobiLink client feature called scripted upload to bulk load the initial data from the remote databases. Scripted upload lets you write a stored procedure whose result set becomes the data to be uploaded. You essentially create a stored procedure that executes "select * from table_name", so the whole table is uploaded. Do this for each table you want to synchronize.

3) Once you've confirmed that a scripted upload was successful, stop using scripted upload and switch to the "regular" synchronization method, which scans the transaction log and sends all changes made at the remote to the centralized database.

Scripted upload is not available in SQL Anywhere v9, so either the clients will need an upgrade to a more recent version, or you will need to find a more complicated method to load the data from the remotes into the consolidated database.

I've attached a sample that uses SQL Anywhere v17 to do what I've described above. You don't need to run the sample, but if you look at the rem.sql file in the ZIP file, you'll see how I initially populated the remote database with some data and then defined a publication with scripted upload that uses the admin_ui stored procedure to synchronize the initial data. The procedure called sp_hook_dbmlsync_schema_upgrade is a special procedure that runs only after a successful synchronization. You can see that it drops the scripted upload publication and defines a regular (log-scanning) publication.

If you want to run the sample:

  1. Unzip the contents of the ZIP file into an EMPTY directory
  2. Open a DOS prompt and CD into the directory where the files exist
  3. Make sure SQL Anywhere v17 is in your path and the %SQLANY17% environment variable is set
  4. Type mlrep

Hope that helps,


PS: I'm only working part time right now, and will be offline for a week or so after this. My response will be delayed, but hopefully Volker or Breck can step in if you have questions.

answered 20 Apr '18, 15:37

Reg Domaratzki
accept rate: 37%

Well, I can't figure out how to share right now. Here's the schema of the remote database (the rem.sql I quoted). If you want the full sample, please email me at, where you obviously replace firstname and lastname with my real first and last name.

set option public.Global_database_id = 1;

create table Admin (
  admin_id bigint default global autoincrement(1000000) primary key,
  data     varchar(64) );

insert into Admin(data) values(uuidtostr(newid()));
insert into Admin(data) values(uuidtostr(newid()));
insert into Admin(data) values(uuidtostr(newid()));
insert into Admin(data) values(uuidtostr(newid()));
insert into Admin(data) values(uuidtostr(newid()));

create procedure admin_ui()
result ( admin_id bigint, data varchar(64) )
begin
  select admin_id, data from Admin;
end;

-- Note: the USING clause was truncated in the post as archived; the
-- upload procedure name is reconstructed from the surrounding text.
create publication p1 with scripted upload (
   table Admin using ( procedure admin_ui for upload insert ) );

create synchronization user "rem1";
create synchronization subscription to p1 for "rem1"
  type 'http' address 'host=localhost;port=8001' script version 'v17';
create synchronization profile sp1 'mp=sql;u=rem1;n=p1;verbosity=high;ap=scripted';

create procedure sp_hook_dbmlsync_schema_upgrade()
begin
    message 'In sp_hook_dbmlsync_schema_upgrade';
    drop synchronization profile sp1;
    drop synchronization subscription to p1 for "rem1";
    drop publication p1;
    create publication p1 ( table Admin );
    create synchronization subscription to p1 for "rem1"
      type 'http' address 'host=localhost;port=8001' script version 'v17';
    create synchronization profile sp1 'mp=sql;u=rem1;n=p1;verbosity=high;ap=normal';
    UPDATE #hook_dict SET value = 'on success' WHERE name = 'drop hook';
end;

(20 Apr '18, 15:39) Reg Domaratzki

Upgrading the distributed databases that use ASA9 is not possible because of the impact it would bring and the high cost; at least it is not an accepted hypothesis at this point. The use of web services was considered; however, the large volume of data to be processed daily and the need for manual control of inserts, updates and deletes (and their respective references) led us to think of a "ready to use" solution.


(26 Apr '18, 13:44) DRauber

So have you considered using SQL Remote? Of course, each kind of distributed database setup (via MobiLink or SQL Remote or by "collecting log contents") will require some kind of globally unique primary keys - if the existing systems do not already have them, it might be difficult to add them later on.
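As a sketch of what globally unique keys look like in SQL Anywhere (the option and the GLOBAL AUTOINCREMENT default are standard features; the table and partition size here are invented for illustration):

```sql
-- Give each remote database its own id range ...
SET OPTION PUBLIC.Global_database_id = 42;  -- a different value per remote

-- ... so GLOBAL AUTOINCREMENT generates non-overlapping primary keys:
-- with partition size 1000000, this remote hands out values
-- 42,000,001 through 43,000,000.
CREATE TABLE t (
   pk_col INTEGER DEFAULT GLOBAL AUTOINCREMENT ( 1000000 ) PRIMARY KEY,
   data   VARCHAR ( 64 ) );
```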

I guess it would be helpful if you could provide more information about your requirements, possibilities and limitations, e.g. what transport could be used between the clients and the central instance - HTTPS, FTP, ...?

(27 Apr '18, 03:09) Volker Barth

Just to add: In case you cannot access the client databases directly (in contrast to Breck's suggestion), and the data you want to collect is rather simple (say, just a few tables without deeper foreign key relationships): As an alternative to MobiLink (or SQL Remote), you might also consider using a web service to collect the data. The consolidated database could establish a web service, and the ASA 9 clients could then use the web client feature to call the service and supply their data. (AFAIK ASA9 does support web services and web client procedures/functions.)

If that is easier to implement than a MobiLink solution will depend on the complexity of the data structures and your knowledge w.r.t. both technologies...
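To sketch the idea (the service name, port, procedure and parameter names are all invented for illustration; the CREATE SERVICE / web client procedure syntax shown is the general SQL Anywhere form and would need checking against the ASA 9 documentation):

```sql
-- On the consolidated database: an HTTP service that stores one posted row
-- by calling a (hypothetical) local stored procedure.
CREATE SERVICE collect_admin TYPE 'RAW' AUTHORIZATION OFF USER dba
   AS CALL store_admin_row ( :remote_id, :admin_id, :data );

-- On each ASA 9 remote: a web client procedure whose body is an URL clause,
-- so calling it posts the parameters to the consolidated server.
CREATE PROCEDURE send_admin_row (
   IN remote_id INTEGER,
   IN admin_id  BIGINT,
   IN data      VARCHAR ( 64 ) )
   URL 'http://central-host:8082/collect_admin'
   TYPE 'HTTP:POST';
```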

answered 23 Apr '18, 03:23

Volker Barth
accept rate: 34%

Yet another possibility is to run UNLOAD SELECT ... FROM ... TO 'filespec.txt' statements on the SQL Anywhere 9 database, manually transport the filespec.txt files to the central computer, and run LOAD TABLE FROM 'filespec.txt' statements on the SQL Anywhere 16 database.
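For example, assuming a table t that exists in both databases (the file paths are invented; note that LOAD TABLE reads the file on the database server's machine, so the file must be copied there first):

```sql
-- On the SQL Anywhere 9 database:
UNLOAD SELECT * FROM t TO 'c:\\temp\\t.txt';

-- After manually transporting t.txt to the central computer,
-- on the SQL Anywhere 16 database:
LOAD TABLE t FROM 'c:\\temp\\t.txt';
```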

answered 23 Apr '18, 08:01

Breck Carter
accept rate: 20%

My response will be delayed, but hopefully Volker or Breck can step in if you have questions.

I hadn't read Reg's PS but apparently he's right:)

(23 Apr '18, 08:06) Volker Barth

Hello! Other services in the company use this technique, but some tables are quite large and the results have been poor. In addition, we need the relationships between the tables, and those have frequently failed. Thank you!


(26 Apr '18, 13:34) DRauber

Is there any method by which I could periodically collect the log and run all its commands against a mirror database? (Using ASA 9)

(26 Apr '18, 13:46) DRauber

The closest automated method is dbbackup -l for a live backup, BUT that's for backup, not for creating a parallel mirror database for independent queries... which is what you want, right?

High Availability doesn't exist for version 9... but SQL Remote does, which is what folks often use for parallel database copies.

(26 Apr '18, 16:57) Breck Carter

but SQL Remote does, which is what folks often use for parallel database copies.

...although I guess the typical setup would be to use SQL Remote to copy data to another site (that being the "remote") whereas here there's the additional need to "merge" the copied data in one place, if my understanding is correct, so that place would usually need to be the "consolidated" database...

(27 Apr '18, 03:14) Volker Barth
question asked: 20 Apr '18, 10:48

question was seen: 2,368 times

last updated: 27 Apr '18, 03:14