Hello,

This is a follow-on question from https://sqlanywhere-forum.sap.com/questions/32842/suggestions-for-how-to-determine-why-sa-17-ha-is-crashing that I posted once I determined that the problem was a lack of RAM.

I am running a High Availability system under Ubuntu 16.04 on AWS. We are still in a test phase, so we are running all three of the servers on the same AWS host. (I realize that this configuration eliminates the true high-availability feature, but it is much simpler to run the servers on the same host while we are doing our testing.)

We discovered that the Mirror server consistently crashed when trying to insert a 0.5 GB BLOB into the database. The only information provided was in the log for the Mirror server, which said:

    Cache size adjusted to 499052K
    Cache size adjusted to 529480K
    /home/wizard/SQL Anywhere HA Scripts/Launch.sh: line 22: 2603 Killed

Line 22 in that script was the dbsrv17 command which launched the Mirror server.

Monitoring available RAM as the file was being inserted into the database suggested that we were running low on RAM. I doubled the RAM in the AWS host and the crash no longer occurs, so I think this is strong evidence that we were running out of RAM.
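(A minimal sketch of that kind of monitoring, using the standard free utility; the 1-second interval is arbitrary:)

    # Report memory in MB every second while the insert runs.
    free -m -s 1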

From the prior submission (link above) I learned to experiment with the -ca option to size the maximum cache, and in researching the cache I learned that it involves a combination of RAM and a temporary disk file.

My question comes from a lack of understanding about how to set a cache size with respect to the amount of RAM that is available. When the initial crashes occurred, the Ubuntu system running on AWS had 1 GB of RAM. As I said, I doubled that, so we were running with 2 GB of RAM when the insert of this half-gigabyte BLOB succeeded. However, monitoring the Ubuntu RAM showed that we got down to 27 MB of available RAM at the end of that insert! So it looks like 2 GB of RAM just barely allowed that particular insert of this large BLOB to work. That raises my question:

How does one determine how much RAM is needed if we are not going to limit the cache? Or, if we cannot add RAM, how do we determine what cache size to specify? In experimenting with -ca we tried 128 MB and saw that the insert of the 0.5 GB BLOB took a tremendously long time to complete. So clearly, setting a small cache dramatically increases the time it takes to execute the insert, and that would not work.

So, how do we determine either how much RAM to run the HA servers with, or what cache size to set? The problem is complicated by the fact that we don't know how large a BLOB we may need to process in the future, since this BLOB comes from a file supplied by the user at runtime and there is no meaningful size limit on such a file.

I am sure this is a common issue and there must be some reasonable algorithms or at least "rules of thumb" that other people have used.

Thank you.

asked 16 Nov '18, 18:23 by AlK

If the RAM usage is dominated by the file size of the BLOBs you have to insert, you could also check whether LOAD TABLE, OPENSTRING() or other facilities for accessing file contents lead to lower memory usage (a sketch follows below)...

(17 Nov '18, 10:38) Volker Barth
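For illustration, a sketch of one such facility, the xp_read_file() system function, which reads the file on the server side. The table name blob_store, the file path, and the connection parameters are all placeholders, and whether this actually lowers peak memory usage is exactly what would need to be measured:

    # Insert a file's contents as a BLOB, read on the server side.
    # blob_store(id, data) is a hypothetical table; adjust UID/PWD/ServerName.
    dbisql -nogui -c "UID=DBA;PWD=sql;ServerName=mirror" \
      "INSERT INTO blob_store (id, data) VALUES (1, xp_read_file('/tmp/big_file.bin'))"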
Generally, the server shouldn't crash regardless of the cache size (although a certain base minimum of a few MB is required to function). Beyond that amount, cache size only affects performance.

The server shouldn't be using up all of the memory in the system either. Also, inserting a large blob shouldn't require lots of memory (blobs are paged).

The -ca switch only controls whether or not we automatically tune the cache size periodically based on system characteristics. To set a maximum on the cache size, use the '-ch' switch.
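(A sketch of what that looks like on the dbsrv17 command line; the database path and the 1G cap are placeholders, and the HA-specific switches are omitted:)

    # -ca 0 turns off automatic cache tuning; -ch caps the maximum cache size.
    dbsrv17 -ca 0 -ch 1G /path/to/mirror.db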

How far apart were the 'cache size adjusted' messages? If they are extremely close in time (less than a couple of seconds), then something internally is actually allocating lots of memory. If they are far apart, then they are likely just messages related to tuning the cache. I would suggest adding the "-cs" switch, which will cause more statistics about the memory-tuning decisions to be printed in the console log. At startup, what was displayed as the max cache size? How much swap space do you have?
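(A sketch with placeholder paths; swapon and free are the standard Linux tools for the swap question:)

    # -cs logs cache-sizing statistics; -o names the console log file.
    dbsrv17 -cs -o /path/to/console.log /path/to/mirror.db

    # Check how much swap space is configured:
    swapon --show
    free -h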

If the server just 'disappeared' and didn't display an assertion failure message that it was unable to allocate memory, then I would suspect that the Linux Out-Of-Memory killer is involved. You might find 'kill' messages in the 'dmesg' output, in /var/log/syslog, or other log files. Basically, if the system is configured to allow overcommit (which is the default) and the system gets low on memory, then Linux chooses a process and just kills it.
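(Two quick checks for OOM-killer activity on an Ubuntu 16.04 system:)

    # Kernel ring buffer:
    dmesg | grep -iE 'killed process|out of memory'

    # System log:
    grep -i 'oom' /var/log/syslog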

If you really want to run a small system (1 GB RAM), you might want to set your max cache size explicitly and possibly turn off overcommit.
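(A sketch of turning off overcommit; note that vm.overcommit_memory=2 changes allocation behaviour system-wide, so treat it as something to test rather than a blanket recommendation:)

    # 0 (heuristic overcommit) is the default; 2 disallows overcommit.
    sudo sysctl vm.overcommit_memory=2

    # To make the setting survive a reboot:
    echo 'vm.overcommit_memory = 2' | sudo tee -a /etc/sysctl.conf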

One other thing: what particular value were you looking at for the 'available RAM' figure you were reporting?
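(The distinction matters because free reports several figures: the "free" column excludes memory the kernel can reclaim from caches, while the "available" column estimates what is usable without swapping. Compare them with:)

    free -m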

(18 Nov '18, 06:44) John Smirnios

You can never have too much RAM. There, you can't ask for a simpler rule of thumb than that :)

Seriously, you can never have too much RAM.

No, really, seriously... you can never have too much RAM.


The only rule of thumb that ever existed dates back to the 1990s: Calculate 10% of the *.db file size, and use that much RAM cache.

Then databases got really busy, and wild queries started using enormous amounts of temporary space, and RAM cache sizes larger than the *.db file size were sometimes required for decent performance... the 10% rule is now a punch line of jokes about obsolescence :)

Don't ever admit you tried using 128 MB. Even 2G sounds tiny in 2018. Take off the -ca 0, set -ch to 4G, and see what your high-water mark is.

I recommend Foxhound... you can monitor both primary and secondary servers and see where the RAM is going (temp space, rollback log and so on).


answered 17 Nov '18, 09:52 by Breck Carter, edited 18 Nov '18, 08:26

To add:

You can also use the properties MinCacheSize, CurrentCacheSize, MaxCacheSize and PeakCacheSize to monitor cache usage. In particular, the difference between the last two properties will show you whether the server has already fully used the maximum cache size during the current session - that might show you whether you have spent more RAM than necessary :)

(19 Nov '18, 03:38) Volker Barth
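(A sketch of checking those properties from the command line; the connection parameters are placeholders, and the values are reported in kilobytes:)

    # Compare peak cache usage against the configured maximum.
    dbisql -nogui -c "UID=DBA;PWD=sql;ServerName=mirror" \
      "SELECT PROPERTY('MaxCacheSize')  AS max_cache_kb,
              PROPERTY('PeakCacheSize') AS peak_cache_kb"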