SQL Anywhere 17.0.6.2757

I've been looking at the 'max_temp_space' option, but it doesn't seem to work as per the documentation. I have these public settings in place:

temp_space_limit_check = On
max_temp_space = 1048576 (one megabyte)

I then execute a query that I know will use a lot of temp space (SELECT * FROM largetable ORDER BY nonindexed_column). I leave this running for several minutes and then query the 'tempfilepages' property of the connection, which returns the value 435448. My database uses 4096-byte pages, so I think this means the connection is using (435448 * 4096) = 1783595008 bytes, or around 1.6GB of temp space.

Therefore, I would expect the original query to have failed (some time ago) with the error SQLSTATE_TEMP_SPACE_LIMIT, yet it hasn't. Have I done something wrong/misunderstood how max_temp_space is supposed to work, or is this a possible bug?

Edit: Shortly after posting this I did get the expected 'Temporary space limit exceeded' message. However, it took 39 minutes before it failed. The temp space would have exceeded the specified 1MB shortly after execution started, so why was it allowed to run for this long and take up all the additional space before failing?
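For reference, the arithmetic above can be done by the server itself. A minimal sketch, run from a second (monitoring) connection; the connection id 3 is a hypothetical placeholder for the id of the connection under test:

```sql
-- Sketch: temp space used by connection 3 (hypothetical id), in bytes.
-- TempFilePages is a per-connection property; PageSize is a database property.
-- Properties are returned as strings, so cast before multiplying.
SELECT CONNECTION_PROPERTY( 'TempFilePages', 3 )        AS temp_pages,
       DB_PROPERTY( 'PageSize' )                        AS page_size_bytes,
       CAST( CONNECTION_PROPERTY( 'TempFilePages', 3 ) AS BIGINT )
         * CAST( DB_PROPERTY( 'PageSize' ) AS BIGINT )  AS temp_bytes;
```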
This behaviour has been acknowledged as a bug by SAP and will be fixed in 16.0.0 build 2537 and 17.0 build 4033.

Thanks for letting us know.
(04 Sep '17, 15:40)
Breck Carter
Note this statement from the docs on the temp_space_limit_check option:

> This threshold is used only if the temporary file has grown to 80% or more of its maximum size.

So I guess that the mentioned threshold was not reached initially. FWIW: I do not know whether the limit is related to the "logical temp db size" as used by the database engine or to the physical file size; the temp pages will be held in cache as long as they fit there...
@Chris I have verified that the correct settings are in place:

temp_space_limit_check On
max_temp_space 1048576

I've been reading the documentation again in light of Volker's comment. In the page about the temp_space_limit_check option it says:

> This threshold is used only if the temporary file has grown to 80% or more of its maximum size.

Previously I had assumed that this statement only applied if the second threshold was being considered, and that the max_temp_space option should always be enforced regardless of how big the whole temporary file was, but now I'm not so sure.

If max_temp_space is only considered after the temporary file has grown to 80% of its maximum size, then in my example I have set a per-connection limit of only 1MB yet it will only be enforced once the temp file is around 56GB (the disk had 71GB free before the engine was started). This doesn't seem very useful, but is this the intended behaviour?

If the above is the intended behaviour then it's still not working correctly, as the relevant disk is nowhere near 80% full when the error is finally returned; it looks like the temp file is around 6GB in size.
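To watch how close the temporary file is getting to that threshold, something like the following can be run. This is a sketch only; the 'TempFileName' property and the sa_disk_free_space system procedure are per my reading of the SQL Anywhere docs, so verify their availability in your build:

```sql
-- Sketch: inspect the temp file location and the free space on its device.
-- Names assumed from the SQL Anywhere documentation.
SELECT DB_PROPERTY( 'TempFileName' ) AS temp_file_location;

-- Free space per dbspace, including the temporary dbspace.
SELECT * FROM sa_disk_free_space();
```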
(16 Jun '17, 04:27)
Luke
Yep, that's what the docs say, but surely it must be a mistake since it violates The Watcom Rule that "SQL Anywhere engineering does things the way they should be done."

First, the word "file" in "tempfilepages" is misleading because the temporary data may or may not reside in the actual temporary file (see "What is the Temp File?" in this Foxhound 4 Help topic).

Second, the option is named "temp_space_limit_check", not "temp_file_size_limit_check".

Third, in a multi-tenant environment there may be hundreds of temporary files (one per database) all stored on the same drive... "the amount of free space remaining on the device" should probably have no role at all in determining whether runaway connections are detected or not.

If the statement is true, these Foxhound features suddenly look more attractive; although the descriptions also include the word "file", the features are all based on tempfilepages and make no use of the "free space remaining":

Alert #21 - Temp file usage - The total temporary file space used by all connections has been [1G] or larger for [10] or more recent samples.

Alert #22 - Conn temp file usage - At least one single connection has used [500M] or more of temporary file space during [10] or more recent samples.

AutoDrop #4 - Temp file usage - Automatically drop each connection that uses [512M] or more of temporary file space for [10] or more recent samples.
(16 Jun '17, 04:45)
Breck Carter
> ... I had assumed ... should always ...

It is generally safe to assume SQL Anywhere does what should be done; that's the whole premise of The Watcom Rule. In this particular case, however... does anyone have time to create a reproducible that accurately demonstrates the behavior? (it won't be easy)
(16 Jun '17, 07:52)
Breck Carter
The following email snippet explains the true nature of tempfilepages...

from: Mark.Culp@sybase.com
to: breck.carter@gmail.com
cc: Mark.Culp@sybase.com, Glenn.Paulley@sybase.com
date: Tue, Aug 4, 2009 at 2:08 PM
subject: Re: Fw: Some questions about the temporary file and in-memory never-write mode.

Re. Q1: What is Connection_property('tempfilepages')?

The engine keeps track of the number of temporary pages which have been allocated to a connection. Temp pages are used for many things, but mostly to hold intermediate materialized query results. This counter is precisely the counter that is used by the temp_space_limit_check option. Even if there is no temp file, it can be useful to know how many temp pages are being used by a connection.

Re. Q2: Why does it return a non-zero result... when -im nw used...?

Even though there is no paging file (aka the "temp file") for the database, temp pages are still allocated and used by the engine. The -im nw switch simply controls whether these pages are allowed to be pitched to disk (and whether a temp file is created) in the event that the engine gets low on available cache. In this respect, the statistic counter is misnamed and should, perhaps, not have "file" in its name?
Useful to know
(16 Jun '17, 04:36)
Luke
I can verify that it works in my testing in current software as well as the version you are using.
Please double-check that the options are set correctly by querying the SYSOPTIONS system table and by running select * from sa_conn_options() where OptionName like '%temp%'. This should list 2 rows per connection with the current option settings. Make sure these are also correctly set.
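A sketch of those two checks (note that SYSOPTIONS stores the column name "option", a reserved word, so it is double-quoted here):

```sql
-- Public settings as stored in the catalog:
SELECT * FROM SYSOPTIONS WHERE "option" LIKE '%temp%';

-- Effective settings for every current connection; expect 2 rows per
-- connection, one for temp_space_limit_check and one for max_temp_space:
SELECT * FROM sa_conn_options() WHERE OptionName LIKE '%temp%';
```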
@Chris: Your comment should be under the question, not Volker's response.
Now it is :)
I have double-checked the options. When you say that 'it works' are you able to clarify what the expected behaviour is - should the max_temp_space option be the limiting factor regardless of temp file size or is it only checked once the temporary file has grown to 80% of the available disk space?
I prepared a repro which creates and populates a table with >3 million rows (the schema is 21 uniqueidentifier columns). I then set max_temp_space=1048576 and run the query
select * from t order by c05
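Since the full repro script isn't shown above, here is a rough sketch of that setup. Only three of the 21 uniqueidentifier columns are written out, and sa_rowgenerator is assumed for population; table/column names and row counts are illustrative:

```sql
-- Sketch of the repro: the original table had 21 uniqueidentifier
-- columns; only three are shown here for brevity.
CREATE TABLE t (
    c01 UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),
    c05 UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),
    c21 UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID()
);

-- Populate with >3 million rows, as in the repro described above.
INSERT INTO t( c01, c05, c21 )
    SELECT NEWID(), NEWID(), NEWID()
      FROM sa_rowgenerator( 1, 3000000 );
COMMIT;

-- Limit per-connection temp space to 1MB, then force a big sort on a
-- non-indexed column; this should eventually fail with
-- 'Temporary space limit exceeded'.
SET OPTION PUBLIC.temp_space_limit_check = 'On';
SET OPTION PUBLIC.max_temp_space = 1048576;
SELECT * FROM t ORDER BY c05;
```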
So when you got the 'Temporary space limit exceeded' error, do you think the disk containing the temp file was 80% full, or is this not relevant?
I've been playing with this some more and now think the behaviour is dependent on the amount of cache available.

In my original post I was starting the engine with '-ca 0 -c 6G' (fixed 6GB cache) and, as reported, it took a long time for the 'Temporary space limit exceeded' message to appear, and the connection had clearly been able to grab more than the 1MB limit I had set (I've just tried this scenario again and the connection had used almost 4GB when it finally failed).

If I change the switches to '-ca 0 -c 128M' (fixed 128MB cache) and do the same test, then the 'Temporary space limit exceeded' message appears almost immediately and the temp file has barely grown at all (it was around 13MB).