Suppose the two partner computers have their system clocks set differently; what does CURRENT TIMESTAMP return after an HA failover?

When successive runs of dbbackup -r rename the current log on the primary server to yymmddxx.log, this file could be renamed on partner 1, then on partner 2, then on partner 1 and so on; what determines the date portion of the yymmddxx.log file name?

What determines the Windows file timestamp on the yymmddxx.log file?

Does the Windows file timestamp agree with the yymmdd portion of the file name? (think backups at midnight :)

Which one should be used to determine the file age when deleting old files? (file name or timestamp)

Note: The HA system makes sure both partner computers get copies of all the yymmddxx.log files, so that's not a problem.


asked 22 Apr '17, 13:12

Breck Carter

edited 23 Apr '17, 09:39


I would think the backup file rename facility checks for existing file names in the corresponding directory, so no logs should be overwritten. In case of recovery, the -ad option should help apply logs in the correct (offset-based) order, independent of the file name order...

For (current) default timestamp: Doesn't the behaviour discussed here apply to HA, as well?

(23 Apr '17, 11:00) Volker Barth

I assume you ask because you have seen something not working correctly or as expected?

(23 Apr '17, 12:54) Nick Elson S...

I am not asking about DEFAULT.

I am not asking about changes to the system clock.

I AM asking about two computers where the system clock is different; e.g., different time zone, or different time in the SAME time zone.

Does HA somehow synchronize the system clock times?

...or does it not matter (much), given your point about offsets :)

(24 Apr '17, 04:06) Breck Carter

> In case of recovery, option -ad should help to apply logs in the correct (offset-based) order, independent of the file name order.

Yes, of course, right you are on that point :)

(24 Apr '17, 04:07) Breck Carter

No, I had an OCD attack after reading "more than a week old" in the Help...

"To remove old transaction logs from the primary, use a scheduled event to delete them once you are certain that they are no longer needed. For example, you can create an event that runs each day and renames the transaction log. This event could also delete copies of the transaction log that are more than a week old." (see Transaction log file management in a database mirroring system)

...how exactly should I code that? I'm doing this, but will it work?

FOR f_fetch AS c_fetch INSENSITIVE CURSOR FOR
SELECT CAST ( modified_date_time AS TIMESTAMP ) AS @modified_timestamp,
       file_path                                AS @file_path
  FROM sp_list_directory ( LEFT ( DB_PROPERTY ( 'LogName' ), LOCATE ( DB_PROPERTY ( 'LogName' ), '\', -1 ) ) ) 
 WHERE file_path <> DB_PROPERTY ( 'LogName' )
   AND file_type = 'F'
   AND @modified_timestamp < DATEADD ( DAY, -7, CURRENT TIMESTAMP ) 
 ORDER BY @modified_timestamp 
FOR READ ONLY
DO

   @return_code = CALL sp_delete_file ( @file_path );

END FOR;
(24 Apr '17, 04:24) Breck Carter

OK, I see another problem :)... the code depends on the log directory only containing log files, which it does... today... but the client may change things in the future...

(24 Apr '17, 04:28) Breck Carter

OK... but this still doesn't answer the question "will @modified_timestamp < DATEADD work?"

FOR f_fetch AS c_fetch INSENSITIVE CURSOR FOR
SELECT CAST ( modified_date_time AS TIMESTAMP ) AS @modified_timestamp,
       file_path                                AS @file_path
  FROM sp_list_directory ( LEFT ( DB_PROPERTY ( 'LogName' ), LOCATE ( DB_PROPERTY ( 'LogName' ), '\', -1 ) ) ) 
 WHERE file_path LIKE '%.log'                                       -- it is a log file
   AND file_path <> DB_PROPERTY ( 'LogName' )                       -- it is not the current log file
   AND file_type = 'F'                                              -- it is a file, not a subfolder
   AND @modified_timestamp < DATEADD ( DAY, -7, CURRENT TIMESTAMP ) -- it is at least 7 days old
 ORDER BY @modified_timestamp 
FOR READ ONLY
DO

   @return_code = CALL sp_delete_file ( @file_path );

END FOR;
(24 Apr '17, 04:39) Breck Carter

Can't tell that - but if you do use an event to clean up old logs, couldn't you also use an event to rename logs, and then store the name of each renamed log within the database (and possibly which server was acting as primary at the time), and later on use that information to delete those logs?

(I'm aware that this doesn't answer the question of whether the HA servers' clocks may drift apart, but you could then compare the file time of the corresponding log (taken from the OS directory) with the "rename timestamp" stored within the database, and use that either to decide whether the file is "old enough" to be deleted, or to calculate a "time-fallen-behind" offset between the clocks of the two servers.)

(24 Apr '17, 04:57) Volker Barth
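Volker's clock-offset idea could be sketched like this, assuming a bookkeeping table renamed_log ( file_path, renamed_at ) maintained by the rename event and a placeholder log directory; all of these names are illustrative, not from the thread:

```sql
-- Compare the rename timestamp recorded in the database (primary's clock)
-- with the OS file modification time (local machine's clock) to estimate
-- the clock offset between the two HA partners.
SELECT r.file_path,
       DATEDIFF ( SECOND,
                  CAST ( d.modified_date_time AS TIMESTAMP ),
                  r.renamed_at ) AS clock_offset_seconds
  FROM renamed_log r
  JOIN sp_list_directory ( 'C:\logs\' ) d   -- placeholder directory
    ON d.file_path = r.file_path;
```

A persistently nonzero offset would suggest the partners' system clocks disagree.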

For the rest of us in the OCD cult, the acting primary is in control of the filenames used when renaming transaction logs and, thus, these names will reflect the clock/TZ/calendar date of the primary system for both sites.

If you use something tighter than 'last week', you might also want to check that the mirror status is reasonable.

(24 Apr '17, 11:51) Nick Elson S...

You've asked lots of questions here - hopefully this answers all of them!

As stated in the docs: "The information CURRENT TIMESTAMP returns is equivalent to the information returned by the GETDATE and NOW functions." These should all give the date and time according to the system clock of the machine the server is running on. The time returned by CURRENT TIMESTAMP is completely unaffected by an HA failover.
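That equivalence is easy to verify interactively; for example:

```sql
-- All three expressions read the system clock of the machine the server
-- is currently running on; they should return the same value
-- (to within the resolution of a single statement).
SELECT CURRENT TIMESTAMP AS current_ts,
       GETDATE()         AS getdate_ts,
       NOW()             AS now_ts;
```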

The date portion of the yymmddxx.log file name is determined by the system clock on the primary at rename time. This same file name is communicated to the mirror/copy nodes and used when they also rename the file, which means the "yymmdd" may not be accurate according to the system clock on those machines if they are in a different time zone.

The Windows file timestamp should be the system time at which the file was created on the local machine. On a mirror/copy node, therefore, the timestamp will definitely not match the Windows file timestamp of the same file on the primary (the file is created on the mirror/copy node after it is created on the primary), and it might not match the yymmddxx.log file name if the servers are in different time zones.

If your ultimate goal here is to write an event to delete logs that are exactly 7 days old, I think the most accurate solution is to use the method Volker has suggested: store the renamed file name and date/time in the database and use this information in the event to delete logs. As discussed earlier, since filenames (yymmddxx.log) and Windows timestamps could be different depending on whether there has been a failover or not, relying on these is going to result in a certain amount of inaccuracy. (Which may be fine if you want to delete files approximately 7 days old, but not if you want exactly 7 days.)
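The bookkeeping approach could be sketched as follows; the table, event, and schedule details are illustrative assumptions, not something from the docs:

```sql
-- Hypothetical table in which the rename event records each renamed log,
-- stamped with the primary's clock at rename time.
CREATE TABLE renamed_log (
   file_path   VARCHAR ( 260 ) NOT NULL PRIMARY KEY,
   renamed_at  TIMESTAMP       NOT NULL DEFAULT CURRENT TIMESTAMP );

-- A daily event then deletes by the recorded rename time, which is
-- immune to clock differences between the HA partners.
CREATE EVENT delete_old_logs
SCHEDULE START TIME '02:00' EVERY 24 HOURS
HANDLER BEGIN
   DECLARE @return_code INTEGER;
   FOR f_loop AS c_loop INSENSITIVE CURSOR FOR
      SELECT file_path AS @file_path
        FROM renamed_log
       WHERE renamed_at < DATEADD ( DAY, -7, CURRENT TIMESTAMP )
      FOR READ ONLY
   DO
      @return_code = CALL sp_delete_file ( @file_path );
      DELETE FROM renamed_log WHERE file_path = @file_path;
   END FOR;
END;
```

The rename event on the primary would INSERT a row into renamed_log each time it renames the log, so the deletion event never has to trust file names or OS timestamps.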

Note that the deletion of files is propagated from the primary to any mirror/copynodes automatically, so the event doesn't need to run anywhere else. At least that's a bit of a help :)


answered 24 Apr '17, 11:54

MarySteele
