
Move Backup Files to S3 Bucket in PostCode

On my AWS EC2 SQL Servers, each Minion Backup run writes its backups to a file share, and from there the backup files need to be moved to an AWS S3 bucket.  I can do this manually with xp_cmdshell wrapping an aws s3 mv command.
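For reference, the manual version is along these lines (the bucket, share path, and file name are simplified here):

EXEC master..xp_cmdshell 'aws s3 mv "\\fileshare\SQLBackups\MyDB_FULL_20240101.bak" "s3://my-backup-bucket/sqlbackups/"';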


However, when I put that same command into the PostCode column of BackupSettingsServer, nothing seemed to happen.  The backup itself ran fine, and there were no errors in the Minion BackupLogDetails table or in the SQL Server error log.


So, two questions:


1)  How can Minion Backup be set up to do this move to S3?


2) The move to S3 takes longer than the backup itself.  Will that delay the next execution of the Minion Backup?  (We take log backups every 15 minutes, but if it takes, say, 50 minutes to move the backup file to S3, will that delay or cancel the next 3 Minion log backups?)







It looks like it did run after all; the files are no longer on the file share, and they're now in the S3 bucket.


Even so, the BackupLogDetails table contains NULL for the DBPostCodeEndDateTime and DBPostCodeTimeInSecs columns.  Should those columns be populated, and when do they get populated?  Also, should/can the Status indicate that, even though the backup is done, the PostCode is still running?


So the first question is answered, but I would still like to find out about the second: will a long-running PostCode delay or cancel subsequent scheduled backups, or does the PostCode run asynchronously from the backups?


Thanks in advance.

Hey Richard,

I moved this topic to the MB forum.

So you've put this code into the DBPostCode, which runs after each DB is backed up.  And yes, it will definitely delay your next run.  Currently there's only one job that controls backups, and if it's still running, it can't start again until it's done.

However, there's a cool workaround.  Instead of running your script in the PostCode, run an SP that creates a job that runs the script; that can be done with dynamic SQL.  It's the perfect solution because it takes the move out of the backup rotation, and you can choose to have the job delete itself afterwards or not.
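Something roughly like this, purely as a sketch (the proc name, bucket, and paths are placeholders, and it assumes the AWS CLI is installed on the server and the Agent service account can run it):

-- Sketch only: wraps the S3 move in a throwaway Agent job so the
-- backup job doesn't have to wait for the copy to finish.
CREATE PROCEDURE dbo.MoveBackupToS3Async
    @BackupFile NVARCHAR(1000)   -- full path of the file that was just written
AS
BEGIN
    DECLARE @JobName SYSNAME = N'S3Move_' + CONVERT(NVARCHAR(36), NEWID());
    DECLARE @JobID   UNIQUEIDENTIFIER;

    -- Build the CLI call dynamically from the file path that was passed in.
    DECLARE @Cmd NVARCHAR(4000) =
        N'aws s3 mv "' + @BackupFile + N'" "s3://my-backup-bucket/sqlbackups/"';

    -- delete_level = 1 means the job deletes itself after a successful run;
    -- use 0 if you'd rather keep the jobs around for troubleshooting.
    EXEC msdb.dbo.sp_add_job
        @job_name     = @JobName,
        @enabled      = 1,
        @delete_level = 1,
        @job_id       = @JobID OUTPUT;

    -- One CmdExec step that shells out to the AWS CLI.
    EXEC msdb.dbo.sp_add_jobstep
        @job_id    = @JobID,
        @step_name = N'Move backup file to S3',
        @subsystem = N'CmdExec',
        @command   = @Cmd;

    EXEC msdb.dbo.sp_add_jobserver @job_id = @JobID, @server_name = N'(local)';

    -- Fire and forget: this returns immediately, so the PostCode finishes
    -- right away and the next backup isn't held up by the copy.
    EXEC msdb.dbo.sp_start_job @job_id = @JobID;
END;

Then the DBPostCode itself is just the EXEC of that proc with the path of the file that was just written (pass it in however you were building your manual command), and the move runs on its own while the backups carry on.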


And if you don't want too many jobs running at the same time, you can put it in the BatchPostCode instead and modify your script to copy ALL of the new backup files in one go.  You can even zip them up beforehand.
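For the batch version, the BatchPostCode can be as simple as one sync of the whole share, something like this (again, the share path and bucket are placeholders, and it assumes xp_cmdshell is enabled):

-- One sync per batch instead of one move per database.  Note that
-- "aws s3 sync" copies new and changed files but doesn't delete the
-- local copies, so add your own cleanup step if you want the share emptied.
EXEC master..xp_cmdshell 'aws s3 sync "\\fileshare\SQLBackups" "s3://my-backup-bucket/sqlbackups/"';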


Also, as for the logging of the PostCode: yeah, it's supposed to both log to the Details table and update the Status col to let you know what it's doing.  Exactly why it's not doing that here, I'm not sure, but I can definitely see if I can repro it.

Just to be sure, you're running a PS script in the PostCode, right?  Can I see the call that you put into the PostCode col, so I can make sure I have it set up the same way?
