Thursday, March 12, 2020

Veeam 10 Upgrade

I thought I would update my copies of Veeam this week. It's always been a simple process in the past: mount the ISO, run setup.exe, and off we go. All went well until I got the error: "Unable to detect database action." WTF??? I tried the SQL creds I thought were correct. No go. I opened the database in SSMS with the same credentials with no problem. Still nothing when they were used in Veeam. Reboot. Same results. Put in a support case with Veeam.

The next day I got a response saying I should follow the steps in this article: https://web.archive.org/web/20151020081242/https://support.microsoft.com/en-us/kb/886549. It has to do with the user shell folder paths. Well, that's it! We changed file servers a couple of weeks ago. I went through and made the changes from that article for the entries that still pointed to the old file server. Reboot. Same issue. This time I searched the whole registry for instances of my old file server and found a dozen or so more references to it. I updated all of those to the new file server. Reboot. There we go! Veeam 10 installed without another complaint.
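For anyone who wants to script that sweep instead of hammering F3 in regedit, here's a minimal Python sketch of the same idea using only the standard library. It only reports matches (fixing them is up to you), and \\OLDFS is a placeholder, not the server name from this post:

    # Sketch of a registry sweep for stale file server references (read-only).
    import winreg

    OLD_SERVER = r"\\OLDFS"  # hypothetical old server name -- change to yours

    def scan_key(root, path, hits):
        """Recursively look for string values that still mention OLD_SERVER."""
        try:
            key = winreg.OpenKey(root, path)
        except OSError:
            return  # no access or key gone; skip it
        with key:
            i = 0
            while True:  # walk the values on this key
                try:
                    name, value, _ = winreg.EnumValue(key, i)
                except OSError:
                    break
                if isinstance(value, str) and OLD_SERVER.lower() in value.lower():
                    hits.append((path, name, value))
                i += 1
            j = 0
            while True:  # then recurse into subkeys
                try:
                    sub = winreg.EnumKey(key, j)
                except OSError:
                    break
                scan_key(root, f"{path}\\{sub}", hits)
                j += 1

    hits = []
    # Start with the shell folder keys the KB article talks about; point
    # scan_key at r"Software" instead to sweep all of HKCU, which is how
    # the dozen or so stragglers turned up.
    for p in (r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders",
              r"Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders"):
        scan_key(winreg.HKEY_CURRENT_USER, p, hits)

    for path, name, value in hits:
        print(f"HKCU\\{path} | {name} = {value}")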

If you recently changed file servers and the user shell folder paths (or anything else in the registry) still point to the old server, you will need to update those references before the Veeam upgrade will run. No idea how that relates to "database action", but it worked for me.

Thursday, January 30, 2020

Stupid....Stupid....Stupid!!!!

I noticed a discrepancy in free-space reporting between the Server 2019 VM that is my Veeam backup server and VMware. The server (and Veeam, of course) reported one of my backup drives as having 4 TB free, while VMware reported the datastore as having about 1.9 TB free. Finally I noticed my mistake: I had over-provisioned the datastore. Crap. I didn't know you could do that! I can only assume that when I added the drive to the VM, I saw that there was 13.8 TB free and meant to make the drive 13 TB in size, but fat-fingered it and typed in 15 TB.
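If you're wondering how the two numbers can drift apart like that, here's the arithmetic with rounded figures from this post. The amount of data written is my assumption, and real datastores carry VMFS and other overhead, so this is illustrative rather than exact:

    # Why the guest and VMware disagree about free space (illustrative numbers).
    datastore_capacity_tb = 13.8   # what the datastore physically has
    disk_provisioned_tb   = 15.0   # the fat-fingered virtual disk size
    data_written_tb       = 11.0   # assumption: blocks actually consumed so far

    guest_free_tb = disk_provisioned_tb - data_written_tb    # 4.0 TB -- what Windows and Veeam see
    vmfs_free_tb  = datastore_capacity_tb - data_written_tb  # ~2.8 TB here; the real figure was
                                                             # lower once other files are counted
    print(f"Guest free: {guest_free_tb:.1f} TB, datastore free: {vmfs_free_tb:.1f} TB")

    # The gap never closes: once data_written_tb hits 13.8 the datastore is full,
    # while the guest still thinks it has 1.2 TB to spare.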

Fortunately I had a 9 TB datastore that was sitting unused, so I could add it to the Scale-out repository and, with the space I still had available, evacuate the offending datastore. So that's what I did. 49 hours later and that job is still running. I guess that's what you get when you are on two trunked 1 Gb network connections.
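As a sanity check on those 49 hours, here's the back-of-the-envelope math. A single evacuation stream usually rides one leg of a trunk, so I'm assuming roughly 1 Gb/s of effective throughput and a ballpark data size; both are assumptions, not measurements:

    # Rough transfer-time math (all figures are assumptions, not measurements).
    throughput_mb_s = 115       # ~1 Gb/s effective; one stream rarely uses both trunk legs
    data_tb = 9.0               # ballpark: about one repository's worth of backup files
    seconds = data_tb * 1_000_000 / throughput_mb_s   # TB -> MB, decimal units
    print(f"{seconds / 3600:.0f} hours")              # ~22 hours per ~9 TB moved

At that rate, 49 hours works out to roughly 20 TB moved, assuming the network really was the bottleneck.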

After the first job failed because its files were on the repository we were evacuating, I disabled the rest of the jobs. I was hoping that would speed things up. I guess this is the sped-up time! On top of that, this is my Sun-Thu week, so I'm supposed to be off for the weekend starting tomorrow. If this job doesn't finish before I go home tonight, I think I'll turn the backups back on tomorrow morning. Then Saturday I'll fix my over-provisioned datastore and evacuate the repository I "borrowed". Hopefully when I come in Monday, all will be back to normal and I can enable all the backups again. I'm not leaving the 9 TB repository in the Scale-out repository because I'm retiring that hardware. We've been using it since 2011; it's time! We're getting a Synology NAS with 10 Gb ports. Fortunately I still have 10 Gb ports available in my switch.