Monday, September 13, 2010

MSDeploy vs. xcopy network bandwidth throughput

MSDeploy (aka Microsoft Web Deploy) is great for syncing up all kinds of things between two machines. One issue I've noticed recently is that, for whatever reason, it doesn't always come close to maxing out your network bandwidth.

We were recently syncing up large SQL Server backups between two boxes connected through a gigabit ethernet switch. I initially decided to use msdeploy.exe, because I wanted it to not only bring over the latest files but also delete the ones on the destination server that no longer exist on the source server. Using -verb:sync with XML manifest files for the -source:manifest and -dest:manifest providers, I ran the test sync from the destination server and watched the network usage with Task Manager.
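
For reference, the invocation looked roughly like this. This is a sketch, not our exact setup: the server name, paths, and manifest contents below are placeholders.

    REM Run from the destination server; pulls from the source server's
    REM Web Deploy remote agent. SOURCESERVER and all paths are examples.
    msdeploy.exe -verb:sync ^
        -source:manifest="D:\sync\backups.manifest.xml",computerName=SOURCESERVER ^
        -dest:manifest="D:\sync\backups.manifest.xml"

Each manifest is a small XML file listing what to sync via the dirPath provider, something along these lines:

    <!-- backups.manifest.xml (example): directories to sync -->
    <sitemanifest>
      <dirPath path="D:\SqlBackups" />
    </sitemanifest>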

The average throughput was in the 5-15% range on our gigabit connection.

As a test, I ran xcopy /S /D over the same directory tree and was getting over 90% throughput most of the time, often peaking at 99%!
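
The test command was essentially the following, with placeholder paths; /Y and /I are added here so it can run unattended (our exact switches beyond /S /D may have differed):

    REM /S = recurse into subdirectories (skipping empty ones)
    REM /D = copy only files whose source copy is newer than the destination copy
    REM /Y = suppress overwrite prompts
    REM /I = assume the destination is a directory if it doesn't exist yet
    xcopy "\\SOURCESERVER\SqlBackups" "D:\SqlBackups" /S /D /Y /I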

Unfortunately, xcopy will not truly sync; it only copies files over and never deletes anything on the destination. So now what we do in our .bat file is run the xcopy command first to copy over all the newest data at a high rate of speed, and then call the same msdeploy command as before, which deletes any files on the destination not found on the source. Since xcopy has already copied all the new data, the msdeploy pass finishes very quickly, and we end up fully synced.
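
Put together, the batch file looks roughly like this (again, the server name, paths, and manifest files are stand-ins for illustration):

    @echo off
    REM Step 1: fast bulk copy of new and updated files over the wire
    xcopy "\\SOURCESERVER\SqlBackups" "D:\SqlBackups" /S /D /Y /I

    REM Step 2: msdeploy now has little file content left to transfer,
    REM but it deletes destination files that no longer exist on the source
    msdeploy.exe -verb:sync ^
        -source:manifest="D:\sync\backups.manifest.xml",computerName=SOURCESERVER ^
        -dest:manifest="D:\sync\backups.manifest.xml"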

I am aware that robocopy can do a lot of this, but it's clear that msdeploy is where Microsoft is putting its eggs for a lot of the newer syncing technology, so we default to using it first and now just augment with xcopy when pure throughput is needed.
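
For comparison, the robocopy equivalent of this copy-plus-delete behavior is a single mirror command (paths are placeholders):

    REM /MIR mirrors the tree: copies new/changed files and
    REM deletes destination files not present on the source
    robocopy "\\SOURCESERVER\SqlBackups" "D:\SqlBackups" /MIR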

If anyone knows why msdeploy is so much slower, please comment and I will update this article with any tips to gain performance. My guess is that it's due to the HTTP-based remote agent? It just seems strange, because I'm talking about copying a handful of really huge files, so per-file overhead like hash checking should be negligible; the slowdown must be in the transfer itself.
