My environment is fairly simple. We have two web servers running Windows Server 2012 R2 in an NLB cluster. They use a shared configuration, and the sites and all of their files live on a shared volume (D:) that is replicated between the two servers with DFSR. Replication is set to run constantly at maximum bandwidth, and the servers themselves are VMs on a Hyper-V cluster backed by a high-end SAN for shared storage. The servers are fast, and web traffic and response times are excellent.
The issue: when our development team releases new files (updates and so on), DFSR grinds to a halt and the backlog of files starts growing rapidly. The latest release was about 1,700 files, but only 68 MB in total. That should replicate between the two servers almost instantly.
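For anyone wanting to watch the backlog grow in real time during a release, something like the following works on 2012 R2 (the group name "WebFarm", folder name "SiteFiles", and server names "WEB01"/"WEB02" are placeholders; substitute your own):

```powershell
# Count files waiting to replicate from WEB01 to WEB02.
# "WebFarm", "SiteFiles", "WEB01", "WEB02" are placeholder names.
Get-DfsrBacklog -GroupName "WebFarm" -FolderName "SiteFiles" `
    -SourceComputerName "WEB01" -DestinationComputerName "WEB02" -Verbose

# Or with the older command-line tool:
dfsrdiag backlog /RGName:"WebFarm" /RFName:"SiteFiles" /SendingMember:WEB01 /ReceivingMember:WEB02
```

Running this every minute or so during a deployment shows whether the backlog is draining steadily or genuinely stalled.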
The only things I can think of are:
1. The developers delete the old folder structures and replace them with the new files/folders. This avoids leaving stale internal files that are no longer needed inside the site's folders. But it means that while Server 2 is still replicating the deletions, files with the same names but different internal contents reappear in the same folders, and DFSR has to re-analyze each file and decide all over again what to do with it.
2. Since the servers live on the same Hyper-V cluster, they can communicate with one another very quickly. I still have RDC (Remote Differential Compression) turned on. Our largest files are images, which are a couple of MB at most; all the rest of the files are tiny. Given the fast network and small file sizes, I could turn off RDC to see whether that improves performance.
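If anyone has tested point 2: disabling RDC is a per-connection setting, so it has to be done in both directions. A sketch, again with placeholder group and server names:

```powershell
# Disable RDC on both connections between the two members.
# On a fast LAN with mostly small files, RDC's per-file signature
# computation can cost more than the bandwidth it saves.
Set-DfsrConnection -GroupName "WebFarm" -SourceComputerName "WEB01" `
    -DestinationComputerName "WEB02" -DisableRDC $true
Set-DfsrConnection -GroupName "WebFarm" -SourceComputerName "WEB02" `
    -DestinationComputerName "WEB01" -DisableRDC $true
```

This is easy to revert (`-DisableRDC $false`), so it seems like a low-risk experiment.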
From what I have seen, DFSR is what is recommended for replicating site files and the shared config in web clusters. The files we are changing/adding are not large, yet each release cripples our servers. Has anyone experienced this, or does anyone have recommendations for tuning DFSR performance on our 2012 R2 servers? Thanks in advance for any help.
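One tuning knob I am considering trying is the staging quota: if the staging area is too small for a burst of ~1,700 changed files, DFSR spends its time purging and re-staging instead of replicating. A sketch with placeholder names and a guessed quota size:

```powershell
# Raise the staging quota on both members (placeholder names;
# 8 GB is an assumed starting value to test with, not a recommendation).
# An undersized staging area forces DFSR to purge and re-stage files
# mid-release, which stalls replication.
Set-DfsrMembership -GroupName "WebFarm" -FolderName "SiteFiles" `
    -ComputerName "WEB01","WEB02" -StagingPathQuotaInMB 8192

# Check the DFS Replication event log for staging-cleanup events
# (4202 = high watermark reached, 4204 = cleanup completed), which
# indicate the quota is under pressure:
Get-WinEvent -LogName "DFS Replication" -MaxEvents 50 |
    Where-Object { $_.Id -in 4202, 4204 }
```

If those 4202/4204 events line up with the release windows, that would point at staging, not RDC, as the bottleneck.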