Hi,
I am involved in a discussion at our IT department about DFS; below is our DFS setup:
STORE01 (Server 2008 R2), 1x iSCSI (500 GB) volume mounted as drive E:, containing a shared folder called DATA
STORE02 (Server 2008 R2), 1x iSCSI (750 GB) volume mounted as drive E:, containing a shared folder called DATA
*Both servers are connected to a 1 Gbps Layer 3 router (so network throughput is not an issue)
Both servers have DFS installed, with one namespace called DATA and one replication group called DATA.
So the namespace is: \\mycompany.local\corporatedata
Folder targets are: \\store01.mycompany.local\data and \\store02.mycompany.local\data

The issue I am noticing is that when a folder containing a large quantity of data (say about 10 GB across several hundred files) is moved within the DATA folder, the replication can take hours, even a day! To me that says there is something wrong with our DFS!
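In case it helps the discussion: this is how I would check the replication backlog between the two members, assuming the replication group and the replicated folder are both named DATA as described above:

    dfsrdiag backlog /rgname:DATA /rfname:DATA /smem:STORE01 /rmem:STORE02

A large backlog count right after a move would at least show on which side the delay sits.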
Besides that, I say it's not a good idea to have TWO DIFFERENT VOLUME SIZES (one of 500 GB, and the other 750 GB).
I would really appreciate it if somebody could help me with this discussion. I have several questions about (real-life) usage of DFS in large environments, like:
1. Does moving big chunks of data within DFS have a great impact on performance (resulting in very, very slow replication)? (See also the staging check below.)
2. Is it a problem to use different volume sizes?
3. How much data can be moved within a DFS replicated folder without deeply affecting the replication process?
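Regarding question 1: as far as I understand, DFS Replication copies changed files through a staging folder, and the staging quota (4 GB by default on 2008 R2) can become a bottleneck when many large files change at once. This is how I would check the current staging quota on each server (the value is in MB):

    wmic /namespace:\\root\microsoftdfs path DfsrReplicatedFolderConfig get ReplicatedFolderName,StagingPath,StagingSizeInMb

If the quota is much smaller than the amount of data being moved, that could explain part of the slowdown.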
Thanks for any reactions/help!
Kind regards
Charles