Channel: File Services and Storage forum

Deploy DFS namespace and DFS-R for different locations


Hi All,

I am new to DFS-N and DFS-R and have never deployed DFS before. We have several branches with file servers running Windows Server 2008 R2, and we want to migrate from standalone file servers to DFS-N and DFS-R so that users can access all resources through a single namespace. Kindly help me with the points below:

1. We have a file server at HO and four file servers in the branches; we want to migrate their shares and NTFS permissions to the DFS server at each location.

2. What configuration will be required at each site? Should each site host a member server under one DFS root, or should each site have a separate DFS root server?

3. We want a folder named Apps at each location, and we want users at every location to access the Apps folder from the DFS server at their own site. How do we achieve this?

3a. If the folder is updated at one location, how will users at another site get the updated folder from the DFS server at their site? I know the answer is replication, but how should this be designed?

4. Some branches do not have Active Directory deployed and authenticate against another site. Is it necessary to deploy Active Directory at each location where a DFS server will be deployed? And if the WAN connection goes down, will users still be able to access the DFS server at their own location?

5. We have WAN accelerators, but they only work when encryption is disabled. Can we disable encryption between two DFS servers in different branches while replicating?

6. We want to enable deduplication on the disks where the files and folders will be placed. I believe that in 2012 R2 we can enable deduplication on a disk drive; do we need any other configuration on the storage side?

7. We want users to access their own DFS servers instead of going to a DFS server at another site. Should I create something like contoso.com\public\location1 so that users in location one access their data from their own DFS server, or is there no need to create location1 and location2 because DFS itself will redirect them to the closest location?

8. I do not want users to see other locations' DFS data. How do I achieve this?

9. We want to enable two-way replication for specific folders. How do we achieve this?

10. We want to replicate all data from each location to the HO DFS server for backup purposes, but we do not want to replicate HO data back to the other locations. How do we achieve this?

11. We want a redundant DFS server, so that if one DFS server goes down another is available. What should I do to set this up?

12. Should we go with one DFS root namespace and the other servers as members, or with separate DFS root namespaces? Note that we want users to access one namespace and then their location. If we go with one DFS root namespace, will all the servers end up holding the same data through replication? If so, I do not want to replicate all data to all servers.

13. How will DFS redirect users to the DFS server in their own site? (A rough sketch of what I have in mind follows.)
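For what it's worth, here is a sketch of the namespace layout I have in mind, using the DFSN PowerShell cmdlets that ship with Server 2012 R2 (all domain, server, and share names below are placeholders, and I may well be misunderstanding how referrals work):

New-DfsnRoot -Path '\\contoso.com\Public' -TargetPath '\\FS-HO\Public' -Type DomainV2
New-DfsnFolder -Path '\\contoso.com\Public\Apps' -TargetPath '\\FS-HO\Apps'
New-DfsnFolderTarget -Path '\\contoso.com\Public\Apps' -TargetPath '\\FS-BR1\Apps'
# Clients should be referred to a target in their own AD site first; site
# costing orders the remaining targets by site-link cost for failover.
Set-DfsnRoot -Path '\\contoso.com\Public' -EnableSiteCosting $true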

Any more suggestions are most welcome.

Thank you in advance.


Storage Tiering in Server 2012 R2 Core Hyper-V


Hello everyone,

Recently I installed the free Hyper-V Server 2012 R2 (Server Core).
The following hardware RAID has been configured:
RAID 1 with 2 drives for the physical Hyper-V host installation
RAID 1 with 2 SSD drives
RAID 5 with 4 HDD drives

My goal is to combine the RAID 1 SSD array and the RAID 5 HDD array in a storage pool with storage tiering, getting redundancy, the performance benefit of tiering, and plenty of capacity.

When I add the new hypervisor to another Server 2012 R2 Datacenter machine to manage it through Server Manager, I can create a new storage pool. However, when I create a new virtual disk, there is no Storage Tiering option at all. I would understand it being greyed out if the server didn't recognize the media types, but the option isn't even there.

Questions:
1- Why is the Storage Tiering option not visible, or at least greyed out?
2- Is it a limitation of the free Hyper-V Server 2012 R2 that it cannot create storage tiers?
3- Will my plan work? And if it does, will it bring any performance benefits or losses?
4- Is it even possible to create a virtual disk with storage tiering on top of hardware RAID drives? (See the sketch after these questions for what I plan to try.)
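Regarding question 4 in particular, this is the PowerShell route I was planning to try if the wizard never shows the option (only a sketch; the pool, tier, and disk names are mine, and I'm not certain the MediaType override is supported or sensible on top of hardware RAID):

# Hardware RAID volumes often report MediaType as 'Unspecified'; once the
# disks are in a pool, the media type can be set manually:
Set-PhysicalDisk -FriendlyName 'PhysicalDisk2' -MediaType SSD
# Then create the tiers and a tiered virtual disk from PowerShell:
$ssd = New-StorageTier -StoragePoolFriendlyName 'TierPool' -FriendlyName 'SSDTier' -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName 'TierPool' -FriendlyName 'HDDTier' -MediaType HDD
New-VirtualDisk -StoragePoolFriendlyName 'TierPool' -FriendlyName 'TieredVD' -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,1TB -ResiliencySettingName Simple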

Many thanks in advance.


Andre

Folder permission


I have a Windows 2008 R2 file server with clients on Windows 7 and 8. Two different departments need a common shared folder with different permissions, as per the scenario below:

A common shared root folder with two subfolders (for example, Sales and Purchase). We need to share the root folder and map it as a drive; however, Sales users should have full control of the Sales folder and read access to the Purchase folder, and vice versa: Purchase users should have read access to the Sales folder and full control of the Purchase folder.

How do we achieve this?

I have shared the root folder and granted permissions, but at the moment each set of users either has full access to everything or is denied when trying to create a file or folder.
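For context, this is the kind of ACL layout I have been trying to express with icacls (just a sketch; the path and group names are examples from my tests):

# Grant each department modify on its own folder and read on the other's;
# (OI)(CI) makes the entries inherit to files and subfolders:
icacls 'D:\Shares\Common\Sales'    /grant 'CONTOSO\Sales:(OI)(CI)M'
icacls 'D:\Shares\Common\Sales'    /grant 'CONTOSO\Purchase:(OI)(CI)RX'
icacls 'D:\Shares\Common\Purchase' /grant 'CONTOSO\Purchase:(OI)(CI)M'
icacls 'D:\Shares\Common\Purchase' /grant 'CONTOSO\Sales:(OI)(CI)RX'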

Thanks.

Confusing disk usage numbers


Hi, all.

While moving a large file share, I came across a strange issue. I wanted to check the amount of data moved, so I checked the properties of the disk I am moving data to: 1.38 TB used, and data deduplication shows savings of about 0.6 TB, so about 2 TB of data has been moved. Then I checked the properties of the folder containing the data, and there came the surprise: 2 TB of data, as expected, but size on disk is just 86 GB. :-)

Is this just a bug, or is there a logical explanation for those numbers?
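For reference, the numbers I am comparing come from the volume properties dialog and from these cmdlets (the drive letter is from my setup):

# Deduplication's own accounting for the destination volume:
Get-DedupVolume -Volume 'E:' | Format-List SavedSpace,SavingsRate
Get-DedupStatus -Volume 'E:' | Format-List FreeSpace,OptimizedFilesCount,InPolicyFilesCount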


Gleb.

Server 2008 NTFS vs 2012 NTFS

I have a 2008 R2 file server with two large NTFS volumes. I want to upgrade this server to 2012 R2. If these NTFS volumes, which were created on a 2008 R2 server, are attached to a 2012 R2 server, will they be able to take advantage of the improvements made in 2012 R2, such as faster check-disk times, deduplication, and improved NTFS reliability? My concern is that the volumes may need to be reformatted under the new OS. Can someone please clarify this?
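In case it is useful, this is what I am planning to run once the volumes are attached to the 2012 R2 server. My understanding is that deduplication can be enabled on an existing NTFS volume in place, but I would appreciate confirmation (a sketch, with my drive letter):

Enable-DedupVolume -Volume 'D:' -UsageType Default
# First optimization pass over the pre-existing data:
Start-DedupJob -Volume 'D:' -Type Optimization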

Access Based Enumeration causing slow downs on File Server


I am looking for ideas here.

We have a 2012 R2 file server with a share that has Access Based Enumeration turned on. This has been going on forever: if around six or more people try to open a folder at the same time, it can hang for up to a few minutes. We troubleshot this problem for ages before we tested turning off Access Based Enumeration, and boom, problem fixed.
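For reference, this is how we toggle it while testing (SMB cmdlets on the server; the share name is an example):

# ABE on (slow under concurrent access) vs. off (fast):
Set-SmbShare -Name 'Dept' -FolderEnumerationMode AccessBased
Set-SmbShare -Name 'Dept' -FolderEnumerationMode Unrestricted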

Any ideas on why this is an issue? We really, really want to use ABE.

Multiple friendly names in storage tiering


Hello everyone,

I recently managed to set up the following:
1 RAID 1 with 2 SSD drives for the hypervisor
1 RAID 1 with 2 SSD drives for storage tiering
1 RAID 5 with 4 HDD drives for storage tiering

I successfully managed to set up storage tiering by creating a tiered storage pool and a virtual disk with the letter D.
I moved my virtual machine to the D drive.
Now I would like to pin the VHDX file of the virtual machine to the SSD tier (just for testing purposes) and later move it to the HDD tier.

If I run the following PowerShell command:
get-storagetier | select friendlyname
the result is:
Microsoft_HDD_template
Microsoft_SSD_template
Storage tiering_Microsoft_ssd_template
Storage tiering_Microsoft HDD_template

Could someone explain to me why there are four storage tiers, when I expected to see only one?
I should add that the size of the first two listed is 0, while the other two match the size of my SSD array and the size of my hard drives.

Furthermore, I would like to know how it is possible that when I manually create a 65 GB VHDX, put it on the SSD tier, and run the scheduled optimization task, it reports "completely on tier" even though the SSD part of the tier is only 60 GB. I would expect it to say "partially on tier". What am I missing?
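For completeness, this is how I intend to do the pinning (a sketch; the tier name is taken from the Get-StorageTier output above and the VHDX path is from my setup):

$ssdTier = Get-StorageTier -FriendlyName 'Storage tiering_Microsoft_ssd_template'
Set-FileStorageTier -FilePath 'D:\VMs\test.vhdx' -DesiredStorageTier $ssdTier
Optimize-Volume -DriveLetter D -TierOptimize   # or wait for the nightly task
Get-FileStorageTier -VolumeDriveLetter D       # reports placement status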

Many thanks in advance.


Andre



Windows 2012 R2 file server - files sometimes slow to open


We have a problem: opening files is sometimes slow. The file server is Windows 2012 R2 (all updates) with Windows 7 clients. The file servers are virtual, running on VMware vSphere 5.1 (paravirtual SCSI, vmxnet3 NICs). No bottleneck is visible on the file server in CPU, RAM, NIC, disk, or TCP connections. Performance reports are always green, system reports are always green. Deduplication is enabled on some volumes, but this issue occurs on all volumes. There is nothing interesting in the event logs.

The issue looks like this: one user opens a file and it takes a long time, while another user opens other files at the same moment with no problem. Files that other users have open at that time are especially affected. This affects all file types, from simple text files to Excel, Word, and so on.

The following things have been tried without success:
- reinstalled the system with default SMB settings (without AV, ...)
- disabled SMBv2 on the client side
- stopped the backup software
- moved the disk files of the VM (the file server) to another storage system
- moved the VM to another host
- applied settings from the performance tuning guide
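While a file is hanging, I also look at the open handles and sessions on the server, which so far show nothing unusual (the path filter below is an example):

Get-SmbOpenFile | Where-Object Path -like '*\Affected\*' | Select-Object ClientUserName,Path,Locks
Get-SmbSession | Select-Object ClientComputerName,Dialect,NumOpens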


Does anyone have suggestions on how to find a solution?


DFS Replication Problem - 2 out of 3 groups replicating


Hello,

I'm having a small problem with DFS Replication. This is a home lab with two Windows Server 2012 R2 machines as the upstream/downstream servers. I have three replication groups. Replication works for two of them but fails for the third. The share and NTFS permissions are identical on all three shares. The propagation tests are successful on two of the shares, but the third shows as incomplete and its replication status shows "arrival pending".

I have tried the following commands:

a) Get-DfsrBacklog: "No replicated folders were found on the member."

b) Get-DfsrState: returns empty

What other tools can I use to check where the problem is?
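Besides the two cmdlets above, these are the only other checks I have found so far (group and server names are from my lab):

# Confirm the membership is actually configured and enabled on this member:
Get-DfsrMembership -GroupName 'RG3' -ComputerName 'SRV1' | Format-List FolderName,ContentPath,Enabled
# Force an AD configuration poll and watch the DFS Replication event log:
dfsrdiag pollad /member:SRV1
Get-WinEvent -LogName 'DFS Replication' -MaxEvents 20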

Thank you,

Wojciech

How can Windows file and folder owners just vanish, and how can I restore all of them at once?


We have a file server running Windows 2003 at our company, and suddenly most of the file and subfolder permissions are gone. Even the owner is lost, making files inaccessible even to the administrator.

Since it happened out of nowhere, it is most likely a system flaw. How can I get all the permissions back, or how can I restore the owner(s)? Using the "Copy permissions to all subfolders and ....>" option does not work; it says it cannot apply the changes to the files.
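The closest thing I have found so far is re-taking ownership and resetting the ACLs from the top with icacls, which ships with 2003 SP2 (a sketch with an example path; /reset discards any custom ACEs in favour of inherited ones, so I would test on a copy first):

icacls D:\Data /setowner "BUILTIN\Administrators" /T /C
icacls D:\Data /reset /T /C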

Can anyone help?


Automated scrubbing and bit rot repair on a mirrored storage space using ReFS

$
0
0

I use several ReFS volumes, on either one-way or two-way mirrored storage spaces, under Windows Server 2012 R2.

The description of the automated 'scrubbing' on TechNet seems a bit unclear to me. Microsoft explains:

Integrity. ReFS stores data in a way that protects it from many of the common errors that can normally cause data loss. When ReFS is used in conjunction with a mirror space or a parity space, detected corruption—both metadata and user data, when integrity streams are enabled—can be automatically repaired using the alternate copy provided by Storage Spaces. In addition, there are Windows PowerShell cmdlets (Get-FileIntegrity and Set-FileIntegrity) that you can use to manage the integrity and disk scrubbing policies.

And further on:

ReFS can automatically correct corruption on a parity space when integrity streams are enabled to detect corrupt data and because ReFS examines the second (and third) copies that the data parity spaces contain. ReFS then uses the correct version to correct the corruption.

Note: ReFS can already detect corruption on mirrored spaces and automatically repair those corruptions.


Does this mean the file attribute "FileIntegrity" has to be set to "on" in order for automatic scrubs (of user data) and automatic correction (of user data) on mirrored spaces to take place, or not?

If the answer is that "FileIntegrity" has to be explicitly set to "on", what happens with new files copied into or created in an existing folder structure for which I set "FileIntegrity" to on? Do I see it correctly that "FileIntegrity" is off for new files by default?

So do I have to run a PowerShell command like 

PS C:\> Get-Item -Path 'H:\Temp\*' | Set-FileIntegrity -Enable $True


every time a new file might have been copied into the folder structure of "H:\Temp\", in order to activate automatic scrubbing and repair for all files in this folder?
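Or is it enough to set the flag once on the directory itself? From what I have read, new files inherit the integrity setting of their parent directory at creation time, so perhaps something like this would cover future files (a sketch):

Set-FileIntegrity -FileName 'H:\Temp' -Enable $True
Get-FileIntegrity -FileName 'H:\Temp'   # verify the directory-level setting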

Thanks for your comments.



Mirroring a volume with partitions...


Hi there folks,

I'm looking for advice on mirroring a drive that has four partitions on it, each with its own drive letter. This is on a running production Server 2012 R2 machine.

I have an identical spare drive that I'd like to use as the other drive.

I saw that when you go through the Mirror Volume option it states that a drive letter can be assigned to the volume; if I do that, will I lose the drive letters assigned to the partitions?

The other option I noticed is that if I right-click a partition I can add a mirror. Can the same be done for the remaining partitions?
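If it helps, this is what I understand the per-volume route to look like in diskpart (a sketch; disk 2 is my spare and E: is one of the four letters, with the same two commands repeated for the other three):

diskpart
DISKPART> select volume E
DISKPART> add disk=2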

Thanks in advance for any assistance...

Regards...

Trevor


File still shows as open in openfiles.exe even though the client has closed it in the shared folder


Dear Experts

To make things clear, I would like to describe the problem step by step:

1. A client (64-bit Windows 8.1) puts a PDF file on a mapped shared folder on a virtual Windows Server 2012 Standard machine.

2. Another client (64-bit Windows 7, 32-bit Windows 7, or 32-bit Windows 8.1) opens the file and checks whether it belongs to him/her.

3. He/she tries to delete the file.

4. He/she cannot delete it.

5. I am informed of the problem, check the status with openfiles.exe on the server, and can see that the file is still open.

6. I confirm that the file has been closed by both the owner and the second user.

Here is the question: why are files that have been closed on the client side not closed on the server side? How can I stop this happening?
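For now my workaround is to find and force-close the leftover handle on the server, but I would like to understand the cause (the file name below is an example):

openfiles /query /fo table | findstr /i "example.pdf"
Get-SmbOpenFile | Where-Object Path -like '*example.pdf' | Close-SmbOpenFile -Force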

Thanking you in advance for your support.

Regards

Shadow Copy restore


I'm trying to restore shadow copies of files that were changed in error.

On the file server (Windows 2008 R2), if you drill down to the file that needs to be restored, right-click, and choose "Restore previous versions", no previous versions show up. However, if you right-click on the parent directory and choose "Restore previous versions", you can open a previous version of the directory and copy/open the file from that window.

Is there any reason why you can't just select the file without having to go via the parent directory?

DFS Best Practice and domain structures


We have built a new domain structure for our client, consisting of domains in separate forests with the appropriate trusts in place. The domains are:

user, resource, legacy resource, and test.

It is assumed that the resources will be migrated out of the legacy domain to the new resource domain in the medium term.

Given that we have circa 650 file shares, which causes problems with logon scripts, I want to present those shares via DFS. Internally, our view is that the DFS should be built in the user forest and reference file shares in the resource domain(s).

Our client, however, believes the DFS is a resource and should therefore be presented in the new resource domain.

What are the pros and cons of each approach, what would be the Microsoft recommendation, and why?



Branch Cache Deployment


Hi Techies,

I need to deploy a BranchCache solution for one of the client's remote offices. Users in the remote office need to access three file shares in two different locations (the parent/main offices). Below are the complete setup details:

1. Branch Office:

a) Number of users: 50

b) Local servers: yes (domain controller and file server)

c) Client operating system: Windows 7 Enterprise

d) Local file server OS (for BranchCache): Windows Server 2012 R2 Standard x64

e) Clustered: no

2. Main Office 1:

a) File server OS (content server): Windows Server 2008 R2 Standard x64

b) Clustered: no

3. Main Office 2:

a) File server OS (content server): Windows Server 2008 R2 Enterprise x64

b) Clustered: yes

4. Network bandwidth: the branch office is connected to the main offices through a 4 Mbps MPLS cloud link.

Based on the above environment, I need to deploy a BranchCache solution and have a few questions:

1. Can I have partner servers with different operating systems? (The server at the remote site is 2012 R2, and the content servers in the main offices are 2008 R2 Standard and Enterprise respectively.)

2. Is the BranchCache service cluster-aware? One of the main office servers above is clustered for file services.

3. I know the recommendation for 50 clients is the distributed cache model, but I do have a local file server in the remote location. Given this, which is the best and most effective option?
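For reference, the direction I am currently leaning is hosted cache mode on the local 2012 R2 file server, roughly like this (a sketch; BRANCH-FS01 is my server name):

# On the branch server:
Install-WindowsFeature BranchCache
Enable-BCHostedServer -RegisterSCP
# The Windows 7 clients do not have the BranchCache cmdlets; they would be
# pointed at the hosted cache server via Group Policy or netsh branchcache.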

Regards,

Imran Khan

Cannot access files from Windows 7 on a claims-based file authorization share


We have AD at the 2012 R2 level, DCs running 2012 R2 of course, and a clustered file server (3 FS nodes running 2012 R2).

We enabled two policies:

KDC Support for claim

Kerberos support for claim

We created one claim type in ADAC (for example, a "Division" source property) and populated this property on all IT AD accounts with the value "IT".

On the file server we made a shared folder, ITDivision:

- set permissions so that Domain Users can Modify if User.Division equals "IT"

IT users can access files on this share from Windows 8, but from Windows 7 they can't. We know from many presentations about Dynamic Access Control that the file server must obtain user claims on behalf of clients that do not support claims (Service-for-User-To-Self).




Batch file for DFSR Backlog check doing weird stuff

$
0
0

Hi Everyone,

I have a DFS Replication structure with just over 20 replication groups, and I wrote a batch file that runs a backlog check on each one, writing the results to a text file.

It works perfectly except for three RGs. The only common factor I can see so far is that the replicated folder name in each of these RGs has spaces in it. But here's the weird part:

I have the command saved in OneNote, and if I copy and paste it from there into a command prompt, it reports the backlog status of the group just fine. The command is:

dfsrdiag backlog /smem:eng-V28 /rmem:ENG-G03 /rgname:V28-Acct-EIS /rfname:"Acct – EIS"

If I copy and paste this command into a batch file and save it, the batch run fails and reports:

[ERROR] Cannot find DfsrReplicatedFolderConfig object. Possible reasons:
   + The replicated folder is not configured on the member 
   + Access is denied to its configuration information

[ERROR] Replicated folder <acct – eis> not found. Err: -2147217406 (0x80041002)


Operation Failed

This is the exact same command that just worked when pasted into the same command window. I paste directly from the clipboard into the prompt and it works; then I immediately paste it from the clipboard into a batch file, run the batch, and it fails.

Then, if I copy the command from the batch file I just created (the one that failed) back into the clipboard and paste it into the command window, it fails. I go back to the original command in my OneNote page, copy it to the clipboard, paste it into the same command window, and it works. Then I paste that same clipboard content into the batch file, overwriting the bad one, run the batch, and it fails.

I set echo ON at one point to see how the command was being parsed, and it looks fine; no weird characters or mistakes (which would be even weirder since I'm copying and pasting, so there is no chance of a typo). I tried matching case exactly, all lower case, etc., with no change.

I tried saving the batch code with Notepad, Notepad++, and EditPad Pro. I also tried:

echo [command] > test.bat

to put the command into a batch file without using an editor at all, and that batch still fails.

The way I did that was to copy from OneNote into the clipboard, paste into the command prompt, run it, and see it succeed. Then I pressed up-arrow and left-arrow and inserted "echo" at the beginning of the command, pressed right-arrow to the end of the line, typed "> test.bat", and pressed Enter, thus creating test.bat.

Now, I run test.bat and it fails.

I am completely puzzled.
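One more data point: the replicated folder name contains an en dash rather than a plain hyphen, so my current suspicion is that the batch file's ANSI code page mangles that character when the file is saved, even though echo looks fine. The next thing I plan to try is running the same line from a PowerShell script saved as UTF-8 instead of a .bat:

& dfsrdiag backlog /smem:eng-V28 /rmem:ENG-G03 /rgname:V28-Acct-EIS "/rfname:Acct – EIS" | Out-File -FilePath backlog.txt -Append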

Anyone have an idea what is going on?

Thanks!

- a -

Data Deduplication and File Backup


Hello all,

This is my first question here in the forum. I just migrated one of our servers from Server 2008 R2 to Server 2012 R2 Standard. I use FreeFileSync to back up a couple of network storage units that were attached to the Server 2008 machine and are now attached to the Server 2012 machine. I would like to know: if I enable data deduplication on the network storage now attached to Server 2012, will that affect the files in my backups? How would that work? Thanks.
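PS: From what I have read so far, a backup tool that reads files through the normal file APIs (as FreeFileSync does) should receive the full, rehydrated content, and an individual file can also be rehydrated on disk if a tool ever stumbles over the dedup reparse points; I would appreciate confirmation that this is right (the path is an example):

Expand-DedupFile -Path 'F:\Storage\archive.vhd'   # rehydrate one optimized file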

User Profile Disks and DFS replication

$
0
0
We're looking to replace a single, heavily used 2003 Terminal Services server, for performance, feature, and capacity reasons. Our users frequently have small amounts of data in their desktops/settings that we'd like to persist between sessions, and they tend to have long-running sessions (disconnecting and reconnecting while traveling, but not logging off, keeping their apps open in the meantime).

The servers were originally purchased with 2008 R2 SP1 Remote Desktop Services in mind, with lots of fast internal storage (RAID 5, 12x300 GB 15K). We're considering using 2012 (virtual sessions, not virtual desktops) so that we can scale out as we grow, and using User Profile Disks.

Ideally, we'd like to maximize the usefulness of the purchased servers (and their internal storage) and not have to purchase additional hardware for shared storage (an iSCSI/external array that can be clustered). We're wondering if it's possible to pair User Profile Disks (UPD) with DFS Replication (possibly over a dedicated NIC). Then a user could log in to server X and connect to her local UPD (with the changes replicating to server Y's copy). If she disconnected and reconnected, the RD Connection Broker would connect her back to her existing session, and if she logged off and back on, she could connect to either server X or Y and it would all work. For maintenance, we'd be able to drainstop one server via the Connection Broker, perform the maintenance, let DFS catch up, and then do the same on the second server.

Would something like this be possible?  Or is it just asking for major problems?

