Problem with Storage Spaces, LSI 9207-8e and Intel JBOD2224S2DP
I'm trying to set up Storage Spaces on two servers and an Intel JBOD. The setup is:
Two HP DL380p Gen8 servers, each with an LSI 9207-8e card connected with two SAS cables to an Intel JBOD2224S2DP.
Cable setup: Node 1 to JBOD connector A primary and A secondary
Node 2 to JBOD connector B primary and B secondary
When I run the PowerShell command Get-StorageEnclosure (on either server), the status that comes up is "Unhealthy", but the "ElementTypesInError" column is empty, so I cannot determine what is wrong.
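For reference, this is roughly how I read the enclosure health (a sketch; the property names come from the standard Storage module cmdlets):
# Summary view: HealthStatus plus the error column that comes up empty for me.
Get-StorageEnclosure | Select-Object FriendlyName, HealthStatus, ElementTypesInError
# Full property dump per enclosure, in case something else hints at the fault.
Get-StorageEnclosure | Format-List *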
And when I look at Server Manager - File and Storage Services - Volumes - Storage Pools, one server gets all the info, including the chassis info, and the other server gets just basic info.
If I disconnect the SAS cables (with the server powered on) and do a refresh, then reconnect the cables and do a new refresh, the server gets all the info, including the chassis info.
I have updated to the latest firmware on the JBOD (v0.13.0.9), the HBAs (P20 firmware) and the server (P70, 08/02/2014).
I have changed SAS cables.
I have submitted a service request to Intel, and they say it has nothing to do with the JBOD; it is a problem with the LSI controller chips.
I have submitted a service request to LSI, and they said I should downgrade the firmware to P19.
I downgraded the firmware on the HBAs; no difference.
Does anyone have any suggestions?
Failed Hard Drive, Mirrored Virtual Disk Not Repairing
I am running Server 2012 R2 with Storage Spaces. I have a Storage Pool with 4 drives and 1 virtual disk with all drives in the mirror configuration.
One of the drives recently failed hard and was causing the system to bluescreen. After finding and removing the drive, I noticed the virtual disk was detached. I tried attaching it and got an error. I had a spare drive in the computer that was not in the storage pool, so I added it to the storage pool, retired the old failed drive, and tried to run a repair, all with no luck. For some reason it is not adding the new drive (PhysicalDisk4) to the virtual disk.
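From what I've read, the repair flow is supposed to look roughly like this (a sketch; the pool and disk names are placeholders for my setup):
# Mark the failed disk as retired so Storage Spaces stops waiting for it.
Set-PhysicalDisk -FriendlyName "PhysicalDisk3" -Usage Retired
# Regenerate the mirror onto the remaining disks and watch the progress.
Repair-VirtualDisk -FriendlyName "MirrorDisk"
Get-StorageJob
# Once regeneration completes, the retired disk can be removed from the pool.
Remove-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks (Get-PhysicalDisk -FriendlyName "PhysicalDisk3")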
I will admit this is my first time dealing with a failed drive in Storage Spaces, and I'm afraid of losing data; any help on how to correct the situation would be greatly appreciated.
I've attached pictures of what is happening in Storage Spaces for further info.
How to check whether a SAN disk is shared or dedicated
Hi,
I would like to know: from the server side, is there a way to identify whether a SAN disk is shared between different nodes or is a dedicated SAN disk (without contacting the storage team)?
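One approach I'm wondering about (a sketch; the node names are placeholders): compare disk UniqueIds across the candidate servers, since a shared LUN should surface with the same UniqueId on every node it is presented to.
Invoke-Command -ComputerName Node1, Node2 -ScriptBlock {
    # UniqueId identifies the LUN itself, independent of the local disk number.
    Get-Disk | Select-Object @{n='Node';e={$env:COMPUTERNAME}}, Number, UniqueId, Size
} | Sort-Object UniqueId | Format-Table -AutoSize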
Regards,
Manjunath Sullad
DFS broken: "The folder cannot be... Access is denied." Help!
Last week, I created 4 folders and created folder targets for 2 of the 4 folders.
Today, in DFS Management, I cannot add a folder target.
The operation failed. See the errors tab for details.
Validate shared folder success
Validate path success
Commit changes failed
\\srs\dfs\documents\NiskuMod. The folder cannot be deleted. Access is denied.
When I try to delete any of the folder targets, I get the same message.
Nothing has changed. This is a Windows Server 2008 R2 server.
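One thing I have not yet checked (assuming it is even relevant): the actual ACL on the failing path, which icacls can show:
# Show who holds rights on the folder target path from the error above.
icacls \\srs\dfs\documents\NiskuMod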
Windows PowerShell Classifier (FCI) 2012 R2
Hi folks,
I'm busy implementing FSRM FCI to classify files using PowerShell. The Windows PowerShell Classifier module is integrated into Windows 2012 R2, so when I launch fsrm.msc I am able to choose the module, and in a popup window I can paste a PowerShell classification script to do fancy stuff.
Now I am able to set a string value from the script, but when I try to use the examples from the SDK (e.g. '$PropertyBag.Name' or '$PropertyBag.RelativePath'), no value is returned. Also, when I write to the Windows event log, nothing is written.
The following command was actually working from the PowerShell script:
Write-EventLog -LogName MM-log -Source MM-Powershell -Message "test message" -EventId 0 -EntryType information
Can anyone post examples of how to use the module in Windows 2012? Do I need to change begin{} as well? In the examples, only the process{} part is given (http://blogs.technet.com/b/filecab/archive/2009/08/14/using-windows-powershell-scripts-for-file-classification.aspx).
In C#, you need to get the name in "public void OnBeginFile", but then only "Name" works and is returned. I was not able to get the path of the file being classified.
I really need to do some classification before uploading files to SharePoint. Hope someone can help me out.
**
What I can do in PowerShell ISE (working)
# $cm is the FSRM classification manager COM object (setup line added for completeness):
$cm = New-Object -ComObject Fsrm.FsrmClassificationManager
# Set the value of the "Demo" property to "DemoValue2"
$cm.SetFileProperty("C:\FCItest\Demo\pwsdemofile.txt", "Demo", "DemoValue2")
# Get and display the value of the "Demo" property
$Demo = $cm.GetFileProperty("C:\FCItest\Demo\pwsdemofile.txt", "Demo", 0)
Write-Host $Demo.Value
**
What I can do in FSRM (working)
$PropertyBag = $_
$FileName = $PropertyBag.Name
$FilePath = $PropertyBag.RelativePath
$SpecialString = 'Confidential'
-> Value 'Confidential' is being returned and set to the property chosen in the FCI rule
**
What should work in FSRM but does not return any values (example from the SDK: C:\Program Files\Microsoft SDKs\Windows\v7.1\Samples\winbase\FSRM\PowerShellHostClassifier\CS\powershell_scripts\demo_script):
process {
    $propertyBag = $_
    # get the file name
    $fileName = $propertyBag.Name
    if ($fileName -like "*.txt")
    {
        # set the property to the name without .txt
        $fileName.Substring(0, $fileName.Length - 4)
    }
}
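Since nothing shows up in the event log from inside the classifier, one thing I still need to rule out (an assumption on my part) is that the custom log and source must exist before Write-EventLog can target them; MM-log and MM-Powershell are the names from my working command above:
# Create the custom log and source once, from an elevated prompt.
if (-not [System.Diagnostics.EventLog]::SourceExists("MM-Powershell")) {
    New-EventLog -LogName MM-log -Source MM-Powershell
}
# Then the classifier-side write should land in that log.
Write-EventLog -LogName MM-log -Source MM-Powershell -EventId 0 -EntryType Information -Message "classifier reached process{}"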
Problems setting up Remote Web Access in Windows Storage Server 2008 R2
We just bought a WD Sentinel DX4000 storage server and went through the Remote Web Access wizard. It set up the router just fine, but when we got to "set up your domain name" we have been having headaches ever since. We chose the option to use our own domain name, which we have registered through GoDaddy. It confirmed that our provider was GoDaddy but told us we needed to update our service with GoDaddy, so we clicked the "Go to GoDaddy.com" button and it sent us to a page to purchase an SSL certificate. We did that, but then had issues with the certificate not being manageable in our GoDaddy account. So I went into the domain name with GoDaddy, purchased a certificate there, and got it set up on the domain name.
The problem is that when I go through the wizard again to set up the domain name, I still have to click the GoDaddy button to move to the next page, where I type my domain name, username, and password for GoDaddy. I set up the remote extension in the A records with GoDaddy as remote.mydomain.com and put in my GoDaddy username and password. After I click Next, it tries to connect to the domain name service provider but then gives me the following error messages over and over:
The domain name was not set up for your server. Wait a few minutes and run the wizard again.
An unexpected or unknown problem occurred. Please wait a few minutes, and then try again.
Well, I have tried this several times and am about to start pulling my hair out. Can anyone please tell me what I may be doing wrong?
Thanks for any help you may be able to provide.
Access Denied when accessing ipc$ but not admin$ of a Windows 2008 R2 Standard server
From a Windows 2008 R2 Server,
c:\> net use * \\<winserver2008>\ipc$
System error 66 has occurred.
The network resource type is not correct.
c:\> net use * \\<winserver2008>\admin$
Drive Z: is now connected to \\<winserver2008>\admin$.
However, running the same commands from a Windows 2003 server, I have no problem at all.
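One variant I have not tried yet (my assumption: IPC$ is an inter-process connection rather than a disk share, so asking net use to assign a drive letter with * may be what trips error 66):
c:\> net use \\<winserver2008>\ipc$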
Does anyone have any idea?
Data Deduplication - max memory allocation
Hi
I encountered this issue on a Windows Server 2012 R2 server:
https://support.microsoft.com/en-us/kb/2891882/
I have configured the value from the KB, but I still get:
Data Deduplication detected job type "Optimization" on volume "\\?\Volume{8ce5ac3c-46df-4a37-8a9e-c4b28f1354d1}\" uses too much memory. 3179 MB is assigned. 3870 MB is used.
Data Deduplication cancelled job type "Optimization" on volume "\\?\Volume{8ce5ac3c-46df-4a37-8a9e-c4b28f1354d1}\". It uses too much memory than the amount assigned to it.
for the volume with the most data.
Can the value be increased?
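For reference, a per-job alternative I'm considering (a sketch; the volume letter and percentage are placeholders): the dedup cmdlets take a -Memory parameter expressed as a percentage of system RAM.
# Run a one-off optimization job allowed to use up to 50% of RAM.
Start-DedupJob -Volume "D:" -Type Optimization -Memory 50
# Or raise the allowance on the built-in background schedule.
Set-DedupSchedule -Name "ThroughputOptimization" -Memory 50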
iSCSI Target Initiator IDs malfunction?
On Windows Server 2012 R2, I have set up an iSCSI target and have allowed the following list of initiators to connect:
IPAddress: 192.168.0.102 (iSCSI Initiator)
IPAddress: 192.168.0.109 (iSCSI Target)
No IQN has been listed, only IPAddress.
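Roughly how the assignment looks (a sketch; the target name is a placeholder):
# Allow connections only from the two listed addresses, no IQN entries.
Set-IscsiServerTarget -TargetName "Target1" -InitiatorIds @(
    "IPAddress:192.168.0.102",
    "IPAddress:192.168.0.109"
)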
On the initiator, I do a Quick Connect to 192.168.0.109 (the target), but while I get a response right away, I get nothing to connect to. The Quick Connect window says: "No Targets available for Login using Quick Connect."
If I telnet to 192.168.0.109 port 3260, I get through right away so I know the target is reachable.
Then I try to connect from the target itself (from 192.168.0.109 to 192.168.0.109) and I get a connection right away. Everything works.
So, why can't the iSCSI Initiator from 192.168.0.102 get a connection?
I tried adding the IQN of the initiator to the target and retried the connection, and now the initiator can connect. Something is weird here. Why is an IP address not enough to connect remotely when it is enough to connect locally?
Then I thought perhaps the listed IQN is allowed from any of the listed IP addresses, so I connected from a third PC (192.168.0.101) and it connected just fine (assuming the same IQN that 192.168.0.102 used).
What purpose does IPAddress have? It only seems to work locally.
One specific detail: there are 2 network cards in each PC, one for 192.168.0.x and one for 192.168.34.x. Perhaps this is the issue, so I specified 192.168.0.102 as the source IP in the Discover Portal setup for the initiator, but that changed nothing. Either that source IP isn't passed along correctly, or the target throws the source IP away and uses the target IP instead. Bug?
At any rate, specifying IPAddress has no value whatsoever.
It seems the only way to secure the iSCSI target is to use the firewall for port 3260.
Firewall - Trust - ISILON
Environment:
Domain A and Domain B have a 2 way trust
Domain A houses the ISILON Storage device with SMB Shares, permissioned with Domain A Local groups and Domain B Global Groups
Firewalls are rampant across the environment
Situation:
A user from Domain A can connect to the SMB shares on the ISILON with no problem.
The exact same computer, exact same share, exact same permissions - a Domain B account gets "Error 59: An unexpected network error occurred".
Domain User B is directly permissioned on the share, as well as being in a Domain A local group with permissions.
In addition to the user authentication issue, when looking at the security on the share, the Domain B groups show as SIDs, not display names.
Assumption:
It appears that the issue is with the authentication of users from Domain B. However, we have validated the trust on both sides, and the validation says it's fine.
Question:
What is the process flow, and which ports are used, for a resource in Domain A to authenticate a user from Domain B and to translate a SID into a display name?
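For reference, a quick probe I can run from a Domain A member toward a Domain B DC (a sketch; the DC name is a placeholder, and the port list is the usual AD set rather than an authoritative one: Kerberos 88/464, RPC endpoint mapper 135, LDAP 389, SMB 445, global catalog 3268):
$dc = "dc1.domainb.example"
88, 135, 389, 445, 464, 3268 | ForEach-Object {
    # TcpTestSucceeded shows whether the firewall lets each port through.
    Test-NetConnection -ComputerName $dc -Port $_ |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}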
Thanks,
T
Apply quota to existing folder: command when hitting soft quota does not run
Hello,
during a file server migration, a whole tree of user home shares was copied. After the initial copy, a quota template is applied to [Drive]:\userhome for the subfolders. As there are different quotas, I wanted to use the mechanism of running a command (dirquota) when a limit is reached to apply the next higher quota template. This is about home folders for 1000+ users, and no list of the actual quotas applied on the source file server is available or could be created (please don't ask why ;-) ).
I found that this works fine for new subfolders (i.e. [Drive]:\userhome\NewFolder): when the threshold is reached, the dirquota command runs and sets the defined higher quota template. BUT it does not work for existing folders (i.e. [Drive]:\userhome\ExistingFolder).
The question: does anybody know how to resolve this, or why it does not work for existing folders?
To illustrate:
2 quota templates:
- Default (name "Default", 100 MB, soft; when 85% is hit, run the command: dirquota quota modify /path:[Quota Path] /sourcetemplate:"Extended")
- Extended (name "Extended", 2000 MB, soft, no action (for testing))
then subfolders, each 200 MB in size
- one existing ([Drive]:\userhome\ExistingFolder)
- then applying quota template "Default" to [Drive]:\userhome with auto-apply to subfolders
- nothing happens to "ExistingFolder": FSRM shows only a "Percent Used" of 200%, with quota "Default" applied
- new subfolder "[Drive]:\userhome\NewFolder"
- fill Folder with 200 MB data
- FSRM for folder "NewFolder" shows quota "Extended" applied, percent used 10% -> the command to assign another quota to the folder worked for this new subfolder, but not for the existing folder
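In the meantime, the workaround I'm testing is to run the same dirquota command manually over the pre-existing subfolders (a blunt sketch that applies the higher template to every existing folder; the drive letter is a placeholder):
# Reapply the higher template to each existing home folder in one pass.
Get-ChildItem "D:\userhome" -Directory | ForEach-Object {
    dirquota quota modify /path:"$($_.FullName)" /sourcetemplate:"Extended"
}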
Does anyone have an idea?
Thank you, David
Storage Spaces & Disk pools
Hi,
Windows 2012 R2 Storage Spaces.
Is there a maximum size for a disk pool? I have a client that wants a 25 TB disk. Should we make several smaller disks rather than one large disk?
Regards
Richard
chkdsk question
I am designing a new storage system which will be in the 80 TB range. We've traditionally used RAID cards and NTFS, but I am now looking at ReFS and Storage Spaces. One of the big benefits of ReFS is not having to run chkdsk. This new volume, though, will store only larger files, all in the 2 GB range. Does the fact that this volume will not be full of a bunch of small files mean chkdsk will run "fast", at least in comparison to a volume full of smaller files? I'd love to stick with NTFS because it's been around so long, and I want to figure out whether our workload might make it more feasible. Thanks!
Users Not Syncing to Document Server
We currently have a document storage server, and each student's documents are mapped to it through GPO. Occasionally we run into an issue that has stumped us.
When a student logs onto a Windows 7 Enterprise PC, they have no documents or path to the storage server. When I log on to another PC with the student's profile, the student has their documents. We have reset and cleared all the profiles on the first PC, but the student still has no documents. However, another student can log on to the first PC and have all their documents.
Confusing, but we have yet to find a reason why.
Any assistance is greatly appreciated.
Thank you.
Cannot access shared drive via mapped drive "Drive Name:\ refers to a location that is unavailable"
Users are unable to save files into shares on our file server via Windows Explorer.
A GPO hands out the network share to all users. When a user browses to a location within it via Windows Explorer, right-clicks, and tries to add a new file, the error is:
"W:\ refers to a location that is unavailable. It could be on a hard drive on this computer, or on a network".
Workaround: browsing through the UNC path allows the user to save files normally. Opening Word, clicking File > Save As, and then browsing to the same location also allows the users to save normally.
DFS is not in use.
I've done plenty of reading on this, and the following don't help:
- turning off UAC on the client
- recreating the GPO (including with the IP address in the share)
- disabling antivirus
Another suggestion was to disable SMBv2 and v3, but we can't do this as it impacts other systems.
We recently rebuilt our file server (same data drive, but we rebuilt the OS and attached it). The file server name is new, and the old file server name references the new name via a CNAME, which pings back as expected.
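One thing I haven't ruled out (an assumption that the CNAME alias is involved): a file server reached through a DNS alias commonly needs DisableStrictNameChecking set on the server, roughly like this:
# On the file server, allow SMB connections addressed to the alias name,
# then restart the Server service (or reboot) for it to take effect.
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" -Name DisableStrictNameChecking -Type DWord -Value 1
Restart-Service LanmanServer -Force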
Any assistance welcome.
Single mapped drive for all shared data
Hello everyone,
TL;DR version: I need to configure a single UNC share to host data across multiple file servers and utilize access-based enumeration. A DFS namespace isn't going to fit the bill because of the way ABE works in DFS. Does anyone know how this might be accomplished?
Our idea: we would like to map a single drive to a UNC path. This UNC path is a single view for all employees in the company. We want to use access-based enumeration so that users will only see the folders they have access to. As an example, the UNC share might have the following folders (we'll actually have more, but I used four folders for simplicity):
- HR
- IT
- Sales
- Shared
When creating these folders in a share and enabling access-based enumeration, it works just fine. As an IT worker, I only see the "IT" folder and the "Shared" folder; "HR" and "Sales" are hidden from view. However, we have a lot of data, and for backup and restore purposes our backup folks don't want to put all of it on a single logical disk. Because of this, we wanted to use DFS to create a namespace and then use DFS folders to direct users to the appropriate server and share (we'll have the data on multiple file servers and on multiple disks within those servers). This would allow us to provide a single view for all employees and map everyone to the same location.
Our problem: it turns out that DFS will not hide the DFS folders from view unless I configure the "Set explicit view permissions on the DFS folder" setting in the DFS Management console and explicitly deny a group/user read access.
We only want users to see the data that they have access to (i.e. access-based enumeration), but that isn't going to work for us because of the way we plan to do permissions (we aren't using Dynamic Access Control yet).
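For context, what we tried on the namespace side looks roughly like this with the 2012 DFSN cmdlets (a sketch; the namespace and group names are placeholders): ABE can be switched on at the root, but folder visibility is then driven by explicit per-folder grants rather than by the underlying NTFS ACLs.
# Enable ABE on the namespace root.
Set-DfsnRoot -Path "\\contoso.com\Public" -EnableAccessBasedEnumeration $true
# Each DFS folder's visibility must then be granted explicitly per group.
Grant-DfsnAccess -Path "\\contoso.com\Public\IT" -AccountName "CONTOSO\IT-Staff"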
I've looked into Server 2012 R2 Storage Spaces a bit, but we already have a SAN and this doesn't seem like the solution we need. That said, our entire method might be "the old way" of doing things. I'm hoping to get some feedback and suggestions about how others have accomplished something similar. Cloud-based storage is not an option for us at this time. Any help is greatly appreciated!
-Matt
Shared folder access
Hello everyone.
I work for a new media firm. We have a Windows Server 2008 R2 server and 32 Windows 7 clients on our network, and we run a domain infrastructure. Recently we decided that everyone on the network should work directly from the server. I have set up the shares and permissions on the folders, and I have created groups and assigned them to the folders. But my problem is that some users in a group on my network can access the folders and see the files and documents, while other users in the same group can open the folder but cannot see the files and documents. Please, I need help on this. Thanks.
2-way mirror via Storage Spaces using 2x JBODs
2012 R2 Storage Spaces - enclosure redundancy
Hi,
We are currently testing redundancy with Storage Spaces and have run into a big problem.
Here is a description of our setup (I'll try to be as precise as possible):
Two HP DL360 Gen8 servers with 2x 10 GbE Ethernet cards and two LSI HBAs with 4 external SAS ports, each connected to 3 DataON JBOD enclosures via dual SAS paths (2 SAS cables per server, going to 2 separate controllers on each enclosure).
The two 10 GbE Ethernet cards are set up on separate networks (10.0.0.0/16 and 192.168.0.0/16).
The 10.0.0.0/16 network is part of the Windows domain and hosts the DNS servers.
The 192.168.0.0/16 network is independent and only accessible by the above servers (no DNS defined, no default gateway).
I installed failover clustering and built a new cluster with those two servers, making sure to untick "add available storage" in the wizard.
The cluster built successfully, so I proceeded to build the storage pool.
On one of those servers, I created a storage pool using all the disks from all 3 DataON enclosures (the disks comprise 32x SAS HDD and 12x SAS SSD, all dual-port).
On top of this storage pool, I created two virtual hard disks:
One small 1 GB virtual hard disk for the quorum (non-tiered, enclosure awareness enabled, mirrored)
One large 15 TB virtual hard disk for the data (tiered storage, enclosure awareness, write-back cache, mirrored)
As a reference, here are the PowerShell commands I used to create the virtual disks and the storage pool:
$pooldisks = Get-PhysicalDisk | ? { $_.CanPool -eq $true }
New-StoragePool -StorageSubSystemFriendlyName *Spaces* -FriendlyName SP1 -PhysicalDisks $pooldisks
$tier_ssd = New-StorageTier -StoragePoolFriendlyName SP1 -FriendlyName SSD_TIER -MediaType SSD
$tier_hdd = New-StorageTier -StoragePoolFriendlyName SP1 -FriendlyName HDD_TIER -MediaType HDD
New-VirtualDisk -StoragePoolFriendlyName 'SP1' -FriendlyName 'VD1' -StorageTiers @($tier_ssd, $tier_hdd) -StorageTierSizes @(2212GB, 13108GB) -ResiliencySettingName Mirror -NumberOfColumns 4 -WriteCacheSize 10GB -IsEnclosureAware $true
New-VirtualDisk -StoragePoolFriendlyName 'SP1' -FriendlyName 'Quorum' -Size 1GB -ResiliencySettingName Mirror -IsEnclosureAware $true
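As a sanity check on enclosure awareness, I can list the pool's disks grouped by enclosure (a sketch; EnclosureNumber and SlotNumber are standard physical-disk properties in 2012 R2):
# Confirm that disks from all three enclosures back the pool.
Get-StoragePool -FriendlyName SP1 | Get-PhysicalDisk |
    Sort-Object EnclosureNumber |
    Format-Table FriendlyName, EnclosureNumber, SlotNumber, HealthStatus -AutoSize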
So far so good. I then added the storage pool to the cluster using Failover Cluster Manager, then added the two disks created above (having first created a volume on each).
I then added the bigger disk to the Cluster Shared Volumes.
I added the second disk (the smaller one) as the quorum for the cluster.
In Failover Cluster Manager, I added the Scale-Out File Server role (using the name 999SAN01P001 as the distributed server name) and created a highly available share on the Cluster Shared Volume (now appearing under C:\ClusterStorage\Volume1\Shares\Hyper-V).
I can now access the share via \\999SAN01P001\Hyper-V without any problem and even run Virtual Machines on it.
Here is the problem:
If I eject a couple of disks from one of the enclosures, no problem: everything stays available.
If I however simulate an enclosure failure (by pulling the power), the Cluster Shared Volume becomes inaccessible!
The “Cluster Virtual Disk” status in the failover cluster manager shows as “NO ACCESS”.
The virtual disk in Server Manager (under File and Storage Services), although it shows as "Degraded", is still accessible (not offline).
What am I doing wrong here?
With three enclosures, the system should be able to sustain the failure of a complete enclosure (and it does, as my virtual disks in Server Manager show online but degraded), but my cluster cannot access it anymore (the Cluster Shared Volume shows "no access").
Thank you,
Stephane
VPN Access to Shared Folders on Windows 2012 R2 Server - Remote Procedure Call Failure
Hi There,
Hopefully someone can help me with my frustrating issue.
I have users running Windows 8.1 Professional, fully updated.
They connect to the network via Cisco AnyConnect version 3.1 (via a Cisco ASA 5510).
The file server is Windows 2012 R2 Standard; it is not DFS, just a single server.
Sometimes the users cannot access the shared folders on the server: they go to access the server via Windows Explorer and it eventually times out with the error message "Remote Procedure Call Failure".
There are 1 or 2 users (me, for one) who never have an issue accessing the files (also Windows 8.1).
The VPN session seems to be ok, since they can access other services, including an old Windows 2008 R2 File Server, without issues.
None of the users have issues when within the office, only ever remotely via VPN.
I have tried a couple of things to get it working, but none seem to help: I disabled all offloading on the server NICs, and enabled and disabled TCP auto-tuning.
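One avenue I haven't tried yet (an assumption on my part that VPN MTU/fragmentation is involved, since it only happens over AnyConnect): probing with don't-fragment pings from an affected client (the server name and payload sizes are placeholders):
ping fileserver01 -f -l 1372    # -f: don't fragment, -l: payload size in bytes
ping fileserver01 -f -l 1272    # step the size down until replies succeed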
I have no idea where to go now to fix this issue.
If you need more information then please ask.
Thanks,
-Tim