Channel: File Services and Storage forum
Viewing all 4075 articles

Cannot install "SMB 1.0/CIFS File Sharing Support" role on Server 2019 Datacenter


Hi Guys

I have a Server 2019 Datacenter VM on Azure, and I cannot seem to install SMB 1.0, which I need in order to connect to my on-prem AD.

I downloaded the 2019 Datacenter image, mounted it on this server, and added the source path when installing the role, but that didn't fix the issue.

I also tried to install it using PowerShell with the commands below:

Get-WindowsOptionalFeature -Online -FeatureName "SMB1Protocol"

FeatureName      : SMB1Protocol
DisplayName      : SMB 1.0/CIFS File Sharing Support
Description      : Support for the SMB 1.0/CIFS file sharing protocol, and the Computer Browser protocol.
RestartRequired  : Possible
State            : DisabledWithPayloadRemoved
CustomProperties :
                   ServerComponent\Description : Support for the SMB 1.0/CIFS file sharing protocol, and the Computer Browser protocol.
                   ServerComponent\DisplayName : SMB 1.0/CIFS File Sharing Support
                   ServerComponent\Id : 487
                   ServerComponent\Type : Feature
                   ServerComponent\UniqueName : FS-SMB1
                   ServerComponent\Deploys\Update\Name : SMB1Protocol

When I try to enable it, I get the following error saying the source files could not be found.

 C:\Users\myserver> Enable-WindowsOptionalFeature -Online -FeatureName smb1protocol -source e:\sources\install.wim:4
Enable-WindowsOptionalFeature : The source files could not be found.
Use the "Source" option to specify the location of the files that are required to restore the feature. For more information on specifying
a source location, see http://go.microsoft.com/fwlink/?LinkId=243077.
At line:1 char:1
+ Enable-WindowsOptionalFeature -Online -FeatureName smb1protocol -sour ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Enable-WindowsOptionalFeature], COMException
    + FullyQualifiedErrorId : Microsoft.Dism.Commands.EnableWindowsOptionalFeatureCommand

I also tried Install-WindowsFeature, and that didn't work either.

 C:\Users\myserver> install-windowsfeature SMB1Protocol -Source wim:E:\sources\install.wim:4
install-windowsfeature : ArgumentNotValid: The role, role service, or feature name is not valid: 'SMB1Protocol'. The name was not found.
At line:1 char:1
+ install-windowsfeature SMB1Protocol -Source wim:E:\sources\install.wi ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (SMB1Protocol:String) [Install-WindowsFeature], Exception
    + FullyQualifiedErrorId : NameDoesNotExist,Microsoft.Windows.ServerManager.Commands.AddWindowsFeatureCommand

Success Restart Needed Exit Code      Feature Result
------- -------------- ---------      --------------
False   No             InvalidArgs    {}

I am really stuck. I would appreciate it if someone could guide me.
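For reference, the pattern I would expect to work against a DisabledWithPayloadRemoved feature is to mount the matching install.wim and point -Source at its component store with -LimitAccess, so DISM does not fall back to Windows Update. This is only a sketch: the index (4 here) and the assumption that the WIM's build exactly matches the patched Azure image both need verifying.

```powershell
# Check which index in the WIM corresponds to Datacenter (index 4 is an assumption)
Get-WindowsImage -ImagePath E:\sources\install.wim

# Mount the image read-only and feed its component store to DISM
New-Item -ItemType Directory -Path C:\mount -Force | Out-Null
Mount-WindowsImage -ImagePath E:\sources\install.wim -Index 4 -Path C:\mount -ReadOnly
Enable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol `
    -Source C:\mount\Windows\WinSxS -LimitAccess
Dismount-WindowsImage -Path C:\mount -Discard
```

If the WIM's build does not match the running OS (Azure images ship patched), "the source files could not be found" is exactly the error this produces; media matching the installed cumulative update level would be needed.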


Storage Replica on Windows 2019

$
0
0

I have 4 Windows 2019 virtual servers running in VMware, all in a single domain (2 in one site and 2 in another geographical site). I want to build a stretch cluster using Storage Replica. All servers have disks of the same size and configuration. Can I just add the 4 servers in Failover Cluster Manager and then add the File Server role to the cluster to create a shared folder? Or do I need to create a cluster at each site and then replicate between the clusters? I'm a little confused about how this might work. I'd also be willing to have 2 servers at one site and just a single server at the second site if that makes it easier.
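For context, my understanding is that a stretch cluster is a single cluster containing all four nodes, with Storage Replica pairing the volumes between the sites. Roughly this shape, where every name, IP, and drive letter below is made up:

```powershell
# One cluster across both sites (not one cluster per site)
Test-Cluster -Node SR-SRV01,SR-SRV02,SR-SRV03,SR-SRV04
New-Cluster -Name SR-CLUS -Node SR-SRV01,SR-SRV02,SR-SRV03,SR-SRV04 -StaticAddress 10.0.0.50

# After adding the disks and the File Server role, replicate site A -> site B
New-SRPartnership -SourceComputerName SR-SRV01 -SourceRGName RG01 `
    -SourceVolumeName D: -SourceLogVolumeName L: `
    -DestinationComputerName SR-SRV03 -DestinationRGName RG02 `
    -DestinationVolumeName D: -DestinationLogVolumeName L:
```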

Thanks in advance.


Re-using former Storage Pool drives in a PC


First off, sorry if this is in the wrong forum. This is new for me so forgive me if the technical terms aren't quite right.

My best friend passed away last year and I'm helping his family sort through his vast collection of technology. One item in particular was a Windows Server (sorry, not sure what version) that was using multiple SATA drives in at least 2 storage pools. All the drives have been removed with a view to wiping them for re-use. Most I've been able to connect to my Windows 10 PC, delete the storage pool from them, and then wipe. However, I still have 5 drives that aren't detected - they either don't appear in Disk Management or trigger an error in Storage Spaces.

Interestingly, I can see the drives in PowerShell using the Get-PhysicalDisk | FL command - example shown below. Is there any way that I can get Windows to detect the drives? Using the original server chassis isn't an option unfortunately so I'm doing each disk individually via an external dock.
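A sketch of the approach that might apply here: identify the disks PowerShell can see, then clear the Storage Spaces metadata that keeps them claimed by the old pool. The friendly name below is a placeholder, and the reset is destructive to any pool data on the disk.

```powershell
# List disks with their pool-related state
Get-PhysicalDisk | Format-List FriendlyName, UniqueId, CanPool, OperationalStatus

# Clear Storage Spaces metadata from one stubborn disk (placeholder name)
Reset-PhysicalDisk -FriendlyName "ST4000DM000-XYZ"
```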

Thanks in advance for any advice.

Cheers,

Darryl

Storage Spaces - unmark hard drive as "retired" after SATA controller failure


Hello,

I am having a problem with Storage Spaces - it won't start recovering the mirror set.
The mirror set got degraded because one of the SATA ports on the controller went bad (I/O errors with any hard drive on that port).
After the hard drive is reconnected to another port, Storage Spaces still shows the drive as 'Retired'.

I can unmark the disk by running Set-PhysicalDisk -UniqueID id -Usage AutoSelect

Then, I can access a Simple (non-mirrored) thin-provisioned virtual disk located on that physical disk, and there are no I/O errors in Event Log, or any other errors.
But after I attempt to initiate a mirror set repair (Repair-VirtualDisk -FriendlyName mirror), the physical disk becomes 'Retired' again, although no I/O errors have occurred.

It seems that somewhere in the metadata there is incorrect information about the actual disk health state, and I need a way to reset it.
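A sketch of the reset sequence that might apply, assuming Reset-PhysicalDisk clears the persisted health state that Set-PhysicalDisk alone does not. The unique id is a placeholder.

```powershell
# Clear persisted health metadata, restore usage, then retry the repair
Reset-PhysicalDisk -UniqueId "<disk-unique-id>"       # placeholder id
Set-PhysicalDisk   -UniqueId "<disk-unique-id>" -Usage AutoSelect
Repair-VirtualDisk -FriendlyName mirror
Get-StorageJob                                        # watch the repair progress
```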

Thank you.

DFS Replication Issue for new Folders


Greetings,

I work for a Microsoft Partner and I am experiencing an issue with one of our customers. DFS Replication had an issue with its Jet database, which has been resolved.

So replication is working fine for the existing DFS replicated folders. But when I create a new folder and configure replication for it, replication does not happen.

Note that DFS has two servers, one in the main site and a second in the DR site. I tried restarting the DFS server in the DR site, but the issue remains.
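For diagnosis, the usual first steps are forcing an AD poll on the DR member (a new replicated folder only starts replicating once the member has picked up the AD change) and then checking the backlog. The group, folder, and server names below are placeholders.

```powershell
# Make the DR member re-read its replication config from AD immediately
dfsrdiag pollad /member:DR-SRV01

# Check replication state and the backlog for the new folder
Get-DfsrState -ComputerName DR-SRV01
Get-DfsrBacklog -GroupName "NewGroup" -FolderName "NewFolder" `
    -SourceComputerName MAIN-SRV01 -DestinationComputerName DR-SRV01
```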

Any thoughts?


DFS Replication Staging Quota


Greetings,

I am working on a DFS implementation project. I have configured the DFS replication groups at the drive level. I noticed that by default there is a 4 GB staging quota for each replicated folder.

Does this mean that if the free space on the replicated drive drops below 4 GB, replication will stop?
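For reference, the staging quota can be inspected and raised per replicated folder; it governs a staging area under the replicated folder, not a reservation against the drive's free space. Group, folder, and server names below are placeholders.

```powershell
# Show the current staging quota for each membership in a group
Get-DfsrMembership -GroupName "RG-DriveD" -ComputerName FS01 |
    Select-Object FolderName, StagingPath, StagingPathQuotaInMB

# Raise it, e.g. to 16 GB, for one replicated folder
Set-DfsrMembership -GroupName "RG-DriveD" -FolderName "DriveD" `
    -ComputerName FS01 -StagingPathQuotaInMB 16384
```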

VSS Configuration in a Windows 2019 Stretch Cluster


I currently have a stretch cluster running on Windows 2019 servers, similar to the image below. It's going to be used as a file server. I'd like to configure VSS on these servers so that file versions are available for restore. I have VSS set up on the SR-SRV01 server in the picture below. Do I need to add VSS disks to the other servers and then fail the cluster over to the other nodes so I can configure VSS on each node? I'm trying to avoid a situation where the cluster fails over to a different node and then the VSS service doesn't run. Thanks in advance.
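For anyone comparing nodes: shadow copies are configured per volume on whichever node currently owns the disk, so a quick way to compare is to check what each node reports locally. A sketch, assuming D: is the clustered data volume.

```powershell
# On the node that currently owns the disk: where shadow storage lives
vssadmin list shadowstorage

# And what shadow copies exist for the data volume
vssadmin list shadows /for=D:
```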


CNAME to DFS Namespace Server


We have a group that made the poor decision to code mappings directly to our namespace server in some of their applications.

My group is in the process of retiring the old environment and we would like to decommission this server.

New namespace server has been setup as the replacement but we would like to create a CNAME if possible to keep the old server name and have it point to the new namespace server.

The CNAME has been created but when we browse the CNAME it does not list the folders.

namespace svr = newsvr01.domain.com (WIN2016 SVR)

old namespace svr = oldsvr01.domain.com

CNAME has been created for oldsvr01

If I browse \\newsvr01.domain.com I will see all the namespace folders

If I browse \\oldsvr01.domain.com I get "this folder is empty".
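A sketch of the two settings usually involved when a file server should answer to an alias; whether this is supported for a DFS namespace server specifically is a separate question, and the names below are taken from the post.

```powershell
# Let the SMB server accept connections under names other than its own
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" `
    -Name DisableStrictNameChecking -Value 1 -Type DWord   # restart required

# Register SPNs for the alias so Kerberos to \\oldsvr01 can still authenticate
setspn -S host/oldsvr01.domain.com newsvr01
setspn -S host/oldsvr01 newsvr01
```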



Storage Spaces Direct and SOFS file deletion problem


Hello All,

SETUP:

In our organisation we installed a 2-node, 2-way-mirror, all-NVMe S2D cluster. On top of S2D we have the SOFS role, and we have a second "compute" cluster which saves all its VHDs on that share. VHDs are stored on Volume01, the general share is on Volume02, and backups on Volume03 (all with a dedicated SOFS role).

The problem we face was not noticed until we migrated all of our VMs from the old cluster to the new one, so now it is even worse than before.

PROBLEM:

When we copy files to any of our shares, "some" files cannot be deleted. We delete the files and they lose all security information (no owner or any other security), but they are still visible on storage and we cannot do anything with them. At first we thought it was a security-rights issue, but then we tried taking one of the cluster volumes offline and bringing it back online, and the problematic files disappeared.

The problem still persists, and we cannot take our volume offline and online just to delete files! The files that won't delete are in most cases .iso and .exe, but there have been some .pdf files and so on. Interestingly, we were doing some tests with user folder redirection (Documents, Desktop, Pictures), and if a user puts one of these undeletable files inside a redirected folder, the user can delete it without any problem.

But if you access the same file from outside the user's machine and try to delete it - \\sofs01\volume02\share\folder_redirection\user\desktop\file.zip (with or without giving myself ownership, same result) - then after I click delete, the file disappears for a second or two and then reappears, but it is unusable. I can't do anything with it. The user can't see it in their shared folder and cannot copy a new file with the same name or extension into that folder either, while I can still see the file on the \\sofs01\volume02\share\folder_redirection\user\desktop\file.zip path from outside the user's PC. So we take the volume offline and online, and the file is gone.

Later we figured out that if we rename the folder (after the deletion), the files that wouldn't disappear do vanish, like they should have in the first place, without shutting down the entire volume. But once that folder is full of files that are in use, we won't be able to use this trick anymore.

Some details of S2D:

 Get-ClusterS2D | FL

CacheMetadataReserveBytes : 34359738368
CacheModeHDD              : ReadWrite
CacheModeSSD              : WriteOnly
CachePageSizeKBytes       : 16
CacheState                : Disabled
Name                      : HCLSS2D01
ScmUse                    : Cache
State                     : Enabled

------------------------------

 Get-VirtualDisk

FriendlyName              ResiliencySettingName FaultDomainRedundancy OperationalStatus HealthStatus  Size FootprintOnPool StorageEfficiency
------------              --------------------- --------------------- ----------------- ------------  ---- --------------- -----------------
Volume02                  Mirror                1                     OK               Healthy       4 TB            8 TB            49,99%
Volume03                  Mirror                1                     OK               Healthy       2 TB            4 TB            49,98%
Volume01                  Mirror                1                     OK               Healthy      10 TB           20 TB            50,00%
ClusterPerformanceHistory Mirror                1                     OK                Healthy      16 GB          34 GB            47,06%

--------------------------------------

Get-StoragePool -FriendlyName "S2D on HCLSS2D01" | FL


ObjectId                          : {1}\\HCLSS2D01\root/Microsoft/Windows/Storage/Providers_v2\SPACES_StoragePool.ObjectId="{2c461cff-2d46-4283-8baf-f08f9ebe9d51}:S
                                    P:{e2e70bb7-158f-4273-88a6-528eef21e90e}"
PassThroughClass                  :
PassThroughIds                    :
PassThroughNamespace              :
PassThroughServer                 :
UniqueId                          : {e2e70bb7-158f-4273-88a6-528eef21e90e}
AllocatedSize                     : 35235911696384
ClearOnDeallocate                 : False
EnclosureAwareDefault             : False
FaultDomainAwarenessDefault       : StorageScaleUnit
FriendlyName                      : S2D on HCLSS2D01
HealthStatus                      : Healthy
IsClustered                       : True
IsPowerProtected                  : True
IsPrimordial                      : False
IsReadOnly                        : False
LogicalSectorSize                 : 4096
MediaTypeDefault                  : Unspecified
Name                              :
OperationalStatus                 : OK
OtherOperationalStatusDescription :
OtherUsageDescription             : Reserved for S2D
PhysicalSectorSize                : 4096
ProvisioningTypeDefault           : Fixed
ReadOnlyReason                    : None
RepairPolicy                      : Parallel
ResiliencySettingNameDefault      : Mirror
RetireMissingPhysicalDisks        : Never
Size                              : 102385846321152
SupportedProvisioningTypes        : Fixed
SupportsDeduplication             : True
ThinProvisioningAlertThresholds   : {70}
Usage                             : Other
Version                           : Windows Server 2019
WriteCacheSizeDefault             : Auto
WriteCacheSizeMax                 : 18446744073709551614
WriteCacheSizeMin                 : 0
PSComputerName                    :

----------------

All disks are healthy and their operational status is OK.
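One check worth adding to the list: whether a cluster node still holds an SMB handle open on a phantom file. A sketch, with the file name as a placeholder.

```powershell
# Run on the SOFS cluster node: look for lingering open handles on the file
Get-SmbOpenFile | Where-Object Path -Like "*file.zip*"

# If one shows up, force it closed and retry the delete
Get-SmbOpenFile | Where-Object Path -Like "*file.zip*" | Close-SmbOpenFile -Force
```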

Please help, we are out of ideas, and this problem only happens on the brand-new, highly available S2D cluster.

Thanks all helpers  in advance :)

Audio book image building fails on Windows Server 2019 Core (hypervisor and VM)


Hi ,

We are migrating from a 2008 R2 server to 2019 Core Standard with the Hyper-V role, with Server 2019 VMs with a GUI.

We burn audio books with files shared from the 2008 R2 server. Now I am trying to do the same from a virtualized 2019 server.

The PCs (Windows 7) that handle the CD burning process can see the shares just fine, but the burning process does not work. The files are .mp3 files, plus some files that create the DAISY structure of the CD. The filenames on the CDs (which are in DAISY format) come out wrong and do not play on DAISY players. I understand that the burning process is our responsibility.

But the process runs fine when the data resides on a C:\ drive.

- Data shared from the C:\ drive of the hypervisor = the process runs fine (premastering of the image to burn)

- Data shared from another volume of the hypervisor = we get an error in a log file of the CD burning software, "Overflowed directory" or something similar

- Data shared from a volume of the VM (.vhdx file) = an error

- Data shared from the C:\ drive of the VM = all is fine.

All volumes are NTFS formatted, and so are the clients. We also have XP machines, and they run fine after I activated SMB 1 on the servers - but I believe they copy the data to an external FAT32 drive and do the processing from there.

What is even stranger: if I format the VM volumes as FAT32 the process works, but the premastering is extremely slow (very slow on IDE, half as slow on SCSI, on a gen 1 VM). When going back to NTFS or ReFS, the shares are fine, but the burning goes wrong.

I'm not asking for a solution for the CD burning process.

But might there be a difference between sharing files from an SMB share on a C:\ drive and from another drive? Or could this be OS-related? The permissions on all drives seem to be the same. I also wonder about ISO levels. I'm not an IT pro, but I need to get this fixed.

So, in conclusion: all is fine when sharing files from C:\ drives, virtualized or not. Any other drives do not work. The file sharing itself is always fine, but the burning process is not.

Thank you for reading this vague issue.

Kind regards,

Hendrik

 

Which command is used to open Software Center?

Which command is used to open Software Center from the command line?

iscsi initiator target not reconnecting on reboot


Windows Server 2008 Standard on a VM (vSphere 4.1) - we are using the iSCSI initiator to target storage on another bare-metal Linux server. The reconnect fails on reboot, but we can manually reconnect by going into the iSCSI initiator, Targets tab, clicking Log on..., clicking Advanced, checking the box for CHAP logon information, and then clicking OK. Unfortunately, we can't seem to get it to save this logon info, nor can we check the box for "Automatically restore this connection when the computer starts". If we try checking that box and clicking OK, we get the error "The target has already been logged in via an iSCSI session".

What are we doing wrong here?

Windows 7 client can't find Server 2019


Strange issue - I'm having trouble finding the solution, or even what the problem is.

I have a couple of Windows 7 clients that, when I try to map or UNC path to any Windows Server 2019 file share, error with "Windows can't find '\\<server name>'. Please check the spelling and try again."

The Win7 client can ping the server though.

I've updated the network drivers

The 2019 share is OK. Other Win7 clients and Win10 clients can get to the share.

It makes no difference which user - admin or general user.

No errors or any other information in the event logs, client or server.

Same client can connect to share on Server 2016, 2012 and 2008 without issue.
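One server-side check that may help narrow this down: confirming which SMB dialects the 2019 box offers, since Server 2019 ships without SMB1 but Windows 7 should negotiate SMB2 anyway.

```powershell
# On the Server 2019 file server: which protocol versions are enabled
Get-SmbServerConfiguration | Select-Object EnableSMB1Protocol, EnableSMB2Protocol

# While a working client is connected: which dialect it actually negotiated
Get-SmbSession | Select-Object ClientComputerName, ClientUserName, Dialect
```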

Thanks in advance

RS

How do I set permissions so everyone can read, but only the creator can edit and delete?


Dear All,

How do I set permissions so that everyone can read a file, but only its creator can edit and delete it?
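The classic pattern for this uses the CREATOR OWNER principal: everyone gets read plus the right to create files, and CREATOR OWNER gets an inherit-only Modify, which each user inherits on what they themselves create. A sketch, with a placeholder path:

```powershell
# Reset inheritance and build the ACL explicitly on the shared folder
icacls D:\Share /inheritance:r `
    /grant "Administrators:(OI)(CI)F" `
    /grant "Authenticated Users:(OI)(CI)RX" `
    /grant "Authenticated Users:(CI)(WD,AD)" `
    /grant "CREATOR OWNER:(OI)(CI)(IO)M"
```

The (WD,AD) grant on the container is what lets users create files and subfolders at all; the inherit-only (IO) Modify then attaches to each new object with its creator as the owner, so only that creator (and admins) can edit or delete it.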

Thank you

Mira


Folder/File Ownership on Subcontainers and objects


Hello all! Currently on our NetApp system we have NTFS permissions set up in a huge mess. We are moving to a security-group-based permission method, but need to repair the permissions first. My issue is with taking ownership of the folders in order to apply the permissions. So let's say we have a main folder named Folder A. I can begin a take-ownership of this folder and all subcontainers, however it does not seem to do this properly. With the possibility of hundreds of subfolders nested inside subfolders, it seems like Windows builds a collection of what it can see, takes ownership of those folders, and then stops. I need it to then take ownership of the subfolders within the folders it just took ownership of.

I have run the take-ownership numerous times and get a bit further each time, but it has already been 3 days of running these permission changes. I have tried PowerShell scripts, the takeown command, and anything else I have found that looks promising, but nothing seems to accomplish the task without being run hundreds of times. With the number of files and folders in there, I would be doing this for weeks on a single folder.
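For comparison, this is the combination I would expect to recurse fully in one pass (run from an elevated prompt; the path and group are placeholders) - though on a NetApp filer the storage side can impose limits of its own:

```powershell
# Take ownership recursively, answering Yes to every prompt
takeown /F "D:\FolderA" /R /D Y

# Then hand ownership to a group and push it down the whole tree,
# continuing past individual errors (/C) and suppressing per-file output (/Q)
icacls "D:\FolderA" /setowner "DOMAIN\FileAdmins" /T /C /Q
```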

Has anyone ever worked on a solution for this?  Any ideas which would help?


FreeSpace after DFS implementation


Greetings,

Currently I am working on a DFS implementation for one of our customers. I created a replication group for each drive (around 5 drives).

The customer noticed that free space decreased after the DFS implementation on the DFS server. We installed the TreeSize software to check the folder size tree, and found around 15 GB utilized on a 200 GB partition, with no clear source for this utilization. Another partition shows the same - around 20 GB utilized, but it's not clear from where.

I wanted to check the size of the "DfsrPrivate" folder on each partition, but I got access denied.

How do I find out how much space is used by DFS? Note that for each replication group I keep the default 4 GB staging quota unchanged.
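The access-denied is usually just NTFS permissions on the hidden folder; from an elevated prompt its size can be measured directly. A sketch with a placeholder path (the staging/conflict area normally lives in a hidden DfsrPrivate folder at each replicated folder root):

```powershell
# Sum the size of the hidden DFSR staging/conflict area (elevated prompt)
Get-ChildItem "D:\DfsrPrivate" -Recurse -Force -ErrorAction SilentlyContinue |
    Measure-Object -Property Length -Sum |
    Select-Object @{ n = "SizeGB"; e = { [math]::Round($_.Sum / 1GB, 2) } }
```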

Regards,

 

Storage Spaces with dual parity vs. RAID - what's the benefit of Storage Spaces?


I'm currently working on building a server for archive purposes and have the option of going either hardware-based RAID or Storage Spaces parity.

Currently I have 14 x 10 TB SAS disks in a cabinet that can hold up to 36 disks in total, so we have the possibility of expanding later if needed.

However, I've been trying to wrap my head around the way Storage Spaces with dual parity utilizes the disks compared to traditional RAID, and I'm really struggling to see any benefit of using Storage Spaces in my situation, so I hope someone can enlighten me.

If I create a storage pool with 14 disks and then make a virtual disk across all 14, setting NumberOfColumns=14, I get a virtual disk of 110 TB. So from a traditional RAID perspective, that would, in terms of available storage, equal a RAID 6 with a hot spare.

However, in contrast to a traditional RAID 6, where I can expand with 1 more disk at a time, here I must add 14 disks in order to expand the virtual disk.

If I instead use NumberOfColumns=7 to reduce the number of disks needed to expand the virtual disk later on, I can only create a virtual disk of 100 TB, so that costs me 10 TB extra. Again from a RAID perspective, that would look like a RAID 60 with 2 x 7 disks in RAID 6 combined in a RAID 0.

However, in a real RAID 60 I'd actually be able to have up to 4 disks fail without losing data (assuming I lose 2 disks from each RAID 6), but that doesn't seem to be the case with Storage Spaces: the virtual disk seems to go offline as soon as I remove 3 random disks, leading me to believe that the data is striped across 7 "random" disks, and that they're not isolated into 2 groups of 7 disks where each stripe is written to one of the 2 groups, like you'd see in a traditional RAID 60.

So I don't have the redundancy of a RAID 60, but I "lose" the same amount of space as if I were running RAID 6. And I still have to expand the virtual disk in groups of 7 disks.

Am I missing something in this equation?

And what’s then the benefit of using Storage spaces with dual Parity compared to a hardware controller with e.g. RAID6 support (which my server also has)?
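For concreteness, this is how the column count enters the picture when the virtual disk is created - a sketch with made-up names; -PhysicalDiskRedundancy 2 is what makes it dual parity (any two simultaneous failures survivable, regardless of which disks they hit):

```powershell
# Dual-parity virtual disk with an explicit 7-column stripe
New-VirtualDisk -StoragePoolFriendlyName "ArchivePool" -FriendlyName "Archive" `
    -ResiliencySettingName Parity -PhysicalDiskRedundancy 2 `
    -NumberOfColumns 7 -ProvisioningType Fixed -UseMaximumSize
```

With 7 columns and redundancy 2, each stripe is 5 data plus 2 parity, rotated across all pool disks - which matches the observation above that a third simultaneous failure can take the virtual disk offline, unlike RAID 60's isolated parity groups.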

With regards,

Martin Moustgaard



Work Folders issues


Hi all,

Since the most recent official Windows 10 update (the 1803 April update), I'm experiencing some issues.

* In Explorer, some pictures get thumbnails and some don't (as seen in the figure). I tried deleting the thumbnail cache, but that doesn't help.

* The Photos app doesn't sync pictures from the Work Folders anymore, and no longer shows thumbnails for videos.

* In Explorer, the 'Download state' icons aren't correct. Some folders have a 'Failed sync' icon, but all the files inside are downloaded correctly.

A clean install of the client operating system doesn't help. Is anyone else experiencing these issues? Is a fix available soon?

Janjaap

DFS: Delegation information for the namespace cannot be queried. The specified domain either does not exist or could not be contacted.


Hi Experts,

I need urgent help with the error mentioned in the title. I am getting the error "DFS: Delegation information for the namespace cannot be queried. The specified domain either does not exist or could not be contacted." when trying to connect in the DFS Management console.

Users are not facing any issues while accessing data via DFS, but we cannot administer DFS anymore due to this error.

Thanks,

Shashi

Volume Drive Becomes Unavailable - Server 2019


In my environment we have a file server that stores the roaming profiles for all of the students on campus. The server is a Dell PowerEdge R540 running Windows Server 2019. Two SSDs are combined with RAID 1 for the OS; the remaining 4 HDDs were combined to create one volume totaling about 5 TB to store all of the student data. This is not set up with RAID. Every single day the storage volume becomes unavailable and we have to restart the server for it to appear again and become accessible. When this happens, you double-click the drive and the wheel just spins.

Has anyone experienced this issue running Windows Server 2019? This is all brand new equipment and hard drives.
