Channel: File Services and Storage forum
Viewing all 4075 articles

Tarring on Server for NFS mangles symbolic links

Hello,

I'm running into an issue where symbolic links are mangled when tarring files that reside on Server 2003 and Server 2008 machines running Server for NFS, serving Red Hat Linux 4.6, 5.3, and 5.7 and Ubuntu 11.10 clients. Note that in the test below, the user's home area is mounted from a Server for NFS 2008 R2 installation. The issue does not occur if I create a tar from files on the local machine. Absolute links are not affected; relative links are. Help fixing this for relative links would be greatly appreciated.


[~/TEST]$ dd if=/dev/zero of=OneGig.txt bs=1024 count=1000
[~/TEST]$ ln -s OneGig.txt test
[~/TEST]$ ln -s /home/USER/TEST/OneGig.txt test2

[~/TEST]$ ls -l
total 1000
-rw-r--r-- 1 1024000 Aug 24 14:11 OneGig.txt
lrwxrwxrwx 1 0 Aug 24 14:12 test -> OneGig.txt
lrwxrwxrwx 1 0 Aug 24 14:12 test2 -> /home/USER/TEST/OneGig.txt

[~/TEST]$ cd ..
[~]$ tar cvf TEST.tar TEST/
TEST/
TEST/OneGig.txt
TEST/test
TEST/test2
[~]$ tar tvf TEST.tar
drwxr-xr-x 0 2012-08-24 14:12:55 TEST/
-rw-r--r-- 1024000 2012-08-24 14:11:29 TEST/OneGig.txt
lrwxrwxrwx 0 2012-08-24 14:12:46 TEST/test -> O
lrwxrwxrwx 0 2012-08-24 14:12:55 TEST/test2 -> /home/USER/TEST/OneGig.txt
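For comparison, a local-disk round trip can confirm that tar itself preserves relative symlink targets; if an archive built on the NFS mount truncates the target (as in the TEST/test -> O entry above) while this local test succeeds, the mangling is happening on the NFS side rather than in tar. A minimal sketch of that check (all paths and file names here are illustrative, not the poster's actual setup):

```shell
#!/usr/bin/env bash
# Round-trip a relative and an absolute symlink through tar on
# local disk and verify both link targets survive intact.
set -euo pipefail

work=$(mktemp -d)
mkdir "$work/TEST"
echo data > "$work/TEST/OneGig.txt"
ln -s OneGig.txt "$work/TEST/test"                  # relative link
ln -s "$work/TEST/OneGig.txt" "$work/TEST/test2"    # absolute link

# Archive the directory, then extract it somewhere else.
tar -C "$work" -cf "$work/TEST.tar" TEST/
extract=$(mktemp -d)
tar -C "$extract" -xf "$work/TEST.tar"

# On local disk both targets should come back unchanged.
echo "relative: $(readlink "$extract/TEST/test")"
echo "absolute: $(readlink "$extract/TEST/test2")"
```

On a healthy filesystem the relative target prints as OneGig.txt, not a truncated string, which localizes the fault to how Server for NFS reports the link's contents (e.g. the length returned by readlink over NFS).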

Users with Read-Only are able to create files & folders


Hello,

I've looked around and can't seem to find anyone else with this issue.
Currently, I have a network share configured to allow a certain AD group full control.

I have added the "Authenticated Users" group and set its rights to read-only.

The share works essentially as it should, with one issue.

If a member of the "Authenticated Users" group attempts to create a file or folder, they are prompted with the "Folder Access Denied" window, with the options "Try Again" or "Cancel". Once the user clicks the "Cancel" button, a folder is created with the default naming convention (New Folder, New Folder (1), New Folder (3), etc.). This can go on for as long as the user wishes.

Is there a way to prevent a folder or file with the default naming convention from being created?

Any info helps,

Thank you.

Export a list of shares & NTFS permissions


I would like a script or tool that can create a list of server shares, along with the relevant NTFS permissions on the directory structure, and export it to either a .txt or .csv file. Full group and member names would be nice.

Any ideas? Thanks.

Unable to map network share using logon script


Hello,

I'm currently having an issue where a newly created share is not mapped along with the other drives in the script.
The script is able to map five other network locations, just not this one.

The commands I use are:

net use u: /delete

net use u: \\PathOfShare

I noticed the cmd prompt window gets stuck and the output says the syntax is incorrect, even though I have other shares that are mapped with identical syntax.

Is there a change that has to be made to the share itself?
Any info helps.

Thank you.

Difficulty adding new member to existing DFS replication - "cannot be added to the folder. The file exists"


Hello. We have an existing DFS namespace named domain.com\dfs with 3 replicated folders: Applications, Users, and BranchData. Each branch office has a local file and print server, and each office is in its own site in AD Sites and Services. This has worked perfectly for a number of years, but we now have a new office under construction and I am getting the new office server ready for deployment. When I try to add this new server with the New Member wizard (by right-clicking our domain.com\dfs under Replication in DFS Management), I can get through to Finish, but I get an error for each folder:

\\server\Applications.  The folder target \\server\applications cannot be added to the folder.  The file exists.

\\server\Users.  The folder target \\server\users cannot be added to the folder.  The file exists.

\\server\BranchData.  The folder target \\server\BranchData cannot be added to the folder.  The file exists.

I close out of the wizard, and those folders are created and shared properly on the server. However, the local path shows <not Defined> and has a yellow exclamation point next to it.

Someone on another forum said to try it from another DFS Management console. I tried from the new server (Server 2016), an existing Server 2012 R2, an existing Server 2008 R2, and also my Windows 10 1803 machine with the RSAT tools. All DFS Management consoles produce the same result. I tried creating the folders and shares on the new server first, and I also tried it with those three folders and shares not on the new server at all - both ways it fails with that error. This is the first 2016 server in the domain, so I wonder if that's part of the difficulty.

Do Virtual Disks in Storage Spaces remain online during certain operations?


I have a couple of questions about Storage Spaces that I can't seem to find definitive answers for.

1) In a Storage Pool configured for parallel rebuild (adequate free space, RepairPolicy set to Parallel, RetireMissingPhysicalDisks set to Always), if a physical disk fails, do Virtual Disks remain online and accessible during the parallel rebuild process?

2) An adequate number of additional physical disks is added to a Storage Pool. Do Virtual Disks remain online and accessible while running Optimize-StoragePool?

Many thanks in advance!

Can't take ownership of files with takeown; getting Access Denied


Good afternoon all,

I have some files I can't take ownership of. I've tried everything and searched a lot of TechNet and the rest of Google.

When I use:

Takeown /f *.* /r /a /d y

I get about 4 success messages and 16 "INFO: Access is denied." messages.

I need to copy/cut the files to new storage; can someone help me, please?

I am a member of the Administrators group.

Search Indexing on Fileserver shows files with no access permissions


Hi @all,

we have a strange issue with a file server running Windows Server 2008 R2 with the Indexing Service installed and configured.

The file server uses NTFS/share permissions to restrict access to specific users and groups, and ABE is enabled for the shares.

The Indexing Service has indexed the file server data, and if a user now searches for, say, *.docx files, all *.docx files are displayed in the results, even if the user has NO permission to see or access the file.

Does anyone have an idea why this happens? Is there a way to change the configuration so that the search only displays results for files the user has access to?

thanks in advance


You don't currently have permission to access this folder (access denied)


Hi,

I have two file servers with DFS role.

On the source server (STOR01) I can access the folders and disks. On the new destination server (STOR02), I have a problem with the same folders and disks, using the same (administrative) account.

If I try to open some folders with custom permissions, I receive the message "You don't currently have permission to access this folder".

After pressing "Continue", my account is added to those folders and I can access them.

And where custom permissions are set for a disk, I receive access denied.

The permissions for the disk are:

The same permissions are set on the STOR01 server, but there are no problems with access there. Also, from the STOR01 server I can access this disk as \\stor02\g$, but from stor02 itself I receive a "resource is not available" error.

My account is a domain admin and a member of the stor02 local Administrators group.

Please help me understand what the reason is.

Users that are not in the ACL_COMPUTER_ADMINISTRATOR group are not able to access the Home Directory


We need to revamp our computer administrator policy for security reasons; we have an ACL group that makes the user a local admin on the machine they log in to. When we remove this permission, the user can no longer access their home dir as assigned in AD. The permissions are set like this...

The Home folder share permissions are

Domain Users - Full Control

The Home folder NTFS permissions are

CREATOR OWNER - Full Control

SYSTEM - Full Control

DOMAIN ADMIN - Full Control

Domain Users - Traverse folder / execute file, List folder / Read data, Create folders / append data

Administrators (local) full control

Unless the user is an admin, they cannot access any part of the home dir, including their own home folder.

Any help here would be greatly appreciated.


Jeremy Robertson Network Admin

SMB share vs 'normal' share


In Windows Server 2016, via Server Manager -> File and Storage Services -> Shares, you have the option to choose SMB Share - Quick/Advanced/Application.

If you right-click a folder, go to Properties, and then create a share, you don't have these options.

So what kind of share is created if you create one via the folder's Properties?

Remove PhysicalDisk from Storage Pool without replacement?


Currently I'm testing the advantages and disadvantages of Storage Pools in my home environment. The test server looks like this:

Windows Server 2012 R2 Datacenter ("Student-Edition")
2*400 GB HDD
4*1000 GB HDD
2*3000 GB HDD

First, I wanted to test a "replacement" case: I created a storage pool of 2*1 TB + 2*3 TB disks, added some data to a (mirrored) virtual disk, and disconnected one 1 TB drive. The pool was reported as unhealthy, and with a second disk I was able to do the regular replacement (attach new physical disk, remove old one, rebuild).

Now I wanted to test how to restore full functionality when no proper replacement disk is available:

I created another pool from some hard disks, added some data (around 200 GB), disconnected one drive, and tried various ways to get back to a healthy state - nothing worked:

Using the UI, it's impossible to do anything at all. After selecting "Remove" on the disconnected drive, it simply keeps telling me that no proper replacement disk was found.

Then I followed some instructions I found elsewhere on TechNet:

1. mark the missing disks as "retired":
 Set-PhysicalDisk -FriendlyName <PhysicalDiskxxx> -Usage Retired

2. rebuild each of your virtual disks:

Repair-VirtualDisk -FriendlyName <Virtual Diskxxx>

3. Once everything is finished, try to remove the disk from the pool again:

 Remove-PhysicalDisk -FriendlyName <PhysicalDiskxxx>

Steps 1 and 2 worked fine, and I saw the disconnected disk's usage drop from ~30 GB to 256 MB.

All virtual disks now report a "Healthy" state.

However, it was not possible to remove the disk: the UI told me it was going to rebuild after clicking "Remove" - but simply nothing happened, even after several hours (the virtual disks are thin, so there's not much data to move).

Using PowerShell, it keeps telling me there's an issue with the "FriendlyName" property:

I noted that the disk was previously shown as "PhysicalDisk-14" - but that did not change anything.

So I followed another approach, using the following command:

It looked promising, but finally failed with "Not enough available capacity" - which is of course not true. (I only assigned around 1000 GB to the virtual disks, of which I used around 200.)

So, the question is: how do I remove a (disconnected, retired) disk from a storage pool that has enough capacity left WITHOUT replacing the disk immediately?




add-folder-targets


This link https://docs.microsoft.com/en-us/windows-server/storage/dfs-namespaces/add-folder-targets

has this note in it. 

Folders can contain folder targets or other DFS folders, but not both, at the same level in the folder hierarchy.


Can someone explain a bit more what that means? As far as I can tell, I can create new folders just fine at any level of the hierarchy, even if I have a folder target there.

*EDIT*

To me it seems what I cannot do is create a target inside anything that is already a target higher up the hierarchy.

ADMT-Security Migrate Tool Completed with Errors (ERR2:7144 Could not open file 'C:\Program Files\WindowsApps\.......)


Dear Team,

I have migrated from the old domain to the new domain using ADMT. I can migrate groups, users, and services, but when I continue with the security migration tool, it shows completed with errors.

Please advise how to solve this problem.

Below is the log:

2016-11-02 10:05:37 ERR2:7144 Could not open file 'C:\Program Files\WindowsApps\Microsoft.3DBuilder_10.0.0.0_x64__8wekyb3d8bbwe' (1314)  A required privilege is not held by the client.
2016-11-02 10:05:37 ERR2:7144 Could not open file 'C:\Program Files\WindowsApps\Microsoft.Appconnector_1.3.3.0_neutral__8wekyb3d8bbwe' (1314)  A required privilege is not held by the client.
2016-11-02 10:05:37 ERR2:7144 Could not open file 'C:\Program Files\WindowsApps\Microsoft.BingFinance_4.3.193.0_x86__8wekyb3d8bbwe' (1314)  A required privilege is not held by the client.
2016-11-02 10:05:37 ERR2:7144 Could not open file 'C:\Program Files\WindowsApps\Microsoft.BingNews_4.8.239.0_neutral_~_8wekyb3d8bbwe' (1314)  A required privilege is not held by the client.
2016-11-02 10:05:37 ERR2:7144 Could not open file 'C:\Program Files\WindowsApps\Microsoft.BingNews_4.8.239.0_x86__8wekyb3d8bbwe' (1314)  A required privilege is not held by the client.
2016-11-02 10:05:37 ERR2:7144 Could not open file 'C:\Program Files\WindowsApps\Microsoft.BingSports_4.3.193.0_x86__8wekyb3d8bbwe' (1314)  A required privilege is not held by the client.

Many thanks!

VONG Dimanche 

creating multi-resilient volume in WS2019 doesn't honor resiliency settings


Hello. I'm trying to create a multi-resilient volume in WS2019 (standalone server).

The process I use is like this:

1) create the pool

New-StoragePool -FriendlyName TieredPool -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

2) create the storage tier templates

$tier_mirror = New-StorageTier -StoragePoolFriendlyName TieredPool -FriendlyName Mirror -MediaType SSD -ResiliencySettingName Mirror -NumberOfColumns 2 -Interleave 256kB -PhysicalDiskRedundancy 1
$tier_parity = New-StorageTier -StoragePoolFriendlyName TieredPool -FriendlyName Parity -MediaType HDD -ResiliencySettingName Parity -NumberOfColumns 3 -Interleave 256kB -PhysicalDiskRedundancy 1

3) create the multi-res. volume

New-VirtualDisk -StoragePoolFriendlyName TieredPool -FriendlyName TieredDisk -StorageTiers @($tier_mirror,$tier_parity) -StorageTierSizes 1600GB, 22000GB -ProvisioningType Fixed

The issue is that the volume is created but doesn't honor the tier templates' resiliency settings, so in effect both "concrete" tiers are created as Mirror, instead of Mirror+Parity:

PS C:\Users\admin> Get-VirtualDisk -FriendlyName tiereddisk | Get-StorageTier
FriendlyName      TierClass   MediaType ResiliencySettingName FaultDomainRedundancy     Size FootprintOnPool StorageEfficiency
------------      ---------   --------- --------------------- ---------------------     ---- --------------- -----------------
TieredDisk-Parity Capacity    HDD       Mirror                1                     21.48 TB        42.97 TB           50,00 %
TieredDisk-Mirror Performance SSD       Mirror                1                      1.56 TB         3.13 TB           50,00 %

For what it's worth, I get the same results when using New-Volume. In the WS2019 Preview, New-Volume produced this problem, but using New-VirtualDisk set the resiliency settings correctly.

Is this a bug in the final release of WS2019?


DFS 2019

Any issue with having mixed 2019 and 2012 DFS Namespace servers?

Error 0x80070299 copying file to ReFS


We're retiring a 2008 R2 file server using NTFS and migrating the files to a Server 2016 server using ReFS. Using robocopy to pre-seed the files, a small percentage of the files failed to copy. The robocopy log reported:

ERROR 665 (0x00000299)...The requested operation could not be completed due to a file system limitation

Trying to manually copy the file generated the error:

Error 0x80070299 the requested operation could not be completed due to a file system limitation

The files copy correctly to NTFS volumes, but trying to copy them to any ReFS volume on any server generates the error. If I copy a file to a FAT32 partition (to strip the NTFS metadata), it will then copy to an ReFS volume with no error, but trying to strip the attributes by going to the file's Properties, Details tab, and using the "Remove Properties and Personal Information" option had no effect (it still failed to copy).

I was able to narrow it down to the presence of a particular Alternate Data Stream (ADS).  All the files that failed have an ADS called "AFP_Resource", which is apparently for Mac compatibility (https://msdn.microsoft.com/en-us/library/dn392833.aspx).  If I remove that data stream or clear the contents of it, the file will then copy with no error.

However, we have a lot of files with that ADS that do copy successfully. We have a fair number of Mac users, so I'd prefer not to remove that data stream from all the files that have it. Ideally I'd like to remove whatever is problematic about the data stream and leave the rest intact. Alternatively, it would also be helpful if anyone could reassure me that removing that data stream won't negatively impact our Mac users. I suspect it's not important, but I'd rather not find out by stripping the stream from thousands of files and end up getting a bunch of phone calls. ;)

I suspect I'm going to end up using robocopy to identify the problematic files and then script removing this ADS just from those files, but if anyone has more info on this I would love to hear it.

More info below for those who might also be struggling with this.  It took me a few hours to track this down, so hopefully this will save someone else some time.

You can see what alternate data streams exist for a file using either of the following:
dir /r
get-item <filename> -stream * | select Stream,Length

Remove a data stream:
remove-item <filename> -stream <stream name>

Clear contents of a data stream:
clear-content <filename> -stream <stream name>

View contents of a data stream:
get-content <filename> -stream <stream name>

Decent blog post explaining NTFS attributes (particularly $DATA, but also $STANDARD_INFORMATION, $FILE_NAME, etc.):
https://blogs.technet.microsoft.com/askcore/2009/10/16/the-four-stages-of-ntfs-file-growth/





2008 R2 file permissions/security takes forever

Is there a way to make changing permissions behave the way it did in 2003, or to tweak 2008? If I make a top-level change to a folder that has millions of files on a share, it looks at every file one by one to remove a security group.

DFS directory backlog remove files


I have a folder syncing inside a DFS share, and 365 files inside the folder are backlogged. I have increased the staging quota, which was previously too small, but the files are still backlogged. Would it be safe to remove the folder to see if the backlogged files clear?

DFS Server


We have moved one server from one site to another, but now DFS has stopped working.

Can you please give me some tips for troubleshooting it?

