Auto-Mount VHD Disks

On Server 2008 R2 and newer, VHD files can be mounted using Windows Disk Management MMC (diskmgmt.msc):

[Screenshot: Disk Management console (diskmgmt.msc)]

You can mount a VHD using the “Action” Menu:

[Screenshot: the "Attach VHD" option in the Action menu]

But as soon as the server is restarted, you need to re-mount the VHD file manually. Microsoft provides no built-in option to re-attach it automatically, so you have the following choices:

  1. either create a startup script that uses diskpart.exe to attach the VHD file (see the sketch below)
  2. or use the cool and easy VHD Attach tool: http://www.jmedved.com/vhdattach
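If you go the script route, here is a minimal sketch. All paths, file names and the task name are placeholders, not from the original setup: a small diskpart answer file does the attach, and a scheduled task runs it at system startup.

REM content of C:\BootVHD\attach-vhd.txt (diskpart script)
select vdisk file="C:\BootVHD\Server2012.vhd"
attach vdisk

REM register the script to run at every boot
schtasks /create /tn "Attach VHD" /tr "diskpart.exe /s C:\BootVHD\attach-vhd.txt" /sc onstart /ru SYSTEM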

This tool allows you to open existing VHD files and select "auto-mount" (see screenshot):

[Screenshot: VHD Attach with the "auto-mount" option]

In my case, I mounted a Backup Exec dedup store as a VHD file. For this to work, it was necessary to set the Dedup service to "Automatic (Delayed Start)" in services.msc, so that the service only starts after the VHD has been attached.

Windows Defragmentation Decision Process

Windows Server 2012 Defrag no longer just defragments volumes as in earlier versions. There is a decision process behind it that selects the appropriate method for each volume.

Decision Process

The following commands are based on the new Optimize-Volume PowerShell cmdlet; most of its parameters correspond to defrag.exe's switches. Schematically (in practice you also pass a volume, e.g. -DriveLetter D), the decision process works like this:

# HDD, fixed VHD, Storage Space:
Optimize-Volume -Analyze -Defrag

# Tiered Storage Space:
Optimize-Volume -TierOptimize

# SSD with TRIM support:
Optimize-Volume -ReTrim

# Storage Space (thinly provisioned), SAN virtual disk (thinly provisioned), dynamic VHD, differencing VHD:
Optimize-Volume -Analyze -SlabConsolidate -ReTrim

# SSD without TRIM support, removable FAT, unknown:
# no operation

Graphical Defrag Tool

The classic GUI tool for Defrag still exists. If you open it, you'll see there's a predefined schedule for a weekly defragmentation of your system volume. Depending on the type of storage you're using, Defrag will only run a short TRIM or another optimization at that time. In virtualized environments you typically have thin-provisioned storage, either from vSphere or from the storage array. Because of this, Defrag no longer starts a classical defragmentation on VMs. Instead, a retrim / slab consolidation runs, which takes only a few seconds or minutes to complete (depending on size).

PowerShell cmdlet

Server 2012 R2 also has a PowerShell cmdlet called "Optimize-Volume" that can be used instead of the classic defrag.exe tool. Both cover the same functions, but the cmdlet has an additional storage-tier optimization function for Storage Spaces.
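For example, to manually analyze a volume and then send a retrim (drive letter D is just an example):

# analyze fragmentation on D:
Optimize-Volume -DriveLetter D -Analyze -Verbose

# send TRIM/Unmap hints for the whole volume
Optimize-Volume -DriveLetter D -ReTrim -Verbose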

Information about the cmdlet is here:
http://technet.microsoft.com/en-us/library/hh848675.aspx

Search Service Cluster Edition

Windows Server File Services is a classic, well-known service from Microsoft. If you use the search bar at the top of every Windows Explorer window (available since Windows 7), your file server will respond very quickly with a result, but only if the Search Service is installed. If not, you'll see a slow, long-running search that displays one found file after another.

Setup Steps

If you follow this order of steps, you’ll have success:

  • Configure Search Service as described below
  • Move Clustered File Server with all Drives to the other Node
  • Configure Search Service on other Node(s)
  • Setup Clustered Service (details at the end of article)

Here are the detailed configuration steps:

Search Service Configuration

Because the search index can be used for multiple drives of a file server / cluster, we will use an additional clustered drive with the letter S. The following configuration steps must be done on both cluster nodes individually, while the file server cluster role is active on that node.

  • Create the folder S:\Search, if it doesn't already exist
  • Stop service “Windows Search” and set startup type to “manual”

To force Windows to use the new search index location even after an index reset, the following registry values must be modified:

HKLM\Software\Microsoft\Windows Search\
DataDirectory -> S:\Search\Data\
DefaultDataDirectory -> S:\Search\Data\
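If you prefer scripting, the same change could look like this in PowerShell (a sketch; stop the service first, as described above):

# point the search index to the clustered S: drive
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows Search' -Name DataDirectory -Value 'S:\Search\Data\'
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows Search' -Name DefaultDataDirectory -Value 'S:\Search\Data\'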

  • start the service "Windows Search"
  • check the folder content: did Search put some files in there?

Now we configure the folders to be indexed. The easiest way is via the GUI in Control Panel. For quick access, just create a desktop shortcut for this command:

control /name Microsoft.IndexingOptions

  • click on "Modify" to deselect the existing indexed locations
  • add all shares that should be indexed
  • stop the Windows Search service

Configuration complete – on this node. Now the same steps are required on the other node too.

Setup Clustered Generic Service

After configuring both Nodes with the steps above, we can create a Clustered Generic Service for Windows Search.

  • start Failover Cluster Manager
  • Add a “Generic Service” under your fileserver’s Role
  • Open Properties of the new Service and add a Dependency for Drive S:
  • right-click on the Search Service and choose “bring online” to start
  • test if Failover works by doing Failover and re-check the Search Configuration
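The same setup can also be sketched in PowerShell. The names are examples only: "FS01" stands for your file server role, "Cluster Disk S" for the clustered S: drive.

# create the generic service resource under the file server role
Add-ClusterResource -Name "Windows Search" -ResourceType "Generic Service" -Group "FS01" |
    Set-ClusterParameter -Name ServiceName -Value WSearch

# make it depend on the clustered drive, then bring it online
Add-ClusterResourceDependency -Resource "Windows Search" -Provider "Cluster Disk S"
Start-ClusterResource -Name "Windows Search"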

Done.


Quick Guide to Setup Dynamic Access Control

Dynamic Access Control enables deploying file or object permissions based on claims instead of access control lists (ACLs). As claims, we use pre-defined properties from Active Directory user objects such as Department, Title, or Manager. Almost any Active Directory attribute can be used. Configuring DAC works like this:

Let's assume you want to restrict access to all folders with "Top Secret" confidentiality to users in department "Agents" of company "Investigate&Co".

(1) Define Claims.

Open the AD Administrative Center and go to Dynamic Access Control, where you open Claim Types. Now add all properties you wish to use for filtering and grouping your employees.

Claims will be used to grant permissions. For our example, we add Department and Company.
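The same claim types can also be created with PowerShell; a minimal sketch using the ActiveDirectory module on Server 2012:

# create claim types sourced from the AD user attributes
New-ADClaimType -DisplayName "Department" -SourceAttribute department
New-ADClaimType -DisplayName "Company" -SourceAttribute company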

[Screenshot: Claim Types in the AD Administrative Center]

(2) Define Resource Properties.

Still in the AD Administrative Center, select Resource Properties in the left navigation tree. A list of available properties appears that can be enabled for use. For our example, enable Confidentiality and right-click on it to edit its properties. At the bottom, there's a dialog to add Suggested Values. Hit "Add" and create an additional "Top Secret" entry with value "4000" (just greater than the highest existing value).

[Screenshot: the Confidentiality resource property with the added "Top Secret" value]

(3) Configure a Resource Property List.

There's already a default "Global Resource Property List" that contains most of the default available properties. Just ensure the "Confidentiality" property is listed here.

[Screenshot: Global Resource Property List]

(4) Create a Central Access Rule.

File system permissions are no longer configured on the file server itself; the cool thing is that they're now configured centrally and maintained in one place. So we don't define an access policy for folder XY and make decisions about recursion as in the past. What we do now is define what kind of data can be accessed by what type of user.

For our example, we create this rule: objects classified as "Top Secret" can only be accessed by users who are members of department "Agents" and work for company "Investigate&Co". The rule looks like this:

[Screenshot: the Central Access Rule for our example]

(5) Create a Central Access Policy.

A policy is a collection of one or more Central Access Rules. You assign a policy to a file server. With this functionality, you can define multiple and/or different policies for different servers.
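Creating the policy and attaching a rule can also be scripted; a sketch where the policy and rule names are assumptions:

# create the central access policy and add our central access rule
New-ADCentralAccessPolicy -Name "Investigate&Co Policy"
Add-ADCentralAccessPolicyMember -Identity "Investigate&Co Policy" -Members "Top Secret Rule"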

[Screenshot: the Central Access Policy]

(6) Configure Kerberos and KDC to support claims and Kerberos armoring.

Edit the Default Domain Controllers Policy to enable the following KDC setting…

[Screenshot: KDC support for claims, compound authentication and Kerberos armoring]

…and for Kerberos too…

[Screenshot: Kerberos client support for claims, compound authentication and Kerberos armoring]

(the difference between these two pictures is the selection of KDC vs. Kerberos in the left tree)

(7) Assign the Policy to the Fileserver.

Create a GPO either at the domain root and use security filtering, or link it directly to an OU that contains only the file server(s). This GPO gets the following settings:

[Screenshot: GPO with the Central Access Policy setting]

Under Computer Configuration/Windows Settings/Security Settings/File System, there's a "Central Access Policy" setting where you can assign the DAC policy we just created.

(8) Setup Fileserver, Verify Permissions.

Your file server needs the "File Server" role and "File Server Resource Manager" (FSRM) installed to have the Classification tab enabled on folders. Using "gpupdate /force", we ensure the new policy gets applied.
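If those components are still missing, a quick sketch for adding them (feature names as in Server 2012):

# install the file server role and FSRM, then refresh group policy
Install-WindowsFeature FS-FileServer, FS-Resource-Manager -IncludeManagementTools
gpupdate /force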

To verify our example rule is running, I created a folder called “Obama” and set the Confidentiality manually to “Top Secret”.

[Screenshot: folder "Obama" classified as "Top Secret"]

This isn't the way you will set the properties on your data in practice (mind the huge effort to do this on big filers). In production, you create rules in FSRM to automatically classify data. But that is another story.

So after classifying my folder, let's use the good old "Effective Access" tab in the folder's Advanced Security Settings to verify access for my user "Agent007", who is in department "Agents" and works for company "Investigate&Co", and for "JuniorAgent123", who is in department "Junior Agents".

Folder NTFS Security Settings:

[Screenshot: folder NTFS security settings]

Effective Permissions for Agent007:

[Screenshot: effective permissions for Agent007]

Effective Permissions for JuniorAgent123:

[Screenshot: effective permissions for JuniorAgent123]

Conclusion

NTFS permissions give the basic permissions for users on objects, and DAC is used to restrict access by classification rules on top of that.

(9) Using File Classification Infrastructure of FSRM

If you open FSRM under the "Classification Management" tree, you see our enabled "Confidentiality" property listed here as a "Global" scope property.

[Screenshot: FSRM Classification Management showing the "Confidentiality" property]

The Classification Rules tree is empty by default; you can create rules here to classify files and folders. For example, a rule could classify files by scanning their content for credit card numbers and assign them the classification "Financial Data". For examples and templates, there's a downloadable package from Microsoft:

[Screenshot: the Microsoft download page for the classification package]

http://www.microsoft.com/en-us/download/details.aspx?id=27123

This Solution Accelerator is designed to help enable an organization to identify, classify, and protect data on their file servers. The out-of-the-box classification and rule examples help organizations build and deploy their policies to protect critical information on the file servers in their environment.
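As a sketch of the credit card example above, a content-based rule could be created with the FSRM cmdlets. The rule name, namespace and pattern are illustrative; "Confidentiality_MS" is assumed to be the internal name of the Confidentiality property:

# classify files whose content matches a card-number-like pattern as Top Secret (value 4000)
New-FsrmClassificationRule -Name "Find card numbers" `
    -Property "Confidentiality_MS" -PropertyValue "4000" `
    -Namespace @("D:\Shares") `
    -ClassificationMechanism "Content Classifier" `
    -Parameters @("RegularExpression=\d{4}[- ]\d{4}[- ]\d{4}[- ]\d{4}")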

JetPack for DHCP DB maintenance missing?

During my Server 2012 learning courses, I tried to do DHCP database maintenance using JetPack. I really couldn't find the executable, so I tried the same under Server 2008 R2. No success. Know why? JetPack is only installed in combination with the WINS role. Who still uses WINS?!? (Sorry for that.)

So if you don't want to install the WINS role just to get the JetPack executable back, there is another way:

  1. Open Explorer, Browse to %windir%\System32
  2. Use the Search Box and enter “JetPack”
  3. Copy the executable to %windir%\System32\dhcp
  4. Run your maintenance
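Step 4, the maintenance itself, looks like this (per KB145881; the DHCP service must be stopped first):

net stop dhcpserver
cd /d %windir%\system32\dhcp
jetpack.exe dhcp.mdb tmp.mdb
net start dhcpserver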

Source:

Technet Article; Jetpack.exe on Windows 2008 server

KB145881 How to Use Jetpack.exe to Compact a WINS or DHCP Database

Boot Directly from a VHD File

Yes, I knew about mounting VHD files as drives in Windows 8/2012, and ISO files can also be mounted directly in Windows Explorer. But here's a very easy way to mount a VHD AND directly add the image to the BCD boot menu to boot from:

(1) Copy the Extracted VHD file to C:\BootVHD\Server2012.vhd

(2) Mount the copied VHD file as a virtual Drive Letter

  • Right-click on the “Command Prompt” shortcut and select “Run as Administrator”
  • run “DISKPART.EXE” from the Command Prompt
  • At the “DISKPART>” prompt type the following commands, pressing Enter after each:
  • SELECT VDISK FILE="C:\BootVHD\Server2012.vhd"
  • ATTACH VDISK
  • EXIT

(3) Wait for the VHD file to be mounted as a new drive letter. When completed, the new drive letter will appear in "My Computer" and Windows Explorer.

(4) Add a new OS Boot Menu Choice for Windows Server 2012

  • Right-click on the “Command Prompt” shortcut and select “Run as Administrator”
  • Run “BCDBOOT <mounted_drive_letter>:\WINDOWS” from the Command Prompt
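For example, if the VHD got mounted as drive V: (a placeholder), the command and a quick verification would look like this:

REM copy the boot files and create the new BCD entry
bcdboot V:\Windows

REM list all boot entries to verify the new one
bcdedit /enum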

(5) Reboot and select "Windows Server 2012" from the OS boot menu displayed

Done.

Source: http://blogs.technet.com/b/keithmayer/p/earlyexpertlabsetup.aspx#.UYtQlsp0bIg

Dedup Report on Server 2012

Some of you may already have heard of this outstanding new feature coming with Windows Server 2012. I used the chance to test deduplication with 4 TB of backup data on a server at my workplace.

BTW: don't confuse this with the "nobody used that" feature on Server 2008. Server 2012 has a real dedup function now.

Total size of the physical partition: 5.46 TB
Dedup started: 21.12.2012 (dates below are in dd.mm.yyyy format)

Date        Free Space  Used Space  Unoptimized Size  Saved Space  Savings Rate  InPolicy Files  Optimized Files
21.12.2012  375 GB      5090 GB     5090 GB           0 GB         0 %           18479           0
09.01.2013  1120 GB     4340 GB     5320 GB           999 GB       18 %          17751           8368
13.01.2013  3330 GB     2120 GB     6070 GB           3950 GB      65 %          18180           18186
16.01.2013  3210 GB     2250 GB     5930 GB           3680 GB      62 %          18001           18003
20.01.2013  3340 GB     2110 GB     5660 GB           3540 GB      62 %          18623           18627
23.01.2013  3220 GB     2240 GB     5560 GB           3320 GB      59 %          18410           18414
27.01.2013  3310 GB     2150 GB     5750 GB           3600 GB      62 %          18668           18676
30.01.2013  3150 GB     2310 GB     5610 GB           3300 GB      58 %          18045           18031

UnoptimizedSize equals the real (logical) size of the data on the volume before deduplication.
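Numbers like the ones in the table can be pulled with the dedup cmdlets, for example:

# per-volume dedup overview (run on the file server)
Get-DedupStatus | Format-List Volume, FreeSpace, SavedSpace, InPolicyFilesCount, OptimizedFilesCount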

[Screenshot: dedup statistics, taken 04.02.2013]

After about three weeks of warm-up time, dedup became very efficient by January 13. Since then, data was written and deleted daily, but there seems to have been no performance or free-space bottleneck in that time.

Cluster Shared Volumes (CSV)

Server 2008 R2 / 2012 came with a new failover cluster feature called Cluster Shared Volumes (abbreviated CSV). It enables accessing a LUN from multiple Windows failover cluster nodes at the same time. In the past, this was not possible on Windows failover clusters.

Let's take a look at the details.

Advantages

  • all nodes in a cluster can access the LUN at the same time; no failover needed
  • if a node's storage connection fails or has issues, the node can send its read/write requests over the LAN to another node ("The cluster will re-route the communication through an intact part of the SAN or network", Technet [1])

Disadvantages

  • From Technet: “Be sure to review carefully what your backup application backs up. Also, for management operating-system based backup, ask your backup application vendor about the compatibility of your backup application with Hyper-V and with Cluster Shared Volumes.” [3]
  • a MUST for Hyper-V, but no advantage for applications that don't run more than one instance on the same volume at the same time (e.g. an SQL failover cluster: one instance, two servers)
  • NOT SUPPORTED for SQL Server clustered Workloads [4]

Manuals

Add storage to Cluster Shared Volumes in Windows Server 2012
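As a quick sketch, adding a disk that is already available cluster storage to CSV is a one-liner in PowerShell (the disk name is an example):

# promote available storage to a Cluster Shared Volume
Add-ClusterSharedVolume -Name "Cluster Disk 2"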

Sources

[1] Understanding Cluster Shared Volumes in a Failover Cluster
[2] Recommendations for Using Cluster Shared Volumes in a Failover Cluster
[3] Backing Up Cluster Shared Volumes
[4] Use Cluster Shared Volumes in a Windows Server 2012 Failover Cluster


Enable Deduplication on NTFS Volumes

Maybe you've already heard that Server 2012 has deduplication for NTFS integrated: you get it by installing the file server role and selecting Data Deduplication. It's not activated just by installing it, but it's not hard to activate either.

Preparation

Before starting, the following conditions must be met:

  • it must be an NTFS volume; ReFS is not supported
  • there must be some free space; I would recommend at least 10%
  • only fixed disks are supported, no USB and other removable ones
  • system and boot volumes are not supported

Q: How do I know whether my volume is a good candidate for dedup?

A: There's an evaluation tool on board of Server 2012; you get usage help by just typing ddpeval on a command line. In my case, I evaluated just a subfolder that uses 1 TB of disk space:

[Screenshot: ddpeval output for the evaluated folder]
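Usage is as simple as pointing the tool at a path (the path below is an example):

# estimate the expected dedup savings for a folder
ddpeval.exe D:\Backup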

More about preparing for Dedup and ddpeval @ http://technet.microsoft.com/en-us/library/hh831700.aspx

Enable Deduplication for a Volume

Assume you have a disk D: with a lot of data on it that may be a good candidate for dedup. To start deduplicating, just open an elevated PowerShell console and run these commands:

Enable-DedupVolume D:
Set-DedupVolume D: -MinimumFileAgeDays 1

The first command activates deduplication on volume D:; the second tells dedup to process files that haven't changed for at least one day. The default would be 5 days, but I prefer to dedup files directly after one day.
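If you don't want to wait for the scheduled background optimization, a job can be started manually; a minimal example:

# start an optimization job on D: and check its progress
Start-DedupJob -Volume D: -Type Optimization
Get-DedupJob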

Deduplication data is stored in the "System Volume Information" folder at the root of the volume, in the subfolder "Dedup". A lot of files there are named *chunk* and represent chunks of the original files.

More about the PowerShell cmdlets @ http://technet.microsoft.com/en-us/library/hh848450.aspx

Open Questions

  • what happens if I move a deduplicated NTFS volume to an older server, e.g. 2008 R2?
  • is there a way to get deduplication for Server 2008 R2 too?


Source: http://technet.microsoft.com/en-us/library/hh831434.aspx

HowTo configure the CAU Cluster Role

Windows Server 2012 (currently only available as a preview version) now supports "Cluster-Aware Updating" (CAU). This means you only need to click Update Cluster, and the CAU tools take care of updating the cluster, including failing over the services, installing updates, and rebooting the servers.

Feature Description:

  • Puts each node of the cluster into node maintenance mode
  • Moves the clustered roles off the node
  • Installs the updates and any dependent updates
  • Performs a restart if necessary
  • Brings the node out of maintenance mode
  • Restores the clustered roles on the node
  • Moves to update the next node

Source: http://technet.microsoft.com/en-us/library/hh831694.aspx

Installation
This feature can be used in two ways: self-updating mode and remote-updating mode. All the functionality comes with the Windows RSAT feature "Failover Clustering Tools".

Self-updating mode
If you enable self-updating, it is configured as an additional workload on the failover cluster and starts based on a configured schedule.

Create an additional Computer account
Before you start, you must create an additional computer account for the cluster; let's call it "SERVER01CAU" here. This account is needed for CAU to run as a workload on the cluster. Edit the security settings of this account in Active Directory and grant the cluster's own computer account full rights on it. In my case that's "SERVER01Win$".

This isn't documented anywhere on Microsoft's Technet pages yet, but if you don't create the AD computer account, you'll get error messages and the configuration fails. In the cluster's event log, you'll then see error messages telling you that the cluster didn't have enough rights to create an AD account.

Configuration using the GUI
To configure self-updating, use "Cluster-Aware Updating" from the Administrative Tools. If you have installed Server Core (recommended), you must use the PowerShell command; the GUI is not available.

Picture(1) – Cluster-Aware Updating GUI
Picture(2) – CAU Configure cluster self-updating options

Configuration using PowerShell Command
To configure self-updating using PowerShell, you can use the "Set-CauClusterRole" cmdlet. Hint: you can use another server / workstation with RSAT installed to generate the PowerShell command via the GUI assistant. The PowerShell command that will configure the cluster is displayed in the details before you click Finish.

Set-CauClusterRole -ClusterName SERVER01Win -Force -CauPluginName WindowsUpdateAgent -MaxRetriesPerNode 3 -CauPluginArguments @{ 'IncludeRecommendedUpdates' = 'False' } -StartDate "06/11/2012 03:00:00" -DaysOfWeek 127 -IntervalWeeks 1; Enable-CauClusterRole -ClusterName SERVER01Win -Force -ConfigurationName SERVER01CAU;

Whether you configure self-updating via the GUI or via PowerShell, this installs a workload on the destination failover cluster that runs based on the schedule provided at configuration time.

The workload is not visible in the Failover Cluster Manager GUI, but you can display it using the following command:

Get-ClusterResource -Cluster SERVER01Win

You get a list of all resources on your cluster, where the resource of type "ClusterAwareUpdatingResource" is the new self-updating workload.

PS > Get-ClusterResource -Cluster SERVER01Win | ? {$_.ResourceType -like "*Updating*"} | ft -auto

Name                State  OwnerGroup  ResourceType
----                -----  ----------  ------------
CAUSERVEfw8Resource Online CAUSERVEfw8 ClusterAwareUpdatingResource

Remote-updating mode
This mode does not install anything additional on the cluster or the nodes, but it needs another server to issue the commands or run the PowerShell scripts that trigger the update mechanism. This third machine is called the "Update Coordinator" and only needs the CAU tools installed (the RSAT Failover Clustering Tools).

You can either use the same GUI as in screenshot (2) to apply updates and let the computer you're using coordinate the update installation process, or you use the following PowerShell cmdlet:

Invoke-CauRun -ClusterName SERVER01Win -Force -CauPluginName WindowsUpdateAgent -MaxRetriesPerNode 3 -CauPluginArguments @{ 'IncludeRecommendedUpdates' = 'False' };
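After a run completes, the result can be checked from the update coordinator, for example:

# show the outcome of the most recent updating run
Get-CauReport -ClusterName SERVER01Win -Last | Format-List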

Because this can also be invoked from a script during the night, you have the choice: trigger updates remotely this way, or install CAU as the self-updating cluster feature if you prefer.