Install a Server 2012 Failover Cluster (with SQL Server)

Last week my colleagues and I installed a failover cluster on Server 2012. It wasn’t the first time, but this time we did it for an upcoming production SQL Server installation. Here are some hints you should consider and steps you must go through when setting up a cluster.

Naming Convention for this Article

Nodes: SRV01c1, SRV01c2
Cluster Name: SRV01Win
SQL Cluster Name: SRV01

Preparation Checklist

Computer Accounts for the Cluster, on…

…Server 2008:

  • DO NOT PREPARE accounts for the cluster
  • just move them to the right OU afterwards

…Server 2012:

  • prepare AD accounts like SRV01, SRV01win, SRV01dtc, SRV01c1, SRV01c2
  • disable the prepared accounts in AD
  • the logged-on user must have full rights on the accounts to join the domain
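If you prefer scripting the account preparation, a minimal sketch with the ActiveDirectory module could look like this (the OU path and the `dsacls` grantee are assumptions for illustration; the computer names are this article’s examples):

```powershell
# Sketch: pre-stage and disable the cluster computer accounts.
Import-Module ActiveDirectory

$ou = "OU=Clusters,DC=example,DC=local"   # assumption: adjust to your OU
"SRV01", "SRV01win", "SRV01dtc", "SRV01c1", "SRV01c2" | ForEach-Object {
    # Create each account disabled, as described above
    New-ADComputer -Name $_ -Path $ou -Enabled $false
}

# The installing user needs full control on the prepared accounts so setup
# can take them over; e.g. via the account's Security tab or dsacls:
# dsacls "CN=SRV01win,$ou" /G "EXAMPLE\installuser:GA"
```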

Additional Steps

  • get IP addresses, one for each hostname

Installation Checklist

  • install Windows Server 2012 on SRV01c1 and SRV01c2
  • connect the DTC & quorum disks to c1 & c2
  • validate the cluster; check details like network adapter bindings and binding order
  • create the cluster SRV01win using the Failover Cluster Management GUI
  • set the quorum to Node and Disk Majority (make sure you’re using the right disk for that)
  • if necessary, install DTC (you only need that for active/active clustering)
  • check the event log for errors and solve them
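The validation, creation, and quorum steps above can also be done with the FailoverClusters PowerShell module. A hedged sketch, using this article’s names (the static IP and witness disk name are placeholders):

```powershell
Import-Module FailoverClusters

# Validate before creating the cluster; review the generated report carefully
Test-Cluster -Node SRV01c1, SRV01c2

# Create the cluster with a static address (assumption: 10.0.0.10)
New-Cluster -Name SRV01win -Node SRV01c1, SRV01c2 -StaticAddress 10.0.0.10

# Set the quorum to Node and Disk Majority on the intended witness disk
# (assumption: the witness resource is named "Cluster Disk 1")
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"
```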

ISSUE: DNS FQDN missing / could not register

After setting up the Windows failover cluster using the Failover Clustering feature’s GUI, the installation finished successfully, but we weren’t able to ping the host SRV01win. Everything seemed to be okay, but the event log listed these errors:

*** ERROR 1: Event ID 1228 ***

Cluster network name resource ‘Cluster Name’ encountered an error enabling the network name on this node. The reason for the failure was: ‘Unable to obtain a logon token’. The error code was ‘1326’.

You may take the network name resource offline and online again to retry.

*** ERROR 2: Event ID 1196 ***

Cluster network name resource ‘Cluster Name’ failed registration of one or more associated DNS name(s) for the following reason: The handle is invalid. .

Ensure that the network adapters associated with dependent IP address resources are configured with at least one accessible DNS server.

*** ERROR 3: Event ID 1205 ***

The Cluster service failed to bring clustered service or application ‘SQL Server (MSSQLSERVER)’ completely online or offline. One or more resources may be in a failed state. This may impact the availability of the clustered service or application.

After troubleshooting for a while and asking Google various questions, we tried to solve this by running ipconfig /registerdns on the active node. That didn’t help, but some minutes later we found that there’s a quick and easy way to let Windows retry the DNS registration: just choose “Repair” on the host’s name in the Cluster GUI:

[Screenshot: the “Repair” action on the cluster name resource]
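The GUI’s “Repair” action can be approximated in PowerShell by cycling the network name resource and forcing DNS registration. A sketch, assuming the default resource name “Cluster Name”:

```powershell
Import-Module FailoverClusters

# Take the cluster name offline and online again to retry the logon token
Stop-ClusterResource "Cluster Name"
Start-ClusterResource "Cluster Name"

# Force registration of the associated DNS records
Update-ClusterNetworkNameResource -Name "Cluster Name"
```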

Set up SQL Server

  • start SQL Server setup from the installation media / ISO file
  • do not install SQL Server as usual; choose the cluster installation option
  • provide setup with the necessary information, like the cluster name “SRV01”, the IP address, and the features you want to install
  • if everything is successful, repeat the installation on node 2; setup will detect the installed features and repeat the installation the same way as on node 1
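The same two installs can be run unattended with setup.exe’s documented cluster actions. A hedged sketch (the IP, subnet, network name, and sysadmin group are example values, not from this setup; further parameters such as service accounts are usually required):

```powershell
# Node 1: integrated failover cluster installation
.\setup.exe /ACTION=InstallFailoverCluster /FEATURES=SQLENGINE `
    /INSTANCENAME=MSSQLSERVER /FAILOVERCLUSTERNETWORKNAME=SRV01 `
    /FAILOVERCLUSTERIPADDRESSES="IPv4;10.0.0.11;Cluster Network 1;255.255.255.0" `
    /SQLSYSADMINACCOUNTS="EXAMPLE\dbadmins" /IACCEPTSQLSERVERLICENSETERMS

# Node 2: add this node to the existing SQL failover cluster
.\setup.exe /ACTION=AddNode /INSTANCENAME=MSSQLSERVER /IACCEPTSQLSERVERLICENSETERMS
```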

Setup Test Cluster using VM’s

If you’re planning to set up a cluster, you may want to test everything first. There’s a VMware setup guide on Windows clustering using VMs in vSphere [1] and its limitations [2]. If you decide to prepare a test environment with two VMs, consider the following advantages and disadvantages.

Advantages:

  • no additional SAN storage / LUN needed
  • no extra SCSI connection must be set up
  • works the same way as a real cluster

Disadvantages:

  • VMs using physical SCSI cannot be vMotioned to another host


[1] Microsoft Cluster Service (MSCS) support on ESXi/ESX

[2] Microsoft Clustering on VMware vSphere: Guidelines for Supported Configurations

Microsoft iSCSI SW Target 3.3 vs. StarWind iSCSI Target Free, Part 1/2

On Server 2008 R2 as well as Server 2012, you can install an iSCSI target service to host LUNs as VHD files on your local NTFS file systems. This lets you implement some special requirements of a modern infrastructure, such as providing cheap backup space to your environment with simultaneous access from multiple hosts. A Veeam Backup VM is a good example of where such storage can be used.
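On Server 2012, creating such a VHD-backed LUN can be scripted with the iSCSI Target cmdlets. A minimal sketch (the target name, VHD path, size, and initiator IQN are all example values):

```powershell
Import-Module IscsiTarget

# Create a target restricted to one initiator (assumption: the backup VM)
New-IscsiServerTarget -TargetName "BackupTarget" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:veeam01"

# Create a VHD-backed virtual disk on the local NTFS volume and map it
New-IscsiVirtualDisk -Path "D:\iSCSI\backup01.vhd" -SizeBytes 500GB
Add-IscsiVirtualDiskTargetMapping -TargetName "BackupTarget" -Path "D:\iSCSI\backup01.vhd"
```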

But there are also limitations to Microsoft’s iSCSI target: Jake Rutski [1] already tested random-access performance using IOmeter. He found that the MS iSCSI target performs about 10x worse than StarWind’s free iSCSI target. Could that be because of missing cache functionality?

A colleague found this on Technet:

Based on my research, the Microsoft iSCSI Target does not utilize the file system cache.  All VHDs are opened unbuffered and write through to guard against loss of data in case of a power loss. As a result, customers will not benefit from the caching even if you add more memory. More memory will only allow you to support more concurrent initiators since each iSCSI session consumes system memory.


As well as:

Be careful if the iscsi client is hyper-v VM host. Windows server 2012 iSCSI target uses unbuffered I/O, means no read cache, no writeback cache.  If you use storage pool or regular HBA card/onboard SATA, the performance will be really bad when there are many VMs.  The solution is to either use a hardware RAID card with decent write-back cache, or use a iscsi target software implements software writeback cache, such as StarWind or 3rd party iscsi appliance solution like nexenta. I tried both and they worked great.

Note: I don’t work for either Microsoft or StarWind.


What I want to do next is compare the sequential read and write speed of both software targets and publish the results here on my blog over the next few days.
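A sequential write test can be approximated with a few lines of PowerShell, though this is only a rough sketch and no substitute for a proper tool like IOmeter (the drive letter, file size, and block size are arbitrary test values; WriteThrough is used to bypass the client-side cache):

```powershell
$path  = "T:\seqtest.bin"          # assumption: T: is the iSCSI LUN
$buf   = New-Object byte[] (1MB)   # 1 MB write block
$count = 1024                      # 1 GB total

$sw = [System.Diagnostics.Stopwatch]::StartNew()
$fs = New-Object System.IO.FileStream($path, 'Create', 'Write', 'None', 1MB,
                                      [System.IO.FileOptions]::WriteThrough)
for ($i = 0; $i -lt $count; $i++) { $fs.Write($buf, 0, $buf.Length) }
$fs.Close()
$sw.Stop()

"Sequential write: {0:N0} MB/s" -f ($count / $sw.Elapsed.TotalSeconds)
```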


[1] StarWind iSCSI vs. Microsoft iSCSI @ Jake’s Blog

[2] StarWind iSCSI SAN Free Edition