HP Server Drive Technology

Drive Types


SATA & SAS Connectivity


Solid State Drives


The following SSD disk types are available in Sales Builder for a ProLiant DL380p Gen8:

  • HP 80GB 6G SATA VE 2.5″ SC EG SSD (734360-B21)
  • HP 100GB 3G SATA MLC 2.5″ SC EM SSD (653112-B21)
  • HP 100GB 6G SATA ME 2.5″ SC EM SSD (691862-B21)
  • HP 120GB 6G SATA VE 2.5″ SC EB SSD (717965-B21)
  • HP 200GB 3G SATA MLC 2.5″ SC EM SSD (653118-B21)
  • HP 200GB 6G SATA ME 2.5″ SC EM SSD (691864-B21)
  • HP 200GB 6G SAS MLC 2.5″ SC EM SSD (658478-B21)
  • HP 200GB 6G SAS SLC 2.5″ SC EP SSD (653078-B21)
  • HP 200GB SAS ME 2.5″ SC EM SSD (690825-B21)
  • HP 240GB 6G SATA VE 2.5″ SC EV SSD (717969-B21)
  • HP 400GB 3G SATA MLC 2.5″ SC EM SSD (653120-B21)
  • HP 400GB 6G SATA ME 2.5″ SC EM SSD (691866-B21)
  • HP 400GB 6G SAS SLC 2.5″ SC EP SSD (653082-B21)
  • HP 400GB 6G SAS MLC 2.5″ SC EM SSD (653105-B21)
  • HP 400GB SAS ME 2.5″ SC EM SSD (690827-B21)
  • HP 480GB 6G SATA VE 2.5″ SC EV SSD (717971-B21)
  • HP 800GB 6G SATA ME 2.5″ SC EM SSD (691868-B21)
  • HP 800GB 6G SATA VE 2.5″ SC EV SSD (717973-B21)
  • HP 800GB 6G SAS MLC 2.5″ SC EM SSD (653109-B21)
  • HP 800GB SAS ME 2.5″ SC EM SSD (690829-B21)

Performance Comparison


PCIe IO Accelerator Performance

The IO accelerator cards deliver extremely fast block storage performance and are suited for the following applications, among others:

  • Database and Database acceleration
  • Web servers
  • Video, rendering, animation


Intel Smart Response Technology

Intel SRT combines the capacity of an HDD with the speed of an SSD: a software layer caches data on the SSD for faster access.

Controller & Configuration

To use SRT, the AHCI SATA storage controller must be configured in RAID mode. The disks are provisioned as individual volumes. SRT does not work with hard disks attached through additional PCI cards.

Operating Modes

Enhanced Mode is similar to write-through mode: data is written to both the SSD and the HDD, and reads are served from the SSD.

Maximized Performance Mode corresponds to write-back mode: data is written to the SSD but not to the HDD, and is only evicted from the SSD when absolutely necessary. Because not all data is present on the HDD anymore, the risk of data loss is somewhat higher. If one of the components involved (SSD, HDD, SRT software) fails, recovering the data becomes difficult.
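To make the difference between the two modes concrete, here is a minimal Python sketch of write-through vs. write-back behaviour; the class and the dict-backed "devices" are purely illustrative and do not reflect how SRT is actually implemented.

# Illustrative only: models Enhanced (write-through) vs. Maximized (write-back)
# caching, with plain dicts standing in for the SSD cache and the HDD.
class SsdHddCache:
    def __init__(self, write_back=False):
        self.ssd = {}          # fast cache
        self.hdd = {}          # backing store
        self.dirty = set()     # blocks not yet flushed to HDD (write-back only)
        self.write_back = write_back

    def write(self, block, data):
        self.ssd[block] = data
        if self.write_back:
            # Maximized Performance Mode: the HDD is updated later, so a
            # failure before flush() loses this block.
            self.dirty.add(block)
        else:
            # Enhanced Mode: every write also goes to the HDD immediately.
            self.hdd[block] = data

    def read(self, block):
        # Reads are served from the SSD when possible, otherwise from the HDD.
        return self.ssd.get(block, self.hdd.get(block))

    def flush(self):
        for block in self.dirty:
            self.hdd[block] = self.ssd[block]
        self.dirty.clear()

In Maximized Performance Mode, whatever is still sitting in the dirty set at the moment of a failure is exactly the data that is at risk.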


Solid State Drives – Trade-offs

The more write operations, or the smaller the capacity, the more P/E (program/erase) cycles are needed to clean up the SSD and free space for new data. Using an SSD for caching therefore shortens its lifetime.

SSD capacity should be sized according to the workload, the endurance requirements, and the system configuration. There are no hard and fast rules for the SSD cache size, but a good first approach is 4x the amount of main memory in the system.
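As a rough illustration of both points (endurance shrinking with heavier writes or smaller capacity, and the 4x-RAM starting point for the cache size), here is a back-of-the-envelope Python sketch; the P/E rating, write amplification and example figures are generic assumptions, not HP specifications.

# Back-of-the-envelope SSD sizing, with assumed (not vendor-specified) numbers.
def ssd_lifetime_years(capacity_gb, rated_pe_cycles, writes_gb_per_day,
                       write_amplification=2.0):
    """Estimate drive lifetime: total writable data divided by daily writes."""
    total_writable_gb = capacity_gb * rated_pe_cycles
    effective_daily_writes = writes_gb_per_day * write_amplification
    return total_writable_gb / effective_daily_writes / 365

def suggested_cache_size_gb(system_ram_gb):
    """First-cut SSD cache size: 4x the system's RAM."""
    return 4 * system_ram_gb

# Example: 200GB MLC drive rated for 3,000 P/E cycles, 500GB written per day.
print(round(ssd_lifetime_years(200, 3000, 500), 1), "years")
print(suggested_cache_size_gb(64), "GB cache for a host with 64GB RAM")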

Hybrid hard drives (HHDs), also called solid state hard drives (SSHDs), are hard disks with an integrated NAND array and firmware that require no changes to the OS software. These drives are not as efficient as software that is aware of users and the file system.




Microsoft iSCSI SW Target 3.3 vs. Starwind iSCSI Target free Part 1/2

On Server 2008 R2 as well as Server 2012 you can install an iSCSI Target service to host LUNs as VHD files on your local NTFS file systems. This makes it possible to cover some special requirements of a modern infrastructure, such as providing cheap backup space to your environment with simultaneous access from multiple hosts. A Veeam Backup VM is a good example of where such storage can be used.

But there are also limitations to Microsoft's iSCSI Target: Jake Rutski [1] already tested random-access performance using IOmeter. He found that the MS iSCSI Target delivers roughly 10x lower performance than StarWind's free iSCSI target. Could that be because of missing cache functionality?

A colleague found this on Technet:

Based on my research, the Microsoft iSCSI Target does not utilize the file system cache.  All VHDs are opened unbuffered and write through to guard against loss of data in case of a power loss. As a result, customers will not benefit from the caching even if you add more memory. More memory will only allow you to support more concurrent initiators since each iSCSI session consumes system memory.

Source: http://social.technet.microsoft.com/Forums/zh/winserverfiles/thread/7158b43a-f6f6-40db-9529-6d6f3b7306cc

As well as:

Be careful if the iscsi client is hyper-v VM host. Windows server 2012 iSCSI target uses unbuffered I/O, means no read cache, no writeback cache.  If you use storage pool or regular HBA card/onboard SATA, the performance will be really bad when there are many VMs.  The solution is to either use a hardware RAID card with decent write-back cache, or use a iscsi target software implements software writeback cache, such as StarWind or 3rd party iscsi appliance solution like nexenta. I tried both and they worked great.

Note: I don’t work for either Microsoft or StarWind.

Source: http://social.technet.microsoft.com/Forums/en-US/winserver8gen/thread/005c63dc-26e8-4aa4-8aa9-8707541a4a45/

What I want to do now is compare the sequential read and write speeds of both software targets and publish the results here on my blog in the next few days.
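For a first sequential comparison, something as simple as the following Python sketch already gives rough MB/s figures from a volume backed by the iSCSI target; the drive letter, file size and block size are placeholders, and a dedicated tool such as IOmeter remains the more rigorous option.

# Crude sequential throughput test: write, then read one large file and time it.
import os, time

PATH = r"E:\seqtest.bin"        # assumed mount point of the iSCSI LUN
SIZE_MB = 1024
BLOCK = 1024 * 1024             # 1 MiB sequential blocks

def seq_write():
    buf = os.urandom(BLOCK)
    start = time.time()
    with open(PATH, "wb", buffering=0) as f:
        for _ in range(SIZE_MB):
            f.write(buf)
        os.fsync(f.fileno())    # make sure the data actually hits the target
    return SIZE_MB / (time.time() - start)

def seq_read():
    start = time.time()
    with open(PATH, "rb", buffering=0) as f:
        while f.read(BLOCK):
            pass
    return SIZE_MB / (time.time() - start)

print("write: %.1f MB/s, read: %.1f MB/s" % (seq_write(), seq_read()))
os.remove(PATH)

Keep in mind that without clearing the Windows file cache between the write and the read pass, the read figure will come out optimistic.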

Sources:

[1] StarWind iSCSI vs. Microsoft iSCSI @ Jake’s Blog

[2] Starwind iSCSI SAN free Edition

NetApp Webcast Notes about Flash Storage

NetApp explained how important flash storage has become and said that classical disks will not die out in the next few years. They expect classical 3.5″ 15k SAS disks to disappear from the market and be replaced by 2.5″ 10k disks, because those are cheaper and faster. Comparing the IOPS, they are not faster, but they may be cheaper:

145 IOPS @ 2.5″ 10k
177 IOPS @ 3.5″ 15k

If you'd like to calculate this yourself, as I did, you can use this calculator from wmarow: http://www.wmarow.com/strcalc/
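If you want to reproduce these numbers without the online calculator, the classic approximation is IOPS ≈ 1 / (average seek time + average rotational latency). A small Python sketch, where the seek times are typical catalogue values I assumed rather than measured figures:

# Rough single-disk IOPS estimate from rotational speed and average seek time.
def disk_iops(rpm, avg_seek_ms):
    rotational_latency_ms = 0.5 * 60000.0 / rpm   # half a revolution on average
    return 1000.0 / (avg_seek_ms + rotational_latency_ms)

# Assumed average seek times: ~4.2 ms for a 2.5" 10k disk, ~3.4 ms for a 3.5" 15k disk.
print(round(disk_iops(10000, 4.2)))   # ~139 IOPS, close to the 145 quoted above
print(round(disk_iops(15000, 3.4)))   # ~185 IOPS, close to the 177 quoted above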

IOPS per GB

Another interesting point was why bigger disks aren't faster than smaller ones. Most people think that high-density disks need fewer read/write head movements and shorter distances to read data from the platter. That's true, but it doesn't translate into more IOPS per GB.

A 3.5″ Seagate Barracuda XT disk (7200 rpm, 2TB) has an average of ~75 IOPS. If you calculate IOPS/GB, you get 0.036 IOPS per GB. Compare that to a disk of the same type that stores only 400GB: 0.18 IOPS per GB. And compared to a faster 2.5″ 10k rpm disk with 400GB (~145 IOPS), it's 0.36 IOPS per GB. Per GB, the smaller 3.5″ disk is 5x faster, and the faster-rotating, smaller disk of the same capacity is 10x faster.
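The per-GB figures are just a division each; here is the arithmetic as a quick Python sketch, with 2TB assumed as the Barracuda XT capacity:

# IOPS per GB for the three disks discussed above.
disks = {
    '3.5" 7.2k, 2048GB': (75, 2048),
    '3.5" 7.2k, 400GB':  (75, 400),
    '2.5" 10k, 400GB':   (145, 400),
}
for name, (iops, gb) in disks.items():
    print('%-18s %.3f IOPS/GB' % (name, iops / float(gb)))
# Prints roughly 0.037, 0.188 and 0.363 -- matching the figures above after rounding.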


Cache Pool vs. Flash Cache

In a classical NetApp storage system, Flash Cache is used for controller caching. Cache sizes are in the range of 12/24/40GB for the FAS 2200/3200 series and up to 3/6/16TB for the FAS 6200 series. Thanks to this huge amount of read cache, the underlying disks can handle far more write IOs directly. That's also why NetApp doesn't talk about write cache when selling a FAS; they only talk about the read cache in the systems.

By the way, NetApp doesn't support storage tiering. Their reasoning is that the Flash Pools supporting the array act like the top tier in a tiering scheme, but work even better than classical tiering.

Flash Pools are disk pools composed of SSDs that support and/or extend the existing Flash Cache. The difference is that you can control which storage pool the cache is used for. This is an advantage as soon as you calculate the required IOPS for a server LUN yourself. Here's an example: the target is 40,000 IOPS and 2TB of disk space is needed.

Example calculation of IOPS with SAS Disks and Flash Cache

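As a hedged sketch of how such a calculation can look in Python: the read share, cache hit rate, per-disk IOPS and drive size below are my own assumptions, not the figures from the webcast, and the RAID write penalty is ignored for simplicity.

# How many 10k SAS disks are needed for 40,000 IOPS / 2TB once Flash Cache
# absorbs most of the reads?  All ratios below are assumptions for illustration.
import math

TARGET_IOPS = 40000
TARGET_TB   = 2.0
READ_SHARE  = 0.7      # assumed read/write mix
CACHE_HIT   = 0.8      # assumed Flash Cache hit rate on reads
DISK_IOPS   = 145      # 2.5" 10k SAS disk (see above)
DISK_TB     = 0.6      # assumed 600GB drives

# Reads served from Flash Cache never reach the spindles.
iops_on_disks      = TARGET_IOPS * (1 - READ_SHARE * CACHE_HIT)
disks_for_iops     = math.ceil(iops_on_disks / DISK_IOPS)
disks_for_capacity = math.ceil(TARGET_TB / DISK_TB)

print("disk IOPS left after caching:", int(iops_on_disks))
print("disks needed:", int(max(disks_for_iops, disks_for_capacity)))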

However, you also have the option of creating standard disk pools from SSD drives. Take care not to confuse these with “Flash Pools” (see above). SSD disk pools can be used to directly assign LUNs to servers with high load, or to systems where you cannot accept long cache-rewarming times, e.g. after a host/OS failure. SSD pools give you low capacity but high read/write performance.

Flash Accel

Never heard of that before? Neither had I. It's a technology that uses SSDs installed in the ESX hosts for read caching in front of the FAS arrays. Through a vCenter plugin, the FAS controls the local SSD disks and places requested data on them for caching. NetApp says this can be even faster than serving the data from the array itself, and you get plenty of additional, cheap read cache.

[Figure] Flash Accel is the light green part in the picture.

Sources

I don't know how long this link will stay available; I listened to the recorded webcast [German] via this link:
http://app.communicate.netapp.com/e/er?s=1184&lid=46668&elq=ed9821a3f0304198a61f279bb431487e

The presentation was also available to download as a PDF here (written in German):
Effizienter-Einsatz-von-Flash-Technologien-im-Data-Center_43_Final

Storage Top 10 Best Practices

Proper configuration of IO subsystems is critical to the optimal performance and operation of SQL Server systems. Below are some of the most common best practices that the SQL Server team recommends with respect to storage configuration for SQL Server.

Source: http://sqlcat.com/top10lists/archive/2007/11/21/storage-top-10-best-practices.aspx

(1) Understand IO characteristics and requirements

In order to be successful in designing and deploying storage for your SQL Server application, you need to have an understanding of your application’s IO characteristics and a basic understanding of SQL Server IO patterns. Performance monitor is the best place to capture this information for an existing application. Some of the questions you should ask yourself here are:

* What is the read vs. write ratio of the application?
* What are the typical IO rates (IO per second, MB/s & size of the IOs)? Monitor the perfmon counters:

# Average read bytes/sec, average write bytes/sec
# Reads/sec, writes/sec
# Disk read bytes/sec, disk write bytes/sec
# Average disk sec/read, average disk sec/write
# Average disk queue length

* How much IO is sequential in nature, and how much IO is random in nature? Is this primarily an OLTP application or a Relational Data Warehouse application?

To understand the core characteristics of SQL Server IO, refer to SQL Server 2000 I/O Basics: http://technet.microsoft.com/de-de/library/cc966500%28en-us%29.aspx
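The counters listed above are enough to answer most of these questions directly; here is a small Python sketch of that post-processing, with made-up sample values standing in for a real perfmon log:

# Derive read/write ratio, average IO size and throughput from perfmon counters.
counters = {
    "Disk Reads/sec":       420.0,
    "Disk Writes/sec":      180.0,
    "Disk Read Bytes/sec":  27525120.0,
    "Disk Write Bytes/sec": 11796480.0,
}

reads, writes = counters["Disk Reads/sec"], counters["Disk Writes/sec"]
read_ratio   = reads / (reads + writes)
avg_read_kb  = counters["Disk Read Bytes/sec"] / reads / 1024
avg_write_kb = counters["Disk Write Bytes/sec"] / writes / 1024
total_mb_s   = (counters["Disk Read Bytes/sec"] + counters["Disk Write Bytes/sec"]) / 1024 / 1024

print("read:write ratio   %.0f%% : %.0f%%" % (read_ratio * 100, (1 - read_ratio) * 100))
print("average IO size    %.0f KB read, %.0f KB write" % (avg_read_kb, avg_write_kb))
print("throughput         %.1f MB/s at %d IOPS" % (total_mb_s, reads + writes))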

(2) More and faster spindles are better for performance

* Ensure that you have an adequate number of spindles to support your IO requirements with an acceptable latency.
* Use filegroups for administration requirements such as backup / restore, partial database availability, etc.
* Use data files to “stripe” the database across your specific IO configuration (physical disks, LUNs, etc.).

(3) Try not to “over” optimize the design of the storage

Simpler designs generally offer good performance and more flexibility.

* Unless you understand the application very well, avoid trying to over-optimize the IO by selectively placing objects on separate spindles.
* Make sure to give thought to the growth strategy up front. As your data size grows, how will you manage growth of data files / LUNs / RAID groups? It is much better to design for this up front than to rebalance data files or LUN(s) later in a production deployment.

(4) Validate configurations prior to deployment

* Do basic throughput testing of the IO subsystem prior to deploying SQL Server. Make sure these tests are able to achieve your IO requirements with an acceptable latency. SQLIO is one such tool that can be used for this; a document covering the basics of testing an IO subsystem is included with the tool. Download the SQLIO Disk Subsystem Benchmark Tool.
* Understand that the purpose of running the SQLIO tests is not to simulate SQL Server’s exact IO characteristics but rather to test the maximum throughput achievable by the IO subsystem for common SQL Server IO types.
* IOMETER can be used as an alternative to SQLIO.

(5) Always place log files on RAID 1+0 (or RAID 1) disks

This provides:

* Better protection from hardware failure, and
* Better write performance.

Note: In general, RAID 1+0 will provide better throughput for write-intensive applications. The amount of performance gained will vary based on the HW vendor’s RAID implementation. The most common alternative to RAID 1+0 is RAID 5. Generally, RAID 1+0 provides better write performance than any other RAID level providing data protection, including RAID 5.
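The gap comes from the write penalty: RAID 1/1+0 turns one logical write into two disk IOs, while RAID 5 needs four (read data, read parity, write data, write parity). A quick Python sketch with assumed spindle counts and per-disk IOPS:

# Effective random-write IOPS for the same spindles under different RAID levels.
# Disk count and per-disk IOPS are generic assumptions for illustration.
def effective_write_iops(disks, iops_per_disk, write_penalty):
    return disks * iops_per_disk / write_penalty

RAID_WRITE_PENALTY = {"RAID 1+0": 2, "RAID 5": 4}

for level, penalty in RAID_WRITE_PENALTY.items():
    print(level, int(effective_write_iops(8, 180, penalty)), "write IOPS")
# With 8 x 180-IOPS disks: RAID 1+0 ~720, RAID 5 ~360 -- half the write throughput.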

(6) Isolate log from data at the physical disk level

* When this is not possible (e.g., consolidated SQL environments) consider I/O characteristics and group similar I/O characteristics (i.e. all logs) on common spindles.
* Combining heterogeneous workloads (workloads with very different IO and latency characteristics) can have negative effects on overall performance (e.g., placing Exchange and SQL data on the same physical spindles).

(7) Consider configuration of TEMPDB database

* Make sure to move TEMPDB to adequate storage and pre-size after installing SQL Server.
* Performance may benefit if TEMPDB is placed on RAID 1+0 (dependent on TEMPDB usage).
* For the TEMPDB database, create 1 data file per CPU, as described in #8 below.

(8) Lining up the number of data files with CPU’s has scalability advantages

…for allocation intensive workloads.

* It is recommended to have 0.25 to 1 data files (per filegroup) for each CPU on the host server.
* This is especially true for TEMPDB where the recommendation is 1 data file per CPU.
* Dual core counts as 2 CPUs; logical procs (hyperthreading) do not.
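A trivial Python helper that turns the rule of thumb into concrete numbers, deliberately counting only physical cores:

# Rule-of-thumb file counts: 0.25-1 data file per physical core for user
# databases, 1 per physical core for TEMPDB; hyperthreaded logical procs ignored.
def data_file_range(physical_cores):
    return max(1, round(physical_cores * 0.25)), physical_cores

def tempdb_files(physical_cores):
    return physical_cores

cores = 2 * 4      # e.g. two quad-core sockets = 8 physical cores
print("user db data files per filegroup:", data_file_range(cores))
print("TEMPDB data files:", tempdb_files(cores))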

(9) Don’t overlook some of SQL Server basics 

* Data files should be of equal size – SQL Server uses a proportional fill algorithm that favors allocations in files with more free space.
* Pre-size data and log files.
* Do not rely on AUTOGROW, instead manage the growth of these files manually. You may leave AUTOGROW ON for safety reasons, but you should proactively manage the growth of the data files.

(10) Don’t overlook storage configuration basics

* Use up-to-date HBA drivers recommended by the storage vendor.
* Utilize storage-vendor-specific drivers from the HBA manufacturer’s website.
* Tune HBA driver settings as needed for your IO volumes. In general, driver-specific settings should come from the storage vendor; however, we have found that the Queue Depth defaults are usually not deep enough to support SQL Server IO volumes.
* Ensure that the storage array firmware is up to the latest recommended level.
* Use multipath software to achieve balancing across HBAs and LUNs and ensure it is functioning properly; it simplifies configuration and offers advantages for availability.
* Microsoft Multipath I/O (MPIO): vendors build Device Specific Modules (DSM) on top of the Driver Development Kit provided by Microsoft.