NetApp Webcast Notes about Flash Storage

NetApp explained how important flash storage has become, and that classical disks will not die out within the next few years. They say classical 3.5″ 15k SAS disks will disappear from the market and be replaced by 2.5″ 10k disks, because those are cheaper and faster. If I compare the IOPS, they're not faster, but they may be cheaper:

145 IOPS @ 2.5″ 10k
177 IOPS @ 3.5″ 15k

If you'd like to calculate this yourself, as I did, you can use this calculator from wmarow:
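If you'd rather do the math than use the calculator, the usual rule of thumb is IOPS ≈ 1 / (average seek time + average rotational latency), where rotational latency averages half a revolution. A minimal sketch; the seek times below are typical vendor figures I'm assuming, not numbers from the webcast:

```python
def disk_iops(rpm, avg_seek_ms):
    """Rough single-disk IOPS: 1 / (avg seek + avg rotational latency)."""
    # Average rotational latency is half a revolution, converted to ms.
    rotational_latency_ms = 60_000 / rpm / 2
    return 1000 / (avg_seek_ms + rotational_latency_ms)

# Assumed typical average seek times for each disk class:
print(round(disk_iops(10_000, 3.9)))   # 2.5" 10k SAS  -> ~145
print(round(disk_iops(15_000, 3.65)))  # 3.5" 15k SAS  -> ~177
```

With plausible seek times this lands right on the 145 and 177 IOPS figures above.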


Another interesting point was why bigger disks aren't faster than smaller ones. Most people assume that high-density disks need fewer read/write head movements and shorter distances to read data from the platter. That's true, but there's a catch.

A 3.5″ Seagate Barracuda XT disk with 7200 rpm and 2TB of capacity delivers an average of ~75 IOPS. If you calculate IOPS/GB, you get 0.036 IOPS per GB. Compare that to a disk with the same performance that stores only 400GB: you get 0.18 IOPS per GB. And a faster-rotating 2.5″ 10k rpm disk with 400GB (~145 IOPS) gives you 0.36 IOPS per GB. So per GB, the smaller 3.5″ disk is 5x faster, and the same-sized but faster-rotating 2.5″ disk is 10x faster.
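The IOPS-per-GB argument is easy to double-check. A quick sketch, using the article's IOPS figures and assuming 2048GB for the 2TB Barracuda XT:

```python
def iops_per_gb(iops, capacity_gb):
    """IOPS density: how much random-IO performance each GB brings along."""
    return iops / capacity_gb

big_slow   = iops_per_gb(75, 2048)   # 3.5" 7.2k Barracuda XT, 2TB  -> ~0.036
small_slow = iops_per_gb(75, 400)    # same performance, only 400GB -> ~0.18
small_fast = iops_per_gb(145, 400)   # 2.5" 10k rpm, 400GB          -> ~0.36

print(f"ratios vs. the big disk: {small_slow / big_slow:.1f}x "
      f"and {small_fast / big_slow:.1f}x")
```

The ratios come out at roughly 5x and 10x, matching the claim above.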


Flash Pool vs. Flash Cache

In a classical NetApp storage system, Flash Cache is used for controller-level caching. Cache sizes range from 12/24/40GB for the FAS 2200/3200 series up to 3/6/16TB for the FAS 6200 series. Because this huge read cache absorbs most read requests, the underlying disks are free to handle far more write IOs directly. That's why NetApp doesn't talk about write cache when selling a FAS; they only mention the read cache in the systems.

By the way, NetApp doesn't support storage tiering. Their reasoning: the Flash Pools backing the array act like the top tier in a tiering setup, but work even better than classical tiering.

Flash Pools are disk pools composed of SSDs that support and/or extend the existing Flash Cache. The difference is that you can control which storage pool the cache is used for. This is an advantage as soon as you calculate the required IOPS for a server LUN yourself. Here's an example: the target is 40'000 IOPS and 2TB of disk space is needed.

Example calculation of IOPS with SAS Disks and Flash Cache
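The slide with the calculation isn't reproduced here, but the idea behind it can be sketched: if the flash layer serves a given fraction of the IOs, only the remainder has to come from spinning disks, which drastically cuts the spindle count. A toy version; the cache hit ratio and the per-disk IOPS value are my assumptions, not NetApp's figures:

```python
import math

def sas_disks_needed(target_iops, cache_hit_ratio, iops_per_sas_disk=145):
    """Disks needed once the flash layer absorbs part of the IO load."""
    # Only the IOs that miss the cache have to be served by spindles.
    iops_left_for_disks = target_iops * (1 - cache_hit_ratio)
    return math.ceil(iops_left_for_disks / iops_per_sas_disk)

print(sas_disks_needed(40_000, 0.0))   # no flash at all: 276 SAS disks
print(sas_disks_needed(40_000, 0.8))   # 80% served from flash: 56 SAS disks
```

That is the whole point of dedicating a Flash Pool to a hot LUN: the same IOPS target becomes reachable with a fraction of the disks.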


However, you also have the possibility to create standard disk pools from SSD drives. Pay attention not to confuse these with "Flash Pools" (see above). SSD disk pools can be used to directly assign LUNs to servers with high load, or to systems where you cannot accept a long cache-rewarming time, e.g. after a host/OS failure. SSD pools give you low capacity but high read/write performance.

Flash Accel

Never heard of that before? Me neither. It's a technology that uses SSDs installed in ESX hosts as a read cache in front of the FAS arrays. Using a vCenter plugin, the FAS controls the local SSD disks and places requested data on them for caching. NetApp says this can be even faster than reading from the array itself, and you get lots of additional, cheap read cache.
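Conceptually this is a host-side read-through cache: reads are served from the local SSD when possible and fetched from the array (and cached) on a miss. A toy sketch of that pattern, not NetApp's implementation:

```python
from collections import OrderedDict

class ReadThroughCache:
    """Toy host-side read cache: serve hits locally, fetch misses from the array."""
    def __init__(self, backend_read, capacity):
        self.backend_read = backend_read  # simulates a (slow) read from the array
        self.capacity = capacity          # stands in for the local SSD size
        self.cache = OrderedDict()        # block -> data, kept in LRU order
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)      # mark as recently used
            return self.cache[block]
        self.misses += 1
        data = self.backend_read(block)        # slow path: go to the array
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict the least recently used block
        return data

cache = ReadThroughCache(backend_read=lambda b: f"data-{b}", capacity=2)
for block in (1, 2, 1, 3, 2):
    cache.read(block)
print(cache.hits, cache.misses)  # -> 1 4
```

The interesting part is who owns the cache: here the host evicts on its own, whereas with Flash Accel the FAS stays in control of what lands on the local SSDs.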


– Flash Accel is the light green part in the picture.


I don't know how long this link will stay available; I listened to the recorded webcast [German] using this link:

The presentation was also available for download as a PDF here (written in German):

Enable Deduplication on NTFS Volumes

Maybe you've already heard that Server 2012 has NTFS deduplication built in; you just install the file server role and select the deduplication feature. It's not activated merely by installing it, but it's not hard to turn on either.


Before starting, the following conditions must be met:

  • It must be an NTFS volume; ReFS is not supported
  • There should be some free space; I would recommend at least 10%
  • Only fixed disks are supported; no USB or other removable drives
  • System and boot volumes are not supported

Q: How do I know my volume’s a good candidate for dedup?

A: There's an evaluation tool on board in Server 2012; you can get usage help by just typing ddpeval on a command line. In my case, I evaluated just a subfolder that uses 1TB of disk space:


More about preparing for Dedup and ddpeval @

Enable Deduplication for a Volume

Assume you have a disk D: with a lot of data on it that may be a good candidate for dedup. To start deduplicating, open an elevated PowerShell console and run these commands:

Enable-DedupVolume D:
Set-DedupVolume D: -MinimumFileAgeDays 1

The first command activates deduplication on volume D:; the second tells dedup to process files once they are at least one day old. The default would be 5 days, but I prefer to dedup files directly after one day.

Deduplication data is stored in the "System Volume Information" folder at the root of the volume, in the subfolder "dedup". A lot of the files there are named *chunk* and represent chunks of the original files.
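To get a feeling for how such a chunk store works, here's a toy sketch of hash-based chunk deduplication. I'm using fixed-size chunks for simplicity; the real Windows engine chunks files at variable sizes:

```python
import hashlib

def dedup_chunks(files, chunk_size=4):
    """Toy chunk store: identical chunks are stored only once, keyed by hash."""
    chunk_store = {}   # hash -> chunk data (think: the *chunk* files on disk)
    file_index = {}    # filename -> ordered list of chunk hashes
    for name, data in files.items():
        hashes = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            h = hashlib.sha256(chunk).hexdigest()
            chunk_store.setdefault(h, chunk)   # stored once per unique chunk
            hashes.append(h)
        file_index[name] = hashes
    return chunk_store, file_index

files = {"a.txt": b"AAAABBBBCCCC", "b.txt": b"AAAABBBBDDDD"}
store, index = dedup_chunks(files)
print(len(store))  # 4 unique chunks instead of 6 stored chunks
```

Each file becomes a list of chunk references, so shared chunks (here "AAAA" and "BBBB") take up space only once.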

More about the PowerShell cmdlets @

Open Questions

  • What happens if I move a deduplicated NTFS volume to an older server, i.e. 2008 R2 or whatever?
  • Is there a way to get deduplication for Server 2008 R2 too?



Windows Update error 800B0001

I’ve done some quick research with Google and found the following.

If you receive Windows Update error 800b0001, it means that Windows Update or Microsoft Update cannot determine the cryptographic service provider, or a file Windows Update requires (named catalog store) is corrupted. The System Update Readiness Tool can correct some conditions that cause this error.

In article KB947821, Microsoft explains how to use DISM on Server 2012 and Win8 to scan the image health. For "older" operating systems, there's a tool (the System Update Readiness Tool) that can help repair Windows Update.

So on Server 2012 and Win8, just run the following commands as an elevated admin:

DISM.exe /Online /Cleanup-image /Scanhealth
DISM.exe /Online /Cleanup-image /Restorehealth

Run Windows Update again; the error is hopefully solved.