Windows with Chocolate(y)

Chocolatey is a package manager for Windows, similar to apt-get on Linux, built on PowerShell. Software you’ve installed using Chocolatey can easily be upgraded all at once with a single command like “choco upgrade all”.

This is actually the coolest and easiest way to get control of Flash, Java, Acrobat and the thousand other tools on your computer.

Do I have to reinstall all my Tools?

No. Just install everything again with choco over your current setup. This works great and is a good way to get started with Chocolatey.

So tell me how

To eat the chocolate, start an elevated cmd.exe prompt and run this command (copy-paste):

@powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString(''))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin

If you want to be sure, this command comes from the official Chocolatey installation page:

Important: choco only works in elevated prompts; never try to run it without elevation. You can run choco in cmd.exe as well as powershell.exe; I recommend PowerShell because choco is based on it.


One important tweak is to disable the confirmation prompt. Each time you install something, choco asks for confirmation. This can get painful if you’d like to upgrade, for example, 20 packages at once.

To disable the prompt, run this command:

choco feature enable -n allowGlobalConfirmation

Let’s install

Now you can use choco with these parameters.

choco -?                  # help
choco install <package>   # install
choco uninstall <package> # uninstall
choco list <searchterm>   # search for a package
choco upgrade all         # upgrade all installed packages
choco list -l             # show locally installed choco packages

My favorites

Here’s my package list for all computers.

choco install 7zip AdobeAIR altap-salamander cdburnerxp GoogleChrome dropbox firefox flashplayerplugin jre8 keepass mRemoteNG notepadplusplus Silverlight skype teamviewer vlc

Other packages that might interest you:

cdburnerxp classic-shell Firefox handbrake IrfanView iTunes lastpass thunderbird Gpg4win

And there are a lot more. Just search for them using “choco list <whatever>”.

About paging and swapping

(or Pagefiles vs. Swap / Swapfiles)



Linux will swap existing pages to disk on a per-process basis. In principle, a process might allocate 1 GB of buffer, but only ever use the same one physical page.


The advantage of this is that it is relatively simple to grasp, and memory for a program is always allocated contiguously; the downside is that performance on a machine can become absolutely abysmal when the system ends up in a state where things are constantly swapping. The algorithm also involves repeatedly swapping in and out data that will not be used in the foreseeable future.


With swapping, parts of memory which are not in use are written to disk; this enables one to run several programs whose total memory consumption is greater than the amount of physical memory. When a program makes a request for a part of memory that was written to the disk, that part has to be loaded into memory. To make room for it, another part has to be written to the disk (effectively the two parts swap places – hence the name). This “extension” of physical memory is generally known as “virtual memory”.


Modern systems use both paging and swapping, and pages are what is being swapped in and out of memory.



In contrast, paging files function by moving “pages” of a program from system memory into the paging file. These pages are 4KB in size. The entire program does not get swapped wholesale into the paging file.

While swapping occurs when there is heavy demand on the system memory, paging can occur preemptively. This means that the operating system can page out parts of a program when it is minimized or left idle for some time.

Swapfiles were used in old iterations of Microsoft Windows, prior to Windows 95. From Windows 95 onwards, all Windows versions use only paging files.


Windows 8 supports both paging and swapping. Paging will hold items that haven’t been accessed in a long time, whereas swapping holds items that were recently taken out of memory. The items in the paging file may not be accessed again for a long time, whereas the items in the swap file might be accessed much sooner.


  1. “swapping” means putting a whole program / process into swap space on disk, while during “paging” only pages of memory (4 KB each) are put onto disk
  2. because of its granularity, paging is often more efficient than swapping out the whole process
  3. both mechanisms can lead to performance degradation because content must be read from disk storage that is multiple times slower than memory
  4. Windows 8 has both a pagefile and a swapfile; c:\swapfile.sys is used for the (modern) apps only
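The granularity point in items 1 and 2 can be made concrete with a quick back-of-the-envelope calculation, a sketch using the numbers from the text above (best case for paging, where only a single cold page has to go to disk):

```python
# Illustration only: how much data "swapping" vs. "paging" moves
# for the 1 GB process from the text that only touches one page.
PAGE_SIZE = 4 * 1024        # 4 KB pages, as described above
allocated = 1 * 1024**3     # 1 GB of allocated buffer

pages_total = allocated // PAGE_SIZE   # number of 4 KB pages in 1 GB
swap_out = allocated                   # swapping: the whole process image goes to disk
page_out = PAGE_SIZE                   # paging (best case): one cold page goes to disk

print(pages_total)           # 262144
print(swap_out // page_out)  # 262144 -- paging moves far less data in this scenario
```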

auto-mount VHD Disks

On Server 2008 R2 and newer, VHD files can be mounted using Windows Disk Management MMC (diskmgmt.msc):


You can mount a VHD using the “Action” Menu:


But as soon as the server gets restarted, you need to re-mount the VHD file manually. Microsoft provides no way to do this automatically using onboard tools. So you have the following choices:

  1. either create a batch script (e.g. using diskpart) that mounts the VHD file on startup
  2. or to use the cool and easy VHDattach Tool:

This tool allows you to open existing VHD files and select “auto-mount” (see screenshot):


I mounted a Backup Exec dedup store as a VHD file. In my case, it was necessary to reconfigure the dedup service to Delayed Start in services.msc.
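The script-based option can be sketched with diskpart, which can attach a VHD non-interactively. File names and paths below are examples; the script would be scheduled as a startup task:

```
rem attach-vhd.cmd -- run at startup via Task Scheduler (paths are examples)
echo select vdisk file="D:\DedupStore.vhd" >  %TEMP%\attach-vhd.txt
echo attach vdisk                          >> %TEMP%\attach-vhd.txt
diskpart /s %TEMP%\attach-vhd.txt
```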

Windows Defragmentation Decision Process

Windows Server 2012 Defrag does not just defragment volumes like earlier versions did. There’s a decision process behind it that selects the appropriate method for each volume.

Decision Process

The following commands are based on the new Optimize-Volume PowerShell cmdlet. Most of the parameters correspond to defrag.exe’s parameters. The decision process works like this:

# For HDD, Fixed VHD, Storage Space:
Optimize-Volume -Analyze -Defrag

# Tiered Storage Space
Optimize-Volume -TierOptimize

# SSD with TRIM support
Optimize-Volume -Retrim

# Storage Space (Thinly provisioned), SAN Virtual Disk (Thinly provisioned), Dynamic VHD, Differencing VHD
Optimize-Volume -Analyze -SlabConsolidate -Retrim

# SSD without TRIM support, Removable FAT, Unknown
No operation.

Graphical Defrag Tool

The classic GUI tool for defrag still exists. If you open it, you’ll see there’s a predefined schedule for a weekly defragmentation of your system volume. Depending on the type of storage you’re using, defrag will only run a short trim or other optimization at that time. In virtualized environments, you typically have thin-provisioned storage, either from vSphere or from the storage array. Because of this, defrag will no longer start a classical defragmentation on VMs. Instead, a re-trim / slab consolidation will start and takes only a few seconds or minutes to complete (depending on size).

PowerShell cmdlet

Server 2012 R2 also has a PowerShell cmdlet called “Optimize-Volume” that can be used instead of the classic defrag.exe tool. Both handle the same functions, but the cmdlet has an additional storage-tier optimization function for Storage Spaces.

Information about the cmdlet is here:

Disable ISATAP / 6to4 / teredo Adapters

Did you ever wonder about those tunnel adapters appearing in ipconfig? You need them as soon as you’re working in a dual IP stack, i.e. using IPv4 and IPv6 addresses at the same time. OK, that’s a very simplified version of what they do, but if you care about the details, just read the following articles:

If you don’t need them, use the following commands to disable those adapters and remove them from ipconfig.

  • netsh interface isatap set state disabled
  • netsh interface 6to4 set state disabled
  • netsh interface teredo set state disabled



Debugging Bluescreens using WinDebug

When Windows stops with a “Bluescreen of Death” (BSOD for short), chances are that a single driver is causing the issue, perhaps an update or something new you just installed.

If a BSOD occurs, Windows writes either a minidump file to c:\windows\minidump\ or a full memory dump to c:\windows\memory.dmp (replace c:\windows\ with your %systemroot%). This file can be read using Microsoft’s debugging tool, included in the Windows SDK here:

Debugging Tools

This SDK contains a set of tools, but you only need to select the Debugging Tools during setup. After setup, you’ll find “Debugging Tools x64” in your Start menu, hidden under “Windows Kits”. If you start WinDbg, you may think you’ve started a 16-bit application, but it only looks that way.

Configure Symbol Path

Before opening a crash dump, the symbol sources have to be set. Instead of downloading several gigabytes of symbol data, you can enter an HTTP address pointing to online symbol files.

  • File -> Symbol File Path
  • Enter the following:

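A common value here (assuming c:\symbols as the local cache directory) is Microsoft’s public symbol server:

```
SRV*c:\symbols*http://msdl.microsoft.com/download/symbols
```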

Open a Crash Dump

Now, open the Crash Dump file

  • File -> Open Crash Dump

A new window opens. If you skim over the first 50 lines of text, you’ll see that you have to enter a command to start an analysis. At the bottom of the new window, there’s a “kd>” prompt; enter:

!analyze -v

The first output after the command will be the STOP error; a few pages further down you get an “IMAGE_NAME” and other details, such as the driver name.

Offline Root CA (ORCA)

This guide covers creating an offline root CA, ORCA for short, and consists of the following steps:

  • Preparing the “CAPolicy.inf” file for the standalone certification authority
  • Installing & configuring the standalone root certification authority
  • Configuring the website with the public key certificate
  • Distributing the root certification authority certificate across the domain



The CAPolicy.inf file is placed at C:\Windows\CAPolicy.inf and provides the setup process with default settings (policy).

Signature="$Windows NT$"


Notice="Legal Policy Statement"


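The lines above are fragments of a fuller CAPolicy.inf. A complete sketch for an offline root CA could look like this (the OID, URL, and period values are placeholders and must be adapted):

```
[Version]
Signature="$Windows NT$"

[PolicyStatementExtension]
Policies=LegalPolicy

[LegalPolicy]
OID=1.2.3.4.1455.67.89.5
Notice="Legal Policy Statement"
URL=http://pki.contoso.com/cps.html

[Certsrv_Server]
RenewalKeyLength=4096
RenewalValidityPeriod=Years
RenewalValidityPeriodUnits=20
CRLPeriod=Years
CRLPeriodUnits=1
CRLDeltaPeriodUnits=0
LoadDefaultTemplates=0
```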
Using the setting “CRLDeltaPeriodUnits=0” in the CAPolicy.inf file disables the publication of delta certificate revocation lists. This is the correct setting for an offline root CA.

So the following has to be edited in the text file:

  • OID
  • Notice
  • URL


The object identifier (OID, object ID) shown in the CAPolicy.inf example is the Microsoft OID. Organizations should request their own OIDs. The point of an OID is to assign a globally unique ID; this is also used, for example, to assign IDs to large documents, and a well-known use case is SNMP. An OID should also be requested for internal CAs:

Free registration:


Notice/URL: certificate practice statement

Also called a “Certificate Practice Statement” (CPS); the CPS is a statement about how the CA issues certificates. A more detailed description can be found in RFC 2527, chapter “3.5 Certification practice statement”:

A CPS must contain detailed information on:

  • The procedures used to issue certificates, including the procedures followed to verify the relationship between an entity and its public key. In the case of RPKI, this refers to how the authority to use certain IP address resource is verified.
  • The detailed life cycle of the certificates that are issued, i.e. how to request a certificate, how the certificate is issued, its expiration date, how it is renewed, as well as how and why it may be cancelled.
  • Technical aspects such as a detailed description of the facilities, physical access controls, operational roles, separation of responsibilities, and the auditing controls to be implemented.
  • Technical aspects relating to the generation of cryptographic material and private key management.

Example from Verisign:

Installing the CA

Feature Install & Configure

The following two commands do all of these steps in one:

  • Add-WindowsFeature Adcs-Cert-Authority -IncludeManagementTools
  • Install-AdcsCertificationAuthority -CAType StandaloneRootCA -CACommonName "Contoso Root CA" -KeyLength 4096 -HashAlgorithm SHA256 -CryptoProviderName "RSA#Microsoft Software Key Storage Provider"

Instructions via the GUI

  • In Server Manager, select the role “Active Directory Certificate Services”
  • In the following role services, select only “Certification Authority”
  • After installation, don’t close the wizard; click “Configure …” in the text
  • A welcome screen appears; specify the admin account, ORCA1\Administrator or e.g. ORCA1\CAAdmin if the account has been renamed
  • Select the role service “Certification Authority”
  • Standalone CA, Root CA, New private key
  • Cryptographic options:
    • RSA#Microsoft Software Key Storage Provider
    • Key Length: 4096
    • Hash algorithm: SHA256

-> Backwards compatibility for XP is not needed on the root CA, but it is on the subordinate CA

  • Name of the CA
    • Common name for this CA: “Contoso Root CA”
  • Validity Period
    • 20 Years

Confirm the configuration and let it configure.

CA Setup

  1. Start certsrv.msc
  2. Right-click “Revoked Certificates” -> Properties; “Publish Delta CRLs” must be disabled
  3. Run the following commands to set the default paths for CDP and AIA.
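The exact commands were not preserved here; a sketch of the usual registry settings follows (the HTTP URL is a placeholder for your publishing web server):

```
certutil -setreg CA\CRLPublicationURLs "1:%WINDIR%\system32\CertSrv\CertEnroll\%3%8%9.crl\n2:http://pki.contoso.com/CertEnroll/%3%8%9.crl"
certutil -setreg CA\CACertPublicationURLs "1:%WINDIR%\system32\CertSrv\CertEnroll\%1_%3%4.crt\n2:http://pki.contoso.com/CertEnroll/%1_%3%4.crt"
```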

-> Increase the validity for certificates to be issued to (e.g.) five years

  • certutil -setreg ca\ValidityPeriodUnits 5

Recommendation for validity: a subordinate CA should always be valid half as long as its parent CA.
Rule for validity: a subordinate CA or certificate must never be valid longer than the issuing CA.

Publish Certificate URL

Afterwards, the CRT file from the path C:\Windows\system32\CertSrv\CertEnroll must be published on the web server under the specified URL. This URL is stored hard-coded in every issued certificate and cannot be changed afterwards.

Security Settings

The following NTFS permissions are required on the folder.


Request Filtering

CRLs from subordinate CAs use “+” characters in the file name, which is why “double escaping” must be allowed in request filtering. (KB942076)
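Assuming IIS is the publishing web server, double escaping can be allowed with appcmd (a sketch; this changes the server-wide requestFiltering section):

```
%windir%\system32\inetsrv\appcmd set config /section:requestFiltering /allowDoubleEscaping:true
```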


CA in Domain vertrauen

The trust can be established in two ways: via Group Policy or via the AD certificate store. Microsoft’s guide uses the AD certificate store.

AD certificate store

The certificate is added to the configuration store of AD with the following command.

  • certutil -dspublish -f orca1_ContosoRootCA.crt RootCA

The certificate is then stored in the following location:


Group Policy

Alternatively, the certificate can be distributed to all clients via Group Policy; the result would then look like this:



negative Ping times & losing Performance data

Did you ever see something like this?

21.05 ping time

The problem also causes the loss of performance graph data, like this:

21.05 perf

But what’s causing this phenomenon? I found this article in MS KB895980:


This problem occurs when the computer has the AMD Cool’n’Quiet technology (AMD dual cores) enabled in the BIOS or some Intel multi core processors. Multi core or multiprocessor systems may encounter Time Stamp Counter (TSC) drift when the time between different cores is not synchronized. The operating systems which use TSC as a timekeeping resource may experience the issue. Newer operating systems typically do not use the TSC by default if other timers are available in the system which can be used as a timekeeping source. Other available timers include the PM_Timer and the High Precision Event Timer (HPET).

Weird, isn’t it? I used the boot.ini switch “/usepmtimer” to successfully solve that problem.
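For reference, the switch goes at the end of the OS line in boot.ini; a sketch (the ARC path is an example from a typical single-disk setup):

```
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /usepmtimer
```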

Get (Calculate) Windows Product Key using PowerShell

There are some tools on the web that can read out product keys from a running Windows installation. There’s also a cool guy who wrote a PowerShell script that can do the same. Christian Haberl published his script on his blog here:

Source Code:

function Get-ProductKey {
    # base-24 character map used by Windows product keys
    $map = "BCDFGHJKMPQRTVWXY2346789"
    # bytes 52..66 of the DigitalProductId registry value hold the encoded key
    $value = (Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion").DigitalProductId[52..66]
    $ProductKey = ""
    for ($i = 24; $i -ge 0; $i--) {
        # divide the 120-bit little-endian number by 24; the remainder picks the next character
        $r = 0
        for ($j = 14; $j -ge 0; $j--) {
            $r = ($r * 256) -bxor $value[$j]
            $value[$j] = [math]::Floor([double]($r / 24))
            $r = $r % 24
        }
        $ProductKey = $map.Substring($r, 1) + $ProductKey
        if (($i % 5) -eq 0 -and $i -ne 0) {
            $ProductKey = "-" + $ProductKey
        }
    }
    echo "Product Key:" $ProductKey
}


Very cool! Thanks Christian for publishing that code.
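For reference, the same base-24 decoding can be sketched in Python, operating directly on the 15 relevant bytes (offsets 52..66 of the DigitalProductId value); the sample input below is synthetic, not a real key:

```python
def decode_product_key(digital_product_id_bytes):
    """Decode the 15 key bytes of DigitalProductId into a 25-character product key."""
    key_map = "BCDFGHJKMPQRTVWXY2346789"    # base-24 alphabet used by Windows keys
    value = list(digital_product_id_bytes)  # mutable copy; little-endian 120-bit number
    key = ""
    for i in range(24, -1, -1):
        # divide the 120-bit number by 24; the remainder selects the next key character
        r = 0
        for j in range(14, -1, -1):
            r = (r * 256) ^ value[j]        # XOR acts as addition here, as in the PS script
            value[j] = r // 24
            r = r % 24
        key = key_map[r] + key
        if i % 5 == 0 and i != 0:
            key = "-" + key
    return key

# synthetic all-zero input: every digit maps to key_map[0] == "B"
print(decode_product_key([0] * 15))  # → BBBBB-BBBBB-BBBBB-BBBBB-BBBBB
```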

Split private key from exported pfx file


During setup of some applications, you need to specify a certificate and the private key separately. To get the key, you can use OpenSSL to split it out:

openssl pkcs12 -in c:\temp\mmcsecuresite.p12 -out bla.key -nodes -nocerts

Now you have the key, and only need to export the certificate without the private key to save it as a certificate file.
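A self-contained round trip to try this out (the throwaway certificate and the pass:secret password are just for the demo; -clcerts -nokeys does the certificate-only export mentioned above):

```shell
# setup only: create a throwaway key + self-signed certificate and bundle them into a p12
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
        -keyout tmp.key -out tmp.crt
openssl pkcs12 -export -inkey tmp.key -in tmp.crt -passout pass:secret -out demo.p12

# split the private key out of the pfx/p12 (the step from the article)
openssl pkcs12 -in demo.p12 -passin pass:secret -nocerts -nodes -out demo-key.pem

# split the certificate, without the private key
openssl pkcs12 -in demo.p12 -passin pass:secret -clcerts -nokeys -out demo-cert.pem
```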