
Implement Private Cloud infrastructure using S2D, VMM and WAP – Part 1


The goal of this series is to give a full overview of how to plan, deploy, and manage a Storage Spaces Direct infrastructure using Windows Server 2016, System Center Virtual Machine Manager 2016, and Windows Azure Pack (WAP). Some of the content will come from existing documentation, and I will add a POC implementation to give you a real-world situation. But first, let's understand what Storage Spaces Direct (S2D) is.

Storage Spaces Direct is a new feature in Windows Server 2016 and an extension of the existing software-defined storage stack. Storage Spaces Direct uses industry-standard servers with local-attached drives to create highly available, highly scalable software-defined storage at a fraction of the cost of traditional SAN or NAS arrays. Its converged (we will call it "disaggregated" later to avoid confusion) or hyper-converged architecture radically simplifies procurement and deployment, while features like caching, storage tiers, and erasure coding, together with the latest hardware innovations like RDMA networking and NVMe drives, deliver unrivaled efficiency and performance.

Storage Spaces Direct is only available in Windows Server 2016 Datacenter edition.

As mentioned earlier, when it comes to S2D you can choose two distinct deployment options:

Disaggregated (Converged)

Storage and compute in separate clusters. The disaggregated deployment option is composed of a Scale-Out File Server (SoFS) cluster leveraging Storage Spaces Direct to provide network-attached storage over SMB3 file shares, and one or several Hyper-V clusters. This approach allows for scaling compute/workload independently from the storage cluster, essential for larger-scale deployments such as Hyper-V IaaS (Infrastructure as a Service) for service providers and enterprises with on-premises data centers.


Hyper-Converged

One cluster for compute and storage. The hyper-converged deployment option runs Hyper-V virtual machines or SQL Server databases directly on the servers providing the storage, storing their files on the local volumes. This eliminates the need to configure file server access and permissions, and reduces hardware (and licensing) costs for small-to-medium businesses or remote office/branch office deployments.

Storage Spaces Direct is the evolution of Storage Spaces, first introduced in Windows Server 2012. It leverages many of the features you know today in Windows Server, such as Failover Cluster, the Cluster Shared Volume (CSV) file system, Server Message Block (SMB) 3, and of course Storage Spaces. It also introduces new technology, most notably the Software Storage Bus.

Below is an overview of the Storage Spaces Direct stack which applies to both deployment options:

This implies the following technologies and associated requirements:

Networking Hardware – Storage Spaces Direct uses SMB3, including SMB Direct and SMB Multichannel, over Ethernet to communicate between servers. It is strongly recommended to use 10+ GbE with Remote Direct Memory Access (RDMA), either iWARP or RoCE.

Storage Hardware – From 2 to 16 servers with local-attached SATA, SAS, or NVMe drives. Each server must have at least 2 solid-state drives (SSDs) and at least 4 additional drives. The number of additional drives must be a multiple of the number of SSDs. The SATA and SAS devices should be behind a host-bus adapter (HBA) and SAS expander.

Failover Clustering – The built-in clustering feature of Windows Server is used to connect the servers.

Software Storage Bus – The Software Storage Bus is new in Storage Spaces Direct. It spans the cluster and establishes a software-defined storage fabric whereby all the servers can see all of each other’s local drives. You can think of it as replacing costly and restrictive Fibre Channel or Shared SAS cabling.

Storage Bus Layer Cache – The Software Storage Bus dynamically binds the fastest drives present (e.g. SSD) to slower drives (e.g. HDD) to provide server-side read/write caching that accelerates IO and boosts throughput.

Storage Pool – The collection of drives that form the basis of Storage Spaces is called the storage pool. It is automatically created, and all eligible drives are automatically discovered and added to it. The best practice is to use one storage pool per cluster, with the default settings.

Storage Spaces – Storage Spaces provides fault tolerance to virtual disks using mirroring, erasure coding, or both. You can think of it as distributed, software-defined RAID using the drives in the pool. In Storage Spaces Direct, these virtual disks typically have resiliency to two simultaneous drive or server failures (e.g. using 3-way mirroring, with each data copy on a different server), though chassis and rack fault tolerance are also available through fault domain awareness.

Resilient File System (ReFS) – ReFS is the premier file system purpose-built for virtualization. It includes dramatic accelerations for .vhdx file operations such as creation, expansion, and checkpoint merging, and built-in checksums to detect and correct bit errors. It also introduces real-time tiering, which rotates data between the so-called "hot" and "cold" storage tiers in real time based on usage.

Cluster Shared Volumes – The CSV file system unifies all the ReFS volumes into a single namespace accessible through any server, so that to each server, every volume looks and acts like it’s mounted locally. The best practice is to configure one CSV per host present in the cluster.

Scale-Out File Server – This final layer is necessary for disaggregated (converged) deployments only. It provides remote file access using the SMB3 access protocol to clients, such as another cluster running Hyper-V, over the network, effectively turning Storage Spaces Direct into network-attached storage (NAS).

As you can see, there are a lot of hardware and software components involved in this kind of implementation. This is the reason why Microsoft unveiled the Windows Server Software-Defined (WSSD) program. You can find more information in this article. Moreover, in the next part, we will see how to implement this kind of infrastructure in the real world.

Windows Server Software-Defined Datacenter (WSSD) Program


Microsoft publicly unveiled its new software-defined datacenter certification program for storage and hyper-converged systems running Windows Server 2016. The Windows Server Software-Defined Datacenter (WSSD) program is a Product Group and OEM program to develop and apply prescriptive guidance, including configuration and validation of certified hardware SKUs (appliances). The goal is to minimize implementation risk by moving the complex hardware/software integration and solution validation steps to a pre-purchase phase.

The WSSD Reference Architecture is an OEM-specific document that informs OEMs of the design principles, best practices, and other elements required to bring software-defined datacenter solutions to market in a holistic way. It is meant to assist OEMs with creating, configuring, and validating their WSSD Offerings (solution SKUs, appliances).

As part of the WSSD program, OEMs are responsible for:

  • Designing their specific solution SKUs to align with a specific WSSD design pattern (OEMs can, and do, design multiple SKUs to align with several of the WSSD design patterns and to provide alternate scale/size options)
  • Ensuring their selected hardware meets Windows Server 2016 logo requirements, and the WSSD AQs (Additional Qualifiers define the hardware component level capabilities and features sets such as supported NIC offloads)
  • Deploying each solution SKU using the standard provisioning automation
  • Validating each solution SKU using the standard validation tooling
  • Stress-testing each solution SKU using the standard stress-testing tooling

The activities in the list above are OEM led, performed in partnership with Microsoft, and enabled through Microsoft’s WSSD reference architecture and tooling.

Partners offer three kinds of WSSD solutions:

  • Hyper-Converged Infrastructure (HCI) Standard – Highly virtualized compute and storage are combined in the same server-node cluster, making them easier to deploy, manage, and scale.
  • Hyper-Converged Infrastructure (HCI) Premium – A comprehensive "software-defined datacenter in a box" that adds software-defined networking and Security Assurance features to HCI Standard. This makes it easy to scale compute, storage, and networking up and down to meet demand (just like public cloud services).
  • Software-Defined Storage (SDS) – Built on server-node clusters, this enterprise-grade, shared-storage solution replaces traditional external storage devices at a much lower cost, while support for all-flash NVMe drives delivers unrivaled performance. You can quickly add storage capacity as your needs grow over time.

The first WSSD-certified offerings announced by Microsoft come from DataON, Fujitsu, Lenovo, QCT, and Supermicro. While some obvious large providers of software-defined and engineered hyper-converged systems, like Cisco Systems and Dell EMC, weren't on the initial list, Microsoft said it expects to add more partners over time.

Fix Storage Spaces Direct Storage Pool


During the implementation of a WSSD POC for one of my customers, I faced an issue with the S2D storage pool, and more precisely with its disks. After the creation of the S2D hyper-converged cluster, and therefore of the storage pool, I noticed that the listed capacity was not aligned with expectations and the actual hardware configuration.

As you can see below, I quickly identified the issue. If you want more details about your S2D storage pool, you can simply run these PowerShell commands on one of the nodes of the cluster.
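A minimal sketch of the kind of queries involved (the pool name filter is an assumption; adapt it to your environment):

```powershell
# Overall pool capacity and health
Get-StoragePool -FriendlyName "S2D*" |
    Select-Object FriendlyName, HealthStatus, Size, AllocatedSize

# Per-disk view: look for missing, unhealthy, or wrongly sized disks
Get-StoragePool -FriendlyName "S2D*" | Get-PhysicalDisk |
    Sort-Object FriendlyName |
    Select-Object FriendlyName, SerialNumber, MediaType, Usage,
                  HealthStatus, OperationalStatus, Size
```

Comparing the disk count and sizes reported here against the physical inventory makes capacity mismatches easy to spot.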

In this case, I had an issue with 5 physical disks, more precisely 4 cache disks and 1 capacity disk on the same physical host. As a hardware failure was very unlikely in this case, I decided to confirm by having a look through the DRAC. And as expected, the hardware was not at fault.

If you are facing this kind of behavior, or even a real hardware failure, the first thing to do is to retire the disks from the storage pool using these PowerShell commands.
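The retirement step could look like this (the pool name and the disk selection by serial number are assumptions for illustration):

```powershell
# Select the affected disks by serial number (hypothetical values)
$badDisks = Get-PhysicalDisk | Where-Object SerialNumber -In @("SN-0001", "SN-0002")

# Mark them as retired so no new data is placed on them
$badDisks | Set-PhysicalDisk -Usage Retired

# Remove them from the pool
$badDisks | ForEach-Object {
    Remove-PhysicalDisk -PhysicalDisks $_ -StoragePoolFriendlyName "S2D*" -Confirm:$false
}

# Monitor the repair/rebuild jobs triggered by the retirement
Get-StorageJob
```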

After replacement of the physical disks (in this case, it was not needed, as the hardware was healthy), you can check that the disks are detected with the CanPool parameter set to True, and add them to the existing S2D storage pool using these PowerShell commands.
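Something along these lines (again, the pool name is an assumption):

```powershell
# List the replacement disks that are eligible for pooling
Get-PhysicalDisk -CanPool $true

# Add every poolable disk back into the S2D storage pool
Add-PhysicalDisk -StoragePoolFriendlyName "S2D*" `
                 -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
```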


You can use the following PowerShell command to turn on the LED of the faulty disks on a server (to help identify them).
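For example (assuming the disks you want to locate are the unhealthy ones, and that your enclosure supports disk identification):

```powershell
# Blink the identification LED of every disk that is not healthy
Get-PhysicalDisk | Where-Object HealthStatus -ne "Healthy" |
    Enable-PhysicalDiskIdentification

# Turn the LEDs off again once the disks have been located
Get-PhysicalDisk | Disable-PhysicalDiskIdentification
```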

Configure Hyper-V NAT Virtual Switch and NAT Forwarding


Windows Server 2016 and Windows 10 add native support for a NAT-forwarding Hyper-V switch. This is really handy for software-defined networking (SDN) or even lab environments. By default, there is no inbound access from the LAN to the virtual machines that are connected to a NAT-enabled (Internal) virtual switch, and you might want to access isolated virtual machines in your lab through RDP from your laptop. The old way was to create a specific virtual machine in the lab to act as a gateway. You can find more information in the Microsoft documentation.

To create a new NAT switch with its own subnet on your Hyper-V host, use these PowerShell commands:
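A minimal sketch, assuming the switch name NATSwitch and the subnet 172.16.0.0/24 (both are arbitrary choices):

```powershell
# Create an internal virtual switch on the Hyper-V host
New-VMSwitch -SwitchName "NATSwitch" -SwitchType Internal

# Assign the NAT gateway IP to the host-side vNIC of that switch
New-NetIPAddress -IPAddress 172.16.0.1 -PrefixLength 24 `
                 -InterfaceAlias "vEthernet (NATSwitch)"

# Create the NAT network for the subnet
New-NetNat -Name "NATNetwork" -InternalIPInterfaceAddressPrefix 172.16.0.0/24
```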

Of course, you will need to map your virtual machine's network adapter to the right virtual switch, assign your virtual machine an IP in this subnet, and set the NAT gateway address as its default gateway. Then, if you want to access this virtual machine through RDP for example, run this PowerShell command:
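For instance, to forward host port 50000 to RDP (3389) on a VM with an assumed address of 172.16.0.2:

```powershell
# Forward TCP port 50000 on any host IP to port 3389 (RDP) on the VM
Add-NetNatStaticMapping -NatName "NATNetwork" -Protocol TCP `
    -ExternalIPAddress 0.0.0.0 -ExternalPort 50000 `
    -InternalIPAddress 172.16.0.2 -InternalPort 3389
```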

With this configuration, you will be able to connect to your isolated lab virtual machine through your host's "public" IP and port 50000 using RDP, without any additional VM to configure.

You will need to configure the firewall of the Hyper-V host (and maybe even your router, if pointing to a public address) according to the NAT mapping rule.
Multiple NAT networks are not supported.

Windows Server 2016 Hyper-V Checkpoints


Checkpoints enable you to capture point-in-time snapshots of a VM. This gives you an easy method of quickly restoring to a known working configuration, making them useful before installing or updating an application. When a checkpoint is created, the original VHD becomes read-only, and all changes are captured in an AVHD file. Conversely, when a checkpoint is deleted, the contents of the AVHD are merged with the original disk, which becomes the primary writable file.

Prior to Windows Server 2016, the only checkpoint type available was the standard checkpoint, which takes a snapshot of both the disk and the memory state at the time the checkpoint is taken. Windows Server 2016 introduces production checkpoints, which use the Volume Shadow Copy Service on Windows guests or File System Freeze on Linux guests. This enables you to take a consistent snapshot of a VM without its running memory.

Production checkpoints are used by default on Windows Server 2016, and if taking a production checkpoint fails, by default the host attempts to create a standard checkpoint.

You can configure the type of checkpoint a VM uses by using the Set-VM cmdlet.
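For example (the VM name is an assumption):

```powershell
# Use production checkpoints, falling back to standard if VSS fails
Set-VM -Name "VM01" -CheckpointType Production
```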

To set the VM to only use production checkpoints, without the ability to fall back to a standard checkpoint, replace the Production option with ProductionOnly.

Checkpoints can also be configured from Hyper-V Manager by editing the settings of a VM.

Perform Remote Management of Hyper-V Hosts


Performing remote management of Hyper-V hosts within the same domain simply requires the permissions or delegation discussed in this previous article. However, managing a Hyper-V server that is in a Workgroup is slightly more complicated.

First, the Hyper-V server must have PowerShell remoting enabled. This is easily accomplished by running the Enable-PSRemoting cmdlet.

The network profile on the server must be set to Private; otherwise, you also need to specify the -SkipNetworkProfileCheck parameter.
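On the Hyper-V server, that looks like:

```powershell
# Enable PowerShell remoting; skip the profile check if the network is Public
Enable-PSRemoting -SkipNetworkProfileCheck -Force
```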

The second task on the Hyper-V host is to enable the WSMan credential role as a server. To accomplish this, run the following command:
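That is:

```powershell
# Allow the Hyper-V host to accept delegated (CredSSP) credentials
Enable-WSManCredSSP -Role Server
```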

The more complicated steps occur on the computer from which you plan to manage the Hyper-V host. First, you must trust the Hyper-V server from the remote client. If the Hyper-V host is named LAB01, run the following command:
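A sketch using the LAB01 name from above:

```powershell
# Add the workgroup Hyper-V host to the client's WSMan trusted hosts list
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "LAB01" -Force
```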

Then still on the remote client, you must also enable the WSMan credential role as a client, and specify the server to manage remotely through this command:
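Again assuming the host is named LAB01:

```powershell
# Allow this client to delegate fresh credentials to LAB01
Enable-WSManCredSSP -Role Client -DelegateComputer "LAB01"
```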

Finally, you will also need to configure the local policy (or a Group policy if you plan to have multiple remote management points on your domain) to allow credentials to be passed.


For each of the client settings, TrustedHosts, Delegate Computer, and WSMan, you can use a wildcard mask (*) as a substitute for specifying multiple Hyper-V hosts.

Beginning with Windows 10 and Windows Server 2016, you also have the option to specify different credentials to manage a Hyper-V host from Hyper-V Manager. But the above steps must still be taken if the remote host is in a workgroup.

Enable Nested Virtualization on Hyper-V and Windows Server 2016


As you may know, with the latest version of Hyper-V, which comes with Windows Server 2016 and Windows 10, you can enable nested virtualization, which means you can install the Hyper-V role inside a Hyper-V virtual machine.

But in order to activate this functionality, you need to meet some requirements; otherwise, you will face this kind of error.

  • Dynamic Memory must be disabled on the virtual machine containing the nested instance of Hyper-V
  • VM must have more than 1 vCPU
  • MAC address Spoofing must be enabled on the NIC attached to the virtual machine. This setting can be found in the advanced settings under the NIC in the virtual machine’s properties.
  • Virtual Machine version must be 8.0
  • Virtualization Extensions need to be exposed to the VM as seen below.

By default, the virtualization extensions setting is disabled. To enable it, you have to use this command:
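A sketch covering the main requirements above (the VM name is an assumption):

```powershell
# Expose the processor's virtualization extensions to the guest
Set-VMProcessor -VMName "VM01" -ExposeVirtualizationExtensions $true

# Dynamic Memory must be disabled for nested virtualization
Set-VMMemory -VMName "VM01" -DynamicMemoryEnabled $false

# Enable MAC address spoofing on the VM's network adapter
Get-VMNetworkAdapter -VMName "VM01" | Set-VMNetworkAdapter -MacAddressSpoofing On
```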

You need to power off the virtual machine to apply most of these settings.

Once all these settings have been applied, you can now install Hyper-V role and features on your virtual machine.

Virtual machines that are being used with nested virtualization no longer support these features:

  • Runtime memory resize
  • Dynamic memory
  • Checkpoints
  • Live migration

Error: ‘NanoServerPackage’ cannot be installed because the catalog signature in ‘’ does not match the hash


After a Nano Server has been installed, you can manage the server roles and features by using the PackageManagement provider. To install the provider, run the Install-PackageProvider NanoServerPackage command. But by running this command, you can encounter the error shown below.
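That is:

```powershell
# Install the NanoServerPackage provider from the online gallery
Install-PackageProvider -Name NanoServerPackage
```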

To resolve this issue, Microsoft suggests rebooting the machine, but in case that does not work, you can use these commands as a workaround.
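One workaround that has been reported is to save the module manually into the system-wide module path and then import it explicitly as a package provider (the path is an assumption):

```powershell
# Save the NanoServerPackage module into the system-wide module path
Save-Module -Path "$env:ProgramFiles\WindowsPowerShell\Modules\" -Name NanoServerPackage

# Import it explicitly as a package provider
Import-PackageProvider -Name NanoServerPackage
```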

After you have the package provider installed, you can use the following PowerShell cmdlets to find and add packages to Nano server:

  • Find-NanoServerPackage
  • Save-NanoServerPackage
  • Install-NanoServerPackage

For example, in this case we will configure this Nano Server as a Hyper-V host through the use of the Compute package.
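Assuming the standard package and culture names:

```powershell
# Install the Hyper-V (Compute) role package on the Nano Server
Install-NanoServerPackage -Name Microsoft-NanoServer-Compute-Package -Culture en-US
```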

For more information on Nano Server and all of the installation parameters, visit

Enable Deduplication on Windows 10


Many readers have commented that after upgrading a Windows 10 computer to a new version (most of the time an Insider release), the deduplication features stop working, and deduplicated volumes and data are not accessible anymore. Let me remind you that this is a non-Microsoft-supported deduplication package, built for a specific version of Windows 10 from the corresponding Windows Server 2016 native features. It means that I cannot create a package for a specific Windows 10 build without having the corresponding Windows Server 2016 build.
Use this package at your own risk, and note that I am not responsible for any data loss, business loss, device corruption, or any other type of loss due to the use of this package.

Any build (based on 16237.1001)

The following package is compatible with any W10 x64 build starting from 10240. This package is different from the others, as we cannot simply use DISM. This version is based on the work of forum members and adapted by me with the 16237.1001 source files. You can find instructions and credits in the readme.txt file, and download the package here.

Build 16237.1001

Based on a Windows Server 2016 Insider build, I created the deduplication package for W10 build 16237.1001, which you can find here. Note that DISM requires the exact same build to deploy the package and its features. And as Microsoft is providing a lot of minor builds, I am working on the creation of a package which is "minor-build agnostic".

Build 14393.0

Hello, I was very busy these last few months and had no time to work on this blog... Anyway, I made the new dedup package for build 14393.0, which I tested on my W10 14393.187 installation, and it is fully functional. You can find this package directly from here (md5: 48cdbfddcc4a2266950ad93a6cfe2b9f). As always, to install the deduplication feature on your Windows 10 computer, you just need to launch the install.cmd file as administrator. Enjoy.

Build 14300.1000

You will find the deduplication package for build 14300.1000 here (md5: 6a7ba5b2d6353cc42ff2c001894f64b4). As usual, to install the deduplication feature on your Windows 10 computer, you just need to launch the install.cmd file as administrator. For information, this package only works on the x64 platform (don't forget to open the x64 version of PowerShell to access the deduplication cmdlets).

Note that I can only build this package if I have the linked Windows Server 2016 build, so if you need a specific package for a build of Windows 10, contact me with the link or the .iso of the appropriate Windows Server 2016 build.

Build 14291.1001

You will find the deduplication package for build 14291.1001 here (md5: b150cd2fe60e314e24cedeafeb6f1f42). To install the deduplication feature on your Windows 10 computer, you just need to launch the install.cmd file as administrator.

Build 10586

You will find the new package, based on Windows Server 2016 TP4 build 10586, here (md5: 21251c030d3c1a5572bd0f12473c623c). To install the deduplication feature on your Windows 10 computer, you just have to launch the install.cmd file as administrator, and voilà!

You no longer need to be part of the Microsoft Insider Program for this build, so you can skip the text below until the PS module usage section here. If you want more information about the available cmdlets and their usage, you can read my article here.

Build 10514

Until now, if you wanted to make use of deduplication on your Windows client operating system, especially on Windows 8.1, you had to reuse the deduplication module from Windows Server 2012. But as you probably know, Windows 10 still does not provide this functionality natively, and the old module used for Windows 8.1 is not compatible... So perhaps you still have not migrated to Windows 10 because of this?! Well, I'm glad to announce that those dark times are about to end. A friend of mine worked with other people on this project during the summer to bring this functionality to Windows 10. Now let's see how we can do this.

First, download the package here (md5: b7ed10bf8b8fbc312a7b35d2ffd0eef3).

Then you have to join the Microsoft Insider Program.

Once you are part of the Insider Program, you can unzip the downloaded package (copy it to your local disk) and run Install.cmd as administrator.

At this point, you will need to restart your computer. When it's done, open a PowerShell prompt (as administrator) and change your execution policy (if not already done) to Bypass.

Then, you have to import the PS module, enable deduplication on volume and finally start the job.
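A sketch of those steps, assuming the deduplicated volume is D: (adapt the drive letter):

```powershell
# Allow the unsigned module to load
Set-ExecutionPolicy Bypass

# Import the deduplication PowerShell module
Import-Module Deduplication

# Enable deduplication on the target volume
Enable-DedupVolume -Volume "D:"

# Start an optimization job on that volume
Start-DedupJob -Volume "D:" -Type Optimization
```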

You can follow the execution of the job with the Get-DedupJob command, and get the saved space and savings rate with the Get-DedupVolume command.

As you can see below, here is the concrete result of deduplication: I have a folder with all my Hyper-V machines that would normally take 376 GB, but thanks to deduplication, it only takes 81 GB.

Manage Virtual Machines Using Windows PowerShell Direct


Coming with Windows Server 2016, PowerShell Direct is a new feature that gives you a way to run Windows PowerShell commands in a VM from the host. Windows PowerShell Direct runs between the host and the VM, which means it has no networking or firewall requirements, and it works regardless of your remote management configuration.

Windows PowerShell Direct works much like remote Windows PowerShell except that you do not need network connectivity. To connect to the VM from a host, use the Enter-PSSession cmdlet.
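For example (the VM name is an assumption):

```powershell
# Open an interactive session inside the guest, with no network needed
Enter-PSSession -VMName "VM01" -Credential (Get-Credential)
```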

You will be prompted for credentials and then you can manage the VM from this PSSession. The Invoke-Command cmdlet has been updated to perform similar tasks; for example, you can execute a script from the host against the VM.
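A sketch, with hypothetical VM and script names:

```powershell
# Run a local script inside the guest over PowerShell Direct
Invoke-Command -VMName "VM01" -Credential (Get-Credential) `
    -FilePath "C:\Scripts\Configure-Guest.ps1"

# Or run an inline script block
Invoke-Command -VMName "VM01" -Credential (Get-Credential) `
    -ScriptBlock { Get-Service | Where-Object Status -eq "Running" }
```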

To enter a PowerShell Direct session, you must be logged on to the host as a Hyper-V administrator. The VM must be running locally and already booted into the OS.