
Implement Private Cloud infrastructure using S2D, VMM and WAP – Part 1


The goal of this series is to give a full overview of how to plan, deploy, and manage a Storage Spaces Direct infrastructure using Windows Server 2016, System Center Virtual Machine Manager 2016, and Windows Azure Pack (WAP). Some of the content will come from existing documentation, and I will add a POC implementation to give you a real-world scenario. But first, let's understand what Storage Spaces Direct (S2D) is.

Storage Spaces Direct is a new feature in Windows Server 2016 and an extension of the existing software-defined storage stack. Storage Spaces Direct uses industry-standard servers with local-attached drives to create highly available, highly scalable software-defined storage at a fraction of the cost of traditional SAN or NAS arrays. Its converged (which we will call "disaggregated" later to avoid confusion) or hyper-converged architecture radically simplifies procurement and deployment, while features like caching, storage tiers, and erasure coding, together with the latest hardware innovations like RDMA networking and NVMe drives, deliver unrivaled efficiency and performance.

Storage Spaces Direct is only available in Windows Server 2016 Datacenter edition.

As mentioned earlier, when it comes to S2D you can choose two distinct deployment options:

Disaggregated (Converged)

Storage and compute in separate clusters. The disaggregated deployment option is composed of a Scale-Out File Server (SoFS) cluster leveraging Storage Spaces Direct to provide network-attached storage over SMB3 file shares, and one or several Hyper-V clusters. This approach allows for scaling compute/workload independently from the storage cluster, which is essential for larger-scale deployments such as Hyper-V IaaS (Infrastructure as a Service) for service providers and enterprises with on-premises datacenters.
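To make the disaggregated option concrete, here is a hedged PowerShell sketch of a Hyper-V host consuming S2D storage over SMB3. The UNC path \\SOFS01\VMStore01 and the VM name are hypothetical, not from a real deployment:

```powershell
# Hypothetical disaggregated layout: the Hyper-V cluster stores its VM
# configuration and virtual disks on an SMB3 share exported by the
# Scale-Out File Server cluster backed by Storage Spaces Direct.
New-VM -Name "VM01" `
       -MemoryStartupBytes 4GB `
       -Generation 2 `
       -Path "\\SOFS01\VMStore01" `
       -NewVHDPath "\\SOFS01\VMStore01\VM01\VM01.vhdx" `
       -NewVHDSizeBytes 60GB
```

Because the compute cluster only sees a file share, you can add Hyper-V nodes without touching the storage cluster, and vice versa.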


Hyper-Converged

One cluster for compute and storage. The hyper-converged deployment option runs Hyper-V virtual machines or SQL Server databases directly on the servers providing the storage, storing their files on the local volumes. This eliminates the need to configure file server access and permissions, and reduces hardware (and licensing) costs for small-to-medium businesses or remote office/branch office deployments.
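By contrast, in the hyper-converged option the VM files simply live on a local Cluster Shared Volume of the same cluster. A hedged sketch, with hypothetical names:

```powershell
# Hypothetical hyper-converged layout: the VM is created on a local
# CSV path of the cluster that also provides the S2D storage,
# so no SMB share or file server permissions are needed.
New-VM -Name "VM01" `
       -MemoryStartupBytes 4GB `
       -Generation 2 `
       -Path "C:\ClusterStorage\Volume1" `
       -NewVHDPath "C:\ClusterStorage\Volume1\VM01\VM01.vhdx" `
       -NewVHDSizeBytes 60GB
```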

Storage Spaces Direct is the evolution of Storage Spaces, first introduced in Windows Server 2012. It leverages many of the features you know today in Windows Server, such as Failover Cluster, the Cluster Shared Volume (CSV) file system, Server Message Block (SMB) 3, and of course Storage Spaces. It also introduces new technology, most notably the Software Storage Bus.

Below is an overview of the Storage Spaces Direct stack which applies to both deployment options:

Which implies the following technologies and associated requirements:

Networking Hardware – Storage Spaces Direct uses SMB3, including SMB Direct and SMB Multichannel, over Ethernet to communicate between servers. It is strongly recommended to use 10+ GbE with Remote Direct Memory Access (RDMA), either iWARP or RoCE.
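You can quickly check whether your NICs expose RDMA and whether the SMB client will actually use it. The adapter name below is a placeholder for your own:

```powershell
# List adapters that support RDMA and whether it is enabled
Get-NetAdapterRdma | Format-Table Name, Enabled, InterfaceDescription

# Confirm SMB Multichannel sees the interfaces as RDMA-capable
Get-SmbClientNetworkInterface | Format-Table InterfaceIndex, RdmaCapable, Speed

# Enable RDMA on a specific adapter if supported but currently disabled
# ("SLOT 2 Port 1" is a hypothetical adapter name)
Enable-NetAdapterRdma -Name "SLOT 2 Port 1"
```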

Storage Hardware – From 2 to 16 servers with local-attached SATA, SAS, or NVMe drives. Each server must have at least 2 solid-state drives (SSDs) and at least 4 additional drives. The number of additional disks must be a multiple of the number of SSDs. In addition, SATA and SAS devices should sit behind a host-bus adapter (HBA) and SAS expander.
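Before enabling S2D, it is worth surveying the local drives on each node; only un-partitioned drives (CanPool = True) will be claimed by the pool. A small inspection sketch:

```powershell
# Count drives per media type (e.g. SSD vs HDD) to verify the
# SSD/HDD ratio requirement described above
Get-PhysicalDisk |
    Group-Object MediaType |
    Format-Table Name, Count

# List only the drives that are eligible to join the storage pool
Get-PhysicalDisk -CanPool $true |
    Format-Table FriendlyName, MediaType, BusType, Size
```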

Failover Clustering – The built-in clustering feature of Windows Server is used to connect the servers.
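The cluster itself is created with the usual Failover Clustering cmdlets; note the -NoStorage switch, since S2D will claim the local drives later. Node names and the IP address below are hypothetical:

```powershell
# Validate the candidate nodes, including the S2D-specific tests
Test-Cluster -Node "S2D-N1","S2D-N2","S2D-N3","S2D-N4" `
             -Include "Storage Spaces Direct","Inventory","Network","System Configuration"

# Create the cluster without any shared storage; S2D is enabled afterwards
New-Cluster -Name "S2D-CL01" `
            -Node "S2D-N1","S2D-N2","S2D-N3","S2D-N4" `
            -NoStorage -StaticAddress 10.0.0.50
```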

Software Storage Bus – The Software Storage Bus is new in Storage Spaces Direct. It spans the cluster and establishes a software-defined storage fabric whereby all the servers can see all of each other’s local drives. You can think of it as replacing costly and restrictive Fibre Channel or Shared SAS cabling.

Storage Bus Layer Cache – The Software Storage Bus dynamically binds the fastest drives present (e.g. SSD) to slower drives (e.g. HDD) to provide server-side read/write caching that accelerates IO and boosts throughput.

Storage Pool – The collection of drives that forms the basis of Storage Spaces is called the storage pool. It is automatically created, and all eligible drives are automatically discovered and added to it. The best practice is to use one storage pool per cluster, with the default settings.
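In practice, the pool (and the cache bindings described above) are created automatically when you enable S2D on the cluster; you only inspect the result:

```powershell
# Enabling S2D claims all eligible local drives, configures the
# Storage Bus Layer Cache, and creates the single storage pool
Enable-ClusterStorageSpacesDirect

# Inspect the auto-created (non-primordial) pool afterwards
Get-StoragePool -IsPrimordial $false |
    Format-Table FriendlyName, HealthStatus, Size, AllocatedSize
```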

Storage Spaces – Storage Spaces provides fault tolerance to virtual disks using mirroring, erasure coding, or both. You can think of it as distributed, software-defined RAID using the drives in the pool. In Storage Spaces Direct, these virtual disks typically have resiliency to two simultaneous drive or server failures (e.g. using 3-way mirroring, with each data copy in a different server), though chassis and rack fault tolerance is also available through fault domain awareness.
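Creating such a three-way mirrored virtual disk is a single cmdlet. The volume name and size below are illustrative; on clusters of four or more servers, New-Volume defaults to three-way mirror resiliency:

```powershell
# Create a 1 TB three-way mirrored volume on the S2D pool,
# formatted with ReFS and added to Cluster Shared Volumes
New-Volume -FriendlyName "Volume1" `
           -StoragePoolFriendlyName "S2D*" `
           -FileSystem CSVFS_ReFS `
           -ResiliencySettingName Mirror `
           -Size 1TB
```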

Resilient File System (ReFS) – ReFS is the premier file system purpose-built for virtualization. It includes dramatic accelerations for .vhdx file operations such as creation, expansion, and checkpoint merging, and built-in checksums to detect and correct bit errors. It also introduces real-time tiering, which rotates data between so-called "hot" and "cold" storage tiers in real time based on usage.

Cluster Shared Volumes – The CSV file system unifies all the ReFS volumes into a single namespace accessible through any server, so that to each server, every volume looks and acts like it’s mounted locally. The best practice is to configure one CSV per host present in the cluster.

Scale-Out File Server – This final layer is necessary for disaggregated (converged) deployments only. It provides remote file access using the SMB3 access protocol to clients, such as another cluster running Hyper-V, over the network, effectively turning Storage Spaces Direct into network-attached storage (NAS).
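For disaggregated deployments only, the final step is to add the SOFS role and publish continuously available SMB3 shares on the CSVs. The role name, share, and accounts below are hypothetical; the Hyper-V computer accounts (and their administrators) need full access:

```powershell
# Add the Scale-Out File Server role to the storage cluster
Add-ClusterScaleOutFileServerRole -Name "SOFS01"

# Create a folder on a CSV and share it over SMB3
New-Item -Path "C:\ClusterStorage\Volume1\VMStore01" -ItemType Directory
New-SmbShare -Name "VMStore01" `
             -Path "C:\ClusterStorage\Volume1\VMStore01" `
             -FullAccess "CONTOSO\Hyper-V-Admins","CONTOSO\HV-N1$"
```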

As you can see, there are a lot of hardware and software components involved in this kind of implementation. This is the reason why Microsoft unveiled the Windows Server Software-Defined (WSSD) program, described in the next section. Moreover, in the next part, we will see how to implement this kind of infrastructure in the real world.

Windows Server Software-Defined Datacenter (WSSD) Program


Microsoft publicly unveiled its new software-defined datacenter certification program for storage and hyper-converged systems running Windows Server 2016. The Windows Server Software-Defined Datacenter (WSSD) program is a Product Group and OEM program to develop and apply prescriptive guidance including configuration and validation of certified hardware SKUs (appliances). The goal is to minimize implementation risk by moving the complex hardware/software integration, and solution validation steps to a pre-purchase phase.

The WSSD Reference Architecture is an OEM-specific document that informs OEMs of the design principles, best practices, and other elements required to bring software-defined datacenter solutions to market in a holistic way. It is meant to assist OEMs with creating, configuring, and validating their WSSD Offerings (solution SKUs, appliances).

As part of the WSSD program, OEMs are responsible for:

  • Designing their specific solution SKUs to align with a specific WSSD design pattern (OEMs can and are designing multiple SKUs to align with several of the WSSD design patterns and to provide alternate scale/size options)
  • Ensuring their selected hardware meets Windows Server 2016 logo requirements and the WSSD AQs (Additional Qualifiers define hardware component-level capabilities and feature sets, such as supported NIC offloads)
  • Setting up each solution SKU using the standard provisioning automation
  • Validating each solution SKU using the standard validation tooling
  • Stress testing each solution SKU using the standard stress-testing tooling

The activities in the list above are OEM led, performed in partnership with Microsoft, and enabled through Microsoft’s WSSD reference architecture and tooling.

Partners offer three kinds of WSSD solutions:

  • Hyper-Converged Infrastructure (HCI) Standard – Highly virtualized compute and storage are combined in the same server-node cluster, making them easier to deploy, manage, and scale.
  • Hyper-Converged Infrastructure (HCI) Premium – A comprehensive "software-defined datacenter in a box" that adds software-defined networking and Security Assurance features to HCI Standard. This makes it easy to scale compute, storage, and networking up and down to meet demand (just like public cloud services).
  • Software-Defined Storage (SDS) – Built on server-node clusters, this enterprise-grade, shared-storage solution replaces traditional external storage devices at a much lower cost, while support for all-flash NVMe drives delivers unrivaled performance. You can quickly add storage capacity as your needs grow over time.

The specific WSSD-certified offerings announced by Microsoft include DataON, Fujitsu, Lenovo, QCT, and Supermicro. While some obvious large providers of software-defined and engineered hyper-converged systems, such as Cisco Systems and Dell EMC, weren't on the initial list, Microsoft said it expects to add more partners over time.