Checkpoints enable you to capture point-in-time snapshots of a VM. This gives you an easy method of quickly restoring to a known working configuration, making them useful before installing or updating an application. When a checkpoint is created, the original VHD becomes read-only, and all changes are captured in an AVHD file. Conversely, when a checkpoint is deleted, the contents of the AVHD are merged with the original disk, which becomes the primary writable file.
Prior to Windows Server 2016, the only checkpoint type available was the standard checkpoint, which takes a snapshot of both the disk and the memory state at the time the checkpoint is taken. Windows Server 2016 introduces production checkpoints, which use the Volume Shadow Copy Service on Windows guests or File System Freeze on Linux guests. This enables you to take a consistent snapshot of a VM without capturing the running memory.
Production checkpoints are used by default on Windows Server 2016. If taking a production checkpoint fails, the host by default attempts to create a standard checkpoint instead.
You can configure the type of checkpoint a VM uses by using the Set-VM cmdlet.
To set the VM to only use production checkpoints, without the ability to fall back to a standard checkpoint, replace the Production option with ProductionOnly.
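For example, assuming a VM named VM01 (the name here is just an illustration), the checkpoint type can be set like this:

```powershell
# Use production checkpoints, falling back to a standard checkpoint on failure
Set-VM -Name "VM01" -CheckpointType Production

# Use production checkpoints only, with no fallback to a standard checkpoint
Set-VM -Name "VM01" -CheckpointType ProductionOnly
```

The cmdlet also accepts Standard (the pre-2016 behavior) and Disabled as values for -CheckpointType.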
Checkpoints can also be configured from Hyper-V Manager by editing the settings of a VM.
Performing remote management of Hyper-V hosts within the same domain simply requires the permissions or delegation discussed in the previous article. However, managing a Hyper-V server that is in a workgroup is slightly more complicated.
First, the Hyper-V server must have PowerShell remoting enabled. This is easily accomplished by running the Enable-PSRemoting cmdlet.
Note: the network profile on the server must be set to Private. Otherwise, you also need to specify the -SkipNetworkProfileCheck parameter.
The second task on the Hyper-V host is to enable the WSMan CredSSP server role. To accomplish this, run the following command:
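A sketch of the command in question, using the Enable-WSManCredSSP cmdlet with the Server role:

```powershell
# On the Hyper-V host: allow the server to receive delegated credentials via CredSSP
Enable-WSManCredSSP -Role Server
```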
The more complicated steps occur on the computer from which you plan to manage the Hyper-V host. First, you must trust the Hyper-V server from the remote client. If the Hyper-V host is named LAB01, run the following command:
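Assuming the host name LAB01 from the example, trusting it from the client looks like this:

```powershell
# On the management client: add the workgroup Hyper-V host to the WSMan trusted hosts list
# (-Concatenate appends to the existing list instead of replacing it)
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "LAB01" -Concatenate -Force
```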
Finally, you will also need to configure the local policy (or a Group policy if you plan to have multiple remote management points on your domain) to allow credentials to be passed.
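The credential delegation described above can also be configured with the Enable-WSManCredSSP cmdlet, which sets the corresponding local policy entry; a sketch, again assuming the host name LAB01:

```powershell
# On the management client: allow delegating credentials to the Hyper-V host
Enable-WSManCredSSP -Role Client -DelegateComputer "LAB01"
```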
For each of the client settings (TrustedHosts, the delegate computer, and WSMan), you can use a wildcard (*) as a substitute for specifying multiple Hyper-V hosts.
Beginning with Windows 10 and Windows Server 2016, you also have the option to specify different credentials to manage a Hyper-V host from Hyper-V Manager. But the above steps must still be taken if the remote host is in a workgroup.
Enable Nested Virtualization on Hyper-V and Windows Server 2016
The latest version of Hyper-V, shipped with Windows Server 2016 and Windows 10, supports nested virtualization, which means you can install the Hyper-V role inside a Hyper-V virtual machine.
But in order to activate this functionality, you need to meet some requirements; otherwise, you will face an error of this kind.
Dynamic Memory must be disabled on the virtual machine containing the nested instance of Hyper-V
VM must have more than 1 vCPU
MAC address Spoofing must be enabled on the NIC attached to the virtual machine. This setting can be found in the advanced settings under the NIC in the virtual machine’s properties.
Virtual Machine version must be 8.0
Virtualization Extensions need to be exposed to the VM as seen below.
By default, the virtualization extensions setting is disabled. To enable it, you have to use this command:
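A sketch of the command, assuming a VM named VM01 (the VM must be powered off when you run it):

```powershell
# Expose the host processor's virtualization extensions to the VM
Set-VMProcessor -VMName "VM01" -ExposeVirtualizationExtensions $true
```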
After a Nano Server has been installed, you can manage the server roles and features by using the PackageManagement provider. To install the provider, run the Install-PackageProvider NanoServerPackage command. However, running this command can produce the error shown below.
To resolve this issue, Microsoft suggests rebooting the machine, but if that does not work, you can use the following commands as a workaround.
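One workaround documented at the time was to save the provider module into the module path and import it explicitly; a sketch only, as the exact module version may differ in your environment:

```powershell
# Download the NanoServerPackage module into the PowerShell module path,
# then load it explicitly as a package provider
Save-Module -Path "$env:ProgramFiles\WindowsPowerShell\Modules\" -Name NanoServerPackage
Import-PackageProvider NanoServerPackage
```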
Many readers have commented that after upgrading a Windows 10 computer to a new version (most of the time an Insider release), the deduplication features stop working, and deduplicated volumes and data are no longer accessible. Let me remind you that this is a non-Microsoft-supported deduplication package, built for a specific version of Windows 10 from native Windows Server 2016 features. This means I cannot create a package for a specific Windows 10 build without having the corresponding Windows Server 2016 build.
Use this package at your own risk, and note that I am not responsible for any data loss, business loss, device corruption, or any other type of loss due to the use of this package.
Hello, I was very busy these last few months and had no time to work on this blog… Anyway, I made the new dedup package for build 14393.0, which I tested on my W10 14393.187, and it is fully functional. You can find this package directly from here (md5: 48cdbfddcc4a2266950ad93a6cfe2b9f). As always, to install the deduplication feature on your Windows 10 computer, you will just need to launch the install.cmd file as administrator. Enjoy.
You will find the deduplication package for build 14300.1000 here (md5: 6a7ba5b2d6353cc42ff2c001894f64b4). As usual now, to install the deduplication feature on your Windows 10 computer, you will just need to launch the install.cmd file as administrator. For information, this package only works on the x64 platform (don't forget to open the x64 version of PowerShell to access the deduplication cmdlets).
Note that I can only build this package if I have the linked Windows Server 2016 build, so if you need a specific package for a build of Windows 10, contact me with the link or the .iso of the appropriate Windows Server 2016 build.
You will find the deduplication package for build 14291.1001 here (md5: b150cd2fe60e314e24cedeafeb6f1f42). To install the deduplication feature on your Windows 10 computer, you will just need to launch the install.cmd file as administrator.
You will find the new package based on Windows Server 2016 TP4 build 10586 here (md5: 21251c030d3c1a5572bd0f12473c623c). To install the deduplication feature on your Windows 10 computer, you will just have to launch the install.cmd file as administrator, and voilà!
You no longer need to be part of the Microsoft Insider Program for this build, so just skip the text above until the PS module usage here. If you want more information about the available cmdlets and their usage, you can read my article here.
Until now, if you wanted to make use of deduplication on your Windows client operating system, especially on Windows 8.1, you had to reuse the deduplication module of Windows Server 2012. But as you probably know, Windows 10 still does not provide this functionality natively, and the old module used for Windows 8.1 is not compatible… So perhaps you still have not migrated to Windows 10 because of this?! Well, I'm glad to announce that those dark times are about to end. A friend of mine (http://www.slr-corp.fr) worked with other people on this project during the summer to bring this functionality to Windows 10. Now let's see how we can do this.
First, download the package here (md5: b7ed10bf8b8fbc312a7b35d2ffd0eef3).
Then you have to join the Microsoft Insider Program.
Once you are part of the Insider Program, you can unzip the downloaded package (copy it to your local disk) and run Install.cmd as administrator.
At this time you will need to restart your computer. When it’s done, open a PowerShell prompt (as administrator) and change your execution policy (if not already done) to Bypass.
Then, you have to import the PS module, enable deduplication on volume and finally start the job.
You can follow the execution of the job with the Get-DedupJob command, and get a status of the saved space and savings rate (the SavedSpace and SavingsRate properties) with the Get-DedupVolume command.
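Assuming the package installs the module under the standard Deduplication name and your data volume is D: (both assumptions for this sketch), the whole flow looks like this:

```powershell
# Allow scripts for this session, then load the deduplication module
Set-ExecutionPolicy Bypass -Scope Process
Import-Module Deduplication

# Enable deduplication on the data volume and start an optimization job
Enable-DedupVolume -Volume "D:"
Start-DedupJob -Volume "D:" -Type Optimization

# Follow the job, then check the savings once it completes
Get-DedupJob
Get-DedupVolume | Select-Object Volume, SavedSpace, SavingsRate
```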
As you can see below, here is the concrete result of deduplication: I have a folder with all my Hyper-V machines that would normally take 376 GB, but thanks to deduplication, it only takes 81 GB.
Manage Virtual Machines Using Windows PowerShell Direct
Introduced with Windows Server 2016, PowerShell Direct is a new feature that gives you a way to run Windows PowerShell commands in a VM from the host. Windows PowerShell Direct runs between the host and the VM, so it has no networking or firewall requirements, and it works regardless of your remote management configuration.
Windows PowerShell Direct works much like remote Windows PowerShell except that you do not need network connectivity. To connect to the VM from a host, use the Enter-PSSession cmdlet.
You will be prompted for credentials and then you can manage the VM from this PSSession. The Invoke-Command cmdlet has been updated to perform similar tasks; for example, you can execute a script from the host against the VM.
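A sketch of both approaches, assuming a VM named VM01 and a hypothetical script path:

```powershell
# Interactive session into the VM from the Hyper-V host (prompts for guest credentials)
Enter-PSSession -VMName "VM01"

# Run a script from the host against the VM
Invoke-Command -VMName "VM01" -FilePath "C:\Scripts\Configure.ps1"
```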
The simplest and most effective method of enabling others to manage Hyper-V and virtual machines is to add them to the Hyper-V Administrators local security group on each of the Hyper-V hosts to which you plan to delegate management. However, this might not be the most secure method, because doing so gives the new administrators permission to change virtual switch and host settings in addition to VMs.
To delegate access to individual VMs, you need to modify the Hyper-V Authorization Manager store. This enables you to create task and role definitions to which you can delegate access. Find below the general steps for modifying the Hyper-V services authorization.
In order to accomplish this, launch an MMC (Microsoft Management Console) session, and add the Authorization Manager to the console.
Then right-click Authorization Manager, and click Open Authorization Store. In the window, ensure that XML File is selected, and browse to %systemdrive%\ProgramData\Microsoft\Windows\Hyper-V\ to select InitialStore.xml.
Expand Authorization Manager, Initial Store, Hyper-V services, and Role Assignments. Note that by default, the only role assignment is Administrator. To create a new role assignment, expand Definitions, right-click Task Definitions, and select New Task Definition.
Name the task definition, for example "VM Operator", and select the operations that you want the custom role to be able to perform.
Now that you have created a group of tasks, you can create the role that uses these tasks. To do so, right-click Role Definitions, and then select New Role Definition. Name the role definition (for example, "VM Operator role"), and then click OK. There are now two role definitions.
Next, you can create the role assignment, which is what user accounts are linked to for the permissions. Right-click Role Assignments, and click New Role Assignment. Select the VM Operator role, and then click OK.
Right-click the new role assignment, select Assign Users and Groups, and then click From Windows and Active Directory. Select a user or group that you plan to delegate the permissions to, and then click OK.
Installing a Two Tier PKI Hierarchy in Windows Server 2016 – Part 3
To finish this series, in this article we will configure the DNS records and the website that will host the AIA and CDP locations. In the end, we will have a fully operational two tier PKI hierarchy in Windows Server 2016.
You can retrieve the other articles of this series following these links:
You can obviously adapt these steps to your environment and your needs, as long as your configuration matches the AIA and CDP path options.
As explained at the beginning of this article, in this deployment we will use our subordinate CA to host the website serving AIA and CDP check requests. First, create the DNS alias based on an A record on our DNS pointing to our subordinate CA (AUTH01.lab.local).
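A sketch of the DNS record creation, run on a domain controller and assuming the pki alias used later in this deployment (a plain A record with the CA's IP address would work just as well):

```powershell
# Create the pki.lab.local alias pointing to the subordinate CA
Add-DnsServerResourceRecordCName -ZoneName "lab.local" -Name "pki" -HostNameAlias "AUTH01.lab.local"
```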
Then create the associated website and the physical folder path.
You will need to give modify rights on your website root folder, subfolders, and files to the Cert Publishers AD group.
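The website and permissions steps can be sketched as follows; the C:\PKI folder layout and the LAB domain name are assumptions for this example:

```powershell
# On the subordinate CA: create the folder structure and the IIS site bound to the alias
Import-Module WebAdministration
New-Item -Path "C:\PKI\CDP", "C:\PKI\AIA" -ItemType Directory -Force
New-Website -Name "PKI" -PhysicalPath "C:\PKI" -HostHeader "pki.lab.local" -Port 80

# Grant modify rights on the folder tree to the Cert Publishers group
icacls "C:\PKI" /grant "LAB\Cert Publishers:(OI)(CI)M" /T
```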
Once the configuration is done, simply copy your CRL file to the CDP folder and the root CA certificate to the AIA folder. Then you can start the certsvc service on the subordinate CA and check the configuration as below.
If you encounter an issue or want a more detailed view, you can use the pkiview.msc console.
Finally, don't forget to distribute the root CA certificate to your domain computers through a GPO to validate the trust chain. Now you can use your two tier PKI to issue certificates and certificate policies in your domain!
I hope this article has been useful, don’t hesitate to ask questions in the comment section if you encounter some issues or if you need more information.
Installing a Two Tier PKI Hierarchy in Windows Server 2016 – Part 2
As for the root CA, you need to install the Active Directory Certificate Services role.
This time, in addition to the Certification Authority role service, you can install other available role services depending on your needs. In this deployment, we will only install the Certification Authority Web Enrollment role service, to give end users the possibility to request certificates based on certificate templates from the web console.
Once the role services are successfully installed, you need to configure them.
As explained at the beginning of the article, this server will act as an Enterprise Subordinate CA. It must be a domain member and online to issue certificates or certificate policies.
As we don't yet have a private key, we will create a new one based on standard security best practices. If you need more information about the choice of hash algorithm and key length, you can have a look at the first part of my previous article here.
Then we need a certificate from the root CA to allow this subordinate CA to issue certificates. And since the root CA is not a domain member and is not online, we can't use the first option. We will need to save the request to a file and copy it to the root CA.
As you can see, there is a warning reminding us to use the request generated by this wizard to obtain the corresponding certificate from the root CA.
To submit the request generated by the subordinate CA to the root CA, just copy the file you can see above and submit a new request in the certsrv console of the root CA.
It will create a pending request that you will need to manually approve.
Once the certificate is issued, you will need to export it as a file. You can export it in either .CER or .P7B format.
Then go back to your subordinate CA. Before importing the generated certificate, you will need to import the root CA certificate (the first certificate of your hierarchy) into the Trusted Root Certification Authorities computer store. If you don't do this, when you try to import the previously generated certificate, the certificate chain will not be trusted because the parent certificate will be unknown.
If you followed the previous steps, the root CA certificate should already have been copied to your subordinate server along with the CRL file and the freshly created subordinate certificate.
At this point, if you try to install your subordinate CA certificate, you will get the error you can see below, because your server will not be able to verify the certificate chain as the revocation list is not available.
But if you remember, we already configured on the root CA the path to reach the AIA and CDP through a website based on an alias. We will finish the deployment of this hierarchy in part 3.
Installing a Two Tier PKI Hierarchy in Windows Server 2016 – Part 1
If you are new to enterprise PKI concepts, let me give you some vocabulary and best practices. In Windows Server, using the AD CS role, your PKI can take several forms using the different components, based on your needs.
Root Certification Authority (CA): the root instance of the PKI trust chain. The first AD CS instance you install needs to be the root CA, because this establishes the trust hierarchy.
Subordinate CA: a child node in the PKI trust chain. A subordinate CA is one level under the root CA, or can be nested several levels deep under other higher-level subordinate CAs.
Issuing CA: a subordinate CA that issues end-user certificates. However, not all subordinate CAs need to be issuing CAs.
Standalone CA: an instance of the AD CS service that runs on a non-domain-joined server and does not integrate with AD.
Enterprise CA: an instance of the AD CS service that runs on a domain-joined server and integrates with AD.
You will also need to understand two components of a root CA: the Certificate Revocation List (CRL), which is served from one or more CRL Distribution Points (CDPs), and the Authority Information Access (AIA).
CRL: the list of all revoked certificates in the PKI hierarchy, hosted by one or more CDPs.
AIA: defines the locations from which users can obtain the certificate of the issuing root CA.
These files are most of the time hosted on an internal or public URL that can be accessed by anyone using a certificate from the root CA.
Best practices vary depending on your security needs, but in any case, one of the main recommendations is that the root CA should be standalone, and offline most of the time. If anything happens to the root CA, the entire trust hierarchy is compromised, and it is much easier to revoke an issuing CA certificate and set up a new one than to replace the entire PKI infrastructure. But the offline root CA will still be needed when the following events occur:
Issuing CA certificate is expiring and needs to be renewed
Issuing CA certificate needs to be re-issued in order to change crypto parameters, such as the hashing algorithm. For example, if you need to migrate your CA to SHA-2 (see my article).
Issuing CA is compromised and needs to be revoked
A new issuing CA needs to join the trust hierarchy one level under the root CA.
The root CA certificate is about to expire and needs to be renewed
The root CA CRL is about to expire and needs to be regenerated.
For this deployment, we will use this infrastructure.
It is composed of an AD DS root domain (lab.local) based on two domain controllers (AD01.lab.local and AD02.lab.local), one offline standalone root CA, and an enterprise issuing CA (AUTH01.lab.local). Note that your standalone root CA does not even need to be connected to a network; in that case, you will need to use another way to transfer files during deployment.
In this first part, we will see how to deploy the Standalone Root CA. After installing your Windows Server 2016 (do not join the server to your domain), you will need to install AD CS role and configure your standalone root CA.
At this point you have a functional standalone root CA, but you will need to do some post-configuration. First, even if you set a validity period of 20 years during the configuration, you will need to hard-code it in the registry by modifying the ValidityPeriodUnits value in the registry key:
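The same change can also be made from an elevated prompt with certutil; a sketch, assuming the 20-year period from this deployment:

```powershell
# Hard-code the validity period of certificates issued by this CA (here: 20 years)
certutil -setreg CA\ValidityPeriodUnits 20
certutil -setreg CA\ValidityPeriod "Years"
```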
Then, as this standalone root CA is not part of the domain and will be taken offline, we will need to publish the CRL and AIA files to a custom URL hosted by another server (in this case AUTH01.lab.local). In order to accomplish this, we need to run two commands that will add registry keys, and then restart the certsvc service.
Now we can configure our custom location for CDP and AIA. For this, we will use an alias that will redirect to a website hosted by our enterprise issuing CA.
In this case, we will publish both CRL and AIA files to a website based on alias pki.lab.local. The advantage of using an alias is the possibility to move this website between web servers and even implement NLB for high availability.
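A sketch of the CDP and AIA publication settings using certutil; the numeric flag prefixes and %-tokens follow the standard certutil URL syntax, and the local paths and flag choices here are assumptions to adapt to your environment:

```powershell
# CDP: publish the CRL locally and include the http://pki.lab.local URL in issued certificates
certutil -setreg CA\CRLPublicationURLs "1:C:\Windows\system32\CertSrv\CertEnroll\%3%8%9.crl\n2:http://pki.lab.local/CDP/%3%8%9.crl"

# AIA: publish the CA certificate locally and include the http://pki.lab.local URL
certutil -setreg CA\CACertPublicationURLs "1:C:\Windows\system32\CertSrv\CertEnroll\%1_%3%4.crt\n2:http://pki.lab.local/AIA/%1_%3%4.crt"
```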
To apply the changes, we will need to restart the certsvc service again.
Finally, we need to increase the CRL publication interval, because the root CA will be offline and will not be able to generate a new CRL file each week. In practice, the CRL file will only need to be regenerated if we implement a new CDP or make another major change. In that case, we will just need to start the server hosting the root CA, implement the configuration, and generate a new CRL to copy to the issuing CAs.
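A sketch of extending the publication interval and generating the CRL; the one-year interval (52 weeks) is an example value, pick whatever matches how often you are willing to boot the offline root CA:

```powershell
# Publish one CRL per year instead of weekly
certutil -setreg CA\CRLPeriodUnits 52
certutil -setreg CA\CRLPeriod "Weeks"

# Restart the CA service so the new interval is taken into account, then generate the CRL
Restart-Service certsvc
certutil -crl
```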
After the CRL generation, you can retrieve both the CRL and AIA files in C:\Windows\System32\CertSrv\CertEnroll. You will need to copy these files for later use, over a network share if your server is connected to a network, or on a USB drive if it is a physical server not connected to a network.
In part 2, we will see how to deploy the second component which is the Enterprise Subordinate Issuing CA.