
Installing a Two Tier PKI Hierarchy in Windows Server 2016 – Part 3

To finish this series, in this article we will configure the DNS records and the website that will host the AIA and CDP locations. By the end, we will have a fully operational Two Tier PKI Hierarchy in Windows Server 2016.

You can find the other articles in this series by following these links:

You can obviously adapt these steps to your environment and your needs, as long as your configuration matches the AIA and CDP path options.
As explained at the beginning of this article, in this deployment we will use our subordinate CA to host the website serving AIA and CDP requests. First, create the DNS alias on our DNS server, pointing to our subordinate CA (AUTH01.lab.local).
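One way to script the alias, as a minimal sketch assuming the pki.lab.local name used later in this series, that the DnsServer module is available, and that AD01.lab.local hosts the zone (the original may use an A record instead of a CNAME):

  # Create the pki.lab.local alias pointing to the subordinate CA
  Add-DnsServerResourceRecordCName -ZoneName "lab.local" -Name "pki" -HostNameAlias "AUTH01.lab.local" -ComputerName "AD01.lab.local"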

Then create the associated website and the physical folder path.
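A minimal sketch of the website creation, assuming IIS and the WebAdministration module are present on AUTH01 and that C:\pki with CDP and AIA subfolders is used as the site root (adapt the paths to your own layout):

  Import-Module WebAdministration
  # Create the physical folders, then the site bound to the pki.lab.local host header
  New-Item -Path "C:\pki\CDP", "C:\pki\AIA" -ItemType Directory -Force
  New-Website -Name "PKI" -HostHeader "pki.lab.local" -Port 80 -PhysicalPath "C:\pki"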

You will need to give modify rights on your website root folder, its subfolders and files to the Cert Publishers AD group.
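For example, with icacls (the C:\pki path and the LAB NetBIOS domain name are assumptions):

  # Grant Modify, inherited by subfolders and files, to Cert Publishers
  icacls "C:\pki" /grant "LAB\Cert Publishers:(OI)(CI)M" /T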

Once the configuration is done, simply copy your CRL file to the CDP folder and the root CA certificate to the AIA folder. Then you can start the Active Directory Certificate Services (certsvc) service on the subordinate CA and check the configuration as below.
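A minimal sketch of these steps, run on the subordinate CA, assuming the files were transferred to C:\Transfer and the website root is C:\pki; the certutil -installcert call installs the subordinate CA certificate issued by the root CA if it could not be installed at the end of part 2:

  # Publish the root CA CRL and certificate to the website folders
  Copy-Item "C:\Transfer\RootCA.crl" "C:\pki\CDP\"
  Copy-Item "C:\Transfer\RootCA.cer" "C:\pki\AIA\"
  # Install the subordinate CA certificate and start the CA service
  certutil -installcert "C:\Transfer\LAB-Issuing-CA.cer"
  Start-Service certsvc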

Note
If you encounter an issue or want a more detailed view, you can use the pkiview.msc console.

Finally, don’t forget to distribute the root CA certificate to your domain computers through a GPO to validate the trust chain. Now you can use your two tier PKI to issue certificates and certificate policies in your domain!

I hope this article has been useful; don’t hesitate to ask questions in the comments section if you encounter issues or if you need more information.

Installing a Two Tier PKI Hierarchy in Windows Server 2016 – Part 2

To continue this series, in this article we will carry on with the deployment of our Two Tier PKI Hierarchy in Windows Server 2016 by deploying the Enterprise Subordinate Issuing CA.

You can find the other articles in this series by following these links:

As for the root CA, you need to install the Active Directory Certificate Services role.

This time, in addition to the Certification Authority role service, you can install other available role services depending on your needs. In this deployment, we will only install the Certification Authority Web Enrollment role service to give end users the possibility to request certificates based on certificate templates from the web console.
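A minimal sketch of the installation with PowerShell, assuming you add both role services in one pass:

  # Install the Certification Authority and Web Enrollment role services
  Install-WindowsFeature ADCS-Cert-Authority, ADCS-Web-Enrollment -IncludeManagementTools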

Once the role services are successfully installed, you need to configure them.

As explained at the beginning of the article, this server will act as an Enterprise Subordinate CA. It must be a domain member and online to issue certificates or certificate policies.

As we don’t have a private key yet, we will create a new one based on standard security best practices. If you need more information about the choice of hash algorithm and key length, you can have a look at the first part of my previous article here.

Then we need a certificate from the root CA to allow this subordinate CA to issue certificates. Since the root CA is neither a domain member nor online, we can’t use the first option; we will need to save the request to a file and copy it to the root CA.
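If you prefer to script the configuration instead of using the wizard, here is a minimal sketch with the ADCSDeployment cmdlets; the CA name, key length and request file path are assumptions:

  # Configure an Enterprise Subordinate CA and save the request to a file for the root CA
  $ca = @{
      CAType                = "EnterpriseSubordinateCA"
      CACommonName          = "LAB-Issuing-CA"
      HashAlgorithmName     = "SHA256"
      KeyLength             = 2048
      OutputCertRequestFile = "C:\LAB-Issuing-CA.req"
  }
  Install-AdcsCertificationAuthority @ca
  # Configure the Web Enrollment role service as well
  Install-AdcsWebEnrollment -Force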

As you can see, we get a warning reminding us to use the request generated by this wizard to obtain the corresponding certificate from the root CA.

To submit the request generated by the subordinate CA to the root CA, just copy the file you can see above and submit a new request in the certsrv console of the root CA.

It will create a pending request that you will need to manually approve.

Once the certificate is issued, you will need to export it to a file. You can export it in either .CER or .P7B format.
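The same steps can be scripted with certutil on the root CA; a minimal sketch where the request ID and file paths are assumptions:

  certutil -submit "C:\Transfer\LAB-Issuing-CA.req"       # creates a pending request and displays its Request ID
  certutil -resubmit 2                                    # approve (issue) the pending request, ID 2 here as an example
  certutil -retrieve 2 "C:\Transfer\LAB-Issuing-CA.cer"   # export the issued certificate to a file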

Then, go back to your subordinate CA. Before importing the generated certificate, you will need to import the root CA certificate (the first certificate of your hierarchy) into the Trusted Root Certification Authorities store of the computer. If you don’t do this, the certificate chain will not be trusted when you try to import the previously generated certificate, because the parent certificate will be unknown.
If you followed the previous steps, the root CA certificate should already have been copied to your subordinate server along with the CRL file and the freshly created subordinate certificate.
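A minimal sketch of the import, assuming the root CA certificate was copied to C:\Transfer:

  # Trust the root CA on the subordinate server (machine store)
  Import-Certificate -FilePath "C:\Transfer\RootCA.cer" -CertStoreLocation Cert:\LocalMachine\Root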

At this point, if you try to install your subordinate CA certificate, you will get an error, as you can see below, because your server will not be able to verify the certificate chain since the revocation list is not available.

But if you remember, we already configured on the root CA the path to reach the AIA and CDP through a website based on an alias. We will finish the deployment of this hierarchy in part 3.

Installing a Two Tier PKI Hierarchy in Windows Server 2016 – Part 1

In this series, we will see how to deploy a two tier PKI hierarchy in Windows Server 2016:

If you are new to enterprise PKI concepts, let me give you some vocabulary and best practices. In Windows Server, using the AD CS role, your PKI can take several forms using the different components, depending on your needs.

  • Root Certification Authority (CA), is the root instance of the PKI trust chain. The first AD CS instance you install will need to be the root CA because this establishes the trust hierarchy.
  • Subordinate CA, is the child node in the PKI trust chain. A subordinate CA is one level under the root CA, or can be nested several levels deep under other higher level subordinate CAs.
  • Issuing CA, is a subordinate CA that issues end-user certificates; however, not all subordinate CAs need to be issuing CAs.
  • Standalone CA, is an instance of AD CS service that is running on a non-domain joined server and does not integrate with AD.
  • Enterprise CA, is an instance of AD CS service that is running on a domain-joined server and integrates with AD.

You will also need to understand two components of a root CA: the Certificate Revocation List (CRL), which is published to one or more CRL Distribution Points (CDPs), and the Authority Information Access (AIA).

  • CRL, is the list of all revoked certificates in the PKI hierarchy and is hosted by one or more CDPs.
  • AIA, defines the locations from which users can obtain the certificate of the root CA.

These files are most of the time hosted on an internal or public URL that can be accessed by anyone using a certificate from the root CA.

Best practices will vary depending on your security needs, but in any case one of the main recommendations is that the root CA should be standalone, and offline most of the time. If anything happens to the root CA, the entire trust hierarchy is compromised, and it is much easier to revoke an issuing CA certificate and set up a new one than to replace the entire PKI infrastructure. However, the offline root CA will still be needed when the following events occur:

  • Issuing CA certificate is expiring and needs to be renewed
  • Issuing CA certificate needs to be re-issued in order to change crypto parameters, such as the hashing algorithm. For example, if you need to migrate your CA to SHA-2 (see my article).
  • Issuing CA is compromised and needs to be revoked
  • A new issuing CA needs to join the trust hierarchy one level under the root CA.
  • The root CA certificate is about to expire and needs to be renewed
  • The root CA CRL is about to expire and needs to be regenerated.

For this deployment, we will use this infrastructure.

It is composed of an AD DS root domain (lab.local) based on two domain controllers (AD01.lab.local and AD02.lab.local), one offline standalone root CA, and an enterprise issuing CA (AUTH01.lab.local). Note that your standalone root CA does not even need to be connected to a network; in that case, you will need to use another way to transfer files during the deployment.

In this first part, we will see how to deploy the Standalone Root CA. After installing your Windows Server 2016 (do not join the server to your domain), you will need to install AD CS role and configure your standalone root CA.
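A minimal sketch of this installation and configuration in PowerShell; the CA name, key length and validity period are assumptions based on common practice:

  # Install the CA role, then configure a standalone root CA with a 20-year certificate
  Install-WindowsFeature ADCS-Cert-Authority -IncludeManagementTools
  $ca = @{
      CAType              = "StandaloneRootCA"
      CACommonName        = "LAB-Root-CA"
      HashAlgorithmName   = "SHA256"
      KeyLength           = 4096
      ValidityPeriod      = "Years"
      ValidityPeriodUnits = 20
  }
  Install-AdcsCertificationAuthority @ca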

At this time you have a functional standalone root CA, but you will need to do some post-configuration. First, even if you set a validity period of 20 years during the configuration, you will need to hard-code it in the registry by modifying the ValidityPeriodUnits value in the CA configuration registry key.
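This value lives under HKLM\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration\<CA name>. A minimal sketch using certutil instead of editing the registry by hand, assuming a 20-year validity period for the certificates issued by the root CA:

  # Set the validity period for certificates issued by this CA
  certutil -setreg ca\ValidityPeriod "Years"
  certutil -setreg ca\ValidityPeriodUnits 20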

Then, as this standalone root CA is not part of the domain and will be put offline, we will need to publish the CRL and AIA files to a custom URL hosted by another server (in this case AUTH01.lab.local). In order to accomplish this, we need to run two commands that add registry keys, and then restart the certsvc service.
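As an assumption about which registry keys are meant, a pair commonly set on an offline root CA for the lab.local domain is DSConfigDN and DSDomainDN, which let the CA resolve the directory-based paths used in its extensions:

  # Hedged sketch: add the domain distinguished names to the CA configuration, then restart the service
  certutil -setreg ca\DSConfigDN "CN=Configuration,DC=lab,DC=local"
  certutil -setreg ca\DSDomainDN "DC=lab,DC=local"
  Restart-Service certsvc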

Now we can configure our custom location for CDP and AIA. For this, we will use an alias that will redirect to a website hosted by our enterprise issuing CA.

In this case, we will publish both the CRL and AIA files to a website based on the alias pki.lab.local. The advantage of using an alias is the possibility to move this website between web servers and even implement NLB for high availability.
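If you prefer to script the extension URLs rather than use the Extensions tab of the CA properties, here is a minimal sketch with the ADCSAdministration cmdlets; the /CDP and /AIA paths and the file name variables are assumptions matching the website layout used in part 3:

  # Add the HTTP CDP and AIA locations based on the pki.lab.local alias
  Add-CACrlDistributionPoint -Uri "http://pki.lab.local/CDP/<CaName><CRLNameSuffix>.crl" -AddToCertificateCdp -Force
  Add-CAAuthorityInformationAccess -Uri "http://pki.lab.local/AIA/<ServerDNSName>_<CaName><CertificateName>.crt" -AddToCertificateAia -Force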

Note
To apply the changes, we will need to restart the certsvc service again.

Finally, we need to increase the CRL publication interval because the root CA will be put offline and will not be able to generate a new CRL file each week. In practice, the CRL file will need to be regenerated if we implement a new CDP or make another major change; in that case, we will just need to start the server hosting the root CA, implement the configuration, and regenerate a new CRL to copy to the issuing CAs.
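A minimal sketch, assuming a publication interval of one year; adapt the interval to your own policy:

  # Extend the CRL publication interval, restart the service, then publish a fresh CRL
  certutil -setreg ca\CRLPeriod "Years"
  certutil -setreg ca\CRLPeriodUnits 1
  Restart-Service certsvc
  certutil -crl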

After the CRL generation, you can retrieve both the CRL and AIA files in C:\Windows\System32\CertSrv\CertEnroll. You will need to copy these files for later use, to a network share if your server is connected to a network, or to a USB drive if it is a physical server with no network connection.

In part 2, we will see how to deploy the second component which is the Enterprise Subordinate Issuing CA.

Modify PowerShell Version of Orchestrator “Run .Net Script” Activity

By default in System Center Orchestrator 2012 (even R2 with the latest CU), the Run .Net Script activity will launch PowerShell in v2 and x86 mode. But nowadays, the Orchestrator runbook service is installed on Windows Server 2012, Windows Server 2012 R2, or even Windows Server 2016, which natively use at least PowerShell v3. Besides, a lot of useful modules and cmdlets are only available in PowerShell v3 and above.
In fact, you can already work around the problem by running the script remotely on another server or in a child PowerShell instance using: PowerShell {Your Script}, but in both cases we lose the possibility to publish all variables to the data bus.

In order to change this, if the server running the Orchestrator runbook service uses PowerShell v3 or above, you can change a registry value to use the same version in your activity.

The registry value OnlyUseLatestCLR, which needs to be set to 1 (it is 0 by default), is located under the .NET Framework policy key.
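A minimal sketch that sets the value in both the 64-bit and the 32-bit (Wow6432Node) .NET Framework policy keys, since the runbook service host is a 32-bit process; treat the exact key locations as an assumption to verify in your environment:

  # Force PowerShell to load the latest CLR (and therefore PowerShell 3.0+) instead of CLR 2.0
  New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework' -Name OnlyUseLatestCLR -Value 1 -PropertyType DWORD -Force
  New-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework' -Name OnlyUseLatestCLR -Value 1 -PropertyType DWORD -Force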

To apply the change, you don’t even need to restart the Orchestrator services. You can verify that the change has taken effect by running this activity in a runbook.
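For example, a small hypothetical Run .Net Script body whose variables you would publish to the data bus to confirm which engine ran the script:

  # Publish $PSVersion and $Architecture to the data bus to check the result
  $PSVersion = $PSVersionTable.PSVersion.ToString()
  $Architecture = if ([IntPtr]::Size -eq 8) { "x64" } else { "x86" }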

Deduplication Basics & Best Practices

After receiving many questions about the deduplication module for Windows 10 (article), I decided to write this little article to give some best practices and usage guidance. Note that data deduplication is disabled by default and is not supported for certain volumes, such as any volume that is not an NTFS file system or any volume that is smaller than 2 GB. You can retrieve the entire list of deduplication cmdlets here.

When you want to enable data deduplication on one or more volumes, you need to use the Enable-DedupVolume cmdlet; you can then use the Set-DedupVolume cmdlet to customize the data deduplication settings afterward. The most important parameter of this command is UsageType: it specifies the expected type of workload for the volume and sets several low-level settings to default values appropriate to the usage type you specify (see the sketch after the list below). The acceptable values for this parameter are:

  • HyperV – A volume for Hyper-V storage.
  • Backup – A volume that is optimized for virtualized backup servers.
  • Default – A general purpose volume. This is the default value.
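A minimal sketch, with the drive letters as assumptions:

  # Enable deduplication with a usage type matching the workload of each volume
  Enable-DedupVolume -Volume "D:" -UsageType Default   # general purpose data volume
  Enable-DedupVolume -Volume "V:" -UsageType HyperV    # volume hosting Hyper-V storage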

Once you enable deduplication, you may want to customize the data deduplication settings on one or more volumes with Set-DedupVolume. This cmdlet offers a lot of parameters that are not available in Enable-DedupVolume. In particular, you can find parameters to exclude an array of file extensions or an array of root folders from data deduplication and optimization. Two other parameters are also important: MinimumFileAgeDays, which specifies the number of days to wait before the deduplication engine optimizes files, and MinimumFileSize, which specifies the minimum size threshold, in bytes, for files to be optimized. This last parameter can be useful for a Hyper-V usage; for example, you can force deduplication of .vhdx files but not of small configuration files.
One last useful parameter is ChunkRedundancyThreshold: it specifies the number of identical chunks of data that the deduplication engine encounters before the server creates a redundant copy of the data chunk. This increases the reliability of the server by adding redundancy to the most referenced chunks of data: deduplication detects corruptions, and the deduplication scrubbing job restores corrupted chunks from a redundant copy if one is available. The default value is 100 and the minimum value you can set is 20; note that a low value reduces the effectiveness of data deduplication by creating more redundant copies of a chunk, and consumes more memory and disk space.
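A minimal sketch of such a customization, where the volume letter, excluded extensions, and thresholds are assumptions:

  # Tune deduplication on a Hyper-V volume: optimize large files immediately, skip some extensions
  $settings = @{
      Volume                   = "V:"
      MinimumFileAgeDays       = 0
      MinimumFileSize          = 512MB            # only optimize files of at least 512 MB
      ExcludeFileType          = "xml", "config"  # hypothetical extensions to exclude
      ChunkRedundancyThreshold = 100
  }
  Set-DedupVolume @settings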

Then you need to launch your deduplication job by using the Start-DedupJob cmdlet (see the example after the list below). Note that the deduplication job can be queued if the server is running another job on the same volume (which you can check using the Get-DedupJob cmdlet) or if the computer does not have sufficient resources to run the job. The server marks queued jobs that you start with this cmdlet as manual jobs and gives manual jobs priority over scheduled jobs. Thanks to the Cores and Memory parameters, you can control the maximum percentage of physical cores and memory that a job uses. You can also use the StopWhenSystemBusy parameter to indicate that the server should stop the job when the system is busy and retry later (this can be particularly useful for a scheduled job). But the most important parameter of this cmdlet is Type, which specifies the type of data deduplication job. The acceptable values for this parameter are:

  • Optimization – A type to launch the data deduplication (optimization) process.
  • GarbageCollection – A type to free up all deleted or unreferenced data on the volume.
  • Scrubbing – A type to validate the integrity of all data on the volume.
  • Unoptimization – A type to revert the data deduplication process.
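A minimal sketch of a manual optimization job, where the volume letter and resource caps are assumptions:

  # Start a manual optimization job capped at 50% of cores and memory, yielding when the system is busy
  Start-DedupJob -Volume "D:" -Type Optimization -Cores 50 -Memory 50 -StopWhenSystemBusy
  Get-DedupJob   # check running or queued deduplication jobs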

But what about real life? In my case, I use the deduplication engine for my Data and Virtual volumes. The first one contains a bunch of different files, from standard Office documents to movies or even games, so I prefer to use the Default type with a minimum file size of 5 GB, a minimum file age of 15 days, and a chunk redundancy threshold of 80.
The second volume is used to store one of my virtual labs with a lot of virtual machines running under Hyper-V. In this case, I use the HyperV type, with a minimum file size of 512 MB and a minimum file age of 0 days.

Finally, this is especially important if you work with a large amount of data on your deduplicated volume. For example, if you delete 10 virtual machines on your volume, you will need to run a manual garbage collection and scrubbing job to clean old chunks, free up some space, and check the integrity of the remaining data, as you can see in the example below.
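A minimal sketch of that clean-up, with the volume letter as an assumption:

  Start-DedupJob -Volume "V:" -Type GarbageCollection   # reclaim space from deleted or unreferenced chunks
  Start-DedupJob -Volume "V:" -Type Scrubbing           # validate the integrity of the remaining data
  Get-DedupJob                                          # follow job progress
  Get-DedupStatus -Volume "V:"                          # review space savings afterwards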