VMware vSphere 6.7 Key Management with HyTrust KeyControl

Recently I changed jobs and joined a smaller company in the vicinity of my home. I'm still doing security work of course, but now with a focus on infrastructure again. As I soon discovered, the company uses VMware for its virtualization, which is great but a change for me personally. Having been a Hyper-V user for as long as I can remember, I needed to educate myself quickly, especially on the security side of the product.

One thing I missed in earlier vSphere releases was TPM-backed encryption for guest VMs. As of the latest and greatest release, a virtual TPM is now an optional piece of virtual hardware that you can add to Windows Server 2016 or Windows 10 x64 guests. Cool stuff!

As I found out the hard way, it's a bit of a daunting task at first. Looking back at Hyper-V, it's just selecting a checkbox and all the magic happens under the hood. For a lab situation that's wonderful, but for production purposes, where you put all the secrets on one box, perhaps not so much. With vCenter you need to enable the "Enterprise" way of working before you can add a virtual TPM chip to a guest. Needless to say, you can create a similar setup in Hyper-V, but this way of working with vCenter kind of forces you to think bigger, which isn't necessarily a bad thing. So, in this guide I'll show you how to create the infrastructure you need to enable the virtual TPM chip, so you can encrypt those disks that hold confidential data or, as in my case, just mess around with BitLocker drive encryption.

VMware utilizes a Key Management System (KMS) for the storage of highly confidential keys, such as those generated in a virtual TPM. Look at it as a highly secure server that stores the keys to the kingdom. To make use of those keys, a trust needs to be established between vCenter and the KMS machine. Once a Windows guest requests a TPM operation, vCenter requests new keys to protect the TPM on behalf of the guest. That request is then honored by the KMS server.

To get started with the setup we'll be focusing on one of the vendors, HyTrust. This vendor provides a preconfigured OVA template with their product KeyControl for easy setup. Just follow these steps to create your KMS server.

First go to the following link and download the KeyControl OVA file.


Extract it when done and import the OVA into your vCenter; I'll assume that you know how to do that. Midway through the OVA deployment you'll be asked a couple of questions, such as configuration, hostname, IP address, etc. As for the configuration I've selected "Demo".


This is really just a minimal install that will allow you to test drive the product for a month. A trial license key is shipped with the product that includes access to all product features and allows you to configure up to two KeyControl nodes and to protect up to five virtual machines. The trial license is automatically activated when you configure the first KeyControl node. If you need to extend the trial, you can apply here:



Add the network settings and complete the setup.

Once the OVA is deployed, power it on and log in to the console of the VM. The first thing the setup asks is to create a password for the root user. Please note that this is not the password we'll be using later on in this post!


Proceed to the next screen when you’re done.


As this is not a cluster setup, we can select the default here and continue.

On the final setup screen you can note the IP address that you used during the setup. Log off in the next screen and close the console session. We won't be needing it for our lab setup.

Next we need to go to the web interface. Point your browser to the FQDN or IP address of the KeyControl server. You'll be presented with a wizard that takes you through the final steps of the process. Click "Login" at the top right.


Use secroot/secroot as the default login.

Accept the "End-User License Agreement" and create a new password. As for the email notifications, I've selected the "Disable email notifications" option; click "Continue". The same applies to "Automatic Vitals Reporting"; click "Save and Continue".


In the HyTrust KeyControl web interface, click the "KMIP" button at the top and set the "State" to "Enable". Click "Apply" to enable the KMS functionality, then click "Proceed" on the "Overwrite all existing KMIP Server settings?" pop-up. Please note the port number configured on this page ("5696"). This is the default port that we'll be using later on.

Next, click on “Client Certificates” – “Actions” – “Create Certificate”.


In the "Create a New Client Certificate" pop-up screen, fill in a name and an expiration date. Please note that you should leave the password blank! If a password is added, the wizard for importing the certificate into vCenter will fail. I don't necessarily agree with this way of working, but that's how it works for now. Click "Create" to generate the certificate. The only thing left here is to download the certificate so we can import it into vCenter at a later stage.

Select the certificate and in the “Actions” menu select “Download Certificate”.


Save it in a secure location on your system. You can safely log off from the KeyControl server, as we won't be needing it anymore for this demo.

Let's move to our vCenter server. At the top node of your vCenter tree, select the name of your host and click "Configure" on the right.


In the configuration options below, open the "More" node and select "Key Management Server". Click "Add" on the right. Fill in the data according to your infrastructure requirements.


For the cluster name I've chosen VKMS; it's something that you can refer to later if you're setting up a cluster. Click "Add". If vCenter can make a successful connection to the VKMS host, it will present an overview of the certificate it will add to its configuration.


Click “Trust” to continue.

In the exercise above we let our vCenter setup trust the VKMS server. All that remains is to do the same for the VKMS server. You would expect to log in to the VKMS again, but this can also be done from vCenter itself. That's the reason we created the certificate at the beginning of this post. Select the VKMS host we just added.


The configuration window opens, where we can select "Make KMS trust VCenter". In the pop-up that appears we have to select the method we want to use to let our VKMS trust vCenter.


As we own the certificates, we'll use the third option, "KMS Certificate and Private key". Click "Next".

Now here's the trick that got me stuck for a while. On the "Upload KMS Credentials" page, the wizard asks for a KMS certificate and a private key. After a couple of attempts and a bit of reading, it turns out that selecting the certificate we created for both fields does the trick.


Click "Upload a file" and browse to the certificate we downloaded from the KeyControl server earlier. Select the certificate we created earlier and click "Open". Back in the wizard, select "Establish trust". On success, everything will be set to green on the vCenter side.


And that’s really it!

All that remains is creating a new virtual machine where you select "Enable Windows Virtualization Based Security" on the "Select a Guest OS" page.


This will give you the option to add a Trusted Platform Module (TPM) chip at any time.

I hope that this blog post was helpful, and as always, feedback is appreciated.


Do you want to know a secret?

Do you want to know a secret? You probably do. The question is: would you like it if anyone else knew your secret as well? I am guessing not. That is why, on the Internet, we use encryption for the data that we send and receive, just to make sure that someone else is not listening in on our conversation. Encryption is not only used when communicating directly with each other; it is also used for something we call "integrity", a.k.a. "I want to make sure that I get what the other end is sending without anyone modifying it in transit". So, use encryption everywhere!

The basis of encryption is that the parties that want to exchange information agree on a common secret. However, the challenge that they have is to exchange that secret without anyone else knowing it. And that’s where the fun begins. Let us take the following as an example.


Let's suppose that Michael wants to send a confidential message to Rick without the "Evil dude" in the middle listening in. Obviously, they want to use encryption for that, but for this to work they both need to agree on a secret that only they know. This secret needs to be transferred in such a way that only Michael and Rick know its content, so they need to figure out a way to do that without our "Evil dude" getting his hands on it.

The basis for this "key exchange" is a mathematical operation called modulus division. In its most simplistic form it asks: how many times does x fit into y, and what is the remainder? Let's take this example:

23 Mod 13 = 10

The statement above asks, "How many times does 13 fit into 23, and what is the remainder?" That's really it. Now it can get very complex, but in essence this is modulus division. For our encryption to work we can't just take any number as the modulus; it needs to be a prime number, because we need a unique property of prime numbers later on in the calculation. Remember that a prime number is a whole number that cannot be made by multiplying other whole numbers; if you can make it by multiplying other whole numbers, it is a composite number.

Back to our modulus example "23 Mod 13", where we call 23 the generator and 13 the prime modulus. So first Michael and Rick agree on the generator (G) and the prime modulus (P) that they will use. As this is all sent in clear text, our "Evil dude" receives a copy as well.


To start the fun, Michael picks a random (private) number, raises the generator to that power, and sends the result to Rick. For kicks, let's pick 15. So we get the following:

23^15 Mod 13 = 12

The resulting number, 12, is sent in clear text (public information) to Rick; needless to say, our eavesdropping person in the middle also gets a copy. Rick, receiving the package, now does exactly the same with a random number he makes up. Let's say 26. The result is:

23^26 Mod 13 = 9

This resulting number is sent back to Michael for him to use.


So now both Michael and Rick have some public and private information. Michael has a private number (let's call it a key) of 15 and received a public key of 9. Rick has a private key of 26 and received a public key of 12. Our hacker has all the public information.

Now comes the magic. To calculate the shared secret, both Michael and Rick take the public information they received and use it with their own private key to generate the shared secret for further communication. It goes like this:


Michael: 9^15 Mod 13 = 1 (Rick's public key, raised to Michael's private key, yields the shared key)

Rick: 12^26 Mod 13 = 1 (Michael's public key, raised to Rick's private key, yields the shared key)

So what we have done here is take the public information we received from the person we want to set up secure communications with, and use exponentiation with our own private key to calculate the secret. Once we have generated our secret, we can use it to encrypt all further messages. Incidentally, we could also multiply both private keys and use the product as a single exponent; it yields the same shared key:

23^(15 × 26) Mod 13 = 23^390 Mod 13 = 1

What I've shown you here are the essentials of public key exchange using Diffie-Hellman, a popular cryptographic algorithm that allows Internet protocols to agree on a shared key and negotiate a secure connection. It is fundamental to many protocols, including HTTPS, SSH and IPsec, and to protocols that rely on Transport Layer Security (TLS). Obviously, in my example I've oversimplified things just to make it understandable; in practice it's a bit more complex and uses much larger numbers, but the technique is the same.
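The whole exchange above can be reproduced in a few lines of Python, using the toy numbers from this post. Real Diffie-Hellman uses primes of 2048 bits or more; these tiny values are for illustration only.

```python
# Toy Diffie-Hellman with the numbers from this post.

g, p = 23, 13                  # generator and prime modulus (public)

a = 15                         # Michael's private key
b = 26                         # Rick's private key

A = pow(g, a, p)               # Michael's public key: 23^15 Mod 13 = 12
B = pow(g, b, p)               # Rick's public key:    23^26 Mod 13 = 9

michael_shared = pow(B, a, p)  # 9^15 Mod 13
rick_shared = pow(A, b, p)     # 12^26 Mod 13

print(A, B, michael_shared, rick_shared)        # prints: 12 9 1 1
assert michael_shared == rick_shared == pow(g, a * b, p)
```

Both parties arrive at the same value because (g^a)^b = (g^b)^a = g^(a×b) under the modulus, which is exactly the trick the exchange relies on.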

As always if you have any questions, comments or remarks, just let me know.

Managing sudo using Active Directory

In my previous post I explained how you can, in just a few steps, join an Ubuntu machine to an Active Directory domain. After a lot of online and offline feedback (thanks everyone!) I thought it was time for a follow-up post. This time I'm addressing centralized management of sudo users, meaning who can execute commands as sudo on defined Linux desktops (in my case Ubuntu), although it works exactly the same on a server-class installation.

My setup is very straightforward and is basically a copy of my previous post. I've installed one Windows Server 2016 domain controller in the domain "corp.bitsofwater.com" and joined an Ubuntu 18.04 LTS machine to that domain. Users from that domain can happily log in to the Ubuntu workstation using their domain credentials. In this post we'll deal with two users:

  • Domain Administrator: Administrator
  • Local Root user: Superuser

So our mission will be to configure the domain and Ubuntu desktop in such a way that members of a domain security group will be able to use sudo once logged in on the Ubuntu desktop.

The first task at hand is to make Active Directory capable of supporting centralized sudo management. Out of the box, AD can't be used for this, for the simple reason that the required attributes are missing from the AD schema. The cool part is that the people who created and maintain sudo made it very easy for us to extend Active Directory with the appropriate attributes. First we need to download the latest sudo package from https://www.sudo.ws/. Click "Download" and get the latest package; I based this post on the 1.8.23 release. Once the package is extracted (7-Zip is an excellent Windows client for that), you end up with a folder that contains the file we need. Browse to the doc folder.


There should be a file named "schema.ActiveDirectory". This file is used to extend the Active Directory schema with the sudo attributes that we will be using. In your Active Directory environment, log in to the domain controller that holds the schema master role with an account that has the privileges to extend the schema. In my case I'll just use the all-powerful Administrator account. At a minimum, copy the "schema.ActiveDirectory" file to that server and open an elevated command prompt. In the command prompt, browse to the location where you stored the schema extension file. Execute the following "ldifde.exe" command; leave the command line exactly as is and only replace the latter part to reflect your domain structure.

ldifde -i -f schema.ActiveDirectory -c dc=X dc=corp,dc=bitsofwater,dc=com


Open "adsiedit.msc" and connect to the "Default naming context". In the root, create a new organizational unit named "sudoers". This is the default location where sudo looks for user-defined rules in AD; don't worry, I'll show you later in this post how you can change that.

What we need to do next is create a new object that will contain our attributes. Right-click the organizational unit you just created and select "New" – "Object". In the next window select "sudoRole".


If you don't see the object class immediately, that probably means the class hasn't replicated to your DC yet; just give it some time. In the next screen, use the name "default". In my experience it really doesn't matter what name you give the object; I tend to use a more descriptive name like "desktops" or "servers", which makes it a bit clearer what purpose the object serves. On the last screen, click "Finish" to create the "sudoRole" object.

Now we need to edit the attributes of the default object we just created. This way we can configure which sudo commands can (or cannot) be executed on the Linux host. Select "Properties" on the "cn=default,ou=sudoers" object. Just for testing purposes, we're going to enable all sudo privileges on all machines. Edit these attributes:

  • sudoCommand: ALL
  • sudoHost: ALL
  • sudoUser: ALL
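As an alternative to clicking through ADSI Edit, the same object can be created in one go from an LDIF file; this is a sketch using the domain from this post, and the file name is my own:

```ldif
dn: CN=default,OU=sudoers,DC=corp,DC=bitsofwater,DC=com
objectClass: sudoRole
cn: default
sudoCommand: ALL
sudoHost: ALL
sudoUser: ALL
```

Save it as, say, "sudoRole.ldif" and import it on the domain controller with "ldifde -i -f sudoRole.ldif".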


Next, log in to the Ubuntu machine as a root-capable user, in my case "SuperUser", start a terminal, open a bash shell as sudo and restart the sssd service with "systemctl restart sssd".

To test the configuration, use "su administrator" and enter the password. Now run "sudo bash", "sudo -l", or whatever sudo command you would like to use. After entering your password, you should be able to execute commands as sudo.


Easy, right? Now that we have the basics covered, here are some hints on tuning it a bit here and there.

Domain Security Groups

In any infrastructure of some size, it's highly unlikely that you will list many individual users in the "sudoUser" attribute. Having a group listed there makes more sense. Whereas a user can be set by just using the "sAMAccountName", a group needs to be prefixed with a % sign. So suppose we want to copy the behavior of Windows and add "Domain Admins" to every machine (which is a terrible idea, by the way!), the sudoUser attribute would look like this:


  • sudoUser: %Domain Admins

Note! Although you would expect an escape character (\) after "%Domain" to cope with the space, this is handled entirely by the sssd service. Very cool if you ask me!

Note! There's also the possibility to use wildcard characters in the attributes (see sudoHost). More options are listed here:


Organizational Units

By default, sudo looks for an organizational unit named "sudoers" in the root of the directory. There is, however, a way to tell sudo to look elsewhere. For example, if you have two OUs assigned to different groups, you could use that setup to apply different sudo configurations. So let's assume we have "OUa" and "OUb". The Ubuntu machine in our example belongs to OUb, and we want to point it to an OU named "Linux" that resides within "OUb".


It's actually not hard to do. Create the organizational structure I described and create a new "sudoRole" object in there. On the Ubuntu host, open the file "/etc/sssd/sssd.conf" as sudo. Add the following line under [domain/YourDomain]:

ldap_sudo_search_base = ou=linux,ou=OUb,dc=corp,dc=bitsofwater,dc=com


Restart the sssd service and you are ready to go.

Troubleshooting Tips

Sometimes the caching mechanism of the sssd service is so persistent that the entries for sudo users remain, even after several reboots. One command in particular has helped me clear the cache:

sss_cache -E

If that doesn’t work, stop the sssd service and delete the database files in “/var/lib/sss/db“.

rm /var/lib/sss/db/*

Just remember that the cache receives a full update every six hours and an incremental one every 15 minutes, but that really doesn't help when the cache is broken. Those values can be altered by setting:

  • ldap_sudo_full_refresh_interval=86400
  • ldap_sudo_smart_refresh_interval=3600

Configuration files

Sometimes, especially between sssd versions or Linux distributions, not all files are configured appropriately. For example, I noticed significant differences between Ubuntu and Red Hat. Make sure that these two files contain the lines shown here:

In "/etc/nsswitch.conf":

sudoers: files sss

In "/etc/sssd/sssd.conf", in the [sssd] section:

services = nss, pam, sudo

I hope that this was useful to you! As always, you can leave comments, remarks or questions below or send me a direct message.

Join Ubuntu 18.04 to Active Directory

At work, we are building a data ingress environment for analytical purposes. The setup includes both Windows and Linux machines for managing the infrastructure and data processing. One of my tasks (next to the usual security hardening) was to investigate how we could add the Linux nodes to the Windows Active Directory domain for simplified management. It turns out there are a couple of ways to accomplish that task. It's not as straightforward as it is with Windows, but once you get the right tools and know which files to edit, it's really not that hard. With this post I want to share my experiences and show you, step by step, how to add a Linux host to a Windows Active Directory domain.

System Security Services Daemon

I've tried a couple of options/packages for joining a Linux machine to a Windows Active Directory domain, but in the end, for me, using the System Security Services Daemon (SSSD) was the most effective way to accomplish the task at hand. SSSD acts as the intermediary that helps you configure the system without you needing to know which files to edit (although that knowledge can be very useful). The other benefit I discovered is that it's available on all major distributions, like Red Hat and Ubuntu, so what I describe here will be useful in many situations. Let's dive in.


My setup is straightforward. A single Domain Controller, named DC01 in the “corp.bitsofwater.com” domain. Next to the DC role it also hosts the DNS role. The client computer is an Ubuntu 18.04 machine, named “Ubuntu18”, and is configured to use the DNS server on DC01. I’ve checked connectivity to DC01 with a simple ICMP ping and name resolution with NSLookup. Both work as expected.


First we need to install all the appropriate packages. This post focuses on Ubuntu 18.04, but it's almost the same on other distributions that use apt (or yum) as their package manager. Open a terminal, gain root privileges and install these packages:

  • realmd
  • sssd
  • sssd-tools
  • libnss-sss
  • libpam-sss
  • krb5-user
  • adcli
  • samba-common-bin


apt install -y realmd sssd sssd-tools libnss-sss libpam-sss krb5-user adcli samba-common-bin

During the installation of the “krb5-user” you’ll be prompted for the domain name. Fill in your domain name in capital letters. See my example below.


If for some reason this pop-up does not appear (that happened to me once), or you want to change it afterwards, edit the "krb5.conf" file in the "/etc/" directory. I always add these two entries to the file:

  • dns_lookup_realm = true
  • dns_lookup_kdc = true

That explicitly tells the client to use DNS for all lookups instead of expecting the information to be present in the "krb5.conf" file.

More info about configuration options can be found here:

Timing is everything

Kerberos authentication relies heavily on the correct time being set at both ends; the clocks of the two entities trying to authenticate may differ by at most 5 minutes. On Ubuntu, "timesyncd" is responsible for everything related to time. First we need to point the client to the closest time source. Usually this is the DC that will provide the correct time, but any time source will do. Edit the following file to add the NTP source as displayed in the example:



Use these steps to set the correct time:

  • timedatectl set-ntp true (set NTP sync to true)
  • systemctl restart systemd-timesyncd.service (restart the service)
  • timedatectl --adjust-system-clock (force sync)

After a while the time will start to sync. Use “timedatectl status” to get the actual status.

Configure realmd

Realmd takes care of adding the Linux host to a Kerberos realm such as Active Directory. It consists of tools and configuration options. The configuration is stored in the "realmd.conf" file located in the "/etc/" directory.

The configuration that I found useful is the following:

 [users]
 default-home = /home/%D/%U
 default-shell = /bin/bash

 [active-directory]
 default-client = sssd
 os-name = Ubuntu Workstation
 os-version = 18.04

 [service]
 automatic-install = no

 [corp.bitsofwater.com]
 fully-qualified-names = yes
 automatic-id-mapping = no
 user-principal = yes
 manage-system = yes

Information about the various options of the realmd.conf file can be found here:

Auto create home folders

Before we join the domain, the system needs to be told that it needs to auto-create users' home folders. By default this is turned off for domain accounts and needs to be enabled first. This is easily done with the "pam-auth-update" tool. Run that command with root privileges and tick the box "Create home directory on login".


The changes are saved to the file “/etc/pam.d/common-session“.

Testing Directory Access

Now that I have installed all the packages and configured the appropriate settings, I'm ready to test the setup. Ubuntu has a few very useful tools to see if Kerberos authentication will succeed. Use the following commands to test it out:

Discover the domain

realm discover corp.bitsofwater.com

Get a Kerberos ticket for the user Admin

kinit Admin

Display the Kerberos ticket

klist

Destroy the ticket

kdestroy

The reason I destroy the ticket first is that it would otherwise be used during the domain join that I'll show you next.

Joining the domain

Now that Kerberos is successfully tested, I am ready to join the domain. The tool that I'll be using, "realm", was installed with the realmd package. Use the following command:

realm join --verbose --user=admin --computer-ou=OU=Linux,DC=corp,DC=bitsofwater,DC=com corp.bitsofwater.com

In the example above I've turned on verbose output, told the command that I will be using the user named "Admin" to join the domain, and put the created computer object into the "Linux" organizational unit in the "corp.bitsofwater.com" domain. Hit Enter and you'll be prompted for the password; enter it and the domain join is executed. If all goes well, it ends with "Successfully enrolled machine in realm". Easy, right?


Checking the domain, we can see that a new computer object has been created in the organizational unit.


If you want to change any configuration setting at a later stage, edit the SSSD file located at "/etc/sssd/sssd.conf". The only thing I changed is the entry "ldap_id_mapping", which I set to "True" as I don't have the POSIX attributes set in Active Directory. Without this set, I could not log in because user IDs can't be translated.

Login Screen

For domain users to be able to log in on Ubuntu 16.04, the login screen needed to be reconfigured. Normally it only lists the local users, without the possibility for other, domain-based users to log in. This capability was enabled by editing the Unity greeter configuration located at "/usr/share/lightdm/lightdm.conf.d/50-unity-greeter.conf" and adding the line:

greeter-show-manual-login=true
On Ubuntu 18.04 there's no need to change the login screen anymore. Simply selecting "Not listed?" on the screen will present a username/password prompt. Enter the username with or without the domain suffix.


And that's it! Log in with a domain account and the user will be authenticated against Active Directory, and a local home folder will automatically be created for the user.

I hope that you will find this information useful.

10-07-2018: Update after user feedback

Make sure that your Active Directory is prepared for IPv6, as Ubuntu 18.04 combined with Windows Server 2016 seems to default to IPv6 under certain circumstances. Another user and I got the error message "Couldn't join realm: Insufficient permissions to join the domain". Kind of a bogus message, but it turned out to be missing IPv6 information in AD DNS. The solution was to fix DNS or disable IPv6.

In my example above I used the domain suffix during login. At the time I didn't know there was an option to select a default domain when you only enter the user name. Edit the [sssd] section in "/etc/sssd/sssd.conf" to include "default_domain_suffix".




Converting a String to an Integer In PowerShell

For a project I'm working on (more on this site very soon), I ran into an issue with PowerShell variables that kept me busy for a few hours. What I was trying to do is read a value from an XML file, put it into a variable and use it as the value for the maximum memory that a virtual machine could use. Sounds easy, right? Well, not really. What happened is that I got a conversion error:

Input string was not in a correct format.

This indicates that something wasn't matching during a conversion and needed to be fixed. Let me explain.

So the value from the XML file is read into a variable as text (string). Basically it comes down to this:


When we query with GetType(), we clearly see that the value is of type String. This happens because I included the "MB" part. Normally, when you put a size value like "1024MB" directly into a variable inside a script, PowerShell is smart enough to figure out what you are actually trying to do (use a certain size in megabytes, gigabytes, etc.) and converts it automatically to an integer. But in this case the value "1024MB" is just text. I thought PowerShell would equally be smart enough that putting [int] in front of the variable would somehow convert it. That didn't work as expected; hence the error at the beginning of this post.

The solution I wanted to create was to split the value at the "MB" or "GB" part, leaving just the numbers; after figuring out what kind of value it was, I would convert it to an integer by doing some math. It was during this experiment that I discovered an easier way: just divide the string by 1 and PowerShell's smartness kicks in again! See the magic at work:


So just by dividing it by 1 it’s converted to the correct integer type. How cool (and easy) is that!

Hope this trick can be put to good use and saves you a lot of research time!

PowerShell Module Test-TCPConnection

I have this love versus disappointment relationship with PowerShell. It can give us a lot of great automation, but sometimes the thing that looks like the best thing since sliced bread turns out to be a bit of a disappointment. Take the cmdlet "Test-NetConnection", for example. It's absolutely wonderful in what it does: it assists you in doing a network diagnosis with just a single command, much more than you could ever get out of a simple ICMP ping. However, the latter is exactly the problem with this cmdlet: you cannot run it without it using a ping. The real downside is that when the port is not available, blocked by a firewall, or something else is blocking it, you have to wait for a timeout, and that takes time, a lot of time. If it were just one host, I could live with it, but if you need to figure out whether 200 servers are reachable, you are going to be in the office for a while. Hence the creation of my very first cmdlet!

Based on my disappointment with "Test-NetConnection" and my desire to learn more about PowerShell, I created my first cmdlet that does exactly what "Test-NetConnection" does with respect to a port query, but without the ICMP ping involved. I have dubbed it "Test-TCPConnection" because, well, that is what it does. Being the nerd that I am, I have included full help in the module itself, but I will list it on my site as well. Use "help Test-TCPConnection -Online" to go to the online help page.

To make the "Test-TCPConnection" module work, extract the folder and place it in your PowerShell modules folder. Use the following command to see your individual module paths.

$env:PSModulePath -Split ";"

After reloading your PowerShell environment, the module will be loaded automatically. You can check the commands the module exposes with:

Get-Command -Module TestTCPConnection


Getting the syntax is easily accomplished with:

Get-Command Test-TCPConnection -Syntax


Any feedback is always appreciated. Use the comment option down below or send me a message using the contact page.

Download version from the PowerShell Gallery:

Replacing DiskPart with PowerShell

In my previous life, I was a deployment person: MDT, WDS, WinPE and bare-metal installation were all part of my life. For managing disks, physical or virtual, I always used "diskpart.exe" to create the disk layout, create bootable partitions and handle everything surrounding the magic of disks and partitions. Since I am now trying to do as much as possible with PowerShell, I thought it would be fun to see if "diskpart.exe" could be replaced with pure PowerShell cmdlets. To give you a heads-up: it can, for 99%, almost there! Just one feature I could not find: setting GPT attributes. According to this article, a recovery partition should have the attributes "GPT_ATTRIBUTE_PLATFORM_REQUIRED" and "GPT_BASIC_DATA_ATTRIBUTE_NO_DRIVE_LETTER", resulting in a value of "0x8000000000000001". Using "diskpart.exe" to query a default installation of Windows 10 shows the correct attributes.


I expected that setting a partition to the GUID value for a recovery partition with PowerShell ("de94bba4-06d1-4d40-a16a-bfd50179d6ac") would also take care of both attributes. It partially does: only "GPT_BASIC_DATA_ATTRIBUTE_NO_DRIVE_LETTER" is set, so the drive is hidden from the OS. The other attribute is not set and, according to my research, it is simply not available in PowerShell. Therefore, you will still need "diskpart.exe".

During my experiments, I have concluded that PowerShell's "Disk", "Partition" and "Volume" cmdlets are tricky to work with. It takes time to understand how to handle them, but it eventually works. In my opinion "diskpart.exe" is still more powerful when it comes down to pure disk handling; however, PowerShell has far better support for the dynamics surrounding scripting and error handling. Still, it is not difficult to combine the two, as you will see in my example.

Here is my code; you can also download the script below. Please note that I have put a "return" at the top of the downloadable script so you do not destroy your disk the first time you run it.

# Define the disk
$Disk = Get-Disk | where Number -EQ "0"
$DiskNumber = $Disk.DiskNumber

# Clean the disk and convert to GPT
if ($Disk.PartitionStyle -eq "RAW") {

    Initialize-Disk -Number $DiskNumber -PartitionStyle GPT

} else {

    Clear-Disk -Number $DiskNumber -RemoveData -RemoveOEM -Confirm:$false
    Initialize-Disk -Number $DiskNumber -PartitionStyle GPT
}


#Create the System Partition
$SystemPartition = New-Partition -DiskNumber $DiskNumber -Size 512MB
$SystemPartition | Format-Volume -FileSystem FAT32 -NewFileSystemLabel "system"
$systemPartition | Set-Partition -GptType "{c12a7328-f81f-11d2-ba4b-00a0c93ec93b}"
$SystemPartition | Set-Partition -NewDriveLetter "S"

#Create Microsoft Reserved Partition
New-Partition -DiskNumber $DiskNumber -Size 128MB -GptType "{e3c9e316-0b5c-4db8-817d-f92df00215ae}"

#Create Primary Partition
$PrimaryPartition = New-Partition -DiskNumber $DiskNumber -UseMaximumSize
$PrimaryPartition | Format-Volume -FileSystem NTFS
$PrimaryPartition | Set-Partition -GptType "{ebd0a0a2-b9e5-4433-87c0-68b6b72699c7}"
$PrimaryPartition | Set-Partition -NewDriveLetter "W"

#Shrink Primary Partition by 500MB for the Recovery Partition
$newSize = ($PrimaryPartition.Size - 524288000)
$PrimaryPartition | Resize-Partition -Size $newSize

#Create the Recovery Partition
$RecoveryPartition = New-Partition -DiskNumber $DiskNumber -UseMaximumSize
$RecoveryPartition | Format-Volume -FileSystem NTFS
$RecoveryPartition | Set-Partition -GptType "{de94bba4-06d1-4d40-a16a-bfd50179d6ac}"
$RecoveryPartition | Set-Partition -NewDriveLetter "R"

# Add "Required" attribute to the GPT Recovery partition (no .NET/PowerShell function available)
$partitionNumber = $RecoveryPartition.PartitionNumber
$DiskpartCMD = @(
    "select disk $DiskNumber"
    "select partition $partitionNumber"
    'gpt attributes=0x8000000000000001'
)

$DiskpartCMD | diskpart.exe

If anyone reading has a PowerShell solution to setting the attributes, please let me know.







Customizing the recovery partition after upgrading the OS from Windows 8.1 to Windows 10



Using Credential Manager in PowerShell

Using PowerShell remoting can be a fantastic experience, but the number of times I have had to enter credentials to create a new PSSession is getting out of hand, or to put it better, a painful hand. Wouldn't it be great if you could store the credentials somewhere safe and retrieve them when necessary? Fortunately, you can! For a long time Windows has had the option to store credentials in a local, secured database and use them when needed. This is known as the "Credential Manager" and can be found in the Control Panel.


Within this tool you can store credentials for both websites and network resources such as remote servers. Wouldn't it be really cool if you could store credentials there once and retrieve them using PowerShell? Luckily, you can! Someone actually created a PowerShell module called "CredentialManager" that does just what this post is about.

First, we need to install the module. It's located in the PowerShell Gallery, so you need to trust that gallery (use Set-PSRepository) and have NuGet installed. If you haven't done this yet, you'll get additional prompts; accept them to continue.

Install-Module -Name "CredentialManager"

To see what the module exposes use:

Get-Command -Module "CredentialManager"


Let's start with creating new credentials. The cmdlet "New-StoredCredential" seems to do just that. I've constructed the following code to create a new set of credentials in the Credential Manager. The first part prompts for input and stores it in the $Secure variable. Next, I create the new object and make it persistent across sessions by specifying the "-Persist" parameter. As it's also a generic type of credential, I explicitly specify that as well.

$Target = "YourServerName"
$UserName = "Administrator"
$Secure = Read-host -AsSecureString
New-StoredCredential -Target $Target -UserName $UserName -SecurePassword $Secure -Persist LocalMachine -Type Generic

The output will result in this:


Using the help system I figured out that "Get-StoredCredential" retrieves the objects stored in the Credential Manager. To select the credentials we just created, the "-Target" parameter needs to be specified. In this case we refer to the target server we just created, "servername". Use this command to get all properties into a credentialized object:

Get-StoredCredential -Target "servername" –AsCredentialObject

Retrieving a single pair of credentials is easy, right? Now what I am after are credentials stored for use in PowerShell remoting, in my specific case to connect to a remote server I'm managing.

Eventually, I created this command-line:

Enter-PSSession -ComputerName "servername" -Authentication Credssp -Credential (Get-StoredCredential -Target "servername")

As you can see the “Get-StoredCredential” needs a target parameter to retrieve the credentials. That outputs a username and password (as System.Security.SecureString) that is then passed to the “Enter-PSSession” cmdlet.
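Once retrieved, the credential object can of course be reused across several cmdlets instead of being fetched inline each time. A minimal sketch, assuming the same placeholder target "servername" used above and a reachable remote host:

```powershell
# Retrieve the stored credential once (returns a PSCredential object)
$cred = Get-StoredCredential -Target "servername"

# Reuse it for multiple remoting operations
Invoke-Command -ComputerName "servername" -Credential $cred -ScriptBlock { Get-Service -Name "bits" }
$session = New-PSSession -ComputerName "servername" -Credential $cred
```

Storing the credential in a variable keeps the command lines short and avoids repeated lookups in the Credential Manager.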


My servername in this case is “WHV” and the credentials are stored as “HyperVManager/WHV”.

Removing credentials with the same module is very straightforward. "Remove-StoredCredential" does the trick, again pointing it to the credentials with the "-Target" parameter.

Remove-StoredCredential -Target "servername"

I hope that this post helps you to use credentials in a safe and easy manner.

ByValue & ByPropertyName

Recently I had the pleasure of attending a PowerShell course. Although it was not exactly what I expected, I still picked up a few things here and there. It was good to see the attendees getting more enthusiastic about PowerShell during the course and acquiring a solid base to start using it.

One of the items discussed was how to use the pipeline: passing the output of one cmdlet to another. Something obvious such as "get-service -name "bits" | start-service" was used during a demo. That's great for starters; however, it gets a little more confusing when cmdlets with different nouns are used. As an example, let's try to get these two cmdlets to play nice with each other.

Resolve-DnsName <something> | Test-NetConnection

Let's first try these two cmdlets individually to see how they actually work. Remember, you can always use the help system for more information.

help Resolve-DnsName -Full

Without knowing anything about the cmdlet itself, I'm guessing that it will need a hostname, FQDN or IP address. Just trying a hostname with the name "server" resolves into this:

ByValueProprtyName 01

So apparently I was correct. But how does this actually work? What does this cmdlet actually expect as input? In PowerShell this is really easy to figure out. The next command gets the parameter that is required for this cmdlet:

get-help Resolve-DnsName -Parameter * | where {($_.Position -eq "0") -and ($_.Required -like "true")}

ByValueProprtyName 02

Here we see that a single input object is required, which is called "Name". We didn't specify the Name parameter, but it still worked because, according to the help system, the first unnamed value is automatically bound to the parameter at position 0. Let's try this on "Test-NetConnection".

ByValueProprtyName 03

Okay… weird, that produces absolutely nothing. Not to worry, the cmdlet construction isn't faulty; "Test-NetConnection" just doesn't require any input. Running the cmdlet by itself checks connectivity to the Internet.

ByValueProprtyName 04

Since we want to test an internal server, it would make sense that a certain parameter accepts input. Using "get-help" we can figure that out.

get-help Test-NetConnection -Parameter *

ByValueProprtyName 05

Because this cmdlet doesn't require input, parameter position "0" is not available; the first positional parameter sits at position "1". Looking at the help file, a parameter with the name "ComputerName" would do the trick. Let's try it first without specifying the actual parameter name itself.

ByValueProprtyName 06

Eureka! This works. So combining the two cmdlets would work. Let’s try that.

ByValueProprtyName 07

Hmm, no go, bummer. "Test-NetConnection" apparently has no idea what we want to accomplish. Let's figure out what the result of "Resolve-DnsName" actually is. We do this with:

Resolve-DnsName server | gm

"gm" is the alias for "Get-Member", or "give me all the stuff from an object".

ByValueProprtyName 08

What's important here is the TypeName, "Microsoft.DnsClient.Commands.DnsRecord_A". This "Type" is the object type that is passed over the pipeline, and it needs to be something the receiving end understands. This is the PowerShell technique called "ByValue": the type of the object being passed needs to match a type the receiving parameter accepts. Under the hood, PowerShell does its magic by trying to match the output of the first cmdlet to the input of the second one IF the object type is the same. So let's see what "Test-NetConnection" expects.

ByValueProprtyName 09

Using "Get-Help" again, we take a closer look at the "ComputerName" parameter. The expected input type is specified directly after the parameter name: "[<string>]". So the object passed "ByValue" over the pipeline needs to be of the string type. Let's try the simplistic version first and just pass a string as input.

"server" | Test-NetConnection

ByValueProprtyName 10

So, now we know that this actually works. Let’s convert the output of the “Resolve-Dnsname” and see what happens.

(Resolve-DnsName server).name.ToString() | Test-NetConnection

ByValueProprtyName 11

That seems to work as well! Note that you could technically skip the ".ToString()" part, because the property "Name" is already a string, regardless of the type of the whole object. This wouldn't work for other properties that have a different type; use "gm" or the ".GetType()" method to see the actual type.
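A quick sketch of checking the property type yourself, assuming "server" resolves in your environment:

```powershell
# Grab one DNS record and inspect the type of its Name property
$record = Resolve-DnsName -Name "server" | Select-Object -First 1
$record.Name.GetType().FullName   # the Name property is a System.String
$record | Get-Member              # shows the full DnsRecord type and its members
```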

ByValueProprtyName 12

Please note that by using “ByValue” you can only pass a single property over the pipeline!


One of the other options that we could also use is the “ByPropertyName” option. As we can see from the help text this option is available for the “Computername” parameter of “Test-NetConnection”.

What this simply means is that a property name of the output of the first cmdlet must match an input parameter name of the second cmdlet. In our example, the property "Name" of the "Resolve-DnsName" output must match the input parameter "ComputerName" of the "Test-NetConnection" cmdlet. It doesn't out of the box, but we can make it so by creating an expression like this:

Resolve-DnsName server | select @{Name="Computername";Expression={$_.name}} | Test-NetConnection

The select statement is where the magic happens. The "Name=" part tells PowerShell that a new property with the name "Computername" should be created. The "Expression" part fills that newly created property with the value of the "Name" property from the first cmdlet. The best part is that the object type doesn't matter here, as long as the type of the property matches; in this case it still needs to be a string. The object itself can remain a different type, in this case "Selected.Microsoft.DnsClient.Commands.DnsRecord_A".

ByValueProprtyName 13

Putting this all together we end up with the following command line:

Resolve-DnsName server | select @{Name="Computername";Expression={$_.name}} | Test-NetConnection

ByValueProprtyName 14

And there we have what we wanted to accomplish!!

As you can see, there are multiple ways to go about constructing your command line and passing values over the pipeline, especially with the "ByValue" and "ByPropertyName" options. I hope this post helped in understanding the differences.


Learn About Using PowerShell Value Binding by Property Name

Two Ways To Accept Pipeline Input In PowerShell

Internet Explorer Hardening Mysteries

Today I had a very interesting problem with system hardening and a new application we are going to use. This application moved from a form-based management interface to a web-based one. Under normal circumstances this doesn't pose a huge challenge, because management of this type of app is done over the network from a client machine. However, since we have a solution that is sometimes only reachable on the box itself, I have to make sure that the local instance of Internet Explorer still works after I apply system hardening policies. And that's where it gets confusing. In the explanation below I'll use a simple page hosted on IIS, accessed locally through Internet Explorer on Windows Server 2012 R2.

On this page:

  1. The Basics
  2. Default Setup
  3. Some Magic
  4. Site to Zone Assignment List
  5. Computer Configuration
  6. Security Zones
  7. The Effects of User Account Control
  8. The Enterprise Solution
  9. Summary
  10. References

The Basics

Before we start, there are a few basics we need to discuss. Internet Explorer has something called security zone assignment. This means every site you open in the browser is assessed and placed into one of these zones. Each zone holds a different security configuration, so you can interact differently with sites you trust, that are under your control, that run in your data centers or that are hosted on the Internet. Opening the Internet Properties, Security page reveals 4 of the 5 zones. There's one hidden zone (zone 0) which is applicable to the local computer (system) only.

  1. Local Intranet
  2. Trusted Sites
  3. Internet
  4. Restricted sites

These are actually the internal numbers Internet Explorer assigns to the zones. We'll be using them later.
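You can see those zone numbers for yourself in the registry. A read-only sketch, assuming the default per-user zone configuration:

```powershell
# List the security zones (0-4) and their display names from the current user hive
$zones = "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones"
Get-ChildItem -Path $zones | ForEach-Object {
    "{0} = {1}" -f $_.PSChildName, (Get-ItemProperty -Path $_.PSPath).DisplayName
}
```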

Next I want to focus on a security feature called "IE Enhanced Security Configuration" that was introduced some time ago, but is almost exclusively disabled on all systems that I've seen, which is really unfortunate. Let's be clear on one thing though: having a browser installed on a server-class device is under almost any circumstance a no-go. Just don't do it. Your networking people have most likely set up all kinds of network devices to keep your network secure, and browsing from a server isn't really helping; getting all kinds of nastiness directly into your server segment is considered bad practice. Having said that, our setup unfortunately can't work in a different way. We have to deal with a browser on the box, hence we harden Internet Explorer, and "IE Enhanced Security Configuration" is part of that. Basically, it raises the security settings of the security zones we discussed above, and more. If you want to know the details, type this in the Internet Explorer address bar:


The last thing I want to address is the addition of "Enhanced Protected Mode" in Internet Explorer. This extra layer of security was introduced in Windows 8. Its intent was to further enhance the capabilities of the original implementation of protected mode. It does so by enabling a sandboxing technology Microsoft named AppContainers. The concept goes back to the release of Windows Vista, where integrity levels were introduced. Those levels separate the abilities of processes running at different levels in the operating system.

More information can be found here:

Default Setup

Having a web application on your box and connecting to it using localhost simply works out of the box. That's because the security settings in the security zones are configured in such a way that it's an allowed operation. As I mentioned in the beginning, all sites are evaluated and placed in the appropriate zone. When you connect to localhost, the site is evaluated, placed in the "Intranet Zone", and those security settings are then used. From a UI point of view, just open "Internet Options" – "Security". Select "Local Intranet" and click "Sites"; the sites listed will automatically be placed in the "Local Intranet" zone. Sounds logical, right?

Now on Server 2012 R2, from a registry point of view, it gets a little more complicated. By default, security zone information is stored in the context of the current user, but depending on whether you have "IE Enhanced Security Configuration" enabled or not, it's stored in a different place. The UI, however, does a good job of storing the entries in the appropriate place.

IE Enhanced Security Configuration turned on (default)

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\EscDomains

IE Enhanced Security Configuration turned off

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains

Some Magic

If you turn off "IE Enhanced Security Configuration", the default EscDomains registry keys are completely cleared. Just take a look at the registry key or the Security tab. However, connecting to localhost still works; even bringing up the properties of the web page shows it being in the Local Intranet zone. Even though localhost is not specifically mentioned anywhere, the setting "Include all local (intranet) sites not listed in other zones" is at play here. Anything not mentioned elsewhere that qualifies as "local intranet" is placed in this zone. That's why this still works. Restoring IE Enhanced Security Configuration will restore the default keys.

Site to Zone Assignment List

When you're in a network managing a lot of machines, you want a centralized way of managing devices. Usually that implies using group policies. Managing zone assignment for Internet Explorer is easy and at the same time very difficult, because of the differences you can encounter. Take the "Site to Zone Assignment List" policy as an example. This policy allows you to configure sites to fall into a certain zone. However, it does not work when you have "IE Enhanced Security Configuration" enabled. And as far as I could find, there is no policy available to manage sites with a GPO when "IE Enhanced Security Configuration" is enabled, besides using Group Policy Preferences, which is simply adding registry entries. This is a bit of a mess if you ask me, as one would expect policies to be available for managing this.

Computer Configuration

As we have a requirement from the Internet Explorer STIG (Security Technical Implementation Guide) to use only computer-based policies for Internet Explorer, this is the point where things start getting a bit more confusing. The policy I'm talking about is located here:

Computer Configuration – Administrative Templates – Windows Components – Internet Explorer – Security Zones: Use only machine settings

The idea behind this setting is that regardless of whether your user account falls into the scope of management for this policy, it's simply always there. To be honest, it isn't even such a bad idea; it's just that the execution is very different from using it at the user level. Let me explain.

If you set the above policy to Enabled, the text states: "If you enable this policy, changes that the user makes to a security zone will apply to all users of the computer".

Right, so in my experience this simply is not true. Let's give it a try. Enable the policy, open IE and go to "Internet Options", "Security", "Trusted sites". Now add a website, for example bitsofwater.com, to the trusted or intranet websites. Close the dialog, open the website, then open the properties of the website. Notice that it's still in the "Internet" security zone! Hence the settings you make in the UI are not in effect, even though the keys actually end up in the correct registry place, or at least the place you would expect them to be.


Assuming the default setup of Windows with no additional policies, a notification pops up while opening the page. It informs you that the page is blocked, but provides a means to add the website. Let's try the "Add" button and see what happens. Open up the properties of the website and behold! The site is now placed in the appropriate zone! If you're just a little bit like me, you will want to know what magic is at play here. My tool of choice is again Process Monitor. There we see that the keys end up in a totally different place: the "HKLM\Software\Wow6432Node" registry location.


So this means that a 32-bit process is actually writing the keys instead of the expected 64-bit process. Adding the architecture column in Process Monitor shows this little gotcha.

Now for some real confusion. Yes, it's getting worse. Open IE again: "Internet Options", "Security", "Trusted Sites". Is the page listed? Nope, it's not! So we can safely conclude that the UI does not play nice when the computer-only policy is enabled. It's registry-only, and the settings should be in the "Wow6432Node" registry key section.

Note! In this case it doesn't matter if you turn off "IE Enhanced Security Configuration"; the effect is still the same.

Note! By default the Wow6432Node EscDomains registry key is empty and is not populated. Turning "IE Enhanced Security Configuration" off and on again creates the keys. That sure causes a lot of confusion.

Security Zones

After I finally figured out what a mess the above actually was, it got me wondering whether this has a further effect on the zone configuration itself. Let's try a little something. By default the "Local Intranet" zone doesn't have protected mode enabled. To get a little more advanced here: the setting in the zone's registry configuration has the name "2500", and the value indicates its configuration state. More info can be found here:


Let's start with the default. Set the policy "Security Zones: Use only machine settings" to off, applying the user policies again. A quick look at the current user's hive shows that enabling protected mode for the "Local Intranet" zone gives a value of 0 for the name 2500. So protected mode is on; a value of 3 turns it off.

Enable "Security Zones: Use only machine settings" again and see the effects. Open the UI, select the "Security" – "Local Intranet" zone and set "Enable Protected Mode" to on. Looking in the registry again, the value ends up in:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\1\2500

So that seems to be doing alright.

Note! After further testing, it seems the key needs to be set in both the native location and the Wow6432Node key to be effective. This should read: don't mess around with registry keys when you have a policy available. Just think of the above as nice-to-know information.
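If you do want to script it anyway, for a lab rather than production, a hedged sketch that writes the 2500 value to both hives could look like this:

```powershell
# Sketch: set Protected Mode for the Local Intranet zone (zone 1) in both
# the native and the Wow6432Node machine hives. 0 = on, 3 = off.
# Lab use only; prefer the group policy where available.
$value = 3
$paths = @(
    "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\1",
    "HKLM:\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\1"
)
foreach ($path in $paths) {
    Set-ItemProperty -Path $path -Name "2500" -Value $value -Type DWord
}
```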

The Effects of User Account Control

At the beginning of this post I mentioned that IE protected mode is built on integrity levels, which are part of User Account Control. As most of you will start testing with the built-in local Administrator account (or the domain Administrator for that matter), the outcome will be very different. By default the local Administrator account is exempt from UAC and its effects. This directly implies that protected mode for the local Administrator is not in effect, even when it's set to Enabled. The full effect of protected mode becomes visible when the policy "User Account Control: Use Admin Approval Mode for the built-in Administrator account" is set to Enabled (a system reboot is required). After that, the full implementation of protected mode is in effect. This has the direct effect of being unable to connect to localhost, as applications that have protected mode enabled are prohibited from making a loopback connection. Solving it can be done by setting the registry entry 2500 to the value of 3 (in both the native and Wow6432Node registry locations), using the UI when available, but preferably using the "Turn on Protected Mode" policy for the Intranet zone.

There is another way, officially just for developers, but it works: use the CheckNetIsolation.exe tool. Example:

CheckNetIsolation.exe LoopbackExempt -a -n=windows_ie_ac_122

More info can be found here:

The Enterprise Solution

After a few days of testing we’ve found a more effective solution which I would like to share here.

The following is actually the "Site to Zone Assignment List" policy, but split into http and https. This is applicable when "IE Enhanced Security Configuration" is enabled.

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\localhost]

Adding the following key will populate the registry in all the required places. The trick here is that the settings above have to be set as well. Anything listed here is then effectively applied on the system.

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\EscDomains\localhost]
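As a sketch, populating that key with PowerShell could look like this. The dword value 1 maps to the Local Intranet zone; treat the exact value names as assumptions to verify against your own Site to Zone Assignment List export:

```powershell
# Sketch: assign http/https for localhost to the Local Intranet zone (1)
# under the policy EscDomains key; verify the values in your environment
$key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\EscDomains\localhost"
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name "http"  -Value 1 -Type DWord
Set-ItemProperty -Path $key -Name "https" -Value 1 -Type DWord
```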

In our very special case we need access from the box itself to localhost. As of Windows Server 2012 this is prohibited by "Enhanced Protected Mode" because it uses AppContainers. So we have decided that protected mode is turned off for the Intranet zone when we need to connect to localhost.

Computer Configuration – Administrative Templates – Windows Components – Internet Explorer – Internet Control Panel – Security Page – Intranet Zone – Turn on Protected Mode

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\1]

Just to make your life a bit easier I’ve created a GPO template that can be deployed in your infrastructure. Download it here: templates.

Save the files to the PolicyDefinitions folder and open the Group Policy editor. You will see an additional policy named "Enable Internet Explorer localhost Connectivity" under "Computer Configuration\Administrative Templates\Hardening\Internet Explorer". Set it to "Enabled" to allow localhost connectivity.


My observations and recommendations:

  • Don't always trust the UI; it only partially reflects reality, especially with the computer-only configuration.
  • Always set UAC to on, even for the local Administrator account.
  • Populate the EscDomains key in Wow6432Node using a policy, or automate it.
  • Manual population of the keys occurs when you turn "IE Enhanced Security Configuration" off and back on again (I suspect a bug).
  • Use the "Turn on Protected Mode" policy for the Intranet zone when you want to connect to localhost and have "Enhanced Protected Mode" enabled. Set it to Disabled to turn protected mode off.
  • Windows Server 2016 has the same "bug".
  • The above is not an issue on Windows Server 2008 R2.

Hopefully this helps you to understand how IE actually handles certain security aspects.


Understanding and Working in Protected Mode Internet Explorer

How the Integrity Mechanism Is Implemented in Windows Vista

A Peek into IE’s Enhanced Protected Mode Sandbox

Enhanced Protected Mode add-on compatibility

Internet Explorer security zones registry entries for advanced users

How to enable loopback and troubleshoot network isolation (Windows Runtime apps)