Replacing DiskPart with PowerShell

In my previous life, I was a deployment person. MDT, WDS, WinPE and bare-metal installation were all part of my daily work. For managing disks, physical or virtual, I always used “diskpart.exe” to create the disk layout, create bootable partitions and everything else surrounding the magic of disks and partitions. Since I am now trying to do as much as possible with PowerShell, I thought it would be fun to see if “diskpart.exe” could be replaced with pure PowerShell cmdlets. To give you a heads-up: it can, for 99%. Almost there! The one feature I could not find is setting GPT attributes. According to this article, a Recovery Partition should have the attributes “GPT_ATTRIBUTE_PLATFORM_REQUIRED” and “GPT_BASIC_DATA_ATTRIBUTE_NO_DRIVE_LETTER”, resulting in a value of “0x8000000000000001”. Using “diskpart.exe” to query a default installation of Windows 10 shows the correct attributes.

PSDiskpart001

I expected that setting a partition’s GPT type to the recovery partition GUID with PowerShell (“de94bba4-06d1-4d40-a16a-bfd50179d6ac”) would also take care of both attributes. It partially does: only “GPT_BASIC_DATA_ATTRIBUTE_NO_DRIVE_LETTER” is set, so the drive is hidden from the OS. The other attribute is not set and, according to my research, simply cannot be set from PowerShell. For that, you still need “diskpart.exe”.
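As a quick sanity check, the resulting GPT type and hidden flag can be read back with Get-Partition. A sketch; the disk and partition numbers below are illustrative, adjust them to your layout:

```powershell
# Illustrative check: read the GPT type GUID of partition 4 on disk 0.
# A recovery partition should report GptType {de94bba4-06d1-4d40-a16a-bfd50179d6ac}
# and IsHidden = True once the NO_DRIVE_LETTER attribute is in effect.
Get-Partition -DiskNumber 0 -PartitionNumber 4 |
    Select-Object PartitionNumber, GptType, IsHidden
```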

During my experiments, I concluded that PowerShell’s “Disk”, “Partition” and “Volume” cmdlets are tricky to work with. It takes time to understand how to handle them, but it eventually works. In my opinion, “diskpart.exe” is still more powerful when it comes down to purely handling disks, but PowerShell has far better support for the dynamics surrounding scripting and error handling. Still, it is not difficult to combine the two, as you will see in my example.

Here is my code, or download the script below. Please note that I have put a “return” at the top of the script so you do not destroy your disk the first time you run it.

# Define the disk
$Disk = Get-Disk | Where-Object Number -EQ 0
$DiskNumber = $Disk.Number

# Clean the disk and convert to GPT
if ($Disk.PartitionStyle -eq "RAW")
{
    Initialize-Disk -Number $DiskNumber -PartitionStyle GPT
}
else
{
    Clear-Disk -Number $DiskNumber -RemoveData -RemoveOEM -Confirm:$false
    Initialize-Disk -Number $DiskNumber -PartitionStyle GPT
}

# Create the System Partition
$SystemPartition = New-Partition -DiskNumber $DiskNumber -Size 512MB
$SystemPartition | Format-Volume -FileSystem FAT32 -NewFileSystemLabel "System"
$SystemPartition | Set-Partition -GptType "{c12a7328-f81f-11d2-ba4b-00a0c93ec93b}"
$SystemPartition | Set-Partition -NewDriveLetter "S"

# Create the Microsoft Reserved Partition
New-Partition -DiskNumber $DiskNumber -Size 128MB -GptType "{e3c9e316-0b5c-4db8-817d-f92df00215ae}"

# Create the Primary Partition
$PrimaryPartition = New-Partition -DiskNumber $DiskNumber -UseMaximumSize
$PrimaryPartition | Format-Volume -FileSystem NTFS
$PrimaryPartition | Set-Partition -GptType "{ebd0a0a2-b9e5-4433-87c0-68b6b72699c7}"
$PrimaryPartition | Set-Partition -NewDriveLetter "W"

# Shrink the Primary Partition by 500MB to make room for the Recovery Partition
$NewSize = ($PrimaryPartition.Size - 500MB)
$PrimaryPartition | Resize-Partition -Size $NewSize

# Create the Recovery Partition
$RecoveryPartition = New-Partition -DiskNumber $DiskNumber -UseMaximumSize
$RecoveryPartition | Format-Volume -FileSystem NTFS
$RecoveryPartition | Set-Partition -GptType "{de94bba4-06d1-4d40-a16a-bfd50179d6ac}"
$RecoveryPartition | Set-Partition -NewDriveLetter "R"

# Add the "Required" attribute to the GPT Recovery Partition.
# (No PowerShell/.NET equivalent available, so fall back to diskpart.exe.)
$PartitionNumber = $RecoveryPartition.PartitionNumber
$DiskPartCmd = @(
    "select disk $DiskNumber"
    "select partition $PartitionNumber"
    "gpt attributes=0x8000000000000001"
    "exit"
)

$DiskPartCmd | diskpart.exe

If anyone reading this has a PowerShell solution for setting the attributes, please let me know.

Downloads


Create-EFIPartitions

Reference


PARTITION_INFORMATION_GPT structure
https://msdn.microsoft.com/en-us/library/windows/desktop/aa365449(v=vs.85).aspx

CREATE_PARTITION_PARAMETERS structure
https://technet.microsoft.com/en-us/library/aa381635.aspx

New-Partition
https://docs.microsoft.com/en-us/powershell/module/storage/new-partition?view=win10-ps

Customizing the recovery partition after upgrading the OS from Windows 8.1 to Windows 10
https://blogs.technet.microsoft.com/askcore/2016/04/26/customizing-the-recovery-partition-after-upgrading-the-os-from-windows-8-1-to-windows-10/


Using Credential Manager in PowerShell

Using PowerShell remoting can be a fantastic experience, but the number of times I had to enter credentials to create a new PSSession was getting out of hand. Wouldn’t it be great if you could store the credentials somewhere safe and retrieve them when necessary? Fortunately, you can! Windows has long had the option to store credentials in a secured local database and use them when needed. This is known as the “Credential Manager” and can be found in the Control Panel.

credmgr001

Within this tool you can store credentials for both websites and network resources such as remote servers. Wouldn’t it be really cool if you could store credentials there once and retrieve them using PowerShell? Luckily, you can! Someone created a PowerShell module called “CredentialManager” that does just what this post is about.

First, we need to install the module. It’s located in the PowerShell Gallery, so you need to trust that gallery (use Set-PSRepository) and have NuGet installed. If you haven’t done this yet, you’ll get additional prompts; accept them to continue.

Install-Module -Name "CredentialManager"
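If the gallery isn’t trusted yet, you can mark it trusted up front so the install runs without prompts. A sketch of that one-time, machine-wide setting:

```powershell
# Mark the default PSGallery repository as trusted so Install-Module
# no longer prompts for confirmation before installing from it.
Set-PSRepository -Name "PSGallery" -InstallationPolicy Trusted
```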

To see what the module exposes use:

Get-Command -Module "CredentialManager"

credmgr002

Let’s start with creating new credentials. The cmdlet “New-StoredCredential” seems to do just that. I’ve constructed the following code to create a new set of credentials in the Credential Manager. The first part prompts for input and stores it in the $Secure variable. Next, I create the new object and make it persistent across sessions by specifying the “-Persist” parameter. As it’s also a generic type of credential, I explicitly specify that as well.

$Target = "YourServerName"
$UserName = "Administrator"
$Secure = Read-host -AsSecureString
New-StoredCredential -Target $Target -UserName $UserName -SecurePassword $Secure -Persist LocalMachine -Type Generic

The output will result in this:

credmgr005

Using the help system, I figured out that “Get-StoredCredential” retrieves the objects stored in the Credential Manager. To select the credentials we just created, the “-Target” parameter needs to be specified. In this case we refer to the target server we just created, “servername”. Use this command to get all properties into a credentialized object:

Get-StoredCredential -Target "servername" -AsCredentialObject

Retrieving a single pair of credentials is easy, right? Now what I am after is credentials stored for use in PowerShell remoting; in my specific case, to connect to a remote server I’m managing.

Eventually, I created this command-line:

Enter-PSSession -ComputerName "servername" -Authentication Credssp -Credential (Get-StoredCredential -Target "servername")

As you can see, “Get-StoredCredential” needs a target parameter to retrieve the credentials. That outputs a username and password (as a System.Security.SecureString) that is then passed to the “Enter-PSSession” cmdlet.

credmgr004

My servername in this case is “WHV” and the credentials are stored as “HyperVManager/WHV”.
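The same stored credential can also be reused for non-interactive remoting. A small sketch; the target name "servername" and the bits service are placeholders:

```powershell
# Retrieve the stored credential once and reuse it for a one-off remote command
# instead of an interactive session.
$cred = Get-StoredCredential -Target "servername"
Invoke-Command -ComputerName "servername" -Credential $cred -ScriptBlock {
    Get-Service -Name "bits"   # example payload; any script block works
}
```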

Removing credentials using the same module is very straightforward. “Remove-StoredCredential” does the trick, again pointing it to the credentials using the “-Target” parameter.

Remove-StoredCredential -Target "servername"

I hope that this post helps you to use credentials in a safe and easy manner.

ByValue & ByPropertyName

Recently I had the pleasure of attending a PowerShell course. Although it was not exactly what I expected, I still picked up a few things here and there. It was good to see people attending the course become more enthusiastic about PowerShell and acquire a solid base to start using it.

One of the items discussed was how to use the pipeline: passing the output of one cmdlet to another. Something obvious such as "get-service -name "bits" | start-service" was used during a demo. That’s great for starters; however, it gets a little more confusing when cmdlets with different nouns are used. As an example, let’s try to get these two cmdlets to play nice with each other.

Resolve-DnsName <something> | Test-NetConnection

Let’s first try these two cmdlets individually to see how they actually work. Remember, you can always use the help system for more information.

help Resolve-DnsName -Full

Without knowing anything about the cmdlet itself, I’m guessing it will need a hostname, FQDN or IP address. Just trying a hostname with the name “server” resolves into this:

ByValueProprtyName 01

So apparently I was correct. But how does this actually work? What does this cmdlet expect as input? In PowerShell this is really easy to figure out. The next command gets the parameters that are required for this cmdlet.

get-help Resolve-DnsName -Parameter * | where {($_.Position -eq "0") -and ($_.Required -like "true")}

ByValueProprtyName 02

Here we see that a single input object is required, which is called “Name”. We didn’t specify the Name parameter, but it still worked because, according to the help system, the first argument is automatically assigned to the parameter at position 0. Let’s try this on “Test-NetConnection”.

ByValueProprtyName 03

Okay… weird, that produces absolutely nothing. Not to worry, the cmdlet construction isn’t faulty; “Test-NetConnection” just doesn’t require any input. Running the cmdlet by itself checks connectivity to the Internet.

ByValueProprtyName 04

Since we want to test an internal server, it would make sense that a certain parameter accepts input. Using “get-help” we can figure that out.

get-help Test-NetConnection -Parameter *

ByValueProprtyName 05

Because this cmdlet doesn’t require input, parameter position “0” is not available; the first position is “1”. Looking at the help file, a parameter with the name “ComputerName” would do the trick. Let’s try it first without specifying the parameter name itself.

ByValueProprtyName 06

Eureka! This works. So combining the two cmdlets would work. Let’s try that.

ByValueProprtyName 07

Hmm, no go, bummer. “Test-NetConnection” apparently has no idea what we want to accomplish. Let’s figure out what the result of “Resolve-DnsName” actually is. We do this with:

Resolve-DnsName server | gm

“gm” is an alias for Get-Member, or “give me all the stuff from an object”.

ByValueProprtyName 08

What’s important here is the TypeName, “Microsoft.DnsClient.Commands.DnsRecord_A”. This “type” is the object type that is passed over the pipeline, and it needs to be something the receiving end understands. This is the PowerShell technique called “ByValue”: the type of the object being passed must match the type the receiving parameter accepts. Under the hood, PowerShell does its magic by trying to bind the output of the first cmdlet to the input of the second one IF the object type matches. So let’s see what “Test-NetConnection” expects.

ByValueProprtyName 09

Using “Get-Help” again, we take a closer look at the “ComputerName” parameter. The input type is specified directly after the parameter name: “[<String>]”. So for “ByValue” binding, the object passed over the pipeline must be of the string type. Let’s try the simplistic version first and just pass a string as input.

"server" | Test-NetConnection

ByValueProprtyName 10

So now we know that this actually works. Let’s convert the output of “Resolve-DnsName” and see what happens.

(Resolve-DnsName server).name.ToString() | Test-NetConnection

ByValueProprtyName 11

That seems to work as well! Note that you could technically skip the “.ToString()” part because the property “name” is already a string, regardless of the type of the overall object. This wouldn’t work for other properties that have a different type. Use “gm” or the “.GetType()” method again to see the actual type.

ByValueProprtyName 12

Please note that by using “ByValue” you can only pass a single property over the pipeline!

ByPropertyName

One of the other options we can use is “ByPropertyName”. As we can see from the help text, this option is available for the “ComputerName” parameter of “Test-NetConnection”.
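You can also confirm how a parameter binds pipeline input programmatically. This sketch reads the parameter metadata directly; ValueFromPipeline corresponds to "ByValue" and ValueFromPipelineByPropertyName to "ByPropertyName":

```powershell
# Inspect the binding metadata of Test-NetConnection's ComputerName parameter.
(Get-Command Test-NetConnection).Parameters['ComputerName'].Attributes |
    Where-Object { $_ -is [System.Management.Automation.ParameterAttribute] } |
    Select-Object ValueFromPipeline, ValueFromPipelineByPropertyName
```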

What this simply means is that a property name on the output object of the first cmdlet must match an input parameter of the second cmdlet. In our example, the property “Name” of the “Resolve-DnsName” output must match the input parameter “ComputerName” of the “Test-NetConnection” cmdlet. It doesn’t yet, but we can make it so by creating an expression like this:

Resolve-DnsName server | select @{Name="Computername";Expression={$_.name}} | Test-NetConnection

The select statement is where the magic happens. The “Name=” part tells PowerShell that a new property with the name “Computername” should be created. The “Expression” part fills that newly created property with the value of the “Name” property from the first cmdlet. The best part is that the object type doesn’t matter here, as long as the type of the property matches; in this case it still needs to be a string. The object itself can remain a different type, in this case “Selected.Microsoft.DnsClient.Commands.DnsRecord_A”.

ByValueProprtyName 13

Putting this all together we end up with the following command line:

Resolve-DnsName server | select @{Name="Computername";Expression={$_.name}} | Test-NetConnection

ByValueProprtyName 14

And there we have what we wanted to accomplish!!

As you can see, there are multiple ways to go about constructing your command line and passing properties over the pipeline, especially with the “ByValue” and “ByPropertyName” options. I hope this post helped in understanding the differences.

References


Learn About Using PowerShell Value Binding by Property Name
https://blogs.technet.microsoft.com/heyscriptingguy/2013/03/25/learn-about-using-powershell-value-binding-by-property-name/

Two Ways To Accept Pipeline Input In PowerShell
https://blogs.technet.microsoft.com/askpfeplat/2016/11/21/two-ways-to-accept-pipeline-input-in-powershell/

Internet Explorer Hardening Mysteries

Today I had a very interesting problem with system hardening and a new application we are going to use. This application moved from a form-based management interface to a web-based one. Under normal circumstances this doesn’t provide a huge challenge, because management of these types of apps is done over the network from a client machine. However, since we have a solution that is sometimes only reachable on the box itself, I have to make sure that the local instance of Internet Explorer still works after I apply system hardening policies. And that’s where it gets confusing. In the explanation below I’ll use a simple page hosted on IIS, accessed locally through Internet Explorer on Windows Server 2012 R2.

On this page:

  1. The Basics
  2. Default Setup
  3. Some Magic
  4. Site to Zone Assignment List
  5. Computer Configuration
  6. Security Zones
  7. The Effects of User Account Control
  8. The Enterprise Solution
  9. Summary
  10. References

The Basics

Before we start, there are a few basics we need to discuss. Internet Explorer has something called security zone assignment: every site that you open in the browser is assessed and placed into one of these zones. Each zone holds a different security configuration, so you can interact differently with sites you trust, are under your control, run in your data centers, or are hosted on the Internet. Opening Internet Properties, Security actually reveals 4 of the 5 zones. There’s one hidden zone (zone 0) which applies to the local computer (system) only.

  1. Local Intranet
  2. Trusted Sites
  3. Internet
  4. Restricted sites

These numbers are actually the internal numbers Internet Explorer assigns to the zones. We’ll be using them later.

Next I want to focus on a security feature called “IE Enhanced Security Configuration” that was introduced some time ago, but is almost always disabled on the systems I’ve seen, which is really unfortunate. Let’s be clear on one thing though: having a browser installed on a server-class device is under almost any circumstance a no-go. Just don’t do it. Your networking people have most likely set up all kinds of network devices to keep your network secure, and browsing from a server isn’t helping. Getting all kinds of nastiness directly into your server segment is considered bad practice. Having said that, our setup unfortunately can’t work any other way. We have to deal with a browser on the box, hence we harden Internet Explorer, and “IE Enhanced Security Configuration” is part of that. Basically, it raises the security settings of the security zones we discussed above, and more. If you want to know the details, type this in the Internet Explorer address bar:

res://iesetup.dll/IESechelp.htm#effects

The last thing I want to address is the addition of “Enhanced Protected Mode” in Internet Explorer. This extra layer of security was introduced in Windows 8. Its intent was to further enhance the capabilities of the original implementation of protected mode. It does so by enabling a sandboxing technology Microsoft named AppContainers. The concept goes back to the release of Windows Vista, where integrity levels were introduced. Those levels separate the abilities of processes running at different levels in the operating system.

More information can be found here:
https://msdn.microsoft.com/en-us/library/bb625962.aspx

Default Setup

Having a web application on your box and connecting to it using localhost simply works out of the box. That’s because the security settings in the security zones are configured in such a way that it’s an allowed operation. As I mentioned in the beginning, all sites are evaluated and placed in the appropriate zone. When you connect to localhost, the site is evaluated, placed in the “Intranet Zone”, and those security settings are used. From a UI point of view, just open “Internet Options” – “Security”. Select “Local Intranet”, click “Sites”; the sites listed will automatically be placed in the “Local Intranet” zone. Sounds logical, right?

Now, on Server 2012 R2, from a registry point of view, it gets a little more complicated. By default, security zone information is stored in the context of the current user, but depending on whether you have “IE Enhanced Security Configuration” enabled or not, it’s stored in a different place. The UI, however, does a good job of storing it in the appropriate place.

IE Enhanced Security Configuration turned on (default)

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\EscDomains

IE Enhanced Security Configuration turned off

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains

Some Magic

If you turn off “IE Enhanced Security Configuration”, the default EscDomains registry keys are completely cleared. Just take a look at the registry key or the security tab. However, connecting to localhost still works; even bringing up the properties of the web page shows it being in the local intranet zone. Even though localhost is not specifically mentioned anywhere, the setting “Include all local (intranet) sites not listed in other zones” is at play here. Anything not mentioned elsewhere that qualifies as “local intranet” is placed in this zone. That’s why this still works. Restoring IE Enhanced Security Configuration restores the default keys.

Site to Zone Assignment List

When you manage a lot of machines in a network, you want a centralized way of managing devices. Usually that implies using group policies. Managing zone assignment for Internet Explorer is easy and at the same time very difficult because of the differences you can encounter. Take the “Site to Zone Assignment List” policy as an example. This policy allows you to configure sites to fall into a certain zone. However, it does not work when you have “IE Enhanced Security Configuration” enabled. And as far as I could find, there’s no policy available to manage sites with a GPO when “IE Enhanced Security Configuration” is enabled, besides using Group Policy Preferences. But that’s simply adding registry entries. This is a bit of a mess if you ask me, as one would expect policies to be available for managing this.

Computer Configuration

Since we have a requirement from the Internet Explorer STIG (Security Technical Implementation Guide) to use only computer-based policies for Internet Explorer, this is the point where things start getting a bit more confusing. The policy I’m talking about is located here:

Computer Configuration – Administrative Templates – Windows Components – Internet Explorer – Security Zones: Use only machine settings

The idea behind this setting is that, regardless of whether your user account falls into the scope of management for applying this policy, it’s simply always there. To be honest, it isn’t even such a bad idea; it’s just that the execution is very different from using it at the user level. Let me explain.

If you set the above policy to Enabled, the text states: “If you enable this policy, changes that the user makes to a security zone will apply to all users of the computer”.

Right, so in my experience this simply is not true. Let’s give it a try. Enable the policy and open IE, go to “Internet Options”, “Security”, “Trusted sites”. Now add a website, for example bitsofwater.com, to the trusted or intranet websites. Close the dialog, open the website, and then open the properties of the website. Notice that it’s still in the “Internet” security zone! Hence the settings you make in the UI are not in effect. The funny part of the whole situation is that the keys actually do end up in the correct registry place, or at least the place you would expect them to be.

001

Assuming a default setup of Windows with no additional policies, a notification pops up while opening the page. It informs you that the page is blocked, but does provide a means to add the website. Let’s try the “Add” button and see what happens. Open the properties of the website and behold! The site is now placed in the appropriate zone! If you’re just a little bit like me, you would want to know what magic is at play here. My tool of choice is again Process Monitor. There we see that the keys end up in a totally different place: the “HKLM\Software\Wow6432Node” registry location.

002

So this means that a 32-bit process is writing the keys instead of the expected 64-bit process. Adding the architecture column in Process Monitor actually shows this little gotcha.

Now for some real confusion. Yes, it’s getting worse. Open IE again, “Internet Options”, “Security”, “Trusted Sites”. Is the page listed? Nope, it’s not! So we can safely conclude that the UI does not play nice when the computer-only policy is enabled. It’s registry only, and the settings should be in the “Wow6432Node” registry key section.

Note! In this case it doesn’t matter if you turn off “IE Enhanced Security Configuration”; the effect is still the same.

Note! By default the Wow6432Node EscDomains registry key is empty and is not populated. Turning “IE Enhanced Security Configuration” off and on again creates the keys. That sure causes a lot of confusion.

Security Zones

After I finally figured out what a mess the above actually was, it got me wondering whether this has a further effect on the zone configuration itself. Let’s try a little something. By default, the “Local Intranet” zone doesn’t have protected mode enabled. To get a little more advanced: the setting in the zone’s registry configuration has the name “2500”, and its value indicates the configuration state. More info can be found here:

https://support.microsoft.com/en-us/help/182569/internet-explorer-security-zones-registry-entries-for-advanced-users

Let’s start with the default. Set the policy “Security Zones: Use only machine settings” to off, applying the user policies again. A quick look at the current user’s hive shows that enabling protected mode for the “Local Intranet” zone results in a value of 0 for the name 2500. So protected mode is on. A value of 3 turns it off.

Enable “Security Zones: Use only machine settings” again and observe the effects. Open the UI, select the “Security” – “Local Intranet” zone and set “Enable Protected Mode” to on. Looking in the registry again, the value ends up in:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\1\2500

So that seems to be doing alright.

Note! After further testing, it seems the key needs to be set in both the local machine and the Wow6432Node keys to be effective. The takeaway: don’t mess around with registry keys when you have a policy available. Just consider the above nice-to-know information.

The Effects of User Account Control

At the beginning of this post I mentioned that IE protected mode is built on integrity levels, which are part of User Account Control. As most of you will start testing with the built-in local Administrator account (or the domain Administrator, for that matter), the outcome will be very different. By default, the local Administrator account is exempt from UAC and its effects. This directly implies that protected mode for the local Administrator is not in effect, even when it’s set to Enabled. The full effect of protected mode becomes visible when the policy “User Account Control: Use Admin Approval Mode for the built-in Administrator account” is set to Enabled (a system reboot is required). After that, the full implementation of protected mode is in effect. This has the direct effect of being unable to connect to localhost, as applications that have protected mode enabled are prohibited from making a loopback connection. You can solve this by setting registry entry 2500 to the value 3 (in both the machine and Wow6432Node registry locations), by using the UI when available, or preferably by using the “Turn on Protected Mode” policy for the Intranet zone.

There is another way, officially just for developers, but it works. Use the CheckNetIsolation.exe tool. Example:

CheckNetIsolation.exe LoopbackExempt -a -n=windows_ie_ac_122

More info can be found here:
https://msdn.microsoft.com/en-us/library/windows/apps/Hh780593.aspx

The Enterprise Solution

After a few days of testing we’ve found a more effective solution which I would like to share here.

The following is actually the “Site to Zone Assignment List” policy but split up into http and https. This is applicable when “IE Enhanced Security Configuration” is enabled.

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\localhost]
"http"=dword:00000001
"https"=dword:00000001

Adding the following registry code populates the registry in all the required places. The trick here is that the settings above have to be set as well. Anything listed here is then effectively applied on the system.

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\EscDomains\localhost]
"http"=dword:00000001
"https"=dword:00000001
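If you prefer scripting over importing .reg files, the same values can be written with PowerShell. A sketch, assuming the exact policy paths from the fragments above; run it elevated:

```powershell
# Create the localhost zone-assignment entries (zone 1 = Local Intranet)
# in both the Domains and EscDomains policy locations, matching the
# .reg fragments shown above.
$base = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap"
foreach ($sub in "Domains\localhost", "EscDomains\localhost") {
    $path = Join-Path $base $sub
    New-Item -Path $path -Force | Out-Null
    New-ItemProperty -Path $path -Name "http"  -Value 1 -PropertyType DWord -Force | Out-Null
    New-ItemProperty -Path $path -Name "https" -Value 1 -PropertyType DWord -Force | Out-Null
}
```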

In our very special case we need access from the box itself to localhost. As of Windows Server 2012 this is prohibited by “Enhanced Protected Mode” because it uses AppContainers. So we decided to turn protected mode off for the Intranet zone when we need to connect to localhost.

Computer Configuration – Administrative Templates – Windows Components – Internet Explorer – Internet Control Panel – Security Page – Intranet Zone – Turn on Protected Mode

[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\1]
"2500"=dword:00000001

Just to make your life a bit easier I’ve created a GPO template that can be deployed in your infrastructure. Download it here: templates.

Save the files to the PolicyDefinitions folder and open the Group Policy editor. You will see an additional policy named “Enable Internet Explorer localhost Connectivity” under “Computer Configuration\Administrative Templates\Hardening\Internet Explorer”. Set it to “Enabled” to allow localhost connectivity.

Summary

My observations and recommendations.

  • Don’t always trust the UI; it only partially reflects reality, especially with the computer-only configuration.
  • Always set UAC to on, even for the local Administrator account.
  • Populate the EscDomains key in Wow6432Node using a policy, or automate it.
  • Manual population of the keys occurs when you turn “IE Enhanced Security Configuration” off and back on again (I suspect a bug).
  • Use the “Turn on Protected Mode” policy for the Intranet zone when you want to connect to localhost and have “Enhanced Protected Mode” enabled. Set it to Disabled to turn protected mode off.
  • Windows Server 2016 has the same “bug”.
  • The above is not an issue on Windows Server 2008 R2.

Hopefully this helps you to understand how IE actually handles certain security aspects.

References


Understanding and Working in Protected Mode Internet Explorer
https://msdn.microsoft.com/en-us/library/bb250462(v=vs.85).aspx

How the Integrity Mechanism Is Implemented in Windows Vista
https://msdn.microsoft.com/en-us/library/bb625962.aspx

A Peek into IE’s Enhanced Protected Mode Sandbox
https://securityintelligence.com/internet-explorer-ie-10-enhanced-protected-mode-epm-sandbox-research/

Enhanced Protected Mode add-on compatibility
https://support.microsoft.com/en-au/help/2864914/enhanced-protected-mode-add-on-compatibility

Internet Explorer security zones registry entries for advanced users
https://support.microsoft.com/en-us/help/182569/internet-explorer-security-zones-registry-entries-for-advanced-users

How to enable loopback and troubleshoot network isolation (Windows Runtime apps)
https://msdn.microsoft.com/en-us/library/windows/apps/Hh780593.aspx

3DES/FIPS 140-2/RDP Hotfix

In the past I’ve written a blog post about the issues my company encountered when we disabled 3DES on our Windows 2008 R2 systems. Since we are obligated to use FIPS 140-2 for compliance reasons, the combination of disabling 3DES and having FIPS 140-2 enabled would break Remote Desktop functionality. Basically, it came down to RDP using hardcoded 3DES when FIPS 140-2 was enabled. Needless to say, RDP stops functioning when you disable the one thing it can use. When the SWEET32 vulnerability came along, we had to make a choice: do we want to be secure, or do we want to be able to connect remotely? Unfortunately, since disabling 3DES is a system-wide setting, it’s not possible to differentiate between protocols, leaving us stuck between a rock and a hard place. The last option was to file a case with Microsoft.

It took several conversations with Microsoft support, providing evidence and creating a business case for the Windows product group to determine the potential impact. One of the challenges we faced was that Windows 2008 R2 had been out of mainstream support for a couple of years, so any change at this level would require the product group to agree on a design change, which is very rare in these kinds of situations. After a few months they agreed, and we eventually got a preview fix that worked like a charm!

As of today I’m glad to announce that a permanent fix has been released. As it’s a platform fix, bound to the operating system, it will be included in next month’s Microsoft update cycle. My Microsoft representative has promised that an individual KB describing the specific issue and solution will be released soon. I’ll update this post when it becomes available.

The hotfix is already included in the September 2017 preview rollup, which you can get here:
https://support.microsoft.com/en-us/help/4038803/windows-7-update-kb4038803

Or get it from the catalog:
http://catalog.update.microsoft.com/v7/site/Search.aspx?q=KB4038803

On a personal note, I would like to extend my gratitude to Microsoft support for the excellent push they made towards the product group, convincing them of the necessity of this fix and working with me throughout this endeavor. My first experience being a customer instead of a Microsoft employee was a very pleasant one!

PowerShell – Help

Last year I had the opportunity to educate myself on Microsoft PowerShell. Having had just too little experience with it, I thought it was about time to dig in and at least learn the basics. So I bought a book and got started. In the next couple of blog posts I will share my experiences and the notes I took over the months of my first try at learning PowerShell. Although the language can be very complex, it all starts with having the basic skills to understand it. After that, it’s the experience that counts. Today I’ll start with a very important topic: getting help…

Tip! Always start the PowerShell command prompt or the Integrated Scripting Environment (ISE) elevated. Throughout this blog post I’ll use the ISE as much as possible; it offers a far better and more modern way of working.

First things first

The beauty of PowerShell is that it is a modern language, integrated not just with local resources but also with online repositories. The help functionality is no different. Although Windows ships with an up-to-date help file for PowerShell, things can change very quickly, so it’s always a good idea to update the local help files every now and again. Here are a few tips on how to do it.

Update local help files
Use the following Cmdlet to update the help content.

Update-Help

Download help content
If not all of your systems are connected to the Internet, use Save-Help to store the help files locally.

Save-Help -DestinationPath <path>

Offline-Update
Use the -SourcePath parameter on the target system to update the help files:

Update-Help -SourcePath <path>
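Putting the two together, an offline help update could look like this sketch (the share path is an example):

```powershell
# On an Internet-connected machine: download the help content to a share
Save-Help -DestinationPath '\\fileserver\PSHelp'

# On the offline target machine (elevated): update the local help from that share
Update-Help -SourcePath '\\fileserver\PSHelp'
```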

Tip! man and help are wrappers for the Get-Help cmdlet.

After the help files are updated, it’s time to make use of them. In a command-line environment it’s really not that hard. First you need to figure out what command you need help on. That can easily be done with Get-Command. Suppose we want to do something with Windows services. We could use something like this:

Get-Command *service*

The output will look something like this:

get-cmd

Tip! Use the following to list only the functions and cmdlets

Get-Command *service* | Where {$_.CommandType -eq "Cmdlet" -or $_.CommandType -eq "Function"} | select Name
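The same filtering can also be done with Get-Command’s own -CommandType parameter, which saves the Where-Object step:

```powershell
Get-Command *service* -CommandType Cmdlet, Function | Select-Object Name
```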

Or use the “Commands” window in the ISE. That will bring up a list of only the cmdlets (PowerShell commands) and functions that you can use. Getting help from the ISE is also very easy: just click the little blue icon with the question mark to get the help overview.

ise-help

To get the same output as available in the ISE, and more, on the command line, use these parameters after the Get-Help cmdlet. Let’s use the Get-Service cmdlet as an example.

Quick help

Get-Help Get-Service

Detailed Help (without parameter details)

Get-Help Get-Service -Detailed

Full information (I always use this)

Get-Help Get-Service -Full

Examples

Get-Help Get-Service -Examples

Up-to-date online information

Get-Help Get-Service -Online

That’s all for this post on the PowerShell help system.

Reference


Get-Help
https://technet.microsoft.com/en-us/library/ee176848.aspx

Update-Help
https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/update-help

0x800b010e – The revocation process could not continue – the certificate(s) could not be checked

In my line of work, every now and then I run into unique situations. A few weeks ago we needed to do an application upgrade on a few of our systems. Once we started, we got the following message:

0x800b010e – The revocation process could not continue – the certificate(s) could not be checked.

Okay, now what? It turns out that the combination of .NET or system hardening with a non-Internet-connected system can trigger this error message. What happens is that the system notices that the software is digitally signed. Because of its policies, it tries to check the validity of the certificate and its revocation status. Since our systems are hardened, this is expected behavior. The software was digitally signed by Microsoft (signing is a good thing, btw!), so the system tried to reach the public Microsoft servers. Since it wasn’t allowed to go out and fetch the CRL, it failed. Luckily it’s a simple matter of switching a registry key to turn the revocation check on or off again. Since I can’t remember registry keys (and I seem to forget these kinds of little tricks), I’ve written a PowerShell script to enable or disable the offline installation capabilities. Just in case you want to check the value on your system, use the setreg.exe tool from the SDK. In case you’re curious about the registry entry, have a look at:

HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\WinTrust\Trust Providers\Software Publishing\State (REG_DWORD)

The default value of State is 0x23C00.
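If you prefer PowerShell over setreg.exe for a quick check, reading and restoring the value could look like this sketch (the bit meanings themselves are documented by setreg.exe):

```powershell
$key = 'HKCU:\SOFTWARE\Microsoft\Windows\CurrentVersion\WinTrust\Trust Providers\Software Publishing'

# Show the current State value in hex
$state = (Get-ItemProperty -Path $key -Name State).State
'Current State: 0x{0:X}' -f $state

# Restore the default value
Set-ItemProperty -Path $key -Name State -Value 0x23C00
```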

My script can be used with the following parameters:

Set-OfflineInstallationMode.ps1 [-On] [-Off] [-Force]

  • On: Enables the offline installation of signed binary files.
  • Off: Resets the system to the previous (or default) value.
  • Force: Ignores the previous setting and executes anyway.

You can download the script here. Version 1.0

Reference


Set Registry Tool (Setreg.exe)

https://msdn.microsoft.com/en-us/library/7zaf80x7(v=vs.80).aspx

SetReg

https://msdn.microsoft.com/en-us/library/windows/desktop/aa387700(v=vs.85).aspx

Enable Schannel logging

To enable Secure Channel (Schannel) logging, use the following guide.

Add the following registry key:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\SecurityProviders\SCHANNEL\EventLogging (REG_DWORD)

Set one of the following values:

0x0000 Do not log
0x0001 Log error messages
0x0002 Log warnings
0x0004 Log informational and success events

When troubleshooting, I like to set it to 0x0007 (0x0001 + 0x0002 + 0x0004). Reboot your machine to start the logging process.
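Setting that value from an elevated PowerShell prompt could look like this sketch:

```powershell
# Enable full Schannel logging: errors + warnings + informational (0x0007)
$key = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL'
Set-ItemProperty -Path $key -Name 'EventLogging' -Value 0x0007 -Type DWord
# Reboot afterwards to start the logging process
```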

The data will end up in the “System” event log with the source name “Schannel”. Keep an eye out for event ID 36880, indicating a successful handshake. It will look something like this:

An SSL client handshake completed successfully. The negotiated cryptographic parameters are as follows.

Protocol: TLS 1.2
CipherSuite: 0xc028
Exchange strength: 256

To translate the CipherSuite use the following site:
http://www.thesprawl.org/research/tls-and-ssl-cipher-suites/

In the example above this would translate to: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384

Reference


TLS/SSL Settings

https://technet.microsoft.com/en-us/library/dn786418(v=ws.11).aspx

TLS/SSL Security Considerations

https://technet.microsoft.com/en-us/library/dn786446(v=ws.11).aspx

Cipher Suites in TLS/SSL (Schannel SSP)

https://msdn.microsoft.com/en-us/library/windows/desktop/aa374757(v=vs.85).aspx

Prioritizing Schannel Cipher Suites

https://msdn.microsoft.com/en-us/library/windows/desktop/bb870930(v=vs.85).aspx

How to restrict the use of certain cryptographic algorithms and protocols in Schannel.dll

https://support.microsoft.com/en-gb/kb/245030

Update to add new cipher suites to Internet Explorer and Microsoft Edge in Windows

https://support.microsoft.com/en-gb/kb/3161639

IIS Crypto

https://www.nartac.com/Products/IISCrypto/

Custom RDP Certificate on Windows Server 2012 R2

Ever since Windows Server 2012, the Remote Desktop Session Host Configuration tool has been removed from the system, making it more difficult to set a custom certificate. In a domain context you will most likely use GPOs and domain-related tools to configure your systems, but in my work environment I deal with stand-alone systems, which are a bit harder to manage. During one of its scans, our vulnerability tool started complaining about self-signed certificates. Simply removing the certificates from the local computer store and using WMI to update the system is fairly easy, but after doing the same trick twice it gets annoying, so I decided to write a PowerShell script to handle the task of setting the correct certificate.
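For reference, that WMI trick boils down to something like the following sketch using the Win32_TSGeneralSetting class (the thumbprint is a placeholder):

```powershell
# Bind to the RDP-Tcp listener settings in WMI
$ts = Get-WmiObject -Namespace 'root\cimv2\TerminalServices' `
                    -Class Win32_TSGeneralSetting `
                    -Filter "TerminalName='RDP-Tcp'"

# Point the listener at a certificate from the local computer's personal
# store, identified by its 40-character SHA1 thumbprint
Set-WmiInstance -Path $ts.__PATH -Arguments @{
    SSLCertificateSHA1Hash = '<40-character thumbprint>'
}
```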

My script has a couple of parameters that you can use:

Set-RDPCertificate.ps1 -Hash <string> [-Delete] [-Terminalname <string>] [<CommonParameters>]
Set-RDPCertificate.ps1 -listCerts [<CommonParameters>]
Set-RDPCertificate.ps1 -ListCurrent [<CommonParameters>]

  • Hash: The default input parameter; expects a 40-character string value (the certificate thumbprint).
  • Delete: Deletes the current certificate from the local store after setting the provided hash value.
  • Terminalname: If you changed the default RDP-Tcp connection name you can enter it here; omitting the parameter results in the default name being used.
  • ListCerts: Lists the certificates in the personal store of the local computer.
  • ListCurrent: Lists the certificate currently in use by the RDP listener.

Download the script from here. Version 1.1.

5 Steps to get started with Azure DSC

In our never-ending quest to improve our production environment, we are currently looking into the possibilities that Microsoft Azure can offer us. One of the offerings that has our interest is state enforcement management with Windows PowerShell, also known as Desired State Configuration (DSC). I was already deeply impressed with the capabilities offered when running DSC on a local box, via PowerShell remoting, or using the pull method on-premises. The only challenging part was deploying the configuration in a worldwide, loosely connected, non-domain environment. Microsoft Azure to the rescue.

Basically, PowerShell DSC is a state enforcement platform, much like Group Policies are. With the increased usage of the cloud it has become clear that group policies via Active Directory are not the way forward anymore, especially for non-domain systems. We need a way to do remote state enforcement on demand, whenever we need it, without having to run a self-hosted on-premises environment. Enter Microsoft Azure Desired State Configuration. Follow the steps below to create your first DSC configuration document, create an Automation Account, assign the document, and configure a server to use DSC.

On this page:

  1. Creating the Automation Account
  2. Import the Configuration
  3. On-boarding the target
  4. Apply the meta-data configuration file
  5. Assign the configuration
  6. Downloads
  7. Reference

Creating the Automation Account

Azure DSC makes use of an automation account. Follow these steps to first create that account.

  1. Log in to the Azure portal at https://portal.azure.com
  2. Select “More Services”
  3. Type “Automation”, click “Automation Accounts”
  4. Click “Add” next to the large plus sign
  5. Add the appropriate data on the “Add Automation Account” page. Click “Create”

automationaccount

Wait a few moments for the account to be created.

Import the Configuration

Once the Automation Account has been created, we can start configuring desired state. Click on the automation account you just created; the blade for configuring the account will open. Click “DSC configurations”, located in the “Configuration Management” section. The DSC Configurations node is the store where your configuration documents are saved. These documents dictate the state of your machine. Click “Add a configuration” at the top of the page. In the “Configuration file” field, click the folder icon and browse to your configuration file. In my case I’ve created a document that tells the system to install the default Web Server (IIS) role and remove SMBv1, but you can configure basically your entire system this way. Click “OK” to return to the previous blade.

Give it a second to import the file.
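A configuration document along those lines could look like this sketch (the configuration and node names match what shows up later as “MyFirstConfiguration.MyServerRole”; the feature names assume Windows Server 2012 R2 or later):

```powershell
Configuration MyFirstConfiguration
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node MyServerRole
    {
        # Install the default Web Server (IIS) role
        WindowsFeature WebServer
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }

        # Remove the SMBv1 feature
        WindowsFeature SMB1
        {
            Ensure = 'Absent'
            Name   = 'FS-SMB1'
        }
    }
}
```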

What we need to do next is tell Azure to compile the configuration document into a .mof file that it can deploy. To accomplish that, click the configuration that you just imported. The blade that opens allows you to convert the .ps1 script to a .mof file. Click the “Compile” button at the top of the page. Read the message and click “Yes”.

ConfigurationFile

Close the blade after the compile action successfully completes.

On-boarding the target

Now that the main configuration is done, we need to on-board the server we want to target with DSC. Microsoft made that a really simple task by providing an on-boarding script. The script is available at:

https://docs.microsoft.com/en-us/azure/automation/automation-dsc-onboarding

Just in case that location ever changes, you can easily locate the URL again by browsing to your “Automation Account” blade, clicking “DSC Nodes” under “Configuration Management”, and clicking “Add on-prem VM”. Browse to the “Generating DSC metaconfigurations” section and copy the script content.

The only things that need to be changed in the script are the “RegistrationUrl”, “RegistrationKey” and “ComputerName” values. Just for fun, we’ll also change the state enforcement policy by setting the “ConfigurationMode”. First get the values for “RegistrationUrl” and “RegistrationKey”: these are located in the “Automation Account” blade, under “Account Settings”, “Keys”. You can use either the primary or the secondary key.

Once updated the code in the script would look something like this:

ComputerName = @('localhost');
RegistrationUrl = 'https://we-agentservice-prod-1.azure-automation.net/accounts/5eb0794a-a035-406e-8181-ece99172d6fa';
RegistrationKey = 'qLVkKkhBe4+RfVEkuIl2TicZSvzUEj+1jPXvdd0SkDM662zwolUcyVx2KzvSoUsSHZpK7FvlzkF4WuxZh4G8CA==';
ConfigurationMode = 'ApplyAndAutoCorrect';

In the example above I apply the configuration to localhost instead of an individual hostname. That’s just the situation I’m in; yours could be very different. Once the script is ready, execute it to generate the meta-data configuration .mof file.

Apply the meta-data configuration file

Applying the configuration meta-data file can be done in multiple ways: by using PowerShell remoting, or by applying it locally on the box. In this demo I’ll use the latter. Copy the “DscMetaConfigs” folder to the local box, open an elevated PowerShell ISE or PowerShell console, and use the following command to configure the system.

Set-DscLocalConfigurationManager -Path .\DscMetaConfigs

In case you want to check the configuration that’s been applied, use:

Get-DscLocalConfigurationManager

Assign the configuration

The last step is to assign the appropriate DSC configuration to the server. Obviously, we need to do this in Azure. Select your “Automation Account” and click “DSC Nodes” under “Configuration Management”. The server that we ran the on-boarding script on should be listed here. Click the server name, then click the “Assign node configuration” icon at the top of the screen. On the “Assign Node Configuration” blade that opens, click the generated configuration document (if you used my script it will show “MyFirstConfiguration.MyServerRole”). Click “OK”.

And that’s really all there is to it. After waiting for about 15 minutes the configuration will be applied on the selected server.

dsc-node

Tip! Just in case you don’t want to wait, use the following cmdlet to force a consistency check on the target server:

Update-DscConfiguration -Wait -Verbose

Downloads


MyServerRole PowerShell Script

OnBoardComputerInAzureDSC

Reference


Windows PowerShell Desired State Configuration Overview

https://msdn.microsoft.com/en-us/powershell/dsc/overview

Azure Automation DSC Overview

https://docs.microsoft.com/en-us/azure/automation/automation-dsc-overview

Getting Started with PowerShell Desired State Configuration (DSC)

https://mva.microsoft.com/en-US/training-courses/getting-started-with-powershell-desired-state-configuration-dsc-8672?l=ZwHuclG1_2504984382

Hybrid IT Management Part 3: Automation

https://mva.microsoft.com/en-US/training-courses/hybrid-it-management-part-3-automation-16631?l=MQrdRCHrC_4406218965