by Brendan Allen

REVIEW: Deep dive into Windows Server 2016

Reviews
Feb 27, 2017

Microsoft delivers a boatload of new virtualization, storage and security features, along with a nod to open source.


Windows Server 2016 was officially released in September, but we waited until all of the bits were at production level before taking a deep dive into Microsoft’s flagship server operating system.

What we found is an ambitious, multi-faceted server OS that focuses much of its energy within the Microsoft-centric world of Windows/Hyper-V/Azure, but also tries to join and leverage open source developments and initiatives, such as Docker.

One item we noticed right away is that older 64-bit CPUs won’t work with Microsoft’s Hyper-V virtualization infrastructure. This meant our older Dell 1950 servers weren’t compatible with Hyper-V and an older HP 560 Gen4 with 16 cores barely coughed into life as a Windows 2016 server.

A Windows Server 2016 deployment requires plenty of thought and planning. There are two license options, Datacenter or Standard. And there are three installation choices: the regular GUI server version, the Server Core (no GUI) version and, lastly, Nano Server.

The Datacenter edition, which is the most expensive, has all the best roles and features. Those roles include: Storage Spaces Direct, Shielded Virtual Machines/Host Guardian Service, Storage Replica (synchronous replication service for disaster recovery and replication), and Network Controller (for the Azure cloud communications fabric).

The total price for Standard and Datacenter versions equals the cost of the server software plus client access licenses (CAL). Prices vary widely between list price, OEM prices, enterprise, education, and other options. Also, there is an Essentials Server version limited to 25 users and 50 devices licensed by processors instead of cores, and it requires no CALs.

In this review, we will go through the various new and improved features of Windows Server 2016. We found that many of them worked as advertised, while others weren’t totally baked yet.

New Nano Server option

Windows Nano Server, as the name implies, is a stripped-down installation option designed for DevOps scenarios, with a minimized kernel and API surface. The goal is lean, mean virtual machine and container deployments. At less than 200MB, Microsoft calls it "just enough OS" to run apps.

We built Nano Server images with PowerShell commands, which is the only way they can be built. The images are currently in .vhdx format and, at press time, aren't supported on hypervisors other than Hyper-V, although that's likely to change.

The key roles that can be used in a Nano Server deployment include Hyper-V, storage, clustering, the all-important IIS webserver, .NET Core and ASP.NET Core, and containers. All of these roles need to be set up during instantiation, not later, as shown in the sketch below.
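
For reference, here is roughly how a Nano Server image gets built with the NanoServerImageGenerator module that ships on the installation media. Treat it as a minimal sketch: the media path, base path and computer name below are our own placeholder values.

   # Import the image generator module from the installation media (assumed mounted at D:)
   Import-Module 'D:\NanoServer\NanoServerImageGenerator\NanoServerImageGenerator.psm1'

   # Build a Nano Server guest VHDX with the Hyper-V, storage and clustering roles baked in
   New-NanoServerImage -Edition Standard -DeploymentType Guest `
       -MediaPath 'D:\' -BasePath '.\NanoBase' -TargetPath '.\NanoVM.vhdx' `
       -ComputerName 'nano-test' -Compute -Storage -Clustering

The resulting .vhdx can then be attached to a new Hyper-V virtual machine.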

There are a number of limitations on uses for Nano Server: only 64-bit apps, tools and agents are supported, and it can't be an AD domain controller. Nano isn't subject to group policy, and can't use a proxy server, System Center Configuration Manager or Data Protection Manager. Nano also has a more limited PowerShell vocabulary.

But those limitations may actually free certain Windows Server 2016 deployments to work with popular open source deployment and management frameworks. We found primitive but useful OpenStack support for Nano, and potential support for VMware vSphere.

Nano Server is licensed under either the Datacenter or Standard edition and can host roles such as a DNS server or IIS server. This will please many, and it can become the substrate for a lush variety of other app/server/service use cases.

Nested virtualization

Windows Server 2016 supports virtualizing within itself. In other words, VMs within VMs. Currently only Hyper-V under Hyper-V is officially supported, but we were able to get Hyper-V running under vSphere 6.

The use cases for this are rather limited, but nesting is useful where you want to run Hyper-V containers instead of running containers directly on the host, or for lab environments that test different scenarios.

Enabling nesting on a virtual machine under Hyper-V required us to set a flag via PowerShell, and networking requires MAC address spoofing on the VM's network adapter:

   Set-VMProcessor -VMName test-server-core -ExposeVirtualizationExtensions $true

   Get-VMNetworkAdapter -VMName test-server-core | Set-VMNetworkAdapter -MacAddressSpoofing On

We were able to run containers, as well as other Windows Server 2016 VMs, in the nested machine. This was relatively simple to get working for us, and its convenience won’t be lost on coders and system architects.

Just for fun, we tried nesting Windows Server 2016 in vSphere 6.0. We initially set up a Windows Server 2016 VM running in ESX; there is a CPU setting in the VM properties that exposes hardware-assisted virtualization to the guest OS. We then installed Hyper-V in the VM and were able to install a nested Windows Server 2016 VM. This worked pretty well.

The increased complexity of nested VMs isn't quite as high as we suspected, provided the hypervisor allows para-virtualization that doesn't rob a nested VM of needed resources.

Shielded VMs

Another way to make working processes go dark or opaque is to encrypt them. Windows Server 2016 Shielded VMs are virtual machines that have been encrypted, and they can live alongside unencrypted VMs. Shielding requires either modern TPM chipsets set up on the physical hardware or a one-way trust through Active Directory.

Shielded VMs are encrypted with BitLocker technology, and only Windows VMs are supported. Unfortunately, shielded VMs can only be used with the Datacenter edition of Server 2016, not the Standard one, and there are dangers.

The trade-offs have to be completely understood. As an example: The only way to connect to shielded VMs is through RDP, and we found it is not possible to connect through the console or another means. So if your VM loses network connectivity, you are totally screwed unless you’ve made other specific working arrangements to get inside the VM.

It is possible to create shielded VMs without Virtual Machine Manager (part of System Center 2016) or Azure but we couldn’t find documentation on how this is done.

Host Guardian Service

Related to encrypted VMs is the Host Guardian Service (HGS). Third-party vendors have been offering SSO, identity, and key management services for Windows server and client environments, and with Host Guardian, Microsoft delivers its own service.

HGS provides two things: key protection to store and provide BitLocker keys to shielded VMs, and attestation to allow only trusted Hyper-V hosts to run the shielded VMs.

This service must run in its own separate Active Directory forest with a one-way trust. The Active Directory forest is created automatically when you install the role.

There are two forms of attestation available to use for the HGS. The first is using TPM-trusted attestation, which requires the host physical hardware to have TPM 2.0 enabled and configured, as well as UEFI 2.3.1+ with secure boot. The shielded VMs will be approved based on their TPM identity.

The second form is admin-trusted attestation, which can support a broader range of hardware where TPM 2.0 is not available, and it requires less configuration. In this mode, guarded hosts are approved based on membership in a designated AD Domain Services security group.

Microsoft recommends that a Host Guardian fabric be installed on a three-machine physical cluster, but it can be installed on VMs for test purposes. If you use Azure or System Center Virtual Machine Manager, it should be easier to set up the Host Guardian Service and the guarded hosts.

Also note that if the HGS becomes unavailable for whatever reason (which is why it is recommended to run it as a three-machine physical cluster), the shielded VMs on the guarded hosts will not run.

The setup is a bit complicated without System Center, so the following describes how to set it up using mostly PowerShell commands. We used "extreme2.local" as our test domain.

These are the PowerShell commands we used to stand up the Host Guardian Service, after installing its role, in a script like this one:

   Install-HgsServer -HgsDomainName 'hg.extreme2.local'

Restart-Computer

New-SelfSignedCertificate x2 (one for signing and one for encryption)

Export-PfxCertificate x2 (to create the files for the self-signed certificates)

Initialize-HgsServer -HgsServiceName 'hgs' -SigningCertificatePath 'cert.pfx' -SigningCertificatePassword $pass -EncryptionCertificatePath 'enc-cert.pfx' -EncryptionCertificatePassword $pass2 -TrustActiveDirectory (can also use -TrustTpm)

Get-HgsTrace (to check and validate the config)

The following string of PowerShell commands was used for non-TPM servers, as TPM wasn't initialized on them. This seemed the best way to test AD-trusted attestation; TPM servers are a bit more complicated. On the DNS server:

   Add-DnsServerConditionalForwarderZone -Name "hg.extreme2.local" -ReplicationScope "Forest" -MasterServers 10.0.100.43

(on the main AD server)

netdom trust hg.extreme2.local /domain:extreme2.local /userD:extreme2.local\Administrator /passwordD: /add

Here we created a new security group on the Active Directory server and added the computer we want to be trusted for the guardian host service to the group and then restarted the servers:

Get-ADGroup "guarded-hosts"

(Made note of the SID)

$SID = "S-1-5-21-2056979656-3172215525-2237764365-1118"

(now back to the host guardian service server)

Add-HgsAttestationHostGroup -Name "GuardedHosts" -Identifier $SID

Get-HgsServer (to get the values for URLs to configure the guarded hosts)

Then we ran these commands on the hosts, with Hyper-V and the Host Guardian Hyper-V services installed, that we wanted to be guarded hosts:

       $AttestationUrl = "http://hgs.hg.extreme2.local/Attestation"

       $KeyProtectionUrl = "http://hgs.hg.extreme2.local/KeyProtection"

   Set-HgsClientConfiguration -AttestationServerUrl $AttestationUrl -KeyProtectionServerUrl $KeyProtectionUrl

Now that we had that set up, we could set up shielded VMs for existing VMs:

Invoke-WebRequest 'http://hgs.hg.extreme2.local/KeyProtection/service/metadata/2014-07/metadata.xml' -OutFile '.\ExtremeGuardian.xml'

Import-HgsGuardian -Path '.\ExtremeGuardian.xml' -Name 'GuardedHosts' -AllowUntrustedRoot

     New-VM -Generation 2 -Name "Shielded-Server2016" -NewVHDPath .\Shielded-Server2016.vhdx -NewVHDSizeBytes 20GB

 

$guardian = Get-HgsGuardian -Name 'GuardedHosts'

$owner = New-HgsGuardian -Name 'Owner' -GenerateCertificates (any name for the owner guardian will do)

$keyp = New-HgsKeyProtector -Owner $owner -Guardian $guardian -AllowUntrustedRoot

$vmname = 'Shielded-Server2016'

Set-VMKeyProtector -VMName $vmname -KeyProtector $keyp.RawData

Set-VMSecurityPolicy -VMName $vmname -Shielded $true

Enable-VMTPM -VMName $vmname

At this point, the VM should be shielded, and we could move the VM's .vhdx and config files to a guarded host to run the VM after enabling BitLocker on the partitions in the .vhdx file.
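
For completeness, here is a minimal sketch of that last BitLocker step, run inside the shielded VM's guest OS and assuming the virtual TPM is enabled:

   # Inside the guest: encrypt the OS volume using the vTPM as the protector
   Enable-BitLocker -MountPoint 'C:' -EncryptionMethod XtsAes256 -TpmProtector

   # Check encryption status and progress
   Get-BitLockerVolume -MountPoint 'C:'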

We could also do this for the supplied template VMs. The docs were fairly clear about the process.

Windows containers

Containers have finally come to Windows Server 2016 using (but not necessarily limited to) Docker and Docker container components. There are currently two ways to run containers. One is directly on Windows Server 2016, called Windows Server Containers. The other one is through Hyper-V in a kind of isolation mode/sandbox, called Hyper-V containers.

The Hyper-V isolation mode requires the Hyper-V role to be installed on the server; we could then start the container (and its app payloads) using various Docker commands (an example can be seen below).

docker run --isolation=hyperv microsoft/nanoserver

Docker isn't supplied and must be installed separately, and currently there are issues running it under remote PowerShell sessions. It was very frustrating trying to work around the bugs in this build. After much testing, we recommend waiting until the kinks are worked out; its production use is currently dubious.
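
For the record, the install procedure we followed was essentially the one Microsoft documents, pulling the Docker engine through the OneGet provider (this assumes the host has internet access):

   # Install the Docker provider and package from the PowerShell Gallery, then reboot
   Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
   Install-Package -Name docker -ProviderName DockerMsftProvider -Force
   Restart-Computer -Force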

NET RESULTS

First, you can't use Linux containers unless they're specially built to run in a highly confined context. Developers who already build containers then end up maintaining two sets: one that runs on Linux (or elsewhere) and another that is Windows-specific.

Second, the number of “off the shelf” containers is dramatically small and a quick check at press time revealed perhaps 100:1 or higher Linux vs Windows-capable containers in the repositories we checked.

Third, we found a dearth of PowerShell commands available for container management, which forced us to use Docker specifically. Not that we minded, but Docker remains an island, almost a curiosity; a cohesive management plane for the Windows environment doesn't quite exist yet, as far as we could find.

Finally: We cratered our specially ordered Lenovo ThinkServers numerous times doing things by the book in our attempts to use just the provided sample of a simple .NET server. Kaboom: even running as admin, with the latest updates, we ended up with just a smoking hole in our rack at Expedient. It did not give us confidence. We tried Server Core and Nano Server and still were not pleased.

Inexplicably, we were subsequently able to get the VM running without crashing on another physical host that wasn't running a hypervisor.

Documentation for docker container use is primitive in the Windows 2016 release, and in looking for what we could have possibly done wrong, we found we weren’t alone.

UEFI Linux Support in Hyper-V

Linux running on Generation 2 VMs can now use Secure Boot, a feature defined in the UEFI specification. Secure Boot was already possible for Windows VMs in previous versions of Hyper-V, but caused much growling among admins and installers when they tried to use it with Linux distros.

We tried UEFI Secure Boot with a Hyper-V-based Ubuntu 16.04 VM and it worked easily. Secure Boot is supported on Red Hat Enterprise Linux 7.0+, SUSE Linux Enterprise Server 12+, Ubuntu 14.04+, and CentOS 7.0+. We found that Linux VMs must be configured to use the Microsoft UEFI Certificate Authority in the VM's Secure Boot settings, which can also be set with PowerShell.
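
Here is a minimal sketch of the PowerShell route, using a hypothetical VM name:

   # Point a Generation 2 Linux VM's Secure Boot at the Microsoft UEFI Certificate Authority
   Set-VMFirmware -VMName 'ubuntu-1604' -EnableSecureBoot On -SecureBootTemplate MicrosoftUEFICertificateAuthority

   # Verify the firmware settings
   Get-VMFirmware -VMName 'ubuntu-1604'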

PowerShell Direct

PowerShell Direct allows PowerShell commands to be run against certain VMs without any network connectivity, using the relationship between Hyper-V and its resident VM(s). It is available only from the host that the VM is running on and serves as a communications channel to unshielded VMs.

We were prompted to enter credentials, and if you are not logged in as a user in the Hyper-V Administrators group, you will not be able to use PowerShell Direct. The lack of subordinated admin use seemed strange to us at first, but we can understand the constraints of mandating a Hyper-V administrator.
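
In practice, using PowerShell Direct from the host looks like ordinary remoting, just keyed off the VM name rather than a network address. A quick sketch, reusing the VM from our nesting tests:

   # Open an interactive session to the VM with no network path required
   Enter-PSSession -VMName 'test-server-core' -Credential (Get-Credential)

   # Or run a one-off script block and return the results to the host
   Invoke-Command -VMName 'test-server-core' -Credential (Get-Credential) -ScriptBlock {
       Get-NetIPAddress | Select-Object IPAddress, InterfaceAlias
   }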

For now, the only supported operating systems are Windows 10 and Windows Server 2016, and both the host and guest have the same requirements. PowerShell Direct can be a useful tool for scripting or for reaching a VM when networking is unavailable, but with OS support so limited, the use case is very narrow unless you upgrade everything.

It also opens up a potential compromise if host Hyper-V credentials are somehow hijacked. That said, we've wondered about this kind of "hole in the sandbox" in any number of use cases; instead, we've used other means to contact VMs that were having difficulties. The hole can be closed, of course, but it also means checking every cloud instance to ensure that the hole isn't open, along with the 20,000+ other things admins have to do.

Storage updates

Storage Replica can synchronously protect data shares, purportedly with zero data loss after instantiation, according to the docs. We did not test this; doing so would require multiple clusters in separate locations. The purpose of Storage Replica is disaster recovery between different sites and more efficient use of multiple data centers. It is also possible to do asynchronous replication for longer distances or high-latency networks. Replication is continuous, not snapshot/checkpoint based. This feature may be useful for companies with multiple campuses spread over a wide geography.
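
We did not run it, but from our reading of the docs a server-to-server partnership is created with something like the following; the server, replication group and volume names here are hypothetical:

   # Untested sketch: synchronously replicate volume F: from sr-srv01 to sr-srv02, with logs on G:
   New-SRPartnership -SourceComputerName 'sr-srv01' -SourceRGName 'rg01' `
       -SourceVolumeName 'F:' -SourceLogVolumeName 'G:' `
       -DestinationComputerName 'sr-srv02' -DestinationRGName 'rg02' `
       -DestinationVolumeName 'F:' -DestinationLogVolumeName 'G:' `
       -ReplicationMode Synchronous

   # Check the state of the replication groups
   Get-SRGroup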

Storage QoS policies

QoS policies in Server 2016 (Datacenter edition only) can be used in two scenarios, both of which involve Hyper-V, and all servers must be running Server 2016. One way to use QoS policies is with a Scale-Out File Server and the other is with Cluster Shared Volumes.

These policies are enabled by default on Cluster Shared Volumes, and you do not have to do anything special to enable them. However, modifying the policies lets you fine-tune your server's storage performance. Some of the options include ways to mitigate noisy-neighbor issues, monitor end-to-end storage performance and manage storage I/O per workload. For example, you could set minimum and maximum IOPS on a per-VHD basis (as a dedicated policy), or you can create an aggregated policy that is shared between all the VHDs assigned to it. We didn't test this exhaustively, but we checked some of the commands to see that QoS was indeed running on the cluster with Storage Spaces Direct.
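
One of the checks we ran looked roughly like this; the policy name, limits and VM name are our own placeholders:

   # Create a dedicated policy: each assigned VHD gets its own 100-500 IOPS band
   $policy = New-StorageQosPolicy -Name 'TestPolicy' -PolicyType Dedicated -MinimumIops 100 -MaximumIops 500

   # Attach the policy to a VM's virtual hard disks
   Get-VM -Name 'test-server-core' | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId

   # List the storage flows the cluster is tracking, to confirm QoS is active
   Get-StorageQosFlow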

Storage Spaces Direct (S2D)

Storage Spaces Direct is part of the clustering technology in Windows Server 2016. S2D uses Windows Server 2016 Datacenter edition servers (even in the Nano Server incarnation, but watch the licensing costs) with local storage (i.e., JBOD) to build highly available and scalable software-defined storage using SMB3, clustered file systems and failover clustering. Storage must be clustered using the Failover Clustering role and its clustered file system.


The system requirements for Storage Spaces Direct are pretty high: you'll need 128GB RAM, two SSDs and at least four HDDs configured in a non-RAID setup, plus an additional HDD for the boot drive. Also, you'll need at least two of these servers set up in a cluster, and it is recommended to have 10GbE ports on each machine. (See the complete hardware requirements.)

We ordered up and used two special Lenovo x3650 M5 ThinkServers with 128GB of RAM, two 240GB SSDs and six 300GB HDDs that met the requirements to test the theory. We could set different storage tiers; by default, if SSDs and HDDs/conventional drives are present, S2D will automagically create a performance tier and a capacity tier for hybrid storage.

The Lenovo servers we used were set up in a failover cluster, which is a mandatory step. This means a minimum of two servers, although they needn't be identical. We made sure the extra storage SSDs and HDDs were online and initialized in Disk Management, but otherwise unallocated and empty. Our installation and use of this failover cluster (remember the madness of Microsoft's Wolfpack?) was pretty painless. We then used the Enable-ClusterS2D PowerShell command on one of the clustered nodes, and it added all the available unused storage from all the server nodes to a pool of disks.
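
The whole sequence, with hypothetical node names, is only a few commands:

   # Validate the nodes for S2D, build the cluster without shared storage, then enable S2D
   Test-Cluster -Node 'node1','node2' -Include 'Storage Spaces Direct','Inventory','Network','System Configuration'
   New-Cluster -Name 'S2D-Cluster' -Node 'node1','node2' -NoStorage
   Enable-ClusterStorageSpacesDirect    # Enable-ClusterS2D is the short alias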

We could see all the disks it used for this pool in the Failover Cluster Manager. After that, one must create one or more volumes. They can be created in the GUI, but the GUI didn’t allow us to set the filesystem or storage tiers.

We created a volume using Resilient File System. This is an example of how to create the volume with ReFS filesystem format as a Cluster Shared Volume (CSV) with the PowerShell command:

            New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName test2 -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Capacity -StorageTierSizes 100GB

Once the volume was created, we could use it to create a VM cluster role with the VHD stored in the S2D storage cluster location (in our case it was C:\ClusterStorage\Volume1). This storage geography can be seen by all nodes of the cluster. We were successful in creating and running a Server 2016 VM in Hyper-V and live migrated it between servers easily. It was very quick too, finishing the migration in mere seconds.
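
The steps we used to cluster and migrate the VM look roughly like this, with hypothetical names:

   # Create a VM whose disk lives on the S2D Cluster Shared Volume, then make it a clustered role
   New-VM -Name 'S2D-Test-VM' -Generation 2 -MemoryStartupBytes 2GB `
       -NewVHDPath 'C:\ClusterStorage\Volume1\S2D-Test-VM.vhdx' -NewVHDSizeBytes 40GB `
       -Path 'C:\ClusterStorage\Volume1'
   Add-ClusterVirtualMachineRole -VMName 'S2D-Test-VM'

   # Live-migrate the clustered VM to the other node
   Move-ClusterVirtualMachineRole -Name 'S2D-Test-VM' -Node 'node2' -MigrationType Live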

Failover clustering – new and improved

We found many improvements to failover clustering in Server 2016. One of the most interesting is the Cluster Operating System Rolling Upgrade functionality. If you already have Windows Server 2012 R2 cluster nodes, then you can upgrade the cluster to Windows Server 2016 without having to stop Hyper-V or Scale-out File Server workloads. Another interesting feature is using a Cloud Witness for the quorum witness (failover logic) using Azure to store the witness disk. Another improvement that looked interesting was the VM load balancing feature. This can help even the load by checking which nodes are busy and automatically live-migrate VMs to other nodes.
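
Two of those features boil down to single PowerShell commands; a sketch, with a placeholder storage account:

   # After the last 2012 R2 node has been upgraded to 2016, commit the new cluster functional level
   Update-ClusterFunctionalLevel

   # Use an Azure storage account as the cluster's quorum witness
   Set-ClusterQuorum -CloudWitness -AccountName 'mystorageaccount' -AccessKey '<storage-account-key>'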

New Windows 2016 security measures

Credential Guard

In previous versions of Windows, credentials and other secrets were put in a Local Security Authority (LSA). Now, with the new Credential Guard feature, the items that used to be in the LSA are now protected by a layer of virtualization-based security. This is used to prevent “pass the hash” and “pass the ticket” attacks. It does this by insulating the secrets, such as NTLM password hashes and Kerberos ticket granting tickets, so that only privileged system software can acquire them.

Derived domain credentials that are managed by Windows services run in the virtualized, protected environment, which cannot be accessed by the rest of the OS. The feature can be managed using group policy, WMI, PowerShell, or even a command prompt, and it also works in Windows 10 (Enterprise or Education) and Windows Enterprise IoT. However, there are certain basic hardware requirements for it to work: a 64-bit CPU with virtualization extensions (Intel VT-x or AMD-V) and SLAT enabled, TPM 1.2 or 2.0, and UEFI 2.3.1+ with Secure Boot.
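
A quick way to verify that Credential Guard is actually running on a host is to query the Device Guard CIM class; a value of 1 in SecurityServicesRunning indicates Credential Guard:

   # Query virtualization-based security state (works locally or over a CIM session)
   Get-CimInstance -ClassName Win32_DeviceGuard -Namespace 'root\Microsoft\Windows\DeviceGuard' |
       Select-Object SecurityServicesConfigured, SecurityServicesRunning, VirtualizationBasedSecurityStatus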

Just Enough Admin (JEA)

JEA is a PowerShell-based security kit (included with PowerShell 5 and up) that can limit admins' privileges to just enough for them to do their jobs. It allows users to be specifically authorized to run certain commands on remote machines, with logging. It runs on Windows 10, Server 2016 and older OSs that have the Windows Management Framework updates. JEA combined with Just In Time administration, introduced in Server 2012 R2 and part of Microsoft Identity Manager, lets you limit an admin in both time and capability.
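
To give a flavor of how JEA constrains a role, here is a minimal sketch using our own made-up names: a "Helpdesk" role that may only restart the print spooler on the target machine.

   # Create a role capability that exposes a single cmdlet with a locked-down parameter
   New-Item -ItemType Directory -Force -Path 'C:\Program Files\WindowsPowerShell\Modules\JEADemo\RoleCapabilities'
   New-PSRoleCapabilityFile -Path 'C:\Program Files\WindowsPowerShell\Modules\JEADemo\RoleCapabilities\Helpdesk.psrc' `
       -VisibleCmdlets @{ Name = 'Restart-Service'; Parameters = @{ Name = 'Name'; ValidateSet = 'Spooler' } }

   # Create and register a session configuration that maps an AD group to that role
   New-PSSessionConfigurationFile -Path '.\Helpdesk.pssc' -SessionType RestrictedRemoteServer `
       -RunAsVirtualAccount -RoleDefinitions @{ 'EXTREME2\Helpdesk' = @{ RoleCapabilities = 'Helpdesk' } }
   Register-PSSessionConfiguration -Name 'Helpdesk' -Path '.\Helpdesk.pssc'

Users in that group then connect with Enter-PSSession -ConfigurationName 'Helpdesk' and see only the commands the role exposes.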

Network Controller (SDN – Software defined networking)

The Network Controller allows for a more centralized approach to network management. It provides two APIs. One lets the Network Controller communicate with the network and the other API allows you to contact the Network Controller directly. The Network Controller role can be run in both domain or non-domain environments.

Network Controller can be managed with either SCVMM or SCOM. The Network Controller role lets you configure, monitor, program or troubleshoot the underlying infrastructure that is managed by SCVMM or SCOM. However, it is not strictly necessary to use those tools, as we could also use PowerShell commands or the REST API.

Network Controller works with other parts of the network infrastructure such as Hyper-V VMs and virtual switches, the data center firewall, RAS gateways, and software load balancers. Because this is a Windows Server review and not an SCVMM or SCOM review, we didn't test this.

Data center firewall

The data center firewall is a new distributed firewall based on network flow and app connectivity instead of where the workload is actually present. For example, if you migrate a VM from one server to another in the data center, it should automatically change the firewall rules on the destination server to allow whatever ports need to be open and reconfigure routers and switches for that VM. The firewall also protects VMs independent of the guest OS, so there is no need to separately configure a firewall in each VM.

This means of VM metadata control is an idea also advanced by VMware to permit high VM portability with a minimum of muss and fuss.

These new security features compete directly with announcements VMware made at VMworld 2016, which were especially designed to advance a control plane that completely objectifies workloads, turning all of their elements (compute, networking, storage, and other characteristics) into a single atomic object for purposes of manipulation, movement, storage, and management.

Summary

In our working test of Windows Server 2016, we found an OS attempting to cover a lot of turf. We saw the new raw edges, but also different thinking about how workload and developer strategy meets the long-installed capital costs that enterprise fixtures represent.

In other words, Windows 2016 is serving numerous masters, some of them very well, and some are in a race with a blistering pace of change in developer and rapid infrastructure deployment strategies.

Windows Server 2016 has a lot of new and improved features, including attempts to use competitive concepts, largely from Linux. We were able to test some of these, but not all; some require particular hardware or setups (including ones that needed System Center pieces to work more efficiently). Even if you cut the System Center combinations from the long list of features in the product announcements, it's still interesting.

Special thanks to Lenovo for loaning two fully equipped, S2D-capable servers for testing.

How we tested

We tested Windows Server 2016 editions in our lab and in our NOC at Expedient in Carmel, Ind. We tested Datacenter, Standard, and Nano servers as native installs, Hyper-V VMs, and VMware 6 VMs on HP Gen4, Gen8, and Gen9 servers, a Lenovo RD460 and two Lenovo x3650 M5 ThinkServers (128GB of RAM, two 240GB SSDs, six 300GB HDDs) accessed through the NOC's backplane (Extreme Summit-series GbE and 10GbE switches), with an HP MicroServer (as an AD controller that also served as a VPN touch point), and with clients ranging from Windows 7 through Windows 10, macOS, and Linux (Ubuntu, Debian, and CentOS).