
Recently I set up Web Application Proxy (WAP) instances for a customer to support remote access to several on-premises web applications.  I was looking for a cheap and effective means of ensuring the service remained available to clients in the event that one of the WAP instances went down.  Someone recommended using Azure Traffic Manager (ATM) for this.  ATM is cheap, especially compared to an external geo-load-balancer, and, being a Microsoft service, allows you to use Powershell both to get things up and running and for ongoing administration.  I found some useful on-line information on how to integrate ATM with WAP, but not all in one place and some of it was out of date.  Hopefully this article will help those who want to cut out the noise and simply get up and running quickly.

The first step is to create an ATM Profile.  In this example I have used the DNS name wap.fish-eagle.trafficmanager.net. The wap prefix obviously refers to Web Application Proxy, while fish-eagle is my company name.  The trafficmanager.net suffix is fixed and is common for all ATM DNS names.

[Screenshot: Creating the ATM profile]

Note that if you use the Azure Portal to create the ATM profile (as opposed to using Powershell), the routing method will default to Performance.  This is fine if you have endpoints configured in different geographical regions and you want client requests to be directed to the nearest endpoint (WAP instance).  In my case the WAP instances are both in New Zealand and share the same geographical region (from an Azure global perspective), so I changed the routing method to Weighted with an equal weighting.  This means that neither instance is preferred and ATM will direct 50% of client requests to one instance and 50% to the other under normal circumstances.  ATM monitors the endpoints and will stop directing clients to an endpoint it has detected as being offline.  For a more in-depth discussion of the available routing methods, see this helpful article.
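If you prefer to create the profile from Powershell rather than the Portal, something along these lines should do the trick.  Treat it as a sketch: the profile name and resource group match the ones used later in this article, while the relative DNS name, TTL and monitoring settings are examples you may want to adjust.

# Sketch: create the ATM profile with Weighted routing and the WAP probe path
New-AzureRmTrafficManagerProfile -Name wap-fish-eagle `
-ResourceGroupName fe-production -RelativeDnsName wap-fish-eagle `
-Ttl 30 -TrafficRoutingMethod Weighted `
-MonitorProtocol HTTP -MonitorPort 80 -MonitorPath "/adfs/probe/"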

If you don’t already have public DNS A records for each of your WAP instances then this is the time to create them, as you will need them when configuring the ATM endpoints.  In my example, I have created two records:

north.fish-eagle.net A 131.203.247.134

south.fish-eagle.net A 210.48.109.10

[Screenshot: Public DNS A records for the WAP instances]

You will also need to create public DNS CNAME records for each of the web services published via the WAP and have these point to the DNS name of the ATM profile you created.  In my example, it looks like this:

mywebapp.fish-eagle.net CNAME wap.fish-eagle.trafficmanager.net.

The ATM will not be able to furnish client requests with a suitable IP address until the endpoints have been configured, which is the next step.

When ATM was first made available it was only possible to configure Azure endpoints, i.e. those that existed in either Azure PaaS or IaaS. Recently, Microsoft has made it possible to also monitor endpoints outside Azure.  These are referred to as external endpoints.  Currently, external endpoints can only be configured using the Azure Powershell module.

After you have downloaded and installed the Powershell module, you need to log in with an account that has permissions to the Azure Resource Group in which the ATM profile has been created.

Login-AzureRmAccount

[Screenshot: Login-AzureRmAccount]

Choose the Azure subscription to use.

Set-AzureRmContext -SubscriptionId 578039e9-a84e-4e7a-8797-dbbd991cc6b0

[Screenshot: Set-AzureRmContext]

Create a one-time registration for the Microsoft.Network resource provider.

Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Network

[Screenshot: Register-AzureRmResourceProvider]

Get a list of your Azure Traffic Manager profiles (in my case there is only one).  Note that currently no Endpoints are configured.

Get-AzureRmTrafficManagerProfile

[Screenshot: Get-AzureRmTrafficManagerProfile output showing no configured endpoints]

Retrieve the existing Traffic Manager profile object, set the MonitorPath and commit the changes.  Note that the default monitoring path “/” has been known to have issues when monitoring WAP endpoints, i.e. ATM incorrectly detects them as Offline at times.  The built-in “/adfs/probe/” path on the WAP servers can be leveraged to correct the aberrant monitoring behaviour.

$profile = Get-AzureRmTrafficManagerProfile -Name wap-fish-eagle -ResourceGroupName fe-production

$profile.MonitorPath = "/adfs/probe/"

Set-AzureRmTrafficManagerProfile -TrafficManagerProfile $profile

[Screenshot: Setting the MonitorPath]

Change the routing method from Performance to Weighted (if you didn’t already do this in the Azure Portal during creation of the ATM profile), and commit the changes.

$profile.TrafficRoutingMethod = "Weighted"

Set-AzureRmTrafficManagerProfile -TrafficManagerProfile $profile

[Screenshot: Setting the routing method]

Create the external endpoints and commit the changes to the profile.  Note that the value of the Target parameter in each cmdlet corresponds to the DNS A records we established earlier.

Add-AzureRmTrafficManagerEndpointConfig -EndpointName wapnorth `
-TrafficManagerProfile $profile -Type ExternalEndpoints `
-Target north.fish-eagle.net -EndpointStatus Enabled

[Screenshot: Adding the wapnorth endpoint]

Add-AzureRmTrafficManagerEndpointConfig -EndpointName wapsouth `
-TrafficManagerProfile $profile -Type ExternalEndpoints `
-Target south.fish-eagle.net -EndpointStatus Enabled

Set-AzureRmTrafficManagerProfile -TrafficManagerProfile $profile

[Screenshot: Adding the wapsouth endpoint]

Now when you run a DNS query for your web application(s) published through WAP, ATM should provide you with the IP address of one of the available (i.e. Online) endpoints.  Here’s how mine looks.

[Screenshot: DNS query result after configuring the ATM endpoints]

As you can see from the screenshot, ATM has provided the DNS client with the IP address (210.48.109.10), which corresponds to one of my WAP instances (south.fish-eagle.net).
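If you prefer to run the check from Powershell rather than nslookup, Resolve-DnsName (available on Windows 8/Server 2012 and later) does the job.  Substitute the name of your own published web application:

# Query the published name and follow the CNAME to the resolved endpoint address
Resolve-DnsName mywebapp.fish-eagle.net -Type A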

At this point the configuration is complete.  If both WAP instances are up and the monitoring is working successfully, you should see the monitoring status showing as Online in the Azure Portal.

[Screenshot: Endpoint monitoring status showing Online in the Azure Portal]

Note that you will need to allow inbound traffic on TCP Port 80 (HTTP) on your external firewall and WAP Windows Firewall configurations for the monitoring to be successful.
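If you manage the Windows Firewall on the WAP servers with Powershell, a rule along the following lines should cover it.  This is a sketch only and the display name is arbitrary:

# Allow the ATM HTTP probe through the Windows Firewall on each WAP server
New-NetFirewallRule -DisplayName "Allow ATM probe (HTTP)" `
-Direction Inbound -Protocol TCP -LocalPort 80 -Action Allow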

The setup of the ATM monitoring component should take you less than an hour.  ATM is a cheap and effective way to provide a seamless client experience for the web applications published via your WAP infrastructure.

Good luck and be careful out there!

 

Qasim Zaidi has an old but really good blog entry on enabling change notification for Active Directory site links.  For a long time now I’ve encouraged my customers (those with decent bandwidth between sites) to enable change notifications on site links rather than wait the 15 minutes (minimum) for replication between sites.

Qasim’s blog references a Powershell one-liner to enable change notification for all site links.  For some reason it didn’t work for me, so I wrote my own. Here it is for those that might be interested.

### Enable change notification on all site links
$nc = (Get-ADRootDSE).configurationnamingcontext
$sb = "CN=IP,CN=Inter-Site Transports,CN=Sites," + $nc
$fl = "(objectclass=sitelink)"
# ForEach-Object ensures $_ refers to each site link when the options bit is set
Get-ADObject -LDAPFilter $fl -SearchBase $sb -Properties options |
    ForEach-Object { Set-ADObject $_ -Replace @{options = ($_.options -bor 1)} }
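To confirm the change has taken, re-query the site links (using the same $fl and $sb variables as above) and check that bit 1 is now set in the options attribute:

### Check the result
Get-ADObject -LDAPFilter $fl -SearchBase $sb -Properties options | Select-Object name,options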

The other day one of my customers was testing remote access to a web application via the Web Application Proxy (WAP). Everything seemed to be working except some reports. These generated “HTTP Error 400. The request URL is invalid”. Given that the reports worked well inside the corporate network, it pointed to an issue with the WAP.

Further investigation revealed that the URL that generated the error was unusually long (approximately 500 characters).

The WAP uses HTTP.sys under the hood. HTTP.sys is a kernel-mode device driver that first drew breath in IIS 6.0 (shipped with the now unsupported Windows Server 2003).

As it turns out, HTTP.sys imposes a 260-character limit on each URL path segment. Fortunately, this limit is configurable by modifying the registry, as described in the following KB article:

https://support.microsoft.com/en-us/kb/820129

The steps to increase the limit are:

  1. Create a UrlSegmentMaxLength DWORD value under HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\HTTP\Parameters and set it to 600 (decimal) (see the Powershell sketch below).
  2. Reboot the WAP server.
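For those who prefer to script the change, the registry edit looks roughly like this in Powershell (run it elevated, then reboot):

# Raise the HTTP.sys URL segment limit to 600 characters (decimal)
$path = 'HKLM:\System\CurrentControlSet\Services\HTTP\Parameters'
New-ItemProperty -Path $path -Name UrlSegmentMaxLength -PropertyType DWord -Value 600 -Force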

This resolved the issue for my customer. I hope it helps you too!

 

I recently helped a customer to set up a Web Application Proxy (WAP) service to do pre-authentication to a SAP CRM system. Within the network everything was working well via ADFS and authentication was just fine.  Coming through the WAP however I got a 404 error.  The SAP CRM debug log showed a difference in the URLs when accessing internally versus externally, as follows:

Internal connection bypassing WAP (working)

10.10.10.10 crm.contoso.com - - [23/Nov/2015:15:15:45 +1300] HTTPS 302 "GET /saml2(bD1lbiZjPTMwMCZkPW1pbg==)/bc/bsp/sap/crm_ui_start/default.htm?sap-sessioncmd=open HTTP/1.1" 0 83 h[-]

External connection via WAP (failing)

10.10.10.11 crm.contoso.com - - [23/Nov/2015:15:34:15 +1300] HTTPS 404 "GET /saml2%28bD1lbiZjPTMwMCZkPW1pbg%3D%3D%29/bc/bsp/sap/crm_ui_start/default.htm?sap-sessioncmd=open HTTP/1.1" 1819 52 h[-]

The difference appeared to be simply that the special characters in the URL have been transformed/replaced when coming through the WAP.  I couldn’t find a configuration option within WAP that addressed this behaviour.
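You can see what has happened by decoding the failing URL in Powershell; the percent-encoded path unescapes back to the working internal form:

# Illustration only: decode the externally requested path
[uri]::UnescapeDataString('/saml2%28bD1lbiZjPTMwMCZkPW1pbg%3D%3D%29/bc/bsp/sap/crm_ui_start/default.htm')
# Returns /saml2(bD1lbiZjPTMwMCZkPW1pbg==)/bc/bsp/sap/crm_ui_start/default.htm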

After posting to a couple of forums, someone from Microsoft came back with a suggestion to apply the hotfix mentioned in the following KB article (KB3042127):

“HTTP 400 – Bad Request” error when you open a shared mailbox through WAP in Windows Server 2012 R2

Apart from not seeming (from the title at least) to be remotely relevant to my issue, this KB wins the award for the most thinly worded article in the world. Ever….

“This issue occurs because Web Application Proxy (WAP) is encoding the reserved characters incorrectly.”

There, that’s the entire “Cause” section of the KB article. :-)

You have to request the hotfix (i.e. it’s not delivered via Windows Update) and also have to have the April 2014 update rollup for Windows Server 2012 R2 (KB2919355) installed as a prerequisite.

Anyway, after installing the hotfix and restarting the WAP server, everything worked like a charm. Issue resolved.

Interestingly, it also appeared to resolve an unrelated issue with another application using the WAP. From my experience at least this seems like an important hotfix and one that should be given more publicity.

Here are two ways to find the GUID (also referred to as the TenantID) associated with your Azure Active Directory (AAD) instance.

1. Embedded in the URL in the Azure Portal

Log into the Azure Portal. Select Active Directory from the left hand pane. Click on the Active Directory instance you are interested in (you may have more than one). Copy and paste the URL into Notepad. It should look something like this:

https://manage.windowsazure.com/fish-eagle.net#Workspaces/ActiveDirectoryExtension/Directory/36bfce4d-e2cf-4066-8063-f27377df4d09/users

The GUID is the portion of the URL between /Directory/ and /users (36bfce4d-e2cf-4066-8063-f27377df4d09 in this example).

2. In the registry of your Azure AD joined Windows 10 workstation

If you have a Windows 10 machine that you have joined to Azure AD then you can find the GUID as a key name in the registry in the following location:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CloudDomainJoin\TenantInfo\

[Screenshot: TenantInfo registry key showing the tenant GUID]
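If you would rather not browse the registry by hand, a quick way to pull out the key name from Powershell (assuming the machine holds a single tenant entry) is:

# Read the tenant GUID from the CloudDomainJoin registry key
(Get-ChildItem 'HKLM:\SYSTEM\CurrentControlSet\Control\CloudDomainJoin\TenantInfo').PSChildName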

Hopefully, you found this useful. Let me know what other Azure AD topics you would like to see.

I’m sometimes asked what the best practice is surrounding the Default Domain Policy and Default Domain Controllers Policy. Microsoft has some good guidance on this topic, but it’s not always clearly and consistently stated. Here’s a quick Q&A that might help.

 

Q. Is it ok to make changes to the DDP and DDCP GPOs, or should I leave them alone and create new policies?

 

A. The best practice recommendation from Microsoft is as follows:

 

  • To accommodate APIs from previous versions of the operating system that make changes directly to default GPOs, changes to the following security policy settings must be made directly in the Default Domain Policy GPO or in the Default Domain Controllers Policy GPO:
  • Default Domain Security Policy Settings:
    o Password Policy
    o Domain Account Lockout Policy
    o Domain Kerberos Policy
  • Default Domain Controller Security Policy Settings:
    o User Rights Assignment Policy
    o Audit Policy

Source: Best Practice Guide for Securing Active Directory Installations (https://technet.microsoft.com/en-us/library/cc773164(v=ws.10).aspx)

 

So, that’s it!  If you want to apply other settings at the domain root level or to the Domain Controllers OU then you should create new GPOs and link them to the appropriate scope of management. The ordering of the GPOs shouldn’t really matter as you should have no overlapping settings. As a general rule of thumb, however, I would recommend assigning any new GPOs a higher precedence in case someone starts using the default GPOs for settings that are not on the “approved” list above. That way the new GPOs will win in any conflict.
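As a rough sketch of what that looks like with the Group Policy cmdlets (the GPO name and domain DN below are examples only), you can create a new GPO and link it at the domain root with link order 1 so that it takes precedence over the Default Domain Policy:

# Create a new GPO and link it ahead of the Default Domain Policy
New-GPO -Name "Corp Domain Settings" |
    New-GPLink -Target "DC=contoso,DC=com" -LinkEnabled Yes -Order 1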

 

Another reason to limit the settings in the default GPOs is to allow them to be re-created with minimal re-work in scenarios where they have gone missing or are corrupt and you don’t have a good backup.  The method by which you can re-create the GPOs is using a tool called DCGPOFIX.EXE (https://technet.microsoft.com/en-us/library/hh875588.aspx).  Bear in mind that this tool is a last resort following a major issue or disaster and you should really ensure you have good GPO backups, as per this article:

 

If you are in a disaster recovery scenario and you do not have any backed up versions of the Default Domain Policy or the Default Domain Controller Policy, you may consider using the Dcgpofix tool. If you use the Dcgpofix tool, Microsoft recommends that as soon as you run it, you review the security settings in these GPOs and manually adjust the security settings to suit your requirements. A fix is not scheduled to be released because Microsoft recommends you use GPMC to back up and restore all GPOs in your environment. The Dcgpofix tool is a disaster-recovery tool that will restore your environment to a functional state only. It is best not to use it as a replacement for a backup strategy using GPMC. It is best to use the Dcgpofix tool only when a GPO back up for the Default Domain Policy and Default Domain Controller Policy does not exist.

Source: https://support.microsoft.com/en-us/kb/833783

 

Q. We have disabled our DDP and DDCP GPOs and replaced them with new GPOs. Is that OK?

 

A. No, that’s not OK.  The default GPOs have fixed, well-known GUIDs and can be targeted directly via those GUIDs by the “legacy APIs” mentioned above:

 

31b2f340-016d-11d2-945f-00c04fb984f9: Default Domain Policy

6ac1786c-016f-11d2-945f-00c04fb984f9: Default Domain Controllers Policy

 

One well-known application that directly modifies the Default Domain Controllers Policy is Microsoft Exchange.  The installer adds the Exchange Servers group to the “Manage Auditing and Security Log” user right (also referred to as the SACL right). So, if you disable or unlink the GPO, this right (and potentially others like it) will go missing and will cause problems for Exchange.

 

Q. Is it OK to rename the DDP and DDCP GPOs?

A. If you feel you must do this I don’t believe it will have any impact, other than it might confuse people when they look for them. I’ve seen some customers rename the GPOs to align them with their in-house naming convention. As mentioned above, these GPOs are targeted using their well-known GUIDs, which is why the rename shouldn’t cause an issue. 

 

You can find the renamed GPOs quite easily using the Group Policy cmdlets, e.g.

 

# Find the Default Domain Policy

Get-GPO -Guid 31b2f340-016d-11d2-945f-00c04fb984f9

 

# Find the Default Domain Controllers Policy

Get-GPO -Guid 6ac1786c-016f-11d2-945f-00c04fb984f9

 

Conclusion

Use the default GPOs for the approved specific purposes only.  If you have other settings you need for the same scope of management, create new GPOs and link them with higher precedence than the default GPOs. Under no circumstances should you disable or unlink the GPOs.  If you rename the default GPOs there should be no impact, but your mileage may vary.

 

 

You know you’re getting old when you come across a Usenet post you wrote almost 20 years ago. I came across this little memento while Googling for a much more recent item. Given the vintage of the post, I must have been referring to Exchange 4.0.  Exchange has come a long way since then, although I do kind of miss X.400.

[Screenshot: The original Usenet post]

PS. I’m still waiting for an answer to my question. :-)

Over the weekend I opened up my laptop to knuckle down to my chapter reviews for the upcoming update to the excellent Inside Office 365 for Exchange Professionals. If you don’t already have a copy I strongly recommend you make the investment. The E-book is detailed, well researched and written by those who really know their stuff.

But I’m drifting off topic. The nasty surprise for me was that my laptop keyboard didn’t appear to work. This was strange, as I had negotiated past the Ctrl+Alt+Del dialogue, which meant it wasn’t a hardware failure. At first I thought it must be a Windows 10 driver issue. In some of the pre-release builds I’d had issues with the touchpad drivers and I thought the keyboard issue was something similar. After 10 minutes or so of fruitlessly tinkering with drivers I finally resorted to Google and found the solution quite quickly. It turns out the “Enable Slow Keys” setting, which is part of the Ease of Access keyboard settings, had somehow turned itself on. I was able to confirm this by pressing and holding down a key: the selected character appeared on the screen only after a delay.

I’m still not sure how I managed to turn the setting on, but was relieved to be able to turn it off. If you have the same issue, type “filter keys” in the “Search the web and Windows” area and then select the “Ignore brief or repeated keystrokes” option. From there you can turn off “Enable Slow Keys” option, as shown in the screenshot below.

[Screenshot: Turning off the Enable Slow Keys option]

Hopefully this will help you if you run into the same issue.

Like that hipster beard you grew last summer, all good things must eventually come to an end. You will likely be aware that extended support for Windows Server 2003 ends on July 14th 2015. If you don’t know whether you still have 2003 boxes lurking in the dark recesses of your AD domain, you could try running the handy script below to flush them out.

The script looks for Windows Server 2003 machine accounts that have logged on to the domain some time within the past 60 days – a good indicator that they are still active. It produces a CSV output for your perusal.

#########################################################
#
# Name: Find-W2K3StillActive.ps1
# Author: Tony Murray
# Version: 1.0
# Date: 25/06/2015
# Comment: PowerShell 2.0 script to find active
# Windows Server 2003 computer accounts
#
#########################################################

## Define global variables
# Export file for storing results
$expfile = "c:\w2k3_still_active.csv"
# Define the header row for the CSV (we will create our own)
$header = "`"name`",`"os`",`"sp`",`"lastlogondate`""
# Consider any account logged on in the last x days to be active
$days = 60
$Today = Get-date
$SubtractDays = New-Object System.TimeSpan $days, 0, 0, 0, 0
$StartDate = $Today.Subtract($SubtractDays)
$startdate = $startdate.ToFiletime()
# LDAP filter settings
$filter = "(&(lastlogontimestamp>=$startDate)(operatingsystem=Windows Server 2003*))"

## Functions
Function Format-ShortDate ($fdate)
{
        if ($fdate) {
            $day = $fdate.day
            $month = $fdate.month
            $year = $fdate.year
            "$day/$month/$year"
        } # end if

} # end function

## Start doing things
# Import the AD module
ipmo ActiveDirectory
# Tidy up any previous copies of the export file 
if (test-path $expfile) {Remove-Item $expfile}
# Add the header row to the export file
Add-Content -Value $header -Path $expfile
# Create an array of computer objects
$active = Get-ADComputer -LDAPFilter $filter -pr *
# loop through the array
foreach ($w2k3 in $active) {
    # Grab the attribute values we need from the AD object
    $nm = $w2k3.name
    $os = $w2k3.operatingsystem
    $sp = $w2k3.operatingsystemservicepack
    $lt = Format-ShortDate $($w2k3.lastlogondate)
    $row = "`"$nm`",`"$os`",`"$sp`",`"$lt`""
    # Commit the row to the export file
    Add-Content -Value $row -Path $expfile
} # end foreach

## End script

Enjoy!

Unfortunately Active Directory doesn’t yet provide dynamic security groups in the way that, for example, Exchange provides dynamic distribution groups.  Sometimes it is useful to maintain a group’s membership based on a specific attribute, or set of attributes.  Here’s a quick Powershell example that shows how to maintain the membership based on the presence of a single attribute value.

You can download the script here: AttributeBasedGroupMembership

 

#########################################################
#
# Name: AttributeBasedGroupMembership.ps1
# Author: Tony Murray
# Version: 1.0
# Date: 18/06/2015
# Comment: PowerShell 2.0 script to manage group 
# membership based on attribute values
#
#########################################################

# Import the AD module
ipmo ActiveDirectory

# Define arrays to be used for matching
$arratt = @()
$arrgp = @()

# Domain controller to be used
$dc = (Get-ADRootDSE).dnshostname
write-host "Using DC $dc for all AD reads/writes"

# Specify the OU where the accounts are located
$OUdn = "OU=Corp Users,DC=Contoso,DC=com"

# Find all the objects that have the specified attribute value
$AttUsrs = Get-ADUser -LDAPFilter "(extensionattribute1=Sales)" -SearchBase $oudn -Server $dc

# Specify the GUID of the group to use
# You could also use name of group (but this can be changed)
$grp = "7bbf64bc-46c7-4a90-9d58-7cb5eca35fce" # i.e. "Sales Team"

# Find all the group members
$grpusers = Get-ADGroupMember -Identity $grp -Server $dc

# Build arrays using the DN attribute value
$AttUsrs | % {$arratt += $_.distinguishedname}
$grpusers | % {$arrgp += $_.distinguishedname}


# Add to group membership (newly assigned attribute value)
foreach ($usr in $arratt) {
    if ($arrgp -contains $usr) {
        write-host "User $usr is a member of the group"
    }
    else {
        write-host "User $usr is not a member of the group - adding..."
        Add-ADGroupMember -Identity $grp -Members $usr -Server $dc
    } # end else
    Remove-Variable -ErrorAction SilentlyContinue -Name usr    
} # end foreach

write-host "`n"

# Remove from group (no longer has attribute value or has been manually added to group)
# Assumption here is that the attribute value is authoritative for the group's membership
foreach ($mem in $arrgp) {
    if ($arratt -contains $mem) {
        write-host "User $mem still has the attribute value.  Nothing to do"
    } # end if
    else {
        write-host "User $mem does not have the attribute value.  Removing from membership..."
        Remove-ADGroupMember -Identity $grp -Members $mem -Server $dc -Confirm:$false
    } # end else
    Remove-Variable -ErrorAction SilentlyContinue -Name mem
} # end foreach

 

To ensure the script is run regularly, you would likely want to call it from a scheduled task.
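As a rough sketch (the script path, schedule and task name below are just examples), you could register such a task with the ScheduledTasks module on Windows Server 2012 or later:

# Run the group membership script daily at 3am under the SYSTEM account
$action = New-ScheduledTaskAction -Execute 'powershell.exe' `
-Argument '-NoProfile -File C:\Scripts\AttributeBasedGroupMembership.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName 'AttributeBasedGroupMembership' `
-Action $action -Trigger $trigger -User 'SYSTEM' -RunLevel Highest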