
Field Notes: Azure Active Directory Connect – Domain, OU and Group Filtering


This is a continuation of a series on Azure AD Connect. The most recently published post covered a quick introduction to the troubleshooting task available in Azure AD Connect. This post goes through the options available in Azure AD Connect for filtering which objects should be synchronized. I provide links to all other related posts in the summary section below.

Filtering in the Azure AD Connect installer

The Azure AD Connect sync: Configure filtering document goes through a lot of detail on how you can control which objects appear in Azure AD based on filtering options that are configured. The scope of this post is just the following options, which are available in the Azure AD Connect installer:

  • Domain-based filtering
  • Organizational unit (OU)-based filtering, and
  • Group-based filtering

Domain and OU based filtering

I am combining the domain and OU filtering options as they are covered in one screen of the installation wizard. Using the installation wizard is the preferred way to change both domain-based and OU-based filtering. To get to this screen, we need to follow the custom installation path of the installation wizard. I cover this option here, and I’ll just skip to the place where we have the ability to customize synchronization options. This option is available under additional tasks once custom installation is selected.

This additional task requires credentials of a global administrator account in the Azure AD tenant to proceed. Provide a valid set and click next to move on.

The next screen shows the directories that are already configured. I only have one forest – idrockstar.co.za.

We are now at the first filtering option – domain and OU. To simplify demonstration, I synchronize everything in the child domain (east.idrockstar.co.za) and only the Sync OU in the root domain (idrockstar.co.za).

Let’s explore the second filtering option.


Group based filtering

Moving along brings us to the second part – filter users and devices. Here, we specify a group containing the objects that we wish to synchronize.

Note that this is currently only intended for pilot deployment scenarios. Nested groups are not supported and will be ignored.

Provide either the name or the distinguished name of the group and resolve to validate, then click next to proceed. This will be followed by selecting optional features and finalizing the configuration.


Testing the effect of filtering

For demonstration and testing, I created three accounts as follows:

  • First Rockstar – in the synchronized OU and a member of the sync group
  • Second Rockstar – in the synchronized OU and not a member of the sync group
  • Third Rockstar – not in the synchronized OU and a member of the sync group
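
If you don't want to wait for the scheduler while testing, a delta synchronization cycle can be triggered manually on the Azure AD Connect server. A quick sketch using the ADSync module that ships with Azure AD Connect:

Import-Module ADSync
# Trigger a delta sync; use -PolicyType Initial for a full synchronization
Start-ADSyncSyncCycle -PolicyType Delta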

Synchronization Service Manager

Let’s take a quick look at the synchronization manager to see what happens. Only one of the three objects we just created is exported.

Clicking the adds link under export statistics takes us to the object. The properties button exposes details. In this case, we see the object that matches both the OU and group membership synchronization requirements – First Rockstar.

I’ll cover the synchronization service in detail in a future blog post.

Azure Active Directory

To confirm, I also log on to the Azure AD tenant, select users, and search for rockstar. The search returns only the one account that was synchronized, which met both criteria.
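
The same check can be scripted. A minimal sketch using the AzureAD PowerShell module (the module and the search string are assumptions for this lab):

Connect-AzureAD
# Only the account that met both the OU and group filtering criteria should be returned
Get-AzureADUser -SearchString "rockstar" | Select-Object DisplayName, UserPrincipalName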


Summary

I just covered the two synchronization filtering options available in the Azure AD Connect installer – domain/OU and group-based filtering. I’ll take a closer look at the synchronization service in a follow-up blog post soon.

References

Related posts


Downgrading Active Directory Domain and Forest Functional Levels (Part 1)


Background

With Windows Server 2008/2008 R2 approaching end of support, more organisations are upgrading their Operating Systems to the latest supported versions.

Upgrading Active Directory Domain Services (AD DS) requires a schema update and, ultimately, raising the domain and forest functional levels. Customers are concerned that applications may stop functioning after raising the functional levels, and traditionally there was no turning back once functional levels were raised.

Since the introduction of Windows Server 2008 R2, it has been possible to downgrade your functional levels. We are receiving more questions regarding Active Directory functional level downgrade capabilities as organisations plan their migration to Windows Server 2016/2019. There seems to be a misunderstanding of the downgrade capabilities, especially where the Active Directory Recycle Bin is enabled.

You may find this post by Jose Rodrigues useful. It provides information on the importance of the Microsoft Product Lifecycle Dashboard, which can help identify if products are no longer supported or reaching end of life, and keep your environment supported.


Disclaimer

We always recommend in-depth testing in a LAB environment before completing major upgrades in your production environment if possible. At a minimum, ensure that you have a well-documented and fully tested forest recovery plan. Active Directory functional level rollback is not a substitution for these core recommendations.


The basics

The Domain Functional Level (DFL) for all the domains in a forest has to be raised first, before you can raise the Forest Functional Level (FFL). When attempting to downgrade (lower) the DFL of a domain, you would first need to downgrade the FFL to the same level as the required DFL to be configured. The FFL can never be higher than the DFL of any domain in the forest.
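
Before planning any change, the current levels can be checked with the Active Directory module for Windows PowerShell. A quick sketch (the same commands are used again in part 2 of this series):

Get-ADForest | Select-Object Name, ForestMode
Get-ADDomain | Select-Object DNSRoot, DomainMode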

Functional levels determine the available AD DS domain or forest capabilities. They also determine which Windows Operating Systems can be installed on Domain Controllers in the domain or forest. You cannot introduce a Domain Controller running an Operating System which is lower than the DFL or FFL. This needs to be considered when upgrading functional levels but would not have any impact when downgrading functional levels.

Distributed File System Replication (DFSR) support for the System Volume (SYSVOL) was introduced in Windows Server 2008. Whether you are using Distributed File System Replication (DFSR) or File Replication Service (FRS), it will not impact the ability to complete a functional level rollback.

Tip: SYSVOL replication should be migrated to DFSR before deploying Windows Server 2016 (Version 1709) or Windows Server 2019 Domain Controllers. FRS deprecation may block the Domain Controller deployment. Beystor Makoala posted a great article about FRS to DFSR Migration and some issues you may experience.

Let’s explore another feature that was introduced with Windows Server 2008 R2.


Active Directory Recycle Bin

The Active Directory Recycle Bin was first introduced with Windows Server 2008 R2. Considering the functional level rollback capability was also introduced with Windows Server 2008 R2, there were clear instructions on rollback capabilities.

You cannot roll back to Windows Server 2008 functional level after the Recycle Bin is enabled. Simple reason being that Windows Server 2008 doesn’t support the Recycle Bin, and the Recycle Bin cannot be disabled.

I’ve seen inconsistent information regarding rollback capabilities when working on newer Operating Systems such as Windows Server 2016 or Windows Server 2012 R2. Some articles indicate rollback cannot be performed at all after the Recycle Bin is enabled and others indicate the lowest functional level that can be utilized is Windows Server 2012.

The Recycle Bin was initially the only blocker when attempting to lower functional levels. The Recycle Bin has been supported since Windows Server 2008 R2, and thus it has no impact when working with Windows Server 2008 R2 or higher functional levels (which all support the Recycle Bin feature). The Recycle Bin will only be a blocker when attempting a rollback to Windows Server 2008.


Summary

We’ve discussed several Active Directory features and their impact when lowering Active Directory functional levels. We’ve determined that, in theory, the lowest functional level that can be utilized with the Active Directory Recycle Bin enabled is Windows Server 2008 R2, and the lowest functional level that can be utilized with the Active Directory Recycle Bin disabled is Windows Server 2008.

In part 2 of this series, I will demonstrate how to lower the domain and forest functional levels, and test the theory to determine the lowest functional levels that can be utilized while running a Windows Server 2019 Active Directory Domain.

Downgrading Active Directory Domain and Forest Functional Levels (Part 2)


Introduction

In part 1 of this series, we established in theory that we can lower the Active Directory functional levels from the latest functional level to Windows Server 2008 R2, or even Windows Server 2008 if the Active Directory Recycle Bin is not enabled.

I will now demonstrate how to lower the functional levels from Windows Server 2016 to Windows Server 2008.


Lab Configuration

I’ve deployed a three-domain forest with Windows Server 2019 Domain Controllers. This is a root domain with two child domains. The Forest Functional Level (FFL) is Windows Server 2016 and the Active Directory Recycle Bin is disabled (it is not enabled by default when deploying a new forest).


Viewing the forest configuration using Active Directory Domains and Trusts


Viewing domain and forest functional levels using Windows PowerShell


When creating a new Active Directory forest on Windows Server 2019, you can select Windows Server 2008 as the functional level. This should indicate functional level compatibility when using the latest Windows Operating Systems. There is no option to select a Windows Server 2019 functional level. This is because no new functional levels were added with the release of Windows Server 2019.



In the following demonstration, I will attempt to lower the functional level of the root domain (root.contoso.com) and a child domain (child1.root.contoso.com).


The basics

You should be a member of the Enterprise Admins group to raise or lower the FFL and a member of the Domain Admins group to raise or lower the DFL. Enterprise Admins, by default, should have Domain Admin rights in all the domains. Read more on default Active Directory security groups here.

Unlike raising the functional levels, downgrading (lowering) the functional levels can only be accomplished using Windows PowerShell. There are no Graphical User Interface (GUI) tools to accomplish this task.

The Active Directory Module for Windows PowerShell is required for the commands that we will use. Find more information on this module here.

We will use Set-ADForestMode to lower the Forest Functional Level (FFL) and Set-ADDomainMode to lower the Domain Functional Level. You can also use these commands to raise the functional level instead of using the Active Directory Users and Computers, or Active Directory Domains and Trusts management consoles.


Downgrading the Forest Functional Level: Active Directory Recycle Bin disabled

The Forest Functional Level (FFL) should be lowered first before the Domain Functional Level (DFL) can be lowered. Attempting to lower the DFL before the FFL will result in the error below:

Set-ADDomainMode : The functional level of the domain (or forest) cannot be lowered to the requested value


Ensure you are logged on with an Enterprise Admin account. Open Windows PowerShell, enter and execute the following command to lower the FFL of the forest:

Set-ADForestMode -Identity root.contoso.com -ForestMode Windows2008Forest -Server root.contoso.com -Confirm:$false

I am using the domain and forest names of my lab environment. Replace the -Identity and -Server values with the appropriate domain names for your environment. Adding -Confirm:$false at the end of the command prevents being prompted to confirm your actions.



No confirmation message is received to confirm that the command was executed successfully. Not receiving any error messages is good. We need to verify the FFL to confirm that the functional level was lowered successfully. This can be completed using the following command in Windows PowerShell:

Get-ADForest | select Name,ForestMode



I want to verify the DFL of the domains, after the FFL was lowered, before I move on to the next step of lowering the DFL of the root domain. I use the following code in Windows PowerShell to accomplish this:

$domains = (Get-ADForest).Domains
foreach ($domain in $domains) {
    Get-ADDomain -Identity $domain | Select-Object DNSRoot, DomainMode
}



Downgrading the Domain Functional Level: Active Directory Recycle Bin disabled

The FFL was successfully lowered to Windows Server 2008 while the DFL for all domains is still Windows Server 2016. I will now lower the DFL of the root domain. I am still logged on with an Enterprise Admin account. Enter and execute the following command in Windows PowerShell to lower the DFL of the root domain:

Set-ADDomainMode -Identity root.contoso.com -DomainMode Windows2008Domain -Server root.contoso.com -Confirm:$false



Again, there is no confirmation message that the command was executed successfully and not receiving any error messages is good. Let’s review the DFL of all domains to confirm that the DFL of the root domain was lowered successfully.



I now want to attempt to lower the DFL of a child domain in the forest.

Please note that the domains can be lowered in any order; there is no dependency on the root domain DFL being lowered before lowering the DFL of any child domains. The only requirement is lowering the FFL before lowering the DFL of any domain in the forest.

I am still logged on with an Enterprise Admin account and Windows PowerShell is open. The command syntax is the same except for -Identity and -Server switches which should now be the Fully Qualified Domain Name (FQDN) of the child domain.

Set-ADDomainMode -Identity child1.root.contoso.com -DomainMode Windows2008Domain -Server child1.root.contoso.com -Confirm:$false

Attempting to lower the DFL when not logged onto the target domain, as I am doing now with the Enterprise Admin account, may result in an error: Set-ADDomainMode : A referral was returned from the server.

This is prevented by using the -server switch and specifying the Fully Qualified Domain Name (FQDN) of the target domain, as I have done in all my previous steps.



The command executes without any confirmation message or errors. Viewing the DFL of all domains confirms that the DFL of the child domain was successfully lowered to Windows Server 2008.




Summary

I’ve demonstrated that the Active Directory functional levels can successfully be lowered from a Windows Server 2016 functional level to Windows Server 2008 functional level. It is important to note that this was achieved with the Active Directory Recycle Bin disabled.

In part 3 of this series, I will raise the functional levels back to Windows Server 2016, enable the Active Directory Recycle Bin and attempt lowering the functional levels again.

Infrastructure – System Center Configuration Manager –“Deploying applications with the PowerShell Application Deployment Toolkit”


The Issue

Recently I was asked about a scenario where a customer wanted their users to have a more advanced, informative experience when software gets installed. They also required that data be saved so that users’ work is not affected.

Requirements: The Mimecast for Outlook add-in had to be installed (this requires Outlook to close), but users should be clearly warned to save their data first, and Outlook should be reopened after completion (or the user instructed to reopen it).

The Investigation

Option 1: System Center Configuration Manager “Run another program first”.

My first attempt was to create a PowerShell pop-up and turn it into a package that could run using the ConfigMgr “Run another program first” feature to warn users to close Outlook and save their data. For this pop-up we had to specify a fixed wait time, which meant that on slower machines it would not wait long enough and on faster machines it waited too long. Although this worked to an extent, I was not yet satisfied with the end product.

Run Another Program First
Timer Pop Up
During Install
After Install

PowerShell Code below:

#.\PopupTimer.ps1 -Mimecastlocation

param (
    $MimecastLocation
    )

#script for balloon notification
Add-Type -AssemblyName System.Windows.Forms
$objNotifyIcon = New-Object System.Windows.Forms.NotifyIcon

#script to pop up window
$wshell = New-Object -ComObject Wscript.Shell 
$wshell.Popup("Mimecast needs to close Outlook to install the latest add in. Please save all your work. Outlook will close automatically in 5 minutes",0,"Done",0x1)

#wait for user to close outlook
Wait-Event -Timeout 300

#force outlook to close
Stop-Process -Name 'OUTLOOK' -ErrorAction SilentlyContinue

#wait before mimecast install
Wait-Event -Timeout 30

#pop up balloon notification
$objNotifyIcon.Icon = ".\Mimecast_M_2015.ico"
$objNotifyIcon.BalloonTipIcon = "Info"
$objNotifyIcon.BalloonTipText = "Do Not Open OUTLOOK while Mimecast is installing"
$objNotifyIcon.BalloonTipTitle = "Install Mimecast Add-In"
$objNotifyIcon.Visible = $true
$objNotifyIcon.ShowBalloonTip(5000) # timeout in milliseconds

#Run msi to install mimecast
Invoke-Command -ScriptBlock {cmd /c msiexec.exe /i ".\Mimecast_for_outlook_7_7_x64.msi" /qn} 

#Balloon Pop up for completion
$objNotifyIcon.BalloonTipText = "You can now open OUTLOOK"
$objNotifyIcon.BalloonTipTitle = "Install Mimecast Add-In was successful"
$objNotifyIcon.Visible = $true
$objNotifyIcon.ShowBalloonTip(5000) # timeout in milliseconds

#wait after mimecast install
Wait-Event -Timeout 30

#start up Outlook
Start-Process 'OUTLOOK'

Option 2: The PowerShell Application Deployment Toolkit (PSAppDeployToolkit)

This neat little toolkit, which can be downloaded from https://psappdeploytoolkit.com/, surpassed my expectations when it comes to capabilities, features and design. It has an excellent manual that I will be quoting for the sake of this post.

Extract the Package

Extract the toolkit
Copy the MSI required for installation

Modify the PowerShell

Edit the Deploy-Application.ps1
Fill in details [line 64 – 72]
Fill in Options [line 121]
Fill in Options [line 141] [line 151]
Fill in Options [line 161] [line 181]
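
As an illustration of the kind of edits made in those sections of Deploy-Application.ps1, here is a hedged sketch of the key PSADT calls for this scenario (function names come from the toolkit's standard library; the countdown value and MSI file name are assumptions based on the requirements above, and exact parameters may vary by toolkit version):

# Pre-installation: warn the user, allow deferral, and close Outlook with a countdown
Show-InstallationWelcome -CloseApps 'outlook=Microsoft Outlook' -CloseAppsCountdown 300 -AllowDefer -DeferTimes 3 -CheckDiskSpace

# Installation: run the MSI copied into the toolkit's Files folder
Execute-MSI -Action 'Install' -Path 'Mimecast_for_outlook_7_7_x64.msi'

# Post-installation: tell the user it is safe to open Outlook again
Show-InstallationPrompt -Message 'Mimecast was installed successfully. You can now open Outlook.' -ButtonRightText 'OK' -Icon Information -NoWait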

Create the ConfigMgr Application (PSAppDeploymentToolkit Admin Guide)

Create the application

Deploying the application

The Resolution (User Experience)

Summary

  1. Download and extract the PSAppDeployToolkit from: https://psappdeploytoolkit.com/
  2. Modify the PowerShell
  3. Create the ConfigMgr Application
  4. Deploy to your customers

As always, I hope this has been informative and feel free to correct me in any steps.

Convert all targeted devices to Autopilot


In this blog I will look at how to convert an existing corporate device to Autopilot.

Configuration

Ensure you have an AD/AAD group that contains the existing corporate devices that you would like to target for Autopilot conversion.

  • Open the Azure portal and navigate to Microsoft Intune > Device enrollment > Windows enrollment
  • On the Device enrollment – Windows enrollment blade, select Deployment Profiles in the Windows AutoPilot Deployment Program section
  • On Windows AutoPilot deployment profiles blade, either select Create profile or select [existing deployment profile] > Properties
  • On the Create profile blade or the [existing deployment profile] – Properties  blade, the setting Convert all targeted devices to AutoPilot must be switched to Yes
  • On the Assignments blade, select the group that contains all the devices you would like to target

I will target the following device by adding it to the AD/AAD group:

Once the device is added to the targeted group, you can confirm the conversion by navigating to Microsoft Intune > Device enrollment > Windows enrollment > Windows Autopilot Devices. The process takes a couple of minutes, as the profile is assigned to the device.

When you select the device you will be able to confirm that the Profile is assigned and what profile was assigned:

Now that the device has been converted to Autopilot, the device can be reset. The Autopilot Reset option will only be available in the console once the device has been reset and has gone through the Autopilot deployment process once.
To test this newly added device I will reset the device by either doing a manual reset in Windows Settings or initiating a Wipe in Intune.
The device will reset and start the Autopilot Deployment.
After completing the Autopilot Deployment we now have the ability to do an Autopilot Reset in the Intune Console.

Summary
With the Convert all targeted devices to Autopilot option you can easily convert corporate-owned devices without the need to import any data.

NB! All corporate owned, non-Autopilot devices in assigned groups will register with the Autopilot deployment service.
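
For context, this option removes the need for the traditional manual import, where the hardware hash is collected per device and uploaded to Intune, for example with the Get-WindowsAutoPilotInfo script from the PowerShell Gallery (a hedged sketch; the output path is an assumption):

Install-Script -Name Get-WindowsAutoPilotInfo -Force
# Collect the hardware hash on the device and export it for import into Intune
Get-WindowsAutoPilotInfo.ps1 -OutputFile C:\Temp\AutopilotHWID.csv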

Field Notes: The case of the disappearing Name Server (NS) records


Introduction

I recently assisted a customer whose Name Server (NS) records were disappearing from their DNS zones. All of the Domain Controllers are configured as DNS servers, yet when viewing the NS records for the Active Directory-integrated DNS zones, only a few of these servers had NS records.

The administrators manually re-added the NS records to the DNS zones, only to find that the NS records were missing when reviewing the DNS zone configurations later.


Background

Every DNS server that is authoritative for an Active Directory-integrated DNS zone creates its respective NS record in the DNS zone, which also means that the replication scope of the DNS zone will determine which servers are registered for the specific DNS zone.

When a DNS zone is replicated to all DNS servers in the forest, the zone will contain NS records for all servers in the forest, and when the zone is replicated to all DNS servers in the domain, the zone will only contain NS records for servers in the specific domain where the Active Directory-integrated DNS zone is created.


Active Directory-integrated DNS zone replication scope


Forest-zone replication scope: Contains DNS servers from all domains in the forest.


Domain-zone replication scope: Contains DNS servers from the specific domain only.


The NS records can be managed by selecting the properties of the DNS zone in DNS Manager.


In most deployments, every Domain Controller is also a DNS server.

The DNS Server will create the NS record and Active Directory replication will propagate the change to the relevant DNS Servers, as per the configured DNS zone replication scope.

When NS record registrations are functioning properly, these NS records can be removed from the DNS zone, and the NS records will be re-added when the DNS Server service is restarted.

In this instance, the customer manually added the missing NS records but they were being removed when the DNS Server service restarted.


Resolution

There are two configurations that may impact the creation of NS records in DNS:

  • Configuration in the Windows registry of a DNS Server, which affects all DNS zones hosted by the DNS server.
  • Configuration on a DNS zone, which may affect any DNS Server hosting the configured DNS zone.

The registry

In the registry of an affected DNS Server, find the DNS Server service parameters at the following location:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DNS\Parameters

The following registry value, by default, does not exist in the registry and has to be manually created when required:

Registry value: DisableNSRecordsAutoCreation
Data type: REG_DWORD
Data range: 0x0 | 0x1
Default value: 0x0



If this registry value exists and is set to 1, the DNS server will not automatically create NS records for any Active Directory-integrated DNS zones hosted by this server. Changing the value to 0 or deleting the entry will reset automatic NS record creation to the default behavior, resulting in the DNS Server creating NS records for all Active Directory-integrated DNS zones that it is hosting. You must restart the DNS Server service for this value to take effect.
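
To check whether the value is present on a given DNS server, a quick sketch in PowerShell (run locally on the server; the value name is as documented above):

$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\DNS\Parameters'
Get-ItemProperty -Path $key -Name DisableNSRecordsAutoCreation -ErrorAction SilentlyContinue
# To restore the default behaviour, remove the value and restart the DNS Server service
Remove-ItemProperty -Path $key -Name DisableNSRecordsAutoCreation -ErrorAction SilentlyContinue
Restart-Service -Name DNS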

This registry value did not exist on the customer’s DNS Servers, which is the default configuration, and thus each server would attempt to create an NS record.


The DNS Zone

To view the AllowNSRecordsAutoCreation configuration of a DNS zone, use the following command:

dnscmd <servername> /zoneinfo <zonename> /AllowNSRecordsAutoCreation

With the default configuration, the results should be as per the image below. This means all DNS Servers are allowed to automatically create NS records for the zone.

Default configuration


In the customer’s environment we executed the same command and received different results, as per the example below:

Customized configuration


What this result means is that the DNS zone is restricted to allow NS record registrations only from the two specific IP addresses listed in the result.

When there are 50 DNS servers for example and only 10 IP addresses are listed, only those 10 servers will be able to create their NS records for the specific zone.

This would explain why only some NS records are listed, and not the records from all the DNS servers in the forest or domain. This is what was causing the NS records in the customer’s environment to be removed after they had been manually added.
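
For reference, a restriction like this would typically have been applied earlier with a command of the following form (the IP addresses are placeholders):

dnscmd <servername> /config <zonename> /AllowNSRecordsAutoCreation 10.0.0.10 10.0.0.11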

This was easily fixed by executing the following command, which will reset the NS records creation configuration to the defaults, for the specific DNS zone:

dnscmd <servername> /config <zonename> /AllowNSRecordsAutoCreation

Reset NS record creation to default value


The command needs to be completed for each DNS zone to configure, but only needs to be executed on one DNS Server. Active Directory replication will propagate the changes as per the configured DNS zone replication scope. You can wait for the NS records to be created automatically, or restart the DNS Server service on the affected servers to speed up the process.
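Once replication has converged, the NS records can also be verified from PowerShell rather than DNS Manager. A minimal sketch, assuming the DnsServer module and substituting your own zone and server names:

Get-DnsServerResourceRecord -ZoneName 'contoso.com' -RRType NS -ComputerName dc01 |
    Select-Object -ExpandProperty RecordData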


Conclusion

There are very specific situations where an administrator may need, or want, to limit the creation of NS records. There may be a requirement to limit NS record creation for a specific DNS zone to only a few servers, or you may want to prevent specific DNS servers from creating NS records in all the DNS zones that they are hosting, for example a DNS Server in a branch office.

Feel free to explore the reference article for specific instances where NS record registrations may need to be limited.

Be sure to document any changes made on DNS servers or DNS zones and specify the reason for the specific configurations. This will ensure future administrators understand the configurations, and when reviewing these custom configurations, also have enough information to determine if they are still required.


Reference:

Problems that can occur with more than 400 Domain Controllers in Active Directory integrated DNS zones:

https://support.microsoft.com/en-za/help/267855/problems-with-many-domain-controllers-with-active-directory-integrated

Downgrading Active Directory Domain and Forest Functional Levels (Part 3)


Introduction

In part 2 of the series we’ve successfully lowered the Forest Functional Level (FFL) and Domain Functional Level (DFL) to Windows Server 2008. The demonstration was completed in a forest where the Active Directory Recycle Bin was not enabled.

In this final part of the series, I will first raise the functional levels back to Windows Server 2016, enable the Active Directory Recycle Bin, and then lower the functional levels. As determined in part 1 of the series, we should be able to lower the functional levels to Windows Server 2008 R2 but not Windows Server 2008.


Lab Configuration

The Forest Functional Level is set to Windows Server 2008 and the Domain Functional Level of the root domain (root.contoso.com) and a child domain (child1.root.contoso.com) is also set to Windows Server 2008. The remaining child domain (child2.root.contoso.com) is set to Windows Server 2016.


Forest and domain functional levels viewed using Windows PowerShell


Raising the Domain Functional Level (DFL) and Forest Functional Level (FFL)

We’ve determined that the FFL cannot be lower than the DFL of any domain in the forest, which means the DFL of the root and child domain needs to be raised to Windows Server 2016 first. Let’s see what happens when we attempt to raise the FFL to Windows Server 2016 first.

In Windows PowerShell I run the following command:

Set-ADForestMode -Identity root.contoso.com -ForestMode Windows2016Forest -Server root.contoso.com -Confirm:$false



The result is no confirmation or error message which we already know means that the command completed successfully. How is this possible when we haven’t raised the DFL of all the child domains? Let’s confirm this using Windows PowerShell:



The results in PowerShell indicate that, while raising the FFL to Windows Server 2016, the DFL of all the domains was automatically raised to Windows Server 2016 as well.

Be careful not to raise the FFL by mistake when planning on changing the DFL of a single domain. This may result in unknowingly raising the DFL of all your domains in the forest.

I should also note that this will fail if all the Domain Controllers are not on the required Operating System version. In the following example I attempted the same action, but a Windows Server 2012 R2 Domain Controller still existed in a child domain. I received an error message:

Set-ADForestMode : The functional level of the domain (or forest) cannot be raised to the requested value, because there exist one or more domain controllers in the domain (or forest) that are at a lower incompatible functional level.



The FFL is raised to Windows Server 2016 and now we can enable the Active Directory Recycle Bin to determine the outcome of lowering the functional levels with the recycle bin enabled.


Enable the Active Directory Recycle Bin

Windows PowerShell can be used to verify if the Recycle Bin is enabled or not.

Get-ADOptionalFeature -Filter 'name -like "Recycle Bin Feature"'



We can see from the PowerShell results that the required FFL to enable the Recycle Bin is Windows Server 2008 R2. The EnabledScopes attribute indicates whether the Recycle Bin is enabled or not. The current value is blank which means that the Recycle Bin is not enabled in this forest yet.

The following command is used in PowerShell to enable the Recycle Bin. Replace the -Target value with the forest root domain Fully Qualified Domain Name (FQDN).

Enable-ADOptionalFeature 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target root.contoso.com



You will be prompted to confirm your actions. Take note of the warning that this action is not reversible. The Recycle Bin cannot be disabled after it is enabled. No confirmation message is provided to confirm that the Recycle Bin was successfully enabled. Again, no error messages are good.

This should also prevent lowering the Forest Functional Level to Windows Server 2008, because the recycle bin was only introduced with Windows Server 2008 R2.

I will run the Get-ADOptionalFeature command again to verify the Recycle Bin status.



The EnabledScopes attribute is no longer blank. This is the indicator that the Recycle Bin is enabled in the forest.


Downgrading the functional levels: Active Directory Recycle Bin enabled

The FFL will now be lowered. The first attempt was to set the FFL to Windows Server 2008, which failed as shown in the screenshot. We then attempted lowering the functional level to Windows Server 2008 R2, which resulted in no error or success message, indicating the FFL was lowered successfully.


Set-ADForestMode : The functional level of the domain (or forest) cannot be lowered to the requested value
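
For clarity, here is a sketch of the two attempts, using the same syntax as in part 2 of the series:

# Fails while the Recycle Bin is enabled:
Set-ADForestMode -Identity root.contoso.com -ForestMode Windows2008Forest -Server root.contoso.com -Confirm:$false
# Succeeds - Windows Server 2008 R2 is the lowest level that supports the Recycle Bin:
Set-ADForestMode -Identity root.contoso.com -ForestMode Windows2008R2Forest -Server root.contoso.com -Confirm:$false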


Verify that the FFL is lowered to Windows Server 2008 R2



The DFL of the child domain (child2.root.contoso.com) will now be lowered.



The first attempt was to set the DFL to Windows Server 2008 which failed as shown in the screenshot. The second attempt set the DFL to Windows Server 2008 R2.
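
The commands follow the same pattern (a sketch based on the lab domain names used earlier):

# Fails while the Recycle Bin is enabled:
Set-ADDomainMode -Identity child2.root.contoso.com -DomainMode Windows2008Domain -Server child2.root.contoso.com -Confirm:$false
# Succeeds:
Set-ADDomainMode -Identity child2.root.contoso.com -DomainMode Windows2008R2Domain -Server child2.root.contoso.com -Confirm:$false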

Verify the Domain Functional Level. The DFL of the child domain was successfully lowered to Windows Server 2008 R2.



Conclusion

I’ve successfully demonstrated that the Active Directory functional levels can be lowered from Windows Server 2016 functional level, to Windows Server 2008/2008 R2 functional levels, depending on whether the Active Directory Recycle Bin is enabled or not.

The rollback can be completed from any functional level since Windows Server 2008; just keep the Active Directory Recycle Bin in mind when raising the functional level from Windows Server 2008.

If you are planning on upgrading your Active Directory infrastructure, whether this is from Windows Server 2008/2008 R2 or Windows Server 2012/2012 R2, you should now be able to complete this with more confidence. Raising the Active Directory functional levels should be an easier step, knowing you have the option of rolling back to the previous functional level should you experience any unexpected issues.


Series

Field Notes: Azure Active Directory – Group Filtering Gotchas


This is a continuation of a series on Azure AD Connect. In the previous blog post, we looked at filtering options that can be used to control which objects are synchronized from on-premises directories to Azure AD – domain, OU and group filtering. I would like to take a closer look at group filtering here, and discuss some gotchas that I briefly touched on in previous posts of this series. If you have not seen the previous blog post on object filtering using Azure AD Connect, I suggest you start here. Other related (previous) posts are provided in the summary section below.

Security Group Filtering

The filtering on groups feature allows you to synchronize only a small subset of objects for a pilot. Group-based filtering can be configured the first time Azure AD Connect is installed by using the custom installation option. Details are available in this document, which also highlights the following important points:

  • It is only supported to configure this feature by using the installation wizard
  • When you disable group-based filtering, it cannot be enabled again
  • When using OU-based filtering in conjunction with group-based filtering, the OU where the group and its members are located must be included (selected for synchronization)
  • Nested group membership is not resolved – objects to synchronize must be direct members of the group used for filtering

Let’s go through some cases to demonstrate:

(1) nested groups, and

(2) what happens when the group used for filtering is moved to a different OU.


The case of the nested group

In the previous post on filtering, we only had two user objects in the security group that we use for filtering – First Rockstar and Third Rockstar.

The name of the group is IDRS Sync in this example

I have just added a group named Nested Group to the sync group in order to demonstrate the requirement for direct membership. Members of the IDRS Sync group are now:

  • First Rockstar (user)
  • Third Rockstar (user)
  • Nested Group (security group)

Nested Group is a security group containing one member – Fourth Rockstar as shown above.

With this in place, a quick look at the Troubleshooting Task that I introduced here reveals that the object (Fourth Rockstar):

  • is found in the AD Connector Space
  • is found in the Metaverse
  • is not found in the Azure AD Connector space – no export

Fourth Rockstar is in the OU selected for synchronization, but the account is filtered out because it is not a direct member of the sync group.

In the Synchronization Service Manager, we can see that only the group was exported, but not the account that was added to the group itself. This confirms what the troubleshooting task picked up.


To get Fourth Rockstar synchronized, we would have to add the account as a direct member of the IDRS Sync group.
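
This can be done in Active Directory Users and Computers or with a one-liner (a sketch; the sAMAccountName is an assumption for this lab):

Add-ADGroupMember -Identity 'IDRS Sync' -Members 'fourth.rockstar'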


The case of the changed distinguished name

Let us now cover a scenario where the group used for filtering is moved to an OU that is not selected for synchronization. In this example, I moved the IDRS Sync group from the Sync OU to the VIP OU.

The distinguished name changed from CN=IDRS Sync,OU=Sync,DC=idrockstar,DC=co,DC=za to CN=IDRS Sync,OU=VIP,DC=idrockstar,DC=co,DC=za

If you look at the Synchronization Service Manager, you will notice that the group is removed from the on-premises directory connector and the metaverse. (The VIP OU is not selected for synchronization.)

It may appear that First Rockstar was not removed at first. It is still available in Azure AD at this stage. Remember that this was the only account that was in the OU selected for synchronization AND in the IDRS Sync group (previous blog post).

A synchronization cycle later, the object is deleted.

A quick refresh now shows that the account (First Rockstar) was deleted as a result of moving the IDRS Sync group to an OU that is not in scope of synchronization. This may not be a desired outcome!

Notice the error that clearly states what the problem is when we look at the filter users and devices page in Azure AD Connect. The distinguished name of the group has changed.

For my tenant, I am going with the synchronize all users and devices option to make life easy and align with the recommendation against use of this feature for production deployments.

Summary

I just went through two of the scenarios covering challenges that could be faced when using group filtering. Please note that this feature is currently only intended to support a pilot deployment and should not be used in production.

References

Related posts

Till next time…


Test Azure resource name availability


Background

Most of the services in Azure, such as Storage Accounts, Key Vaults or App Service websites, must have globally unique names, where the fully qualified domain name (FQDN) for the service uses the name you selected plus the suffix for the specific service. For example, for Key Vaults it’s vault.azure.net and for Web Apps it’s azurewebsites.net.

The Azure portal can help you determine name availability during service creation, but there’s no built-in PowerShell cmdlet or Azure CLI command to do so for ARM services (in the old ASM days, we had the Test-AzureName PowerShell cmdlet we could use to check a classic cloud service’s name availability).

For scenarios where you have an automated deployment and don’t want the deployment failing because the name is already taken, you’d want a simple command that returns a true/false boolean value indicating whether the name is available.

 

Proposed solution

Several of the Azure providers expose a checkNameAvailability action that you can use to test a name’s availability. Each provider requires and accepts a different set of parameters, where the most important ones are obviously the name you want to check and the service type.

To get a list of the providers that support the checkNameAvailability action, you can use the following PowerShell command:

Get-AzResourceProvider | 
    Where-Object { $_.ResourceTypes.ResourceTypeName -eq 'checkNameAvailability' } | 
        Select-Object ProviderNamespace

That outputs the following:

ProviderNamespace      
-----------------      
Microsoft.Sql          
Microsoft.Web          
Microsoft.DBforMySQL   
Microsoft.Media        
Microsoft.Cdn          
Microsoft.ApiManagement
Microsoft.BotService   
Microsoft.Storage      
Microsoft.KeyVault     
Microsoft.Management   
microsoft.support 

Drilling down into one of the providers, we can see the list of API versions that support the action, and we can build the URI string to invoke:

Get-AzResourceProvider -ProviderNamespace Microsoft.Web |
    Where-Object { $_.ResourceTypes.ResourceTypeName -eq 'checkNameAvailability' } |
        Select-Object -ExpandProperty ResourceTypes | 
            Select-Object -ExpandProperty ApiVersions

Invoking Azure APIs using PowerShell is simple enough; you just need the bearer token, the URI of the API action and the parameters needed for the action. For some of the APIs we also need a subscription ID to work with.

The important and main function is Test-AzNameAvailability:

function Test-AzNameAvailability {
    param(
        [Parameter(Mandatory = $true)] [string] $AuthorizationToken,
        [Parameter(Mandatory = $true)] [string] $SubscriptionId,
        [Parameter(Mandatory = $true)] [string] $Name,
        [Parameter(Mandatory = $true)] [ValidateSet(
            'ApiManagement', 'KeyVault', 'ManagementGroup', 'Sql', 'StorageAccount', 'WebApp')]
        $ServiceType
    )

    $uriByServiceType = @{
        ApiManagement   = 'https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.ApiManagement/checkNameAvailability?api-version=2019-01-01'
        KeyVault        = 'https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.KeyVault/checkNameAvailability?api-version=2019-09-01'
        ManagementGroup = 'https://management.azure.com/providers/Microsoft.Management/checkNameAvailability?api-version=2018-03-01-preview'
        Sql             = 'https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Sql/checkNameAvailability?api-version=2018-06-01-preview'
        StorageAccount  = 'https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Storage/checkNameAvailability?api-version=2019-06-01'
        WebApp          = 'https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.Web/checkNameAvailability?api-version=2019-08-01'
    }

    $typeByServiceType = @{
        ApiManagement   = 'Microsoft.ApiManagement/service'
        KeyVault        = 'Microsoft.KeyVault/vaults'
        ManagementGroup = '/providers/Microsoft.Management/managementGroups'
        Sql             = 'Microsoft.Sql/servers'
        StorageAccount  = 'Microsoft.Storage/storageAccounts'
        WebApp          = 'Microsoft.Web/sites'
    }

    $uri = $uriByServiceType[$ServiceType] -replace ([regex]::Escape('{subscriptionId}')), $SubscriptionId
    $body = '"name": "{0}", "type": "{1}"' -f $Name, $typeByServiceType[$ServiceType]

    $response = (Invoke-WebRequest -Uri $uri -Method Post -Body "{$body}" -ContentType "application/json" -Headers @{Authorization = $AuthorizationToken }).content
    $response | ConvertFrom-Json |
        Select-Object @{N = 'Name'; E = { $Name } }, @{N = 'Type'; E = { $ServiceType } }, @{N = 'Available'; E = { $_ | Select-Object -ExpandProperty *available } }, Reason, Message
}

To use it, you first have to get a bearer token, for either the currently logged-on user or for a service principal, using one of the two functions Get-AccesTokenFromServicePrincipal or Get-AccesTokenFromCurrentUser:

function Get-AccesTokenFromServicePrincipal {
    param(
        [string] $TenantID,
        [string] $ClientID,
        [string] $ClientSecret
    )

    $TokenEndpoint = 'https://login.windows.net/{0}/oauth2/token' -f $TenantID
    $ARMResource = 'https://management.core.windows.net/'

    $Body = @{
        'resource'      = $ARMResource
        'client_id'     = $ClientID
        'grant_type'    = 'client_credentials'
        'client_secret' = $ClientSecret
    }
    $params = @{
        ContentType = 'application/x-www-form-urlencoded'
        Headers     = @{'accept' = 'application/json' }
        Body        = $Body
        Method      = 'Post'
        URI         = $TokenEndpoint
    }
    $token = Invoke-RestMethod @params
    ('Bearer ' + ($token.access_token).ToString())
}


function Get-AccesTokenFromCurrentUser {
    $azContext = Get-AzContext
    $azProfile = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile
    $profileClient = New-Object -TypeName Microsoft.Azure.Commands.ResourceManager.Common.RMProfileClient -ArgumentList $azProfile
    $token = $profileClient.AcquireAccessToken($azContext.Subscription.TenantId)
    ('Bearer ' + $token.AccessToken)
}

To get the current (already logged in) user’s bearer token, use:

$AuthorizationToken = Get-AccesTokenFromCurrentUser

Or to get a Service Principal (App Registration) bearer token, use:

$AuthorizationToken = Get-AccesTokenFromServicePrincipal `
    -TenantID '<Directory Tenant ID>' `
    -ClientID '<Application Client ID>' `
    -ClientSecret '<Application Client Secret>'

And then, to test for the name availability for some of the services you can use:

Test-AzNameAvailability -Name martin -ServiceType ApiManagement -AuthorizationToken $AuthorizationToken -SubscriptionId $subscriptionId 
Test-AzNameAvailability -Name kv -ServiceType KeyVault -AuthorizationToken $AuthorizationToken -SubscriptionId $subscriptionId 
Test-AzNameAvailability -Name root -ServiceType ManagementGroup -AuthorizationToken $AuthorizationToken -SubscriptionId $subscriptionId 
Test-AzNameAvailability -Name sqlsrv1 -ServiceType Sql -AuthorizationToken $AuthorizationToken -SubscriptionId $subscriptionId 
Test-AzNameAvailability -Name storage -ServiceType StorageAccount -AuthorizationToken $AuthorizationToken -SubscriptionId $subscriptionId 
Test-AzNameAvailability -Name www -ServiceType WebApp -AuthorizationToken $AuthorizationToken -SubscriptionId $subscriptionId 

This outputs:

Name      : martin
Type      : ApiManagement
Available : False
reason    : AlreadyExists
message   : martin is already in use. Please select a different name.

Name      : kv
Type      : KeyVault
Available : False
reason    : Invalid
message   : Vault name must be between 3-24 alphanumeric characters. The name must begin with a letter, end with a letter or digit, and not contain consecutive hyphens.

Name      : root
Type      : ManagementGroup
Available : False
reason    : AlreadyExists
message   : The group with the specified name already exists

Name      : martin
Type      : Sql
Available : False
reason    : AlreadyExists
message   : Specified server name is already used.

Name      : storage
Type      : StorageAccount
Available : False
reason    : AlreadyExists
message   : The storage account named storage is already taken.

Name      : www
Type      : WebApp
Available : False
reason    : AlreadyExists
message   : Hostname 'www' already exists. Please select a different name.

So in a full script, you could use something like:

$params = @{
    Name               = 'myCoolWebSite'
    ServiceType        = 'WebApp'
    AuthorizationToken = Get-AccesTokenFromCurrentUser
    SubscriptionId     = $subscriptionId
}
if((Test-AzNameAvailability @params).Available) {
    # Continue with the deployment
}
 

Closing notes

The checkNameAvailability API is available in several Azure providers, but because of time constraints I implemented the test only for a few of them (ApiManagement, KeyVault, ManagementGroup, Sql, StorageAccount and WebApp), so you are more than welcome to improve it.

The complete code with the functions and examples is published under my github Azure repository as: https://github.com/martin77s/Azure/blob/master/PS/Test-AzNameAvailability.ps1

HTH,
Martin.

Communicate with Confidence – Taking the fear out of public speaking


Your technical skills are honed to a fine edge. You’re a ninja when it comes to Active Directory, SQL, or Exchange. Server crash? You got this! PowerShell scripting? It’s your superpower! Speaking in front of an audience? Handling an upset customer? Answering the unanticipated question? Your palms sweat, your stomach hurts, your head spins. “Someone, anyone, please help!” is all you can think.

We have all heard people are more afraid of public speaking than they are of death. Have you had to speak with a customer or in front of an audience? While a certain level of anxiety is normal, you can learn how to master the art of communication whether it’s one-on-one, in meetings, or in front of an audience. Read on to learn how to teach your butterflies to fly in formation!


Yes, you can!

You may be saying to yourself, no way. Not me. Not possible. I truly would rather die than give a presentation or talk to someone I don’t know. Let me tell you a short story to encourage you that it is possible to overcome your fears.

Many years ago, I met a man, we’ll call him Jim. Jim was so terrified of talking to people he could not even say hello when he was introduced to someone. At his wife’s urging, they joined a group called Toastmasters International™. Jim’s first goal was to stand up in front of the group for 30 seconds. Easy you say. Not for Jim. His anxiety was so high, it took him several months just to hit the 30 second mark.

Jim then set a second goal. He wanted to be able to say “hello” to his audience. Again, it took several months before Jim was able to confidently utter the words, “Good evening fellow Toastmasters.”

Fast forward several years. I was working a local event for the Chamber of Commerce. And who did I see at the event? Jim! Not only was Jim at the event, he came up to me and said hello. He let me know that because he learned to overcome his fear of public speaking, he now had his own business selling eye glass frames and was doing well with it!

If Jim was able to overcome his fears, I know you can too! In this series of blogs, I am going to teach you the basics of public speaking, provide resources to assist you, and help you build the confidence you desire to take control of the butterflies!

Interpersonal Communication

Before we dive into public speaking, AKA delivering MIP to an audience, let’s look at interpersonal communication.

Interpersonal communication is simply a conversation between two people. It can be positive or negative. Positive conversations might include talking with a co-worker about your weekend, a casual conversation with a customer, or meeting someone new. While these can be stressful situations, it is typically the difficult conversations that cause us high levels of anxiety. These conversations might include disputing a charge on a bill, a discussion with your auto mechanic about what is really wrong with your car versus what he tells you is wrong (and how much it will cost to repair it!), being interviewed for a new position, or dealing with an unhappy customer.

Each of these conversations can be challenging and stressful. However, if you have the necessary skills, handling uncomfortable conversations will no longer cause you to sweat. And, once you master these, you will be able to master the art of public speaking, aka MIP delivery, with ease!

Toastmasters™ Levels of Conversation

Every relationship starts with conversation. Toastmasters International™ (TI) defines four levels of conversation. Level One is small talk – talking about the weather, maybe a concert or play, current events, etc. At this level, the conversation remains neutral and does not typically delve into personal topics or opinions.

Level Two is the fact-finding and disclosure level. Here we are starting to build enough trust to disclose a few personal facts about ourselves. We may discuss our occupations, whether we are married or single, our kids, or our hobbies. At this level, we are looking for common ground to see if we wish to continue to invest in a relationship with the other person.

Level Three raises the stakes. We are feeling comfortable and positive with the other person and our conversations. This may occur at the initial meeting or at later subsequent meetings. You begin to express personal opinions on different topics and may discuss different viewpoints. You are opening yourself up to the other person.

Finally, you reach Level Four. The relationship is deepening and there is a strong comfort level with this person. You share similar views and find you have enough in common to want to continue the relationship. Several encounters are usually needed to reach this level. Topics are now of a more personal nature. You may disclose an issue you are having with your spouse, kids, or at work and seek advice, discuss concerns you both have, or other topics you would not disclose or discuss with a stranger.

Not all conversations/relationships will make it to Level Four. Nor should they. In the business environment, you will most likely only speak with your customer at Level One or Level Two. If you move through the levels too quickly, you could overwhelm the other person making them shut down to whatever it is you are sharing or the message you wish to communicate. Your customer probably doesn’t care about the argument you had with your child, or your neighbor who keeps letting their dog destroy your yard. Getting too personal in the workplace can diminish your professionalism and detract from your credibility with your customer.

What we’ve learned

In this post, we learned you are not alone in your fear of public speaking. We also learned you can overcome this fear! We learned about Toastmasters’ four levels of interpersonal communication. This allows us to tailor our conversations to the environment in which we find ourselves, and gives us guidelines on how fast to move when we want to build a relationship or rapport with another person. It also shows us that we should not strive to engage at all levels with everyone with whom we find ourselves in conversation.

Next time…

In my next post, I will introduce you to tips and tricks for dealing with those difficult conversations we all must have at one time or another including “does it really help to picture the audience in their underwear?”

Until then,

– pjz –

Azure – Changing Directories in other Portals like the Device Management Portal


The Issue

If you have guest access to multiple directories, then switching is fairly easy. You simply click on your username, click switch directory, and then choose your directory. Below is a simple example. But what happens when you try to switch to these directories in other portals like the Device Management portal (devicemanagement.portal.azure.com)? In my experience it reverted me back to my default directory with no option to change directories.

Example 1 : Switching Directories in Azure Portal

Click Switch Directory
Select the Directory
Easy right?

Example 2 : Switching Directories in Device Management Portal

Navigate to Device Management portal
As you can see, the directory has been changed to my default, which is not what I wanted.

The Resolution

As you can see, it seems to revert to the default directory, but I need access to my other directory.

  1. Get the domain of the directory you would like to navigate to.
  2. Add this directory name to the URL as per the example below.
  3. As you can see, you are now logged in with the correct directory.
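
As an illustration, assuming the target directory uses the domain contoso.onmicrosoft.com, the tenant-specific URL would look something like this:

https://devicemanagement.portal.azure.com/contoso.onmicrosoft.com

The same pattern works for the main Azure portal, e.g. https://portal.azure.com/contoso.onmicrosoft.com.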

As always, I hope this has been informative and feel free to correct me in any steps.

Offline installation of OpenSSH Server on Windows Server 2019


Windows Server 2019 has a lot of additional capabilities that can be added. Those features are easily added with the Add-WindowsCapability PowerShell cmdlet. When adding a capability, it pulls from either the Internet or a WSUS server. Sometimes a capability needs to be added in an offline environment where there is no Internet access and the WSUS server is non-existent or does not have the package. In that case the Windows Server 2019 Features On Demand (FOD) ISO is needed, and the -Source parameter can then be used to add the capability. The Features On Demand ISO can be downloaded from MSDN or my.visualstudio.com.

While there is a Windows Server 2019 Features On Demand ISO, it does not contain all the capabilities, such as OpenSSH Server. That capability is on the Windows 10 Features On Demand ISO. However, the Windows 10 Features On Demand ISO cannot be used on a Windows Server 2019 OS. There is a little workaround though.

For this workaround you will need both the Windows Server 2019 Features On Demand disc and the Windows 10 Features On Demand disc. Once you have both discs/ISOs downloaded, follow these simple steps.

  1. Extract the entire Windows Server 2019 Features On Demand ISO to a local directory on the server (e.g. C:\FOD).
  2. Open up the Windows 10 Features On Demand ISO and copy the following cab files to the directory with the extracted Windows Server 2019 Features On Demand files.
    • OpenSSH-Client-Package~31bf3856ad364e35~amd64~~.cab
    • OpenSSH-Server-Package~31bf3856ad364e35~amd64~~.cab
  3. Run Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0 -Source C:\FOD

You will then see the following output:
Add-WindowsCapability -Name OpenSSH.Server~~~~0.0.1.0 -Online -Source c:\FOD

Path :
Online : True
RestartNeeded : False

Now that OpenSSH Server is installed on the server in the offline environment, you will be able to see the OpenSSH SSH Server service.

Field Notes: Azure Active Directory – Attribute-based Filtering


This is a continuation of a series on Azure AD Connect. I recently covered the domain/OU and group filtering options that are available in Azure AD Connect to help control which objects are synchronized to Azure AD. I also took a closer look at group filtering, which is not recommended for use in production. Another filtering mechanism I would like to cover before moving on to another topic is attribute-based filtering. This is, however, not something we achieve through the Azure AD Connect wizard that we have been using throughout the series, but through the Synchronization Rules Editor. A full list of related blog posts is provided in the summary section below.

Attribute-based filtering

We now know that filtering using a security group is not recommended, as pointed out in the previous blog post. What other options do we have if we wanted, say, to filter out (exclude) some of the user objects residing in an OU selected for synchronization? Attribute-based filtering! The Azure AD Connect sync: Configure filtering document has finer details on attribute-based filtering. I’ll just go through an example to see how this feature could be leveraged to filter objects based on attribute values.

Environment setup

To simplify demonstration of this feature, I focus on only one of the domains I have in my test AD forest – idrockstar.co.za. The VIP OU in that domain is already selected for synchronization as shown below.

I created two user accounts in the VIP OU:

  • First VIP – should be synchronized to Azure AD
  • Second VIP – should NOT be synchronized to Azure AD (cloud filtered)

I further updated Second VIP‘s extensionAttribute15 attribute to have a value of NoSync. The idea is to apply negative filtering based on this attribute, but more on this is covered in the next section.
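If you prefer to stamp the attribute from PowerShell rather than through the GUI, a minimal sketch follows. The sAMAccountName secondvip is an assumption, so substitute the identity of your own test account:

# Set extensionAttribute15 to NoSync on the account that should be cloud filtered
Set-ADUser -Identity 'secondvip' -Replace @{ extensionAttribute15 = 'NoSync' }

# Confirm the value was written
Get-ADUser -Identity 'secondvip' -Properties extensionAttribute15 |
    Select-Object Name, extensionAttribute15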


Applying attribute-based filtering

The tool for this job is the Synchronization Rules Editor. This tool can be used to view, edit and/or create new synchronization rules that control attribute flows.

Once the tool is open, new rules can be added by clicking the add new rule button. Note that the direction (inbound) was already selected by default. I highlight this as there is also an option for outbound filtering, which I don’t cover in this post. I click the (add new rule) button to start the wizard.

Clicking the add new rule button opens up a create new inbound synchronization rule wizard that is needed to apply the negative filter (do not synchronize objects that meet the criteria). I provide the following information on the description page and click next to proceed:

  • Name: this should describe the purpose of the rule (visible in the default view of Synchronization Rules Editor)
  • Description: more details on what the rule aims to achieve (optionally used to provide more information)
  • Connected System: this is the on-premises directory – idrockstar.co.za in my case
  • Connected System Object Type: target object type is user in this example
  • Metaverse Object Type: user objects are presented as person type in the metaverse
  • Link Type: join is selected by default – I leave this unchanged
  • Precedence: defines which rule wins in case of a conflict when more than one rule contributes to the same attribute. The rule with the lower precedence number (higher priority) wins.

The rest of the fields are not necessary for this exercise.

On the scoping filter page, I click add group, followed by the add clause button, and specify the value of NoSync for extensionAttribute15.

I click next, and next again to skip the join rules as they are not required for our task. On the transformations page, I click the add transformation button and complete the form as follows:

  • FlowType – Constant
  • Target Attribute – cloudFiltered
  • Source – True

I leave everything else default.

To finish off, I click add at the bottom of the page (not shown in the screenshot). A warning message stating that a full (initial) synchronization will be run on the directory during the next synchronization cycle is displayed. Be prepared for this when you apply this feature in your environment. I click OK to dismiss the dialog box.
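If you would rather trigger that full cycle yourself than wait for the scheduler, it can be started from PowerShell on the Azure AD Connect server. A minimal sketch:

# Kick off a full (initial) synchronization cycle so the new rule is applied everywhere
Start-ADSyncSyncCycle -PolicyType Initial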

Looking back at the main Synchronization Rules Editor window, we can confirm that the new rule was added.

The effect of attribute-based filtering

Looking at the Troubleshooter that we covered here, we see that:

  • the Second VIP user object is found in the AD Connector Space
  • the Second VIP user object is found in the Metaverse, but
  • the Second VIP user object is not found in the Azure AD Connector space

The Connector Space Object Properties window in the Azure AD Connect Synchronization Service shows that Second VIP has been deleted (it had initially been exported).

The Metaverse Object Properties window confirms that the cloudFiltered attribute was indeed set to the value of true by the rule we created. (The connectors tab would also reveal that the object is only present in the on-prem AD connector and not in the Azure AD connector.)

Finally, looking at Azure AD confirms that Second VIP was filtered out and is not available in the Azure AD user list. Only First VIP is showing.

Summary

This was the third blog post on filtering, covering attribute-based filtering in Azure AD Connect. This feature provides a way to filter objects based on attribute values. Below is a list of references that provide a lot more detail if required. I have also provided a list of all previous Azure AD Connect-related blog posts below.

References

Related posts

Till next time…

Hyper-V On-The-Go or “To Boldly Lab Where No-One Has Lab’d Before!”


This is an intro to a multi-part series on building portable labs.

Boldly Going

One thing I have found invaluable throughout my career has been the ability to maintain a decent lab environment, something that has been an ongoing struggle over the years. Early on it was all about the hardware. One of my earliest labs grew to 15+ Frankenstein mini-tower systems I had cobbled together into a tool for learning to work with Novell/Windows NT, Banyan Vines, TCP/IP (replacing IPX/SPX at the time), plus a whole bunch of old-timer stuff I won’t bore you with here.

Over time, technology changed and my lab with it. KVMs let me remove several monitors and reduced hot-swapping problems. Combined server roles like Small Business Server (SBS) let me reduce the number of lab-production machines to basically one box.
Virtualization with VMware got me down to 4 boxes total, and when Hyper-V finally arrived with WS-2008, I was able to run my lab with 2 Microsoft (MS) hypervisor boxes running beefed-up RAM and HDDs.
This lasted a few years until the hardware finally succumbed to its age and died on me. I decided to try rack-mount systems (I went with used because I am WAY too cheap for new) and am still engineering that particular solution. If my free time allows for it, I may try documenting that in another series.

Setting a Course

What this series will focus on is the portability of lab environments and how to work with them. While travelling for work, I’ve observed this as an issue for myself and many of the people I work with.
Like many in the industry, I spend a lot of time on the road and do not always have access to my permanent lab environment. I needed a local solution I could easily keep with me that was self-contained, quickly configurable and easily shared with team members in a pinch. I also wanted lightweight, since my bag was heavy enough already. 😎

Basic Hardware Setup

I write this with the understanding that our work PCs (in my case a Surface Book Pro) are fixed on RAM and drive space, and the options to change them are limited at best. As long as your RAM is at least 16GB, the right external drive solution will fix the drive-space limits (usually 256/512GB) inflicted on our machines.

NOTE: 8GB of RAM would work, but you would be limited to one or two running VMs at most.

After much research and testing various drives on my machine I settled on the SanDisk 2TB Extreme Portable External SSD (USB-C, USB 3.1)

With a capacity of 2 terabytes, this SSD has more than enough space to hold my lab VMs as well as any ISOs I may need to build a new lab. It comes in a rugged case and is amazingly light. It has a USB-C connection and the cable comes with a USB 3.1 adapter. I found its performance on both ports to be exemplary. I have had many VMs running concurrently and the combined SSD/USB 3 has never been an issue on my machine.
My main limiting factor has always been the RAM. I’ve used this drive on my 32GB laptop with no discernible performance degradation.

For secondary storage, I added an SD card (micro in this case) to each of my machines to house lab configs, scripts, extra ISOs or other files I might need in a pinch if I happened to be caught without my LabDrive.
I went with the SanDisk Ultra 128GB microSDHC UHS-I card (it came with an SD adapter):

It was the largest available at the time and is designed for photography, so it is one of the faster SD cards out there (98MB/s).

Setting the Environment (5 Years?!?)

I hope you find the hardware recommendations useful in your lab endeavours. I do NOT recommend running an external lab drive on any USB port slower than USB 3; the performance hit is too crippling. A USB 3 HDD can be used, but for better, more consistent performance I would stick with SSDs.

In an upcoming blog, I will cover tips for setting up the environment and tweaking configurations. I will also cover enabling Hyper-V and some changes to defaults you need to be wary of when working with a lab drive. I will also demonstrate PowerShell vs GUI configurations as automation is the key to rapidly deploying a functioning lab.
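As a small preview of those upcoming posts, here is a minimal PowerShell sketch of the two pieces I will expand on: enabling Hyper-V and pointing the default VM and VHD paths at the external lab drive. The D:\Lab paths are assumptions for where the LabDrive is mounted, so adjust them to your own drive letter:

# Enable the Hyper-V role and management tools (requires a restart)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# After the restart, point new VMs and virtual disks at the external SSD
Set-VMHost -VirtualMachinePath 'D:\Lab\VMs' -VirtualHardDiskPath 'D:\Lab\VHDs'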

There is an excellent tutorial here by Jaromir Kaspar that goes into rapidly deploying labs on Windows Server 2016 and Windows 10. I highly recommend it, especially if you’re craving better automation.

Azure Files – What is it? Why Would I want it?


This is the first part of a multi-part post about Azure Files. In this first post, we will cover, at a high level, what Azure Files is before we dive into further information about configuration and finer details.

What are Azure Files?

Azure Files provides fully managed file shares in the cloud, hosted in an Azure storage account.

Azure file shares can be mounted directly by Windows, macOS and Linux clients with SMBv3 support.

Why would I want Azure Files?

Azure Files can be used to supplement or even replace traditional on-premises file servers. 

It can be used in conjunction with Azure File Sync to keep the share in sync with Windows Servers either on-premises or in the cloud. This allows you to move file servers between on-premises and the cloud without resorting to the traditional “Robocopy” method that so many file server migrations rely on.

The benefits of Azure File Sync also make it easy to “lift and shift” applications to the cloud that expect a file share for storing application data.

Great, I want to start utilizing Azure Files but I have a large amount of data

Azure Files supports individual file shares of up to 100 TB. From a performance perspective, Azure Files offers two tiers:

Premium

  • 100 TB limit
  • Up to 100,000 IOPS
  • Up to 5 GiB/s transfer

Standard

  • 100 TB limit
  • Up to 10,000 IOPS
  • Up to 300 MiB/s 
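To give a feel for how a share is consumed once it exists, here is a minimal sketch of mounting one over SMB from Windows PowerShell. The storage account name (contosofiles), share name (labshare) and the key value are placeholders for illustration:

# Placeholders - substitute your own storage account, share name and key
$storageAccount = 'contosofiles'
$shareName      = 'labshare'
$storageKey     = '<storage-account-key>'

# Store the credential so the mapping survives reboots
cmd.exe /c "cmdkey /add:$storageAccount.file.core.windows.net /user:AZURE\$storageAccount /pass:$storageKey"

# Map the Azure file share to drive Z:
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\$storageAccount.file.core.windows.net\$shareName" -Persist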

In the next post, we will walk through setting up Azure Files and start going into configuration details!
SSRS – There are misconfigured data sources


Background

Occasionally we may receive the following alert:

SSRS 2012: There are misconfigured data sources

Gentlemen, start your hacking

So let’s rip open the MP and see what’s going on. After working through it recursively from bottom to top, I finally understood that this is the DataSource being referenced in the monitor/probe:

<ProbeActionModuleType ID="Microsoft.SQLServer.2012.ReportingServices.ProbeAction.TSQLCountersReportingServiceCustom" Accessibility="Internal" Batching="false" PassThrough="false">

And we can see these references:

<Assembly>SQLRS!Microsoft.SQLServer.2012.ReportingServices.Deployment.Assembly</Assembly>
<Type>Microsoft.SQLServer2012.ReportingServices.Module.Deployment.AllInstancesAreDiscoveredMonitor</Type>

What in the world?!

If we open ProcMon and/or search the file system we will see the following file referenced:

  • SQL 2012:
    • “C:\Program Files\Microsoft Monitoring Agent\Agent\Health Service State\Cabinets\Microsoft.SQLServer.2012.ReportingServices.Monitoring.357.cab”
  • SQL 2014:
    • “C:\Program Files\Microsoft Monitoring Agent\Agent\Health Service State\Cabinets\Microsoft.SQLServer.2014.ReportingServices.Monitoring.283.cab”
  • SQL 2016:
    • “C:\Program Files\Microsoft Monitoring Agent\Agent\Health Service State\Cabinets\Microsoft.SQLServer.2016.ReportingServices.Monitoring.530.cab”
  • Etc.

We can extract the contents of this file to find a few files within:

Now let’s extract all the files and copy them to the desktop. Then rename the manifest file to manifest.txt and open it with notepad.

We will see the following content:

{d032ca24-9972-b9da-d045-31cd2519c56c}=MP.Microsoft.SQLServer.2012.ReportingServices.Monitoring

{b1579809-8203-b518-8b4f-d9901688bfd7}=RES.Microsoft.SQLServer.2012.ReportingServices.Module.Monitoring.dll.{b1579809-8203-b518-8b4f-d9901688bfd7}

If we compare the file names above ("{b157...}") to the file names in the text file, we will see a match. So let’s just manually rename the files ourselves. We only need to do this for the first file.

But wait, there’s more!

We need to repeat the above process in order to extract some necessary dependencies. So, let’s also grab and extract the following files, using the same process as above:

  • SQL 2012:
    • “C:\Program Files\Microsoft Monitoring Agent\Agent\Health Service State\Cabinets\Microsoft.SQLServer.2012.ReportingServices.Discovery.96.cab”
  • SQL 2014:
    • “C:\Program Files\Microsoft Monitoring Agent\Agent\Health Service State\Cabinets\Microsoft.SQLServer.2014.ReportingServices.Discovery.523.cab”
  • SQL 2016:
    • “C:\Program Files\Microsoft Monitoring Agent\Agent\Health Service State\Cabinets\Microsoft.SQLServer.2016.ReportingServices.Discovery.401.cab”
  • Etc.

And now what? Champagne?

Not just yet, but sit tight. Let’s open the DLL with JustDecompile (or your favorite decompiler) and see what’s inside.

When we see the counter above:

JustDecompile by Telerik will prompt us to load another dll (Microsoft.SQLServer.2012.ReportingServices.Module.Helper.dll):

This is one of the DLLs we extracted from the Discovery cab file.

Then it will continue to complain about some other DLLs:

Just click Skip.

Gotcha!

Now let’s go back to our management pack and see what it monitors:

This is the monitor:

<UnitMonitorType ID="Microsoft.SQLServer.2012.ReportingServices.MonitorType.DeploymentWatcher.MisconfiguredDataSources" Accessibility="Internal" RunAs="SQLRS!Microsoft.SQLServer.2012.ReportingServices.RunAsProfile.Monitoring">

And this is the bread n’ butter:

<TSQLCounterClassName>CountableStatistics</TSQLCounterClassName>
<TSQLCounterPropertyName>MisconfiguredDataSources</TSQLCounterPropertyName>

When going back to JustDecompile I can now search for this statistic and see how it’s calculated:

And this is the query:

SELECT COUNT(1) as RETURN_VALUE FROM [Catalog] AS c INNER JOIN DataSource AS ds ON ds.ItemID = c.ItemID WHERE ds.Link IS NULL AND ds.Extension IS NULL

Now, in order to see the name of the problematic DataSource, let’s modify the query a bit:

SELECT * FROM [Catalog] AS c INNER JOIN DataSource AS ds ON ds.ItemID = c.ItemID WHERE ds.Link IS NULL AND ds.Extension IS NULL
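If you only want the offending items rather than every column, a narrower variant can be used. Name and Path are standard columns of the ReportServer Catalog table, but treat this as a sketch and verify it against your SSRS version:

-- Return just the name and path of catalog items with a broken data source
SELECT c.Name, c.Path
FROM [Catalog] AS c
INNER JOIN DataSource AS ds ON ds.ItemID = c.ItemID
WHERE ds.Link IS NULL AND ds.Extension IS NULL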

Just to confirm this DataSource is indeed corrupt, when browsing SSRS I find:

The perpetrator has been found! Book him!

Credit goes to Reuven Singer

Field Notes: The case of the stopped Azure AD Connect synchronization – stale Internet proxy server


This is a continuation of a series on Azure AD Connect. In this blog post, I cover a specific case where an export to Azure AD fails due to stale Internet proxy settings configured on the server running Azure AD Connect. I go through various tools, some of which we have covered in our previous blog posts, to provide different perspectives.

Background

Azure AD Connect should be made aware when it is running on a server that is sitting behind a proxy server. This is achieved by updating the machine.config file to include proxy and port settings. This file is located in the C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config folder. The Troubleshoot Azure AD connectivity document details this process.
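For reference, the relevant section of machine.config looks something like the sketch below; the proxy address and port are placeholders for your own values:

<system.net>
    <defaultProxy>
        <proxy
        usesystemdefault="true"
        proxyaddress="http://proxy.contoso.com:8080"
        bypassonlocal="true"
        />
    </defaultProxy>
</system.net>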


The case of the stale proxy server

So, we have successfully managed to install and configure Azure AD Connect following one of the methods I covered in the previous blog posts (I provide links in the summary below). Synchronization that has been working fine for a few months suddenly stops. Troubleshooting begins – let’s look at some tools and methods.


Azure AD Connect Troubleshooting Tool

We covered an introduction to the troubleshooting tool here. This tool has the ability to help troubleshoot and diagnose object synchronization issues.

We go ahead and select the necessary menu options and specify the distinguished name of an object that we are using to troubleshoot.

Interesting! We get a confirmation that there is a problem, but let’s focus on one message that stands out: An error occurred while sending the request – OperationStopped [Get-MsolDomain], HttpRequestException

This is a nice clue, but let’s move on.


Synchronization Service Manager

We move on to the Synchronization Service Manager and discover that the export to Azure AD profile has a status of stopped-extension-dll-exception.

If you do a search on the Internet for this status, you will find some blogs and documents pointing to a stale or expired credential. Could this be a problem in our case? Let’s have a look somewhere else to gather more clues.


Synchronization Scheduler

Let’s also check that the synchronization scheduler has not been suspended and that everything is healthy from that perspective. Oh no – we run Get-ADSyncScheduler and get an error! Start-ADSyncSyncCycle throws a similar error.

System.Net.Http.HttpRequestException: An error occurred while sending the request. ---> System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 10.0.0.2:8080

Someone passing by remarks, “that’s a lot of red right there”, but we already have a good idea of what we are dealing with at this stage. Before we get to the fix though, let’s look at two more tools we could leverage.


Windows Event Viewer

Everyone’s favourite! A quick look at the application event log gives us an array of event ID 906 errors.

One of these events confirms our connectivity challenge: “... connection failed because connected host has failed to respond…” You know what this is – proxy:port! We probably should have just started at the Event Viewer, right? Did someone change or decommission the proxy server without our knowledge? Hmmm…


Azure AD Connect installer

The fix is coming up next, but let’s check what the Azure AD Connect installer would show us when an attempt to connect to Azure AD is made.

Unable to connect to the remote server

I highlight this because the error is different in a case where the proxy is still there but we cannot get to it due to name resolution, for instance.

The remote name could not be resolved.

The fix is easy

In our case, the proxy server is no longer around and the Azure AD Connect server was still attempting to go through it. The server now has a more direct route to the necessary Azure AD endpoints. We need to remove the proxy settings. So, we navigate to C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config and remove the stale proxy settings from the machine.config file.

<system.net>
    <defaultProxy>
        <proxy
        usesystemdefault="true"
        proxyaddress="http://proxy.<server>:<port>"
        bypassonlocal="true"
        />
    </defaultProxy>
</system.net>

You may need to restart the Microsoft Azure AD Sync service for the change to take effect.
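A minimal sketch of that restart and a follow-up sync from an elevated PowerShell prompt (ADSync is the service name behind the Microsoft Azure AD Sync display name):

# Restart the sync engine so it re-reads machine.config
Restart-Service -Name ADSync

# Kick off a delta synchronization cycle to confirm exports work again
Start-ADSyncSyncCycle -PolicyType Delta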

Start-ADSyncSyncCycle now completes without errors and synchronization is working again.


Recap

This is kind of a reverse of what we would normally do in most production deployments as servers running Azure AD Connect rely on a proxy server to get to the required endpoints. In this case, we needed to remove proxy settings as the server running the proxy service is no longer around. I covered a few tools to demonstrate different approaches and perspectives.


Related posts

Here’s a little extra before I go: Aaron Guilmette‘s Azure AD Connect Network and Name Resolution Prerequisites Test script that you can also explore. “If you are uncertain about your server’s ability to connect to Office 365 for the purposes of deploying Azure AD Connect or to local network resources for configuring a multi-forest deployment, you can attempt to use this tool to report on connectivity and name resolution success.”

Till next time…

Configuration Manager: Software Updates install before deadline???


Recently I was helping a customer who was having a challenge with software updates: some of their endpoints were installing software updates the day the updates were made available instead of waiting for the configured deadline.

After verifying that the software update deployment was set up correctly, as shown below:

We confirmed that the clients were in fact installing software updates on the evening of November 27th, 2019 and not waiting until the deadline of December 6th, 2019, by reviewing the client logs on a machine that did install the updates.

Upon reviewing UpdatesDeployment.log, we discovered the following entries:

  • Automatic required software installation during non-business hours is selected UpdatesDeploymentAgent 11/27/2019 11:00:00 PM 23472 (0x5BB0)
  • Auto install during non-business hours is enabled, selecting all required updates. UpdatesDeploymentAgent 11/27/2019 11:00:00 PM 23472 (0x5BB0)

This led us to review Software Center on the client, where we confirmed that “Automatically install or uninstall required software and restart the computer only outside of the specified business hours” was checked.

This is what was causing the software updates to automatically install outside of business hours, which in this customer’s case was 11:00 PM.

The customer had a Configuration Baseline that set this on the majority of clients without fully realizing what the ramifications were for deployment deadlines.

The customer updated the CIs to uncheck this setting so that the SCCM admins would have better control over when deployments install on the clients.

#Detection Script for "Automatically install or uninstall required software and restart the computer only outside of the specified business hours"
$Results = (Invoke-CimMethod -Namespace "ROOT\ccm\ClientSDK" -ClassName "CCM_ClientUXSettings" -MethodName "GetAutoInstallRequiredSoftwaretoNonBusinessHours").AutomaticallyInstallSoftware

If ($Results -eq $false){ #$true = Checked $False = Unchecked
    Write-Host "Compliant"
} Else {
    Write-Host "Non-Compliant"
}

#Remediation Script for "Automatically install or uninstall required software and restart the computer only outside of the specified business hours"
#$true = Checked $False = Unchecked
Invoke-CimMethod -Namespace "ROOT\ccm\ClientSDK" -ClassName "CCM_ClientUXSettings" -MethodName "SetAutoInstallRequiredSoftwaretoNonBusinessHours" -Arguments @{ AutomaticallyInstallSoftware = $false } | Out-Null

After this baseline runs on the clients, “Automatically install or uninstall required software and restart the computer only outside of the specified business hours” is no longer selected, and deployments will wait until the deadline before attempting to install.

Infrastructure – System Center Operations Manager – SQL Query for SCOM Maintenance mode schedules


SCOM maintenance schedule list views only display names and comments; in order to view affected objects, you are required to open each schedule to see the server list. The SQL query below displays a semicolon-delimited list of the affected objects for each schedule.

Below is a SQL query you can utilize to see all SCOM maintenance schedules in your Operations Manager Management Group.

USE OperationsManager
SELECT
      [ScheduleName]
    , ( SELECT BaseManagedEntity.DisplayName + '; '
        FROM BaseManagedEntity WITH (NOLOCK)
        LEFT JOIN [OperationsManager].[dbo].[ScheduleEntity]
          ON BaseManagedEntity.BaseManagedEntityId = ScheduleEntity.BaseManagedEntityId
        WHERE ScheduleEntity.ScheduleId = MMS.ScheduleId
        FOR XML PATH('')
      ) AS ObjectName
    , CASE
        WHEN Recursive = 0 THEN 'False'
        WHEN Recursive = 1 THEN 'True'
        ELSE 'Undefined'
      END AS [Recursive]
    , CASE
        WHEN IsEnabled = 0 THEN 'False'
        WHEN IsEnabled = 1 THEN 'True'
        ELSE 'Undefined'
      END AS [IsEnabled]
    , CASE
        WHEN Status = 0 THEN 'Not Running'
        WHEN Status = 1 THEN 'Running'
        ELSE 'Running'
      END AS [Status]
    , CASE
        WHEN IsRecurrence = 0 THEN 'False'
        WHEN IsRecurrence = 1 THEN 'True'
        ELSE 'Undefined'
      END AS [IsRecurrence]
    , [Duration]
    , [Comments]
    , [User]
    , [NextRunTime]
    , [LastRunTime]
FROM [OperationsManager].[dbo].[MaintenanceModeSchedule] AS MMS WITH (NOLOCK)

Note that the object names are semicolon delimited to show you the systems that are included in the named maintenance schedule.

I hope you find this query useful in your daily SCOM routine.

AppLocker – Part 1


Introduction:
AppLocker has been around for a few years, and whilst the concept is very simple, the implementation can get very complex. In this series of blogs, I will look at AppLocker rules and the implementation of these rules.

Blacklisting vs Whitelisting
The first decision you face when deciding whether your organization can benefit from deploying AppLocker is whether to go with “whitelisting” or “blacklisting”.
Before you start, let’s look at the definition of the two.

Blacklist
A list of applications that are regarded as unacceptable or untrustworthy and should be excluded or avoided. These applications would be explicitly specified in an AppLocker rule to block these applications from running. Therefore, anything can be executed provided it hasn’t been “Blacklisted”.

Whitelist
A list of applications considered to be acceptable or trustworthy. These applications would be explicitly specified in an AppLocker rule and only these applications would be allowed to run, implicitly denying anything other than the whitelisted applications.

Now you have a clear idea of what these options mean and probably know what route you would like to take, but there are more considerations to look at.
Below is a small comparison between the two:

Blacklist

  • Protects against yesterday’s threats
  • Always leaves zero-day opportunities for hackers
  • Requires fewer rules
  • Requires less time for implementation

Whitelist

  • Protects against tomorrow’s threats
  • Minimizes opportunity for yet-unknown threats
  • Requires a more complex set of rules
  • Requires analysis of the environment and therefore more time

Conclusion
In very large organizations where applications are not all known, you would require enough time to gather and analyse the AppLocker event logs. Although implementing blacklisting could be easier and a “quick” win, the effort put into whitelisting ensures a more secure environment.
Keep in mind that AppLocker is not a replacement for your anti-virus software, but rather complements it by assisting in preventing the execution of unwanted applications.
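To give a sense of what building a whitelist involves in practice, here is a minimal PowerShell sketch that generates publisher and hash allow rules from the applications already present on a reference machine. The directory and output path are illustrative, and a real deployment would audit and refine the result before enforcing it:

# Scan a reference directory and build an AppLocker policy that allows what it finds
Get-AppLockerFileInformation -Directory 'C:\Program Files' -Recurse -FileType Exe, Script |
    New-AppLockerPolicy -RuleType Publisher, Hash -User Everyone -Optimize -Xml |
    Out-File 'C:\Temp\AppLockerPolicy.xml'

# Apply the policy to the local machine, merging with any existing policy
Set-AppLockerPolicy -XmlPolicy 'C:\Temp\AppLockerPolicy.xml' -Merge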

In the next blog I will look at AppLocker Rules, Rule Conditions and how to enforce them.
