
Field Notes: Azure Active Directory Connect – Troubleshooting Task Overview


This is a continuation of a series on Azure AD Connect. Previous parts have mostly focused on installation and on configuring different user sign-in options for Azure AD. Links to these are provided in the summary section below.

Now that we have covered the common setup options for Azure AD Connect, I would like to switch gears a little and discuss troubleshooting. In this post, I cover the troubleshooting task available in Azure AD Connect version 1.1.614.0 and newer.


Azure AD Connect Troubleshooting

The Azure AD troubleshooting task is triggered by selecting troubleshoot under additional tasks as depicted below.

Selecting the ‘troubleshoot’ task and clicking next presents the Welcome to Azure AD Connect Troubleshooting screen, which provides the ability to launch the troubleshooter. Click Launch to proceed.

This opens up a PowerShell window with the following options:

  • [1] Troubleshoot Object Synchronization
  • [2] Troubleshoot Password Synchronization
  • [3] Collect General Diagnostics, and
  • [Q] Quit

You may need to set the PowerShell execution policy to remote signed or unrestricted.
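For example, to allow the troubleshooting scripts to run only for the current session (a minimal sketch; adjust the scope to suit your policy):

# Allow the troubleshooter's scripts for this PowerShell session only
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope Process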


Let’s explore each option.


Troubleshooting Object Synchronization

Selecting the first option allows us to troubleshoot object synchronization. For this demonstration, press 1 and hit the Enter key to diagnose object synchronization issues.

The troubleshooter enumerates a list of connectors and prompts for a distinguished name of the object of interest. This is followed by a request for the Azure AD tenant’s global administrator credentials. Next, it attempts to connect to the Azure AD tenant, and checks both the domain & OU filtering configuration.

An HTML report is generated and exported to the C:\ProgramData\AADConnect\ADSyncObjectDiagnostics folder. Below is a sample that shows object details for the on-premises directory and the Azure AD Connect database.

In the example depicted above, I reproduced a synchronization issue by using a duplicate attribute for the test account. On the flip side, with an account that synchronizes successfully, object details for Azure AD are also provided, with information such as last directory synchronization time, immutableId, and UPN, as shown below:

Do we have other options for this scenario? Yes – IdFix, Azure AD Connect Health and the Synchronization Service Manager. Let’s briefly go through each.

IdFix

IdFix identifies errors such as duplicates and formatting problems in on-premises directories before an attempt to synchronize objects to the Azure AD tenant.

In this example, we can see we have two objects with the same attribute value.

Azure AD Connect Health

Azure AD Connect Health provides robust monitoring of your on-premises identity infrastructure.

In this example, we see that contactus@idrockstar.co.za is a duplicate attribute value (Error Type:AttributeValueMustBeUnique).

Synchronization Service Manager

The Synchronization Service Manager UI is used to configure more advanced aspects of the sync engine and to see the operational aspects of the service.

Unable to update this object because the following attributes associated with this object have values that may already be associated with another object in your local directory services: [ProxyAddresses SMTP:contactus@idrockstar.co.za;]. Correct or remove the duplicate values in your local directory. Please refer to http://support.microsoft.com/kb/2647098 for more information on identifying objects with duplicate attribute values.


Troubleshooting Password Hash Synchronization

Troubleshoot Password Hash Synchronization is the second option on the main menu, invoked by pressing 2 and hitting the Enter key. For the purpose of this demonstration, we select option 3 from the sub-menu (synchronize password hash for a specific user account). The other options are:

  • Password hash synchronization does not work at all
  • Password hash synchronization does not work for a specific user account
  • Going back to the main menu, and quitting the program

The single object password hash synchronization utility attempts to synchronize the current password hash stored in the on-premises directory for a user account. It takes the distinguished name of an object as input (a PowerShell equivalent is sketched after the list). Let’s see two scenarios in action:

  • An attempt to synchronize a password of an object that has not yet been exported
  • Synchronizing a password of an object that has already been exported
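Incidentally, the same single-object check can be run directly from PowerShell on the Azure AD Connect server, without going through the wizard. A sketch using the ADSyncDiagnostics module that ships with the product; the connector name and distinguished name below are placeholders:

# The module is installed with Azure AD Connect
Import-Module "C:\Program Files\Microsoft Azure AD Sync\Bin\ADSyncDiagnostics\ADSyncDiagnostics.psm1"

# Run password hash synchronization diagnostics for a single object
Invoke-ADSyncDiagnostics -PasswordSync `
    -ADConnectorName "contoso.com" `
    -DistinguishedName "CN=Test User,OU=Staff,DC=contoso,DC=com"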

Account not exported

I am using the account that reported errors in the troubleshooting object synchronization section above to demonstrate this. After providing the distinguished name, we see a message confirming that password hash synchronization is enabled for the connector. This is followed by a message stating that password hash synchronization has failed. This is obviously because the object has not yet been exported.


Account is exported

Now what happens if an account has already been exported? The password hash is synchronized successfully.


Collecting General Diagnostics Information

Let’s explore the last option – collect general diagnostics. With this option, the troubleshooter collects diagnostics information. The output report contains useful information such as Azure AD tenant settings, Azure AD Connect settings, sync scheduler and more:

There is also a lot of useful troubleshooting information stored in the C:\ProgramData\AADConnect\<date>-111422_ADSyncDiagnosticsReport folder.



Summary

Previous parts of this blog post series have mostly focused on installation and on configuring different user sign-in options for Azure AD. Here’s a list for reference:

This post was an introduction to troubleshooting, covering the troubleshooting task available in Azure AD Connect.

References

Till next time…


Azure AD Best Practice: Requiring users to periodically re-confirm their authentication information


Disabling the authentication methods re-confirmation prevents users from updating potentially outdated information such as email or phone number and can decrease the effectiveness of Self-service Password Reset (SSPR). This may also result in password reset information being sent to an unintended recipient. The default setting in Azure AD is to require users to re-confirm authentication information every 180 days and it is recommended to maintain this configuration unless required by a defined business need.

However, this re-confirmation can seem annoying, so some organizations cave to complaints and disable it. As a best practice, keep it enabled and set a comfortable re-confirmation schedule to help better secure the user identity and keep it current.

To enable it or alter the default number of days:

  1. Log in to https://portal.azure.com
  2. Click the Azure Active Directory blade in the console.
  3. Click Users
  4. Click Password reset
  5. Click Registration
  6. Change the number of days to a value other than 0 (default is 180 days).

Re-confirm authentication information

System Center Configuration Manager –“Error Deploying Windows 10 In Place Upgrades with McAfee DLP Endpoint”


The Issue

Trying to do an In Place Windows 10 Upgrade with McAfee DLP Endpoint fails. As soon as the Operating System is applied the machine restarts and simply starts up to the “Repair” screen.

The Investigation

In this case the In Place Upgrade was being performed by System Center Configuration Manager using an In Place Upgrade Task Sequence. This means we have some logs to go through.

After digging into the smsts log (%windir%\ccm\logs\smsts.log), we could see there was an extra switch added to the Windows 10 Setup.exe command line. (A quick way to find it is sketched below.)
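If you want to pull that command line out of the log quickly instead of scrolling through CMTrace, something like this works (a sketch; tweak the pattern as needed):

# Find the lines where the task sequence invoked Windows Setup
Select-String -Path "$env:windir\ccm\logs\smsts.log" -Pattern 'setup.exe' |
    Select-Object -ExpandProperty Line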

This parameter is required by McAfee for you to complete your upgrade and can be viewed on their website – https://kc.mcafee.com/corporate/index?page=content&id=KB89000.

There is also a great article by a Microsoft MVP – https://www.anoopcnair.com/in-place-os-upgrade-on-mcafee-encrypted-machines-using-sccm-ts/

But even after verifying the settings were correct, the upgrade was still failing, so it was time to look a little deeper: the Panther folder.

If you do not know what Panther is, it is a folder that contains some helpful files for troubleshooting Windows upgrades. The location of this folder can differ, so have a look at this link – https://support.microsoft.com/en-us/help/927521/windows-vista-windows-7-windows-server-2008-r2-windows-8-1-and-windows

The Panther directory typically looks something like this:

The two most helpful files in the Panther directory, to me, are always:

1. ActionableReport.html

and

2. CompatData_xxxxxxxxxxxx.XML

The Solution

As we could see in the Actionable Report and the XML file, it was clearly still DLP Endpoint causing a “Hard” block and “UpgradeBlock”, so we pointed our efforts in that direction. After some review of this article, we figured out that the DLP Endpoint version was not compatible with this upgrade to Windows 10 1809. Refer to the table and link below.

After updating to a supported version the upgrade went through successfully.

If you have anything to add or would like to correct me in any of the steps please reach out and I will be happy to discuss.

Active Directory Security Best Practices: Part 1


Active Directory is identified as one of the most business-critical applications: any outage can cause downtime for users and services, so it needs special care and high attention in terms of security, backup, and health. Every day as I visit customers, there is a frequent question I keep receiving:

How can I secure my AD infrastructure?

The truth is that AD itself is not the actual target of the attacker; it is the path that enables him to reach his target, whether that is to steal confidential data, cause an outage, gain reputation, or bargain for money, etc.

Also, something I like to mention: some customers still think that as long as they have a firewall and security mitigations at the network level, they are already protected. Believe me, you are not! Modern attacks can cross this line, so you need to follow the defense-in-depth concept: all your layers need to be secured, including network, servers, and applications. Especially now, with the cloud and integrations between companies and services, your users and data will always be mobile, and you need to maintain their security.

So through this series I’m going to answer this question, and I will try to simplify it as much as I can. Having a secure AD infrastructure is a long way to go, but at least we need to maintain the basics of security and keep going step by step until we can say: okay, my AD infrastructure is secured!

First, let me give you a quick introduction to why we need to secure AD. The answer is simple: it is the repository of all identities, so for the attacker to gain access to his target he needs to compromise a domain account, and there are a lot of ways to do that now. You must have heard about pass-the-hash, pass-the-ticket, and golden ticket attacks. They are all based on the attacker gaining access to a machine inside the network, extracting the hashes in RAM, and moving laterally until he obtains the hash of a domain admin account; then the whole forest is under his control. This is why we need to make this task (obtaining a domain admin account) very difficult for him by securing our identities.

So here I’m going to talk about one of the main Active Directory Security mitigations,

Secure Privileged Accounts:

As we mentioned, for the attacker to gain access to his target he needs an account with privileges, so we need to make this task harder for him. Here is how we can do this:

1. Patch Patch Patch till the end of the world

  • 99% of incidents in 2014 involved vulnerabilities for which patches were released in 2013 or earlier.
  • 90% of incidents in 2013 involved vulnerabilities which were patched in 2007.
  • Patching does not guarantee 100% security, but it is mandatory if you want to maintain the basics of security.

2. Credentials Partitioning

  • Never use the same account for your daily tasks and your administrative tasks.
  • Your admin account should be restricted from connecting to the internet, email, and LOB applications.
  • Maintain the tier model, which is based on dividing your admin accounts into three tiers, and block access between these tiers to prevent privilege escalation.
  • If you have a small team that manages all tiers, every member of the team will need a dedicated account for every tier, so we can guarantee that even if one of these accounts is compromised, the attacker will be locked into that tier and will not be able to escalate his privileges to the higher tiers.


3. Privileged Access workstation (PAW)

  • Use dedicated hardened workstation for the administrative tasks.
  • Must not connect to the internet, email, or any LOB application.
  • Hardened using app whitelisting, IPsec, firewall, etc.
  • Dedicated PAW per Tier per administrator.
  • Block access between tiers.


4. Least Privilege 

  • Minimize the number of high-privilege groups, as every member increases the attack surface.
  • Maintain proper delegation model based on least privilege concept.
  • Use the Privileged Access Management feature available in Windows Server 2016 to give temporary privileges to users; the privilege is revoked automatically after a specific amount of time (see the sketch after this list).
  • Build an approval workflow for joining specific groups; this can be done using MIM.
  • Give special attention to service accounts, as they are usually members of high-privileged groups with passwords set to never expire. Make sure they really need this privilege; otherwise give them the least privilege they need to accomplish the task.
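As an example of that temporary privilege idea, with the Privileged Access Management optional feature enabled in a Windows Server 2016 forest, a time-limited group membership looks like the sketch below (group and user names are illustrative):

# Requires the Privileged Access Management Feature to be enabled in the forest
# Grant Tier 0 admin rights for 60 minutes; AD removes the membership automatically
Add-ADGroupMember -Identity 'Domain Admins' -Members 'jdoe.admin' `
    -MemberTimeToLive (New-TimeSpan -Minutes 60)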

That is all for now. Our next blog will be about how we can mitigate the lateral movement of the attacker inside the environment. Stay tuned!

Deploy Azure Kubernetes Service (AKS) to a preexisting VNET


I recently ran into an issue where I needed to deploy AKS in an environment with a limited number of available IP addresses. If you’ve ever deployed AKS before, you might have noticed that using the default settings creates a new VNET with a /8 CIDR range (16,777,214 hosts), which was way too large for this environment, as the largest we could use was a /23 (510 hosts).

Since AKS uses the kubenet plugin by default, the pods will be getting their IPs from a virtual network that resides inside the cluster (separate from the Azure VNET), which eliminates the need to use a large CIDR range in Azure.

The steps below will walk you through the process of deploying your cluster and using not only a preexisting VNET, but one that resides in a resource group that’s separate from your cluster resource group.

Prerequisites

Create a service principal

Most guides that walk through creating a service principal for AKS recommend doing so using the command

 $ az ad sp create-for-rbac --skip-assignment

While this works just fine, it doesn’t provide any rights to the service principal and requires you to configure a role and scope after you’ve created the AKS cluster. Instead of doing this in two steps, I prefer to use this command to handle it all at once.

$ az ad sp create-for-rbac -n AKS_SP --role contributor \
    --scopes /subscriptions/061f5e92-edf2-4389-8357-a16f71a2cbf3/resourceGroups/AKS-DEMO-RG \
            /subscriptions/061f5e92-edf2-4389-8357-a16f71a2cbf3/resourceGroups/AKS-VNET-RG

What I’m doing with the above command is setting the scope of the service principal to have contributor rights on two resource groups. The first resource group (AKS-DEMO-RG) will contain the AKS cluster, and the second (AKS-VNET-RG) contains the virtual network and subnet that will be used for the cluster resources. I’m also providing a name for the service principal (AKS_SP) so it’s easy to identify later on down the road. If you use the default name, it will be labeled azure-cli-yyyy-mm-dd-hh-mm-ss, which, as you can see, is not nearly as friendly or identifiable as AKS_SP.

When the command completes, you should see the following output:

{
    "appId": "b2abba9c-ef9a-4a0e-8d8b-46d8b53d046b",
    "displayName": "AKS_SP",
    "name": "http://AKS_SP",
    "password": "2a30869c-388e-40cf-8f5f-8d99fea405bf",
    "tenant": "dbbbe410-bc70-4a57-9d46-f1a1ea293b48"
}

Make note of the appId and the password, as they will be required in the next step.

Create the cluster

In this section we’ll create our AKS cluster and configure the required tools to interact with it after deployment.

In the below example, replace the parameters with values that suit your environment. The Service Principal and Client Secret parameters should match the appId and password from the output of the az ad sp create command above.

 az aks create --resource-group AKS-DEMO-RG --name demoAKSCluster \
 --service-principal "b2abba9c-ef9a-4a0e-8d8b-46d8b53d046b" \
 --client-secret "2a30869c-388e-40cf-8f5f-8d99fea405bf" \
 --vnet-subnet-id "/subscriptions/061f5e92-edf2-4389-8357-a16f71a2cbf3/resourceGroups/AKS-VNET-RG/providers/Microsoft.Network/virtualNetworks/AKS-DEMO-VNET/subnets/S-1"

Install kubectl

$ sudo az aks install-cli

Fetch the credentials to use for the connection to the cluster

$ az aks get-credentials --resource-group AKS-DEMO-RG --name demoAKSCluster

You should see the following output

Merged "demoAKSCluster" as current context in /home/azureadmin/.kube/config

Test connectivity to the cluster

$ kubectl get nodes

All of your nodes should appear in a Ready status

Additionally, you should see the NIC for each of your nodes connected to the VNET/subnet you provided during deployment.

And that’s it. You now have an AKS cluster deployed using a preexisting virtual network and subnet.

In my next post, I’ll show you how to configure TLS for Helm and Tiller, and deploy an ingress-controller with SSL termination all with certificates issued by a Windows Certificate Authority.

Quick blog – Importing Updates into WSUS – CVE-2019-1367


A question raised this week by quite a few customers is around importing updates into the SCCM environment that are not available in WSUS but are on Microsoft Update.

The steps below will guide you through getting the updates into the environment quickly.

As per the CVE article, there are a couple of updates you will have to manually import into WSUS for now, should you wish to get the updates deployed as soon as possible.

https://portal.msrc.microsoft.com/en-US/security-guidance/advisory/CVE-2019-1367

The steps are as per below

In SCCM and WSUS, verify that the update you want is not listed; in this case I am looking for KB4522015.

On WSUS Server, select Updates, right-click – import updates (this will open a webpage to the catalog.update.microsoft.com site)

Select the KB you want and hit search

Now select the applicable ones to your environment – add to basket

View basket – ensure the “import directly into WSUS “ is enabled, then click import

Once it is completed, search WSUS again for the update.

Now just sync WSUS (from within SCCM), and once done you can download\deploy the update. (A PowerShell way to trigger that sync is sketched below.)
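If you prefer to trigger the sync from PowerShell on a machine with the ConfigMgr console installed, a sketch (the CM1: site drive is a placeholder for your site code):

# Load the ConfigurationManager module from the console install and connect to the site
Import-Module "$($env:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"
Set-Location 'CM1:'

# Kick off a full software update synchronization
Sync-CMSoftwareUpdate -FullSync $true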

Active Directory Security Best Practices: Part 2


Hello again! This is our second blog about AD security best practices. In our first blog we talked about one of the most important security mitigations, securing privileged accounts; you can find it at the following link:

https://secureinfra.blog/2019/09/26/active-directory-security-best-practices-part-1/

Here we will talk about our second mitigation:

Slow Lateral Movement

Let’s first explain what lateral movement is, to understand why we need to prevent it. When the attacker succeeds in gaining access to one machine, normally a user workstation, and his target is a domain controller or another high-privileged system, the first thing he will do is extract the hashes in RAM to find a high-privileged account that can take him to his target or to a higher tier. From there he can do the same until he reaches the top tier. So what we need to do is lock the attacker inside the compromised machine, so he can’t escalate to higher tiers or even move laterally inside the same tier.

One example: the attacker may succeed in getting the local administrator account, and normally most organizations use the same name and the same password for the local administrator everywhere. In that case the attacker will be able to use this account to access all the machines, then start moving laterally between them, extracting hashes, until he gets a domain admin hash and the whole kingdom is under his control.

So here is how we can mitigate lateral movement, side by side, of course, with the secured privileged account practices we discussed earlier:

1. Firewall

  • Do you have any business reason to allow communications between workstations? Use the firewall to block traffic between workstations, or allow only the required traffic between workstations and between workstations and applications. For example, if you have SCCM, allow only the ports required by the SCCM agent installed on the machines (a sketch follows below).
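As an illustration, a host firewall rule pushed by script or GPO could block inbound SMB from other workstations; a sketch, with a placeholder workstation subnet:

# Block inbound SMB from the workstation subnet (placeholder range);
# rules for required server/management traffic would be allowed separately
New-NetFirewallRule -DisplayName 'Block workstation-to-workstation SMB' `
    -Direction Inbound -Protocol TCP -LocalPort 445 `
    -RemoteAddress 10.10.0.0/16 -Action Block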


2. GPO Based Restrictions

  • Use GPOs to restrict network logon for the local administrator account, so the attacker can’t use it to move laterally between workstations.


3. Unique Random Password for local administrator account

  • Use tools like LAPS to randomize the local administrator password on endpoints, so that if the attacker compromises the local admin account of one machine he can’t use it to access the other machines. In the following link you will find a step-by-step guide for LAPS deployment; it is a free tool and very easy to implement and manage. It creates a unique password for the local admin on every workstation and changes it automatically every 30 days by default: https://gallery.technet.microsoft.com/step-by-step-deploy-local-7c9ef772


That is all for now. Our upcoming blogs will be about other security best practices like ESAE, ATA, etc. Stay tuned!

The new way to avoid exposing port 3389 in Azure – Bastion!


Microsoft has released the public preview of Azure Bastion, which gives you an additional factor and a separate subnet as your protection from the hordes of hackers who scan the Internet every day looking for open port 3389 with easy passwords or a vulnerable patch level. And things are simpler for you as well: no more unnecessary PIPs or jump servers to maintain just for desktop access. Of course, many of you are already using PowerShell or Azure automation, and don’t need that desktop, right? Bastion uses the HTTPS connection to Azure to proxy your connectivity through to the specified desktops:

 

The steps are simple, but for more details, check out the links at the conclusion. First, pick a region where the preview is supported (I used “East US”; otherwise provisioning may fail), set up your VNET, and add both a working subnet and a /27 subnet. The /27 actually has to have the special name “AzureBastionSubnet”:

Let’s also set up your subscription to take advantage of this new preview feature, by entering these in your cloud shell:

Register-AzureRmProviderFeature -FeatureName AllowBastionHost -ProviderNamespace Microsoft.Network
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.Network
Get-AzureRmProviderFeature -ProviderNamespace Microsoft.Network

Once you see the status “registered” (it may take a while), create your virtual machine and choose “Azure Bastion” on the Operations blade. It will select everything you need and allow you to create the Bastion, which does use a separate public IP address (PIP):

It will take a few minutes to deploy the resource, so go get a cup of coffee, knowing that you’ve just helped make the world a safer place. When you come back, Azure Bastion will provide you with a web logon form – upon submitting and connecting with your credentials, you’ll see an RDP tab pop open with access to your VM:


In summary, Azure Bastion is a great new way to minimize your threat surface to cloud-hosted IaaS while still providing remote access for manual administrative tasks. To read up more about this preview feature, check out the documentation at https://aka.ms/AboutBastion or https://azure.microsoft.com/en-us/services/azure-bastion/.

And if you need more step-by-step help, here’s a comprehensive guide: https://docs.microsoft.com/en-us/azure/bastion/bastion-create-host-portal

For more advanced users, you can do some special tuning of the NSGs to provide additional security: https://docs.microsoft.com/en-us/azure/bastion/bastion-nsg

P.S. Just announced: another preview feature, Windows Virtual Desktop, has JUST gone GENERAL AVAILABILITY (GA)!


Azure AD Best Practice: When to Consider Using a Full SQL Server Instance for Azure AD Connect


By default, Azure AD Connect installs with SQL Express. More specifically, the default is a SQL Server 2012 Express LocalDB (a light version of SQL Server Express).

If you need to manage a higher volume of directory objects, you’ll definitely want to point the installation wizard to a different installation of SQL Server. The type of SQL Server installation can impact the performance of Azure AD Connect. And, if – like a lot of Microsoft customers – the fear of sync failure keeps you up at night, doing this could help you sleep a lot better.

SQL Express has a 10 GB size limit, which means there is very little room to grow above roughly 100,000 objects. If you are even near the 100,000-object mark, make plans to upgrade (a quick check is sketched below).
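A crude but quick way to see how close you are is to check the size of the LocalDB data file on the Azure AD Connect server; the path below is the usual default, but verify yours:

# Default LocalDB data file location for Azure AD Connect (verify on your server)
Get-Item 'C:\Program Files\Microsoft Azure AD Sync\Data\ADSync.mdf' |
    Select-Object Name, @{n='SizeGB'; e={[math]::Round($_.Length / 1GB, 2)}}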

Azure AD Connect supports all versions of Microsoft SQL Server from 2008 R2 (with latest Service Pack) to SQL Server 2019. Microsoft Azure SQL Database, though, is not supported as a database.

Also, keep in mind that you can only have one sync engine per SQL instance. You can’t use the same SQL Server instance for syncing FIM/MIM, DirSync, and Azure AD Sync; each needs its own SQL Server instance.

Check out how to Move Azure AD Connect database from SQL Server Express to SQL Server.

LAPS Security Concern: Computer joiners are able to see the LAPS password


Here we will discuss a common concern about LAPS. Many customers have noticed that people who join computers to the domain can retrieve the LAPS password even though they have not been given permission to do so, and because some organizations allow normal users to join their machines to the domain, they consider this a security risk. So let’s answer two questions here:

Why does this happen?

This happens because, by default, the joiner of the computer has the Creator Owner privilege, and this privilege gives him a set of permissions defined by the defaultSecurityDescriptor on the computer class in the schema. The defaultSecurityDescriptor defines the default security permissions over objects; for more information, see https://docs.microsoft.com/en-us/windows/win32/ad/default-security-descriptor

So how can we check the defaultSecurityDescriptor for the computer class?

1- Open ADSI Edit and connect to the Schema partition.


2- Right-click CN=Computer, choose Properties, then the Attribute Editor, and look for defaultSecurityDescriptor.


3- As you can see, it is in Security Descriptor Definition Language (SDDL) format, so to put it into a human-readable form we run the following PowerShell commands:

$defaultSD="D:(A;;RPWPCRCCDCLCLORCWOWDSDDTSW;;;DA)(A;;RPWPCRCCDCLCLORCWOWDSDDTSW;;;AO)(A;;RPWPCRCCDCLCLORCWOWDSDDTSW;;;SY)(A;;RPCRLCLORCSDDT;;;CO)(OA;;WP;4c164200-20c0-11d0-a768-00aa006e0529;;CO)(A;;RPLCLORC;;;AU)(OA;;CR;ab721a53-1e2f-11d0-9819-00aa0040529b;;WD)(A;;CCDC;;;PS)(OA;;CCDC;bf967aa8-0de6-11d0-a285-00aa003049e2;;PO)(OA;;RPWP;bf967a7f-0de6-11d0-a285-00aa003049e2;;CA)(OA;;SW;f3a64788-5306-11d1-a9c5-0000f80367c1;;PS)(OA;;RPWP;77B5B886-944A-11d1-AEBD-0000F80367C1;;PS)(OA;;SW;72e39547-7b18-11d1-adef-00c04fd8d5cd;;PS)(OA;;SW;72e39547-7b18-11d1-adef-00c04fd8d5cd;;CO)(OA;;SW;f3a64788-5306-11d1-a9c5-0000f80367c1;;CO)(OA;;WP;3e0abfd0-126a-11d0-a060-00aa006c33ed;bf967a86-0de6-11d0-a285-00aa003049e2;CO)(OA;;WP;5f202010-79a5-11d0-9020-00c04fc2d4cf;bf967a86-0de6-11d0-a285-00aa003049e2;CO)(OA;;WP;bf967950-0de6-11d0-a285-00aa003049e2;bf967a86-0de6-11d0-a285-00aa003049e2;CO)(OA;;WP;bf967953-0de6-11d0-a285-00aa003049e2;bf967a86-0de6-11d0-a285-00aa003049e2;CO)(OA;;RP;46a9b11d-60ae-405a-b7e8-ff8a58d456d2;;S-1-5-32-560)"
$sec=New-Object System.DirectoryServices.ActiveDirectorySecurity
$sec.SetSecurityDescriptorSddlForm($defaultSD)
$acc=New-Object System.Security.Principal.NTAccount("CREATOR OWNER")
$sec.GetAccessRules($true,$false,[System.Security.Principal.NTAccount]) | Where-Object {$_.IdentityReference -eq $acc}

4- If we check the output, we will see that Creator Owner has the Extended Rights permission, which allows him to read confidential attributes.


This explains why computer joiners can retrieve the LAPS password: by default they have the Creator Owner privilege, which includes the extended rights permission that allows them to read confidential attributes of the computer account they joined.
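If you have the LAPS PowerShell module installed, you can also confirm the exposure by listing who holds extended rights on an OU; the OU path below is an example:

# Requires the LAPS management tools (AdmPwd.PS module)
Import-Module AdmPwd.PS

# List the principals that hold extended rights (and so can read ms-Mcs-AdmPwd)
Find-AdmPwdExtendedRights -Identity "OU=Workstations,DC=contoso,DC=com" |
    Select-Object -ExpandProperty ExtendedRightHolders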

How can we fix this?

Actually we have two solution here :

1. First Solution:

Allow only dedicated service accounts that are trusted to retrieve the LAPS password to join computers, or use tools like SCCM to deploy the OS and join machines to the domain.

Challenge :

Some issues, like a broken secure channel, require the computer to be rejoined to the domain. In such cases an OSD deployment is not practical, as it takes time and the machine of course has user profiles and data. If we dedicate a service account for domain joining we can use it instead, but this may be too much work for the helpdesk, especially if it is a small team.

2. Second Solution:

This is the solution I actually prefer, because it has no limitations: we remove the extended right from the Creator Owner permissions by updating the defaultSecurityDescriptor. The user will still be able to join the computer to the domain, but he will not be able to read the LAPS password. Adjusting the defaultSecurityDescriptor to remove the extended right from Creator Owner is simple: we just change (A;;RPCRLCLORCSDDT;;;CO) to (A;;RPLCLORCSDDT;;;CO).

As you can see, after updating the defaultSecurityDescriptor and rerunning the PowerShell commands, the extended right is gone.


Challenge :

We have removed the extended right, but the user is still the owner, which by default grants these two permissions:

  • WRITE_DAC permission. This permission gives security principals the ability to change permissions on an object.

  • READ_CONTROL permission. This permission gives security principals the ability to read the permissions that are assigned to an object.

So with the WRITE_DAC permission the user can change the ACL and elevate his privileges. To address this challenge, starting with Windows Server 2008 we have a new security principal called Owner Rights that can control and adjust the default owner permissions. We can use it to allow the owner to only read the ACL, not write it, by adding the Owner Rights security principal to objects and specifying what permissions the owner of an object gets.

So how do we do this? I simulated it in my lab: I have a user called DomainJoin that I gave permission to join machines. Now I will remove the WRITE_DAC permission and allow him only to read the ACL.

  • Before applying the Owner Rights permission, he had the following privileges. As you can see in the highlighted part, he is able to modify permissions, which is what I need to remove.


  • Now I go to the OU of the joined computers, right-click, choose Properties, then Security, and add Owner Rights, giving it only read access.


  • Choose Advanced, adjust the permissions for Owner Rights as needed, and make sure they apply to “This object and all descendant objects”.



  • Now let’s check the DomainJoin user’s effective access again: he is no longer able to modify permissions.


So now you have two options to solve this LAPS concern: either assign specific service accounts for domain join, or adjust the defaultSecurityDescriptor and owner permissions, and you are safe to go.

References:

https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd125370(v=ws.10)?redirectedfrom=MSDN

https://blogs.msdn.microsoft.com/laps/2015/07/17/laps-and-permission-to-join-computer-to-domain/


Azure AD Best Practice: Using Azure AD Connect Standby for Redundancy and Failover


My big focus for Azure at Microsoft is in administration and identity. This includes a lot of heavy Azure AD work. I regularly help customers assess their Azure AD implementations and plans, which puts me in the unique position to hear about customer woes directly.

One of the bigger pain points I hear from customers A LOT – and the thing that keeps them awake at night – is Azure AD Connect and specifically when there’s no discernible plan for backup and failover in the event sync fails or disaster happens.

Obviously, we have some work to do to ensure customers are hearing about Azure AD Connect implementations that supply backup and redundancy, but we do have guidance on this.

As a best practice, consider installing a second Azure AD Connect server, but instead of making it active, install it as a Standby server so that the Azure AD Connect implementation looks like the following:

Standby Server

You put the Azure AD Connect server into Staging Mode during installation as shown in the next screen capture (and use the same process to change a server to standby and back again).

Staging Mode

Installing the Azure AD Connect server in this mode makes it active for import and synchronization, but it is prohibited from doing the actual exports that the primary sync server is performing. Essentially, this “backup server” is constantly collecting your on-premises Active Directory objects, mirroring what your active sync server is capturing. This gives you a backup copy of your AD objects, and should disaster strike, you can take the active sync server offline and quickly promote the backup server to become the master.

Also, rest assured that when a server is in staging mode, no exports occur to your on-premises Active Directory, no exports occur to Azure Active Directory, and password synchronization and password writeback are disabled, even if those features were selected during installation. When staging mode is disabled and the backup server becomes the primary, it immediately starts exporting and enables password sync and password writeback.

Also, keep in mind that if the server is left in staging mode for an extended period of time, it can take a while for the server to synchronize all password changes that had occurred during the time period. 
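A quick way to confirm whether a given server is the standby is the sync scheduler cmdlets; a minimal check, run on the Azure AD Connect server itself:

# The ADSync module is installed with Azure AD Connect
Import-Module ADSync

# StagingModeEnabled = True means this server imports and syncs but never exports
Get-ADSyncScheduler |
    Select-Object StagingModeEnabled, SyncCycleEnabled, NextSyncCycleStartTimeInUTC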

Additionally, for even better protection and failover, consider putting the Primary and Standby servers in different data centers if that option is available.

For help setting up and configuring a Standby server, see: Azure AD Connect: Staging server and disaster recovery

System Center Service Manager: Working with FIPS and Report Server


When you browse the Report Manager URL, you get an HTTP 500 error or a blank page (if you have disabled friendly HTTP error messages) in the browser window. When you check the Reporting Services log files, you will find the below error being logged:

ERROR: System.Web.HttpException: Error executing child request for Error.aspx. —> System.Web.HttpUnhandledException: Exception of type ‘System.Web.HttpUnhandledException’ was thrown. —> System.InvalidOperationException: This implementation is not part of the Windows Platform FIPS validated cryptographic algorithms.

Cause:

This is happening because FIPS is enabled on the Reporting Services server and Report Manager does not support the Local Security Policy “System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing”.

To ascertain that FIPS is enabled you can:

(1)    Check the registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\fipsalgorithmpolicy

And the value of it should be set to 1.

(2)    Or go to the Local Security Policy console (Start -> Run -> secpol.msc), then to “Security Settings -> Local Policies -> Security Options”, and in the right-side pane look for the policy “System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing”; it should show as Enabled.
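If you prefer PowerShell for the registry check, a sketch (on newer systems the value is named Enabled under the FipsAlgorithmPolicy subkey; on older ones it is the fipsalgorithmpolicy value directly under Lsa):

# Check the FIPS state from the registry
$fips = Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy' `
    -Name Enabled -ErrorAction SilentlyContinue
if ($fips.Enabled -eq 1) { 'FIPS is enabled' } else { 'FIPS is not enabled' }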

How to resolve the issue?

If you do not need FIPS, go ahead and change the above-mentioned registry value from 1 to 0, or change the local security policy from Enabled to Disabled.

If you cannot disable FIPS, the following link is another way to work around it. With reference to https://support.microsoft.com/en-us/kb/911722, in order to get around this issue you would have to edit Report Manager’s web.config file as explained below.

File to be edited:

<system-drive>\Program Files\Microsoft SQL Server\MSRS<version>.<instance>\Reporting Services\ReportManager\Web.config

What to do?

(1)    In the Web.config file, locate the <system.web> section.

(2)    Add the following <machineKey> element in the <system.web> section:

<machineKey validationKey="AutoGenerate,IsolateApps" decryptionKey="AutoGenerate,IsolateApps" validation="3DES" decryption="3DES"/>

(3)    Save the Web.config file.

Once the file has been changed, you would have to restart Reporting Services service for the change to become effective.

Recommendation: Take a backup of the web.config file prior to making the change.

AGPM: The case of the missing GPT.ini file – a possible workaround


Hey everyone, Theron (aka T-) here, Senior Consultant with Microsoft Consulting Services (MCS) specializing in Active Directory, amongst other technologies, including Advanced Group Policy Manager (AGPM).

Have you ever deployed a GPO via AGPM only to experience either of these two situations?

  • EventID 1058 (GroupPolicy) in a client’s System log

or

  • The following message when using ‘gpupdate’ on a client:
GPUpdate message when gpt.ini is missing – Windows Server 2016

The actual details included in the message returned by ‘gpupdate’ will differ depending on the version of Windows you’re using.

So, what’s the message telling us? Well, it’s pretty self-explanatory…the gpt.ini file located in \\<domain.fqdn>\SysVol\<domain.fqdn>\Policies\{7F2C98CE-3BEE-4CDB-A815-DEF1E2897706}\ is missing. {7F2C98CE-3BEE-4CDB-A815-DEF1E2897706} is the GUID of the GPO in question, so it will obviously differ in each situation.

Now, what happened? Well, that’s the tricky part, and I have yet to find an actual cause. From my research and discussions internally with colleagues at Microsoft, no one else has either. Frustrating, right? Fear not, we may have found a viable workaround to prevent it.

In case you didn’t know or think about it, simply re-deploying the policy in question via AGPM usually solves the current GPO’s file issue.

Workaround that may work for you:

I currently support a customer who is dealing with this issue just about every time they deploy a policy. It had gotten to the point that each time a deployment was executed, the person deploying the policy would have to check the GPO folder in SYSVOL to make sure the gpt.ini file was there. Doesn’t sound very efficient, does it? Yeah, I agree.

During some “let’s throw darts at the wall and see what sticks” troubleshooting of this problem, we decided to create an AD DS Site containing one domain controller and put the AGPM server into that site. Basically, we created an AD DS Subnet with one IP address (/32), that of the AGPM server, and assigned it to the newly created site. The thought process was to eliminate the use of any additional domain controller in the original Site the AGPM server was a member of; there were four. The next thing we did was ensure the GPMC being used for deployments during our testing was using that domain controller.

Well, wouldn’t you know it, the issue hasn’t occurred since. Each subsequent deployment of policies has yielded the expected results…the GPO works and there are no issues on the clients! We’re still evaluating and monitoring the situation, and yes, SYSVOL is still being checked after each deployment (a script to automate that check is sketched below). Once we’re confident the issue is gone, hopefully that won’t have to happen.
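If you find yourself stuck doing that manual SYSVOL check, a small script can at least automate it; a sketch, assuming the GroupPolicy RSAT module is available:

# Flag any GPO whose gpt.ini is missing from SYSVOL
Import-Module GroupPolicy

$domain = $env:USERDNSDOMAIN
Get-GPO -All | ForEach-Object {
    $gptIni = "\\$domain\SYSVOL\$domain\Policies\{$($_.Id)}\gpt.ini"
    if (-not (Test-Path $gptIni)) {
        Write-Warning "Missing gpt.ini for GPO '$($_.DisplayName)' ($($_.Id))"
    }
}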

The next step on our list of things to do is to move the FSMO roles, more importantly the PDCe to the domain controller for the AGPM Site. Since GPMC defaults to the PDCe, unless changed, by moving it to the AGPM Site, each time a policy is deployed, the domain controller in the AGPM Site will be used. For those of you that don’t know, the AGPM server will randomly pick a domain controller in its AD DS Site when you’re managing policies vs. using the domain controller your GPMC is using. Weird, huh?

Well, that’s all for now. If we have any further development with our testing, positive or negative, I’ll make sure to provide an update.

Roll Tide!

T-

AD: Discover what you’ve got


Hey everyone, Theron (aka T-) here, Senior Consultant with Microsoft Consulting Services (MCS) specializing in Active Directory.

I wrote a really basic script that will scour your domain and return some valuable information regarding its configuration. There are probably several things in the script that could be done differently, and if I were to go through it again I’d probably change them, but it was quickly thrown together over a year ago to fulfill a customer’s request.

The script is written in PowerShell and located here.

It performs the following:

    – Writes outputs to the console.
        – Also creates a transcript output in your Documents folder.
    – Gets forest and domain information.
    – Gets forest and domain functional levels.
    – Gets domain creation date.
    – Gets FSMO role holders.
    – Gets AD schema version.
    – Gets tombstone lifetime.
    – Gets domain password policy.
    – Gets AD backup information.
    – Checks to see if AD Recycle Bin is enabled.
    – Gets AD Sites and Subnets.
    – Gets AD Site replication links.
    – Gets AD trust information.
    – Gets users and groups information.
        – Number of users
        – Number of groups
        – Inactive accounts based on 30, 60, 90 days.
    – Lists OUs with blocked inheritance.
    – Lists unlinked GPOs.
    – Lists duplicate SPNs.
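If you just want a taste of the approach without downloading the script, here are a few of the kinds of queries involved; a minimal sketch using the ActiveDirectory module, not the full script:

Import-Module ActiveDirectory

# Forest/domain basics and functional levels
Get-ADForest | Select-Object Name, ForestMode
Get-ADDomain | Select-Object DNSRoot, DomainMode, PDCEmulator, RIDMaster

# AD schema version
Get-ADObject (Get-ADRootDSE).schemaNamingContext -Properties objectVersion

# Tombstone lifetime
Get-ADObject "CN=Directory Service,CN=Windows NT,CN=Services,$((Get-ADRootDSE).configurationNamingContext)" `
    -Properties tombstoneLifetime

# Default domain password policy
Get-ADDefaultDomainPasswordPolicy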

Enjoy.

Roll Tide!

T-

Tip: Capturing Devices to Manage in Intune Using Azure AD Connect


Working with customers who are starting their migration for identity and administration from on-premises to Azure, I see a couple options in the installation and configuration of Azure AD Connect that get missed. Particularly, once Azure AD Connect is installed and on-premises accounts are synced with Azure, customers find that their Active Directory managed devices are missing from Azure AD. And, of course, this means that Intune can’t see and manage these devices.

During the Azure AD Connect installation, there’s a configuration option available to “Configure Device Options.”

Alain Schneiter has a good blog with instructions on how to accomplish this: Configure Device Registration with Azure AD Connect

(Kudos to my teammate Jeff Gilbert for finding Alain’s blog post)

However, what if you miss configuring this option during the installation and configuration of Azure AD Connect the first time?

You can rerun the Azure AD Sync installation wizard a second time to make changes to the sync configuration.

What you can change:

  • Add more directories.
  • Change Domain and OU filtering.
  • Remove Group filtering.
  • Change optional features.

In this instance, the most common scenario for needing to rerun the sync tool is that specific OUs containing managed devices were missed during the initial configuration. By altering the configuration so that the sync picks up the additional OUs, you’ll see those missing managed devices show up in Azure AD and become manageable using Intune.

One last thing…make sure you also assign an Enterprise Mobility Suite License to the synced users.

To assign an Azure AD Premium or Enterprise Mobility Suite License

  1. Sign in to the Azure portal as an admin.
  2. On the left, select Active Directory.
  3. On the Active Directory page, double-click the directory that has the users you want to set up.
  4. At the top of the directory page, select Licenses.
  5. On the Licenses page, select Active Directory Premium or Enterprise Mobility Suite, and then click Assign.
  6. In the dialog box, select the users you want to assign licenses to, and then click the check mark icon to save the changes.
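If you’d rather script the license assignment, the (older) MSOnline module can do it; a sketch, where the tenant prefix “contoso” and the user are placeholders:

# Requires the MSOnline module: Install-Module MSOnline
Connect-MsolService

# A usage location must be set before a license can be assigned
Set-MsolUser -UserPrincipalName user@contoso.com -UsageLocation US

# Assign the Enterprise Mobility + Security license (SKU names are tenant-prefixed)
Set-MsolUserLicense -UserPrincipalName user@contoso.com -AddLicenses "contoso:EMS"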



AD: Domain controllers – discover what you’ve got


Hey everyone, Theron (aka T-) here, Senior Consultant with Microsoft Consulting Services (MCS) specializing in Active Directory.

During an engagement with a customer a couple of years ago, I needed to identify some info regarding their domain controllers. They were in the process of deploying System Center Operations Manager (SCOM) at the time, but it wasn’t monitoring the DCs yet, so I couldn’t use it for what I needed. They had ‘another’ management product that may have provided the info, but I wasn’t familiar with it and didn’t think figuring it out was worth the time it would’ve taken. Besides, that wouldn’t have been as interesting as scripting it.

So, with the assistance of a colleague, I wrote a quick script to gather pertinent info about all of the domain controllers in their environment. As with all of my scripts, there may be better ways of doing things, but this accomplished my goals. Also, with this particular script, there are probably things that could be added that would be valuable, but again, this accomplished my goals.

Basically, it’ll connect to each DC in the domain, gather the info and output it into a CSV which will be located in \Documents\Domain_Discovery_Output. The more domain controllers you have, the longer it’ll take to finish. Also, you’ll need to ensure Remote PowerShell requirements are met.

The script is written in PowerShell and located here.

It performs the following:

  •     Checks to see if Domain_Discovery_Output folder exists. 
    •         If not, creates one under $Home\Documents. 
  •     Outputs a csv file to the Domain_Discovery_Output folder. 
  •     Gathers the following information about your domain controllers: 
    •         Server Name 
    •         Domain Name 
    •         Manufacturer 
    •         Model 
    •         Physical memory 
    •         OS caption 
    •         OS version 
    •         # of cores 
    •         CPU name 
    •         IP Address 
    •         DeviceID (hard drive letter) 
    •         HD size in GB 
    •         HD free space in GB 
    •         HD % free 
    •         NTDS and SYSVOL locations 
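As a flavor of the approach, here is a stripped-down sketch of the kind of remote queries involved (the real script gathers more, and this assumes WinRM is reachable on the DCs):

Import-Module ActiveDirectory

$results = foreach ($dc in Get-ADDomainController -Filter *) {
    $os = Get-CimInstance Win32_OperatingSystem -ComputerName $dc.HostName
    $cs = Get-CimInstance Win32_ComputerSystem -ComputerName $dc.HostName
    [pscustomobject]@{
        Server       = $dc.HostName
        Domain       = $dc.Domain
        Manufacturer = $cs.Manufacturer
        Model        = $cs.Model
        MemoryGB     = [math]::Round($cs.TotalPhysicalMemory / 1GB, 1)
        OS           = $os.Caption
        OSVersion    = $os.Version
        IPAddress    = $dc.IPv4Address
    }
}
$results | Export-Csv "$Home\Documents\Domain_Discovery_Output\DCs.csv" -NoTypeInformation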

Enjoy

Roll Tide,

T-

SCOM 2019 Agent Installation Error


While providing support at a customer, I encountered a strange issue with the SCOM agent installations as shown below:

Upon investigation the findings were as follows:

The usual workaround is to delete the following three registry entries:

  • HKEY_CLASSES_ROOT\Installer\Products\Microsoft Monitoring Agent ID (D996D247BE65CC940AA413D70EF113DC)
  • HKEY_LOCAL_Machine\SOFTWARE\Microsoft\Microsoft Operations Manager
  • HKEY_LOCAL_Machine\SOFTWARE\Microsoft\System Center Operations Manager

Usually after deleting the above entries the installation works; however, in this case agent installations from the console as well as manual installations still failed with the above error, even after a server reboot.

The workaround that ultimately succeeded is below:

MOMAgent.msi /qb NOAPM=1 USE_SETTINGS_FROM_AD=0 USE_MANUALLY_SPECIFIED_SETTINGS=0 MANAGEMENT_GROUP=<ManagementGroupName> MANAGEMENT_SERVER_DNS=<ManagementServerFQDN> MANAGEMENT_SERVER_AD_NAME=<ManagementServerFQDN> SECURE_PORT=5723 ACTIONS_USE_COMPUTER_ACCOUNT=1 AcceptEndUserLicenseAgreement=1

After running the command in an elevated command prompt, the install was successful. Next, I identified that the server did not show up in the Management Console. I then checked the configuration of the Microsoft Monitoring Agent under Control Panel and identified that the Primary Management Server was showing “Not Available”.

Only once I deleted the entry and re-added it manually, with the changes applied, did the agent show up in the SCOM console for approval under Pending Management.

AKS: Enabling and using preview features such as nodepools using CLI


Most of the time we use the familiar Azure portal to consume Azure resources. That is all well and good. However, sometimes the Azure CLI is easier: once we perfect the script we can just run it, instead of clicking through the portal. In this post I present a PowerShell script that I used to:

  • (a) turn on preview features,
  • (b) register them,
  • (c) check they are turned on, and
  • (d) finally to consume them.
    As of now, some of the preview features can only be turned on by using the CLI.

Pre-requisites:

  • An Azure subscription
  • Access rights to the subscription
  • Azure CLI installed on your local (Client) machine from where you will be running the script.

First let’s get connected.

#################################################
##### AKS Preview features ######################
#################################################

## This allows you to have multiple node pools within a single cluster. 
## Now you can deploy different applications exclusively to these node pools
## Also for cluster auto-scaling, one should have node pools. Without that 
## AUTO-Scaling is not possible. Of course you can manually scale. 
## Note that node pools are built on top of the VM Scale Set capability of Azure Compute. 

az --version
az login --tenant microsoft.onmicrosoft.com 
az account set --subscription cxxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxf  

###################################
###### Set of Registrations #######
###################################

## (1) If not registered, register the container service.
az provider register --namespace Microsoft.ContainerService


## (2) Install the aks-preview extension
az extension add --name aks-preview

## (3) Update the extension to make sure you have the latest version installed
az extension update --name aks-preview

## (4) Register the feature on the Microsoft.ContainerService namespace to have the MultiAgentPool feature (which is preview)
az feature register --name MultiAgentpoolPreview --namespace Microsoft.ContainerService

## (5) Check the status of the feature - it takes time. Only when registered can you go further.
##     This should show as "registered".
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/MultiAgentpoolPreview')].{Name:name,State:properties.state}"

############################################
### Now for creating the cluster via CLI ###
############################################

# Create a basic single-node AKS cluster :: We will create all things in this RG.
az  group create --name  myResourceGroup  --location eastus2euap

## Please note the additional options of vm-set-type and load-balancer-sku 
az aks create `
    --resource-group myResourceGroup `
    --name myPreviewK8S `
    --vm-set-type VirtualMachineScaleSets `
    --node-count 2 `
    --generate-ssh-keys `
    --kubernetes-version 1.15.4 `
    --load-balancer-sku standard

az aks get-credentials --resource-group myResourceGroup --name myPreviewK8S

## Now go onto experiment with adding node pools
az aks nodepool add `
    --resource-group myResourceGroup `
    --cluster-name myPreviewK8S `
    --name mynodepool `
    --node-count 3 `
    --kubernetes-version 1.15.4
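To confirm the new pool was created, you can list the node pools (table output is just for readability):

az aks nodepool list `
    --resource-group myResourceGroup `
    --cluster-name myPreviewK8S `
    --output table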


Test read rights for user-assigned managed identity on a Linux VM in Azure Gov

$
0
0

I recently came across an issue where a user-assigned managed identity on a VM was not able to read the properties of the resource group where the VM object it was assigned to resided. As our deployment relied on these permissions being set it would fail until the permissions were added.

Normally, you could easily check this in the portal; however, in this case the user doing the deployment didn’t have portal access and had to rely on another person to add/remove the permissions. So they either had to run the deployment and wait for it to fail or succeed, or ping someone with portal access to check the permissions.

In trying to determine a method for a user without portal access to verify the permissions, I came across this article, but it was geared towards system-assigned managed identities and required giving your virtual machines read rights on the resource group. Additionally, the article only states how to test the identity in Azure Commercial, which didn’t help me as my customer was in Azure Government.

Using this article as a general guide, I pieced together the following steps:

  1. Open a terminal session to the Linux VM that has the user-assigned managed identity assigned
  2. Run the following curl command
curl 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.usgovcloudapi.net/' -H Metadata:true

You should see output similar to the following

{
  "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik02cFg3UkhvcmFMc3ByZkplUkNqU3h1VVJoYyIsImtpZCI6Ik02cFg3UkhvcmFMc3ByZkplUkNqU3h1VVJoYyJ9.eyJhdWQiOiJodHRwczovL21hbmFnZW1lbnQudXNnb3ZjbG91ZGFwaS5uZXQvIiwiaXNzIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvOGEwOWYyZDctODQxNS00Mjk2LTkyYjItODBiYjQ2NjZjNWZjLyIsImlhdCI6MTU3MTMyNDkzOCwibmJmIjoxNTcxMzI0OTM4LCJleHAiOjE1NzEzNTQwMzgsImFpbyI6IlkyRmdZUENWemJuT3UzeWljWU9vR3Evbnd5ZGlBQT09IiwiYXBwaWQiOiJiNGQ4MDAzOS01YjU4LTQzZjAtYWZlNy00ZTI5NDI3MDk1YmQiLCJhcHBpZGFjciI6IjIiLCJpZHAiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC84YTA5ZjJkNy04NDE1LTQyOTYtOTJiMi04MGJiNDY2NmM1ZmMvIiwib2lkIjoiNjIxMGZkOGMtNTYwZS00OTllLTlmYTItMWFlYjZiZmUyZjY0Iiwic3ViIjoiNjIxMGZkOGMtNTYwZS00OTllLTlmYTItMWFlYjZiZmUyZjY0IiwidGlkIjoiOGEwOWYyZDctODQxNS00Mjk2LTkyYjItODBiYjQ2NjZjNWZjIiwidXRpIjoibjZBX2RSUEIzRUt1a1lWYU1ISUVBQSIsInZlciI6IjEuMCIsInhtc19taXJpZCI6Ii9zdWJzY3JpcHRpb25zL2Q4YWJiNWZkLTlkMDAtNDhmZC04NjJhLTBmNzc4MzA2Y2NlNy9yZXNvdXJjZWdyb3Vwcy9BTlNJQkxFLVJHL3Byb3ZpZGVycy9NaWNyb3NvZnQuTWFuYWdlZElkZW50aXR5L3VzZXJBc3NpZ25lZElkZW50aXRpZXMvQW5zaWJsZS1NYW5hZ2VkSWQifQ.MDBjxDLSOLlZs3bbFVH9NjR2_qY4vqbFynXaqsxNcfsBLv8XXXFZPSqBBNk7Ig8hQoNAjOWjT9W0FYw_KzLzWpUs4O1fSsuuqvEzIfml1H2hDn4-I-6bHxC3Il_9wt6njaH4vj31lWXOtNhynOaNl9jPuz4jAOJtbVlMR7ryCa9gZz3f_RCr3ShhkSpXmRU2RP-9c4KbLxSxr3ZYDyuHZ6u66PnDrX5-CyoMUKem3FBSsC29DZURaAMbjYr62gT9HJc7tYuXYvjBuG12suvHslLg1yWfFPxS5Td0pxSZMnc8JdonveOI5MmcW6FySi-5v7JNwH8yf7adr-eHYq0AcQ",
  "client_id": "b4d80039-5b58-43f0-afe7-4e29427095bd",
  "expires_in": "28800",
  "expires_on": "1571354038",
  "ext_expires_in": "28800",
  "not_before": "1571324938",
  "resource": "https://management.usgovcloudapi.net/",
  "token_type": "Bearer"
}
  3. From the output, copy the access_token value, which will leave us with this

eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik02cFg3UkhvcmFMc3ByZkplUkNqU3h1VVJoYyIsImtpZCI6Ik02cFg3UkhvcmFMc3ByZkplUkNqU3h1VVJoYyJ9.eyJhdWQiOiJodHRwczovL21hbmFnZW1lbnQudXNnb3ZjbG91ZGFwaS5uZXQvIiwiaXNzIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvOGEwOWYyZDctODQxNS00Mjk2LTkyYjItODBiYjQ2NjZjNWZjLyIsImlhdCI6MTU3MTMyNDkzOCwibmJmIjoxNTcxMzI0OTM4LCJleHAiOjE1NzEzNTQwMzgsImFpbyI6IlkyRmdZUENWemJuT3UzeWljWU9vR3Evbnd5ZGlBQT09IiwiYXBwaWQiOiJiNGQ4MDAzOS01YjU4LTQzZjAtYWZlNy00ZTI5NDI3MDk1YmQiLCJhcHBpZGFjciI6IjIiLCJpZHAiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC84YTA5ZjJkNy04NDE1LTQyOTYtOTJiMi04MGJiNDY2NmM1ZmMvIiwib2lkIjoiNjIxMGZkOGMtNTYwZS00OTllLTlmYTItMWFlYjZiZmUyZjY0Iiwic3ViIjoiNjIxMGZkOGMtNTYwZS00OTllLTlmYTItMWFlYjZiZmUyZjY0IiwidGlkIjoiOGEwOWYyZDctODQxNS00Mjk2LTkyYjItODBiYjQ2NjZjNWZjIiwidXRpIjoibjZBX2RSUEIzRUt1a1lWYU1ISUVBQSIsInZlciI6IjEuMCIsInhtc19taXJpZCI6Ii9zdWJzY3JpcHRpb25zL2Q4YWJiNWZkLTlkMDAtNDhmZC04NjJhLTBmNzc4MzA2Y2NlNy9yZXNvdXJjZWdyb3Vwcy9BTlNJQkxFLVJHL3Byb3ZpZGVycy9NaWNyb3NvZnQuTWFuYWdlZElkZW50aXR5L3VzZXJBc3NpZ25lZElkZW50aXRpZXMvQW5zaWJsZS1NYW5hZ2VkSWQifQ.MDBjxDLSOLlZs3bbFVH9NjR2_qY4vqbFynXaqsxNcfsBLv8XXXFZPSqBBNk7Ig8hQoNAjOWjT9W0FYw_KzLzWpUs4O1fSsuuqvEzIfml1H2hDn4-I-6bHxC3Il_9wt6njaH4vj31lWXOtNhynOaNl9jPuz4jAOJtbVlMR7ryCa9gZz3f_RCr3ShhkSpXmRU2RP-9c4KbLxSxr3ZYDyuHZ6u66PnDrX5-CyoMUKem3FBSsC29DZURaAMbjYr62gT9HJc7tYuXYvjBuG12suvHslLg1yWfFPxS5Td0pxSZMnc8JdonveOI5MmcW6FySi-5v7JNwH8yf7adr-eHYq0AcQ

  4. Now run the following command, replacing SUBSCRIPTIONID, RESOURCEGROUP, and ACCESSTOKEN with the information relevant to your environment.
curl https://management.usgovcloudapi.net/subscriptions/SUBSCRIPTIONID/resourceGroups/RESOURCEGROUP?api-version=2016-09-01 -H "Authorization: Bearer  ACCESSTOKEN"

In the above command, the RESOURCEGROUP parameter should be the name of the resource group that you’re testing read access on.

You should see the following output

{"id":"/subscriptions/SUBID/resourceGroups/RG","name":"RG","location":"usgovvirginia","tags":{},"properties":{"provisioningState":"Succeeded"}}

If you see the below error, it means the managed identity does not have read access

{"error":{"code":"AuthorizationFailed","message":"The client '6210fd8c-560e-499e-9fa2-1aeb6bfe2f64' with object id '6210fd8c-560e-499e-9fa2-1aeb6bfe2f64' does not have authorization to perform action 'Microsoft.Resources/subscriptions/resourceGroups/read' over scope '/subscriptions/SUBID/resourceGroups/RG' or the scope is invalid. If access was recently granted, please refresh your credentials."}}

AD: Nitty Gritty of Fine-Grained Password Policies


Hey everyone, Theron (aka T-) here, Senior Consultant with Microsoft Consulting Services (MCS) specializing in Active Directory.

Fine-Grained Password Policies (FGPP) have been around for a while, but in my experience with various customers, they aren’t used often, if at all. This post is an attempt to simplify them, provide some details and list some of the PowerShell CMDLets you can use to manage them. There are plenty of resources out there that outline how to implement them, so I won’t get into that.

FGPP? What?

Windows Server 2008 and above operating systems provide organizations with a way to define different password and account lockout policies for different sets of users in a domain. In Windows 2000 Server and Windows 2003 Server Active Directory domains, only one password policy and account lockout policy could be applied per domain. These settings were specified in the Default Domain Policy for the domain. Thus, organizations that wanted different password and account lockout settings for different sets of users had to either create a password filter or deploy multiple domains.

You can use Fine-Grained Password Policies to specify multiple password policies within a single domain. You can also use them to apply different restrictions for password and account lockout policies to different sets of users in a domain. For example, you can apply more restrictive settings to privileged accounts and less restrictive settings to the accounts of regular users. In other cases, you might want to apply a special password policy for accounts whose passwords are synchronized with other data sources.

For more details, refer to this.

‘Fine-Grained’ Details:

Here are some of the details of FGPPs that may help you understand their use a little better:

  • For the Fine-Grained Password Policy and account lockout policies to function properly in a given domain, the domain functional level of that domain must be set to Windows Server 2008 or greater.
  • Fine-Grained Password Policies apply only to global security groups and user objects (or inetOrgPerson objects if they are used instead of user objects).
  • A Fine-Grained Password Policy is referred to as a Password Settings Object (PSO) in Active Directory.
  • Permissions: By default, only members of the Domain Admins group can create PSOs. Only members of this group have the Create Child and Delete Child permissions on the Password Settings Container object in Active Directory.
    • In addition, only members of the Domain Admins group have Write Property permissions on the PSO by default. Therefore by default, only members of the Domain Admins group can apply a PSO to a group or user.
    • The appropriate rights to create and apply PSOs can be delegated, if needed.
  • Delegation: You can delegate Read Property permission of a PSO to any other group (such as Help desk personnel or a management application) in the domain or forest. This allows the delegated group to see the actual settings in a PSO.
    • Users can read the msDS-ResultantPSO or the msDS-PSOApplied attributes of their user object in Active Directory, but these attributes display only the distinguished name of the PSO that applies to the user. The user cannot see the settings within that PSO.
  • A PSO has attributes associated with all of the settings that can be defined in Account Policies section of a Group Policy, except for Kerberos settings.
    • Enforce password history
    • Maximum password age
    • Minimum password age
    • Minimum password length
    • Passwords must meet complexity requirements
    • Store passwords using reversible encryption
    • Account lockout duration
    • Account lockout threshold
    • Reset account lockout after

In addition, a PSO also has the following attributes:

  • msDS-PSOAppliesTo. This is a multivalued attribute that is linked to users and/or group objects.
  • Precedence. This is an integer value that is used to resolve conflicts if multiple PSOs are applied to a user or group object.
    • Settings from multiple PSOs are not cumulative. Only the PSO with the highest precedence, lowest number, is applied.

Read that last bullet again, it’s important!!

PowerShell and all of its Goodness:

While there are several ways to get information about a PSO, assign a PSO, remove assignment of a PSO, or to figure out what settings are applied to a user/group, PowerShell is the easiest…in my opinion.

Get all of the details of a PSO:

Get-ADFineGrainedPasswordPolicy '<PSOName>' -Properties *

Get the groups and users to which a PSO is applied:

Get-ADFineGrainedPasswordPolicySubject -Identity '<PSOName>'

Get the resultant password policy for a group or user:

Get-ADUserResultantPasswordPolicy -Identity '<TargetName>'

Assign PSO to a group or user:

Add-ADFineGrainedPasswordPolicySubject -Identity '<PSOName>' -Subjects '<GroupOrUser>'

Remove PSO from a group or user:

Remove-ADFineGrainedPasswordPolicySubject -Identity '<PSOName>' -Subjects '<GroupOrUser>'
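And if you need to create a PSO in the first place, that is PowerShell too; a sketch with illustrative values:

# Create a stricter PSO for admin accounts (values are examples, not guidance)
New-ADFineGrainedPasswordPolicy -Name 'AdminAccountsPSO' -Precedence 10 `
    -MinPasswordLength 15 -PasswordHistoryCount 24 -ComplexityEnabled $true `
    -MaxPasswordAge (New-TimeSpan -Days 60) -MinPasswordAge (New-TimeSpan -Days 1) `
    -LockoutThreshold 10 -LockoutDuration (New-TimeSpan -Minutes 30) `
    -LockoutObservationWindow (New-TimeSpan -Minutes 30)

# Then apply it, e.g. to a global security group of admin accounts
Add-ADFineGrainedPasswordPolicySubject -Identity 'AdminAccountsPSO' -Subjects 'Tier0Admins'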

To recap, Fine-Grained Password Policies are a way to apply different password/account lockout policies to various users/groups within a domain. Using them to shorten the password age of your administrative accounts is a sure way of improving security by forcing their passwords to be changed more often. Who isn’t up for improved security?

Roll Tide!

T-
