Secure Infrastructure Blog

SCOM: MSSQLServer Event ID 28005


Hey everyone, Theron (aka T-) here, Senior Consultant with Microsoft Consulting Services (MCS) specializing in Active Directory, but I also dabble in all things System Center.

Recently, while implementing SCOM 2019 in a customer’s environment, I ran into an issue when trying to install agents; the discovery wizard would never complete the ‘discovery’ process. After making sure that my account was granted ‘Log on as a service’ to the management server, I started going through event logs. There really wasn’t anything of interest in the Operations Manager log of the management server, so I moved onto the SQL server hosting the OperationsManager database.

Well, there was lots of interesting things in its Application log….LOTS. The primary event of interest and what I focused on was Event ID: 28005.

Log Name:      Application
Source:        MSSQLSERVER
Date:          10/24/2012 4:55:48 PM
Event ID:      28005
Task Category: Server
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      omdb.om.domain
Description:
An exception occurred while enqueueing a message in the target queue. Error: 15404, State: 19. Could not obtain information about Windows NT group/user ‘domain\OmSDK’, error code 0x5.

– Credit goes to the TechNet blogger mentioned later for those details; I copied them from his post.

Essentially, the event is saying that information about the SDK account couldn’t be ‘obtained’ from Active Directory. Really?

I shook my head, literally LOLed and said to myself, “that can’t be true”, but I’m a firm believer in checking all leads, regardless of how far fetched they may be. So, I opened ADUC, found the SDK account and looked at its ACL. I wanted to see if the SQL server or its service account had rights, any rights, through any means, to it.

Wouldn’t you know it? My literal LOL came back to bite me in the backside. I couldn’t find an ACE granting either SQL account, server or service, any rights to the SDK account, not even read. The event being generated now made sense, I think. More on that later.

Now you may be asking yourself, like I did, “why didn’t those SQL accounts have rights to read the SDK account? After all, Authenticated Users should have rights to read just about anything in the directory, right?”. Technically, you’re right, those SQL accounts by default should have read rights to just about anything in the directory….buuuuuuttttt, this wasn’t the case; Authenticated Users didn’t have any rights to the account. And before anyone asks, no, I didn’t look into why. Didn’t have time nor desire at that point, I just wanted to get SCOM functioning correctly.

Before I started “shooting from the hip” and made any changes, I scoured the internet, searching high and low for just about everything that made sense regarding this issue. I found a lot of information about the SQL Server Service Broker not running. Contrary to what I said earlier about being “a firm believer in checking all leads, regardless of how far fetched they may be”, I didn’t check this one, because I had already verified the service was running.

So I kept searching. Finally, when all hope seemed lost and I was about to make the changes I thought needed to be made, I found a TechNet blog post from 2012 that not only links to details about how the Discovery process works, but describes an issue IDENTICAL to the one I was seeing. Thank you, baby Yoda!!! Kudos to Łukasz Rutkowski for the post. If you’re into screen shots, make sure to check out the post. FWIW, I couldn’t produce screen shots of the actual issue due to security protocol, nor could I replicate it in my lab.

Tired of reading at this point, what’s the fix?!?!

After reading the post, which validated my thoughts about a possible fix, I granted the SQL service account read rights on the SDK account. After doing so, all of the SQL events stopped! Winning! Also, I could discover servers and install the agent. Double winning!
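If you’d rather script that permission change than click through ADUC, here is a minimal sketch using the ActiveDirectory module and its AD: drive. The names ‘OmSDK’ and ‘CONTOSO\sqlsvc’ are placeholders for your SDK account and SQL service account, so adjust them for your environment.

# Grant the SQL service account generic read on the SCOM SDK account object
# 'OmSDK' and 'CONTOSO\sqlsvc' are illustrative names - substitute your own
Import-Module ActiveDirectory

$sdkDn  = (Get-ADUser -Identity 'OmSDK').DistinguishedName
$sqlSvc = New-Object System.Security.Principal.NTAccount('CONTOSO', 'sqlsvc')

$acl = Get-Acl -Path "AD:\$sdkDn"
$ace = New-Object System.DirectoryServices.ActiveDirectoryAccessRule($sqlSvc, [System.DirectoryServices.ActiveDirectoryRights]::GenericRead, [System.Security.AccessControl.AccessControlType]::Allow)
$acl.AddAccessRule($ace)
Set-Acl -Path "AD:\$sdkDn" -AclObject $acl

After applying the change, re-check the ACL and confirm the 28005 events stop being logged.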

Simple fix to a baffling problem. Well, baffling to me, anyway.


Boldly Going Part 2: Shakedown Cruise


Building the Crew

In an earlier blog I laid out some basics needed to build your own portable lab. I focused on the hardware needed and the considerations you need to keep in mind to ensure a successful effort. The hardware, however, is merely a tool (albeit a useful one) and needs a proper configuration to make it produce the results you are looking to achieve. In other words, if you don’t feed that dog, it won’t warp!

Dilithium Schmilithium!

The next step in our bold voyage (See? A theme!) of undiscovered labbing is to enable Hyper-V. For this discussion I am going to assume it is disabled. I am going to demonstrate a couple of different ways to do this: by using the GUI and then programmatically using PowerShell. So let’s get on with the mission.

Grinding the GUI

Installing Hyper-V in the GUI is pretty straightforward; we’ll find it in the Control Panel. Type ‘control’ in the search bar and click on its icon when it appears in the list:

Once Control Panel is open, start typing ‘feature’ in the search window and then select ‘Turn Windows Features On or Off’

In the ‘Features’ window, check the checkbox next to Hyper-V, expand the selection and make sure that all the sub-checkboxes are also checked. Click on ‘OK’ to finish the install and close out of Control Panel.

At this point Hyper-V is enabled and ready to go. A reboot may be required depending on the Operating System (OS) you’re running.

Warping with the Shell

The GUI is great and, as you’ve seen, an easy interface to work with, but what if you crave a little automation to open up other options? Well then PowerShell may be a better answer for you. The advantages of using PowerShell are many: faster installs, consistent installs across machines, and remote installation across networked machines, just to name a few.

Now I want to point out that I will be demonstrating two versions of the same code. This is due to the differences between how features are installed on a client OS versus a Server OS. Whichever OS you’re running on your host machine, the following code examples will allow you to enable Hyper-V.

Before proceeding, be sure to open your PowerShell session in an elevated context as administrative rights are required to add this feature to the machine.

On a client OS you will need to have the DISM module loaded in order to proceed. To do this, run the following:

Import-Module DISM
Get-Module

Your results should resemble this:

The server OS requires the ServerManager module to be loaded. To do this, run the following:

Import-Module ServerManager
Get-Module

Your results should resemble this:

With the necessary management modules in place we are now ready to enable Hyper-V. Depending on your OS type, run one of the following snippets:
Client OS Code:

 Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All 
Server OS Code:
 Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart 

NOTE: The client OS will prompt for a reboot; you can suppress the prompt with the -NoRestart switch, but Hyper-V won’t be available until the machine has restarted.
Below are examples of the code running on a client and a server:
Installing on a Client OS
Installing on a server OS, the reboot is automatic
Engineer’s Formula for an Awesome Reputation (Estimated Time x 4 = Godlike Rep)

Montgomery Scott said it himself, “Never give them a real estimate of a repair time lad! Multiply by four then deliver on your real estimate! They’ll think you’re a bloody miracle worker!”. There is no way we’ll get those kinds of results, so we’ll just settle for it being a little quicker and, most importantly, successful. To do that, our next step is to verify that Hyper-V is actually enabled.

Log in to the host and re-open the PowerShell console in an elevated context. Run the appropriate feature-listing command for the OS you are using, as demonstrated below:

Using Get-WindowsOptionalFeature to verify Hyper-V (Client OS)
Using Get-WindowsFeature to verify Hyper-V (Server OS)
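If the screenshots aren’t handy, these are roughly the commands being run; the feature names match the install snippets earlier and may vary slightly by OS build.

# Client OS: check the state of the Hyper-V optional feature
Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V

# Server OS: check the state of the Hyper-V role
Get-WindowsFeature -Name Hyper-V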

Stardate Undecided (…the continuing missions…)

The portable lab is a gateway to a new universe. It gives you a local toolkit to build out almost any lab scenario you may need (within the restrictions of your hardware). It will also prepare you for working with Hyper-V Servers and System Center Virtual Machine Manager (SCVMM). SCVMM is designed for managing large virtual environments but if you have the hardware to support it, it is well worth the effort of experimenting with it. The contrasts between it and VMware ESXi are interesting and well worth exploring.

Understanding the intricacies of virtualization is a major benefit of exploring lab scenarios in Hyper-V. The knowledge you gain will also help you to learn and better understand Azure and its various deployment models. The basic foundations for both are similar.

For those who would like to further explore lab automation, I again recommend Jaromir Kaspar’s website. There are several hydration (rapid lab deployment) kits and projects out there; two that I think are worth investigating are:

  • PSAutoLab – A GitHub project created by Jason Helmick and Melissa Janusko. It is a powerful hydration tool.
  • AutomatedLab – Enables you to setup test and lab environments on Hyper-V or Azure with multiple products or just a single VM in a very short time.

I hope you will enjoy playing with and learning in your lab journeys. I will be posting more fast-fix and simple-trick blogs in the ‘Boldly Going’ series. I hope they will help to tweak your lab experiences and work around any potential gotchas. Until then, “Lab long and prosper“.

Communicate with Confidence – Taking the fear out of public speaking – Part 2


In my previous post, I talked about interpersonal communication and the four levels of conversation. Level 1 started with small talk while at the other end, Level 4, one finds themselves in a full-blown relationship with the other person. We learned that people could have a fear of speaking to others one-on-one as well as speaking in front of groups. In this post, we will discuss tips and tricks to help ease the anxiety of talking to others as well as having those “difficult” conversations we must all face at some time in our careers.

Talking Tips and Tricks

Tip #1 Breathe!

This tip may seem obvious. The next time, however, you are feeling anxious about talking to someone, notice the way you are breathing. You are likely holding your breath or breathing irregularly. This irregular breathing may cause some people to hyperventilate.  According to healthline.com, “Anxiety can affect your breathing. On the other hand, your breathing can affect feelings of anxiety.” Sounds like a catch-22 to me! Breathing properly helps to relieve anxiety as it slows down the heart rate almost immediately.

Few people breathe correctly. To do so, breathe in through your nose to the count of 5, hold your breath for 2-3 seconds, then exhale through your mouth to the count of 5.  When you inhale properly, you should feel your chest expand and feel almost as if you are getting taller. Make sure you inhale all the way into your abdomen dropping your shoulders down. When you exhale, you should feel the air flowing out from the abdomen to the chest and out your mouth letting your muscles relax. You will feel almost as if you are getting shorter.  Think of an eye dropper on a bottle filling up with liquid and then releasing it. Repeat this breathing several times when you are feeling anxious about talking with another person.

Tip #2 – Maintain Eye Contact – Sort of!

You’ve always been told, to be taken seriously you must hold eye contact with the other person. Holding eye contact tells the other person, “I am listening” and let’s face it, we all want to be heard. What you probably didn’t know is there is a hack that allows you to give the other person the feeling you are looking them in the eye even when you are not. How? Look at their face between the eyes and just slightly above the nose. Because your eyes will be directed in an area very close to the eyes, it will be very difficult for the other person to realize you are not gazing into their eyes.

Of course, the ultimate goal is to be able to make eye contact. Why? The eyes can tell you more about a person than almost anything. The eyes can tell if they are interested in you, if they are interested in what you are saying, if their smile is genuine, or if they are being deceptive. Likewise, your eyes can tell the other person the same about you!

Begin by practicing with yourself. Can you look yourself in the eyes while looking in the mirror? If not, begin with just a few seconds and gradually increase until you can comfortably look yourself in the eye for several minutes. Then, move on to someone you know and follow the same pattern. Finally, try looking a stranger in the eye when speaking with them even if only for a few seconds. It’s okay if you look away. Like anything else, it will take practice to be able to do so comfortably; you will find it is worth the effort!

Tip #3 – Practice!

I mentioned Toastmasters International in my previous post. Their club meetings are broken into different segments. I believe the most important portion of the meeting is what is known as “Table Topics.” While there are several variations on how Table Topics is conducted, the goal is the same across clubs. “Table Topics is a long-standing Toastmasters tradition intended to help members develop their ability to organize their thoughts quickly and respond to an impromptu question or topic.” In other words, Table Topics was designed to practice thinking on your feet.

You can join a local Toastmasters club or work with a friend for practice and feedback. Ask your friend to come up with several questions on different topics. Then, have them pick one of the questions to ask you. Once they ask the question, pause for a few moments to think about your answer and then answer the question in 1-2 minutes. Remember, even in the real world, it is okay to pause and consider your answer when asked a question instead of blurting out the first thing that comes to mind. Your listener will appreciate you took the time to think about your answer!

Tip #4 – The Underwear Trick

Yes, I said the underwear trick. I have been told many times, to alleviate your fears, picture your audience in their underwear. Unfortunately, this is not effective. Why? Concentrating on visualizing your audience in their underwear will cause one of three things to occur.

  1. You will be so focused on the visualization, you will not hear what they are saying.
  2. You will be so focused on the visualization, you will forget what you are saying.
  3. Once you do get the image into your head, you may find it amusing and laugh at a very inappropriate time!

Difficult Discussions

Even if we are comfortable with someone, we may have to have a difficult or challenging conversation at some point. In addition to utilizing the above tips, the following should help get you through those conversations we just don’t like to have with others.

Tip #5 – Seating

People’s receptiveness to what you are saying will be affected by where you sit or stand in relation to them. If you want to garner cooperation, keep the following in mind.

Sitting directly across from the other person can put one on the defensive as the table acts as a competitive barrier. If you are seeking to persuade someone, this position will not help to produce the desired results.

Sitting at the table in what is known as the corner position (each person sits on one side of the same corner) allows for good eye contact, opportunity to assess body language and gestures, and avoids the competitive barrier of the table.

Keep the same eye level – if the other person is standing, stand. If the other person is sitting, sit keeping in mind the sitting positions mentioned above. If you stand while the other person is sitting, you will give the impression of speaking down to the other person. Since they will already be on the defensive, you don’t want to make it more difficult by increasing their defensive posture.

In the opposite scenario, if you are sitting while the other person stands, it lessens your position of authority as the other person is now looking down at you. They will find it difficult to take you seriously if you sit while they stand. What if you sit behind a desk you ask? You are introducing that barrier again, much like the table mentioned above. Therefore, you are more apt to put the person on the defensive by creating that competitive barrier.

Tip #6 – Speaking, Listening, Feeling, Believing

  1. Keep your tone of voice neutral controlling how fast or slow you speak. This will allow the other person to hear your words and not your emotions which could negatively affect their ability to receive your message.
  2. Do not attack the other person. Describe the situation as you perceive it in a clear and factual manner. Use specific examples. If you can’t use specific examples, you will not garner agreement or cooperation from the other person on the issue.
  3. Be an active listener. Give the other person a chance to respond and do not interrupt or be thinking about how you will respond to their comments. They may have some valid reasons or concerns. When they are finished speaking, ask clarifying questions. Showing you were actively listening and wanting to get to the truth of the matter will go a long way in resolving the issue being discussed.
  4. Give up your need to be right. Needing to be right puts you back into that competitive stance. People are uncomfortable when placed in a win or lose situation and will usually do what is necessary to win. It’s not about who is right and who is wrong. It’s about resolving whatever the issue may be in a cooperative manner.
  5. Take responsibility for how you feel and don’t make assumptions. Only you can let yourself feel the way you do. Begin statements with “I feel…” or “I believe…”. Address the person’s actions that are causing the issue and not the person. And do not assume you know or understand what the other person is thinking, feeling, or their motivations. Every person is different, and every person can change or respond differently based on the situation.
  6. Finally, think about how your personal biases may be influencing your perception of a situation and give them the benefit of the doubt. Try to see things from their side and consider factors you may not have known about or considered that could have affected their decision. Realize mistakes are caused by situations not by one’s personality. Learn to assume the best of a person and they will most often rise to the occasion.

What We’ve Learned

In this article, we learned everyone must have difficult and challenging conversations, whether in our work life or our personal life. There are ways, however, to help these conversations go more smoothly and reach a desired outcome. Two excellent articles that dig deeper into tips and tricks are listed at the bottom of this post. Maintaining composure will take practice for most of us and that’s okay. Before that difficult conversation, practice what you will say and how you will say it. Practice in front of a mirror or record yourself on your phone to allow you to see how you look and sound. Take a deep breath and go for it!

Next Time

In my next post, we will dig into public speaking and what makes for a great presentation. See you then!

– pzj –

References:
https://westsidetoastmasters.com/resources/book_of_body_language/chap17.html
https://www.psychologytoday.com/us/blog/some-assembly-required/201703/how-have-difficult-conversations
https://www.toastmasters.org/

AppLocker – Part 2


Introduction:
In the previous blog we looked at the two paths, “whitelisting” and “blacklisting”, you could follow when implementing AppLocker. In this blog I will look at the AppLocker rules, rule conditions and how to enforce them.

NB: The Application Identity service must be running for AppLocker to function. A GPO can be used to configure the service to start automatically.

AppLocker Group Policy
AppLocker is configured via GPO by creating various rules to either allow or deny applications. The AppLocker GPO setting can be found under Computer Configuration\Policies\Windows Settings\Security Settings\Application Control Policies\AppLocker

AppLocker Rules
AppLocker is organized into four areas called rule collections. The four rule collections are executable files, scripts, Windows Installer files and Packaged app. The following table lists the file formats included in each rule collection.

Rule collection Associated file formats
Executable .exe .com
Scripts .ps1 .bat .cmd .vbs .js
Windows Installer .msi .msp .mst
Packaged Apps Packaged apps and packaged app installers: .appx

AppLocker Rule Conditions
Rule conditions are criteria that the AppLocker rule is based on. Primary conditions are required to create an AppLocker rule. The three primary rule conditions are:

  • Publisher
  • Path
  • File Hash

Publisher

  • This condition identifies an application based on its digital signature and extended attributes. The digital signature contains information about the company that created the application (the publisher). The extended attributes, which are obtained from the binary resource, contain the name of the product that the application is part of and the version number of the application. The publisher may be a software development company, such as Microsoft, or the information technology department of your organization.
  • Publisher conditions can be created to allow applications to continue to function even if the location of the application changes or if the application is updated.
  • When you select a reference file for a publisher condition, the wizard creates a rule that specifies the publisher, product, file name, and version number. You can make the rule more generic by moving the slider down or by using a wildcard character (*) in the product, file name, or version number fields.

Path

  • The Path condition identifies an application by its location in the file system of the computer or on the network.
  • AppLocker uses its own path variables for directories in Windows. (See the table below)
  • AppLocker does not enforce rules that specify paths with short names. You should always specify the full path to a file or folder when creating path rules so that the rule will be properly enforced.
Windows directory or drive AppLocker path variable Windows environment variable
Windows %WINDIR% %SystemRoot%
System32 %SYSTEM32% %SystemDirectory%
Windows installation directory %OSDRIVE% %SystemDrive%
Program Files %PROGRAMFILES% %ProgramFiles% and %ProgramFiles(x86)%
Removable media (for example, CD or DVD) %REMOVABLE%
Removable storage device (for example, USB flash drive) %HOT%

File Hash

  • When the file hash condition is chosen, the system computes a unique cryptographic hash of the identified file that is based on the SHA256 algorithm that Windows uses. The hash condition type is unique. Therefore, each time a publisher updates a file, you must create a new rule.
  • For files that are not digitally signed, file hash rules are more secure than path rules.
  • Allows applications, which may not be signed by their publishers, to be managed under AppLocker.
  • The advantage is that, because each file has a unique hash, a file hash rule condition applies to only one file.
  • The disadvantage is that each time the file is updated (such as by a security update or upgrade) the file’s hash changes, so the existing rule no longer matches the file and a new rule must be created.

AppLocker Enforcement

AppLocker rule enforcement can be configured in the GPO by navigating to Computer Configuration\Policies\Windows Settings\Security Settings\Application Control Policies\AppLocker and clicking on Configure rule enforcement.

The rule enforcement options are as follows:

  • Not configured (If rules are present in the corresponding rule collection, they are enforced. If rule enforcement is configured in a higher-level linked Group Policy object (GPO), that enforcement value overrides the Not configured value.)
  • Enforce rules (Rules are enforced for the rule collection, and all rule events are audited.)
  • Audit only (Rule events are audited only. Use this value when planning and testing AppLocker rules.)

    IMPORTANT: By default, AppLocker blocks all Packaged Apps if an EXE ruleset exists without a Packaged App ruleset
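If you want to experiment with rule conditions and enforcement outside of the GPO editor, the built-in AppLocker PowerShell cmdlets can generate and apply rules. Below is a minimal sketch; the folder path is only an example, and you would normally review the generated rules and start in audit mode before enforcing anything.

# Generate publisher and hash rules for the executables in a sample folder
Import-Module AppLocker
$fileInfo = Get-AppLockerFileInformation -Directory 'C:\Program Files\ContosoApp' -Recurse -FileType Exe
$policy   = New-AppLockerPolicy -FileInformation $fileInfo -RuleType Publisher,Hash -User Everyone -Optimize

# Review the rules, then merge them into the effective local policy
$policy | Format-List
Set-AppLockerPolicy -PolicyObject $policy -Merge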

Conclusion
In this blog we looked at the rules and rule conditions for AppLocker. It is important to understand each rule condition to ensure you apply the rules effectively.

In the next blog we will look at AppLocker in Audit mode.

Protect Administrative Accounts with Authentication Policies and Silos


Introduction

One of the recommendations for protecting privileged accounts from credential theft is to prevent administrative accounts from exposing credentials to unsecure computers. In this post I will show you how to protect administrative accounts using authentication policies and silos.

Definition

A quick definition from the Microsoft web site:
Authentication policy silos and the accompanying policies provide a way to contain high-privilege credentials to systems that are only pertinent to selected users, computers, or services. Silos can be defined and managed in Active Directory Domain Services (AD DS) by using the Active Directory Administrative Center and the Active Directory Windows PowerShell cmdlets.

Scenario

The company Lab.dz is following MS best practices 😉 . The environment is configured with the tiering model, and T0 admins use a dedicated administrative workstation to access T0 servers. For more information about the tiering model, please read the article.

Ok, let’s discover the lab.dz environment. Lab.dz contains one domain controller DC01, two member servers MEM01 and MEM02, and some client computers. Domain admins use a dedicated administrative workstation named PAW0 to access domain controllers. In the next steps we will configure an authentication policy and silo to ensure that domain admins are unable to authenticate to any device except DC01 and PAW0.

Here is the input we need.

Who are the T0 admins? Two administrators, Amine and Mehdi.

On which computers should Amine and Mehdi have access? It’s easy 🙂 The domain controller DC01 and the administrative workstation PAW0.

Step by Step

Now we will go step by step through configuring the authentication policy and silo.

1) Ensure that domain Functional level is 2012 R2 or higher.

Lab.dz domain has DFL of 2016.



2) Configure domain controllers KDC to support claims, compound authentication and Kerberos armoring.

All we need to do is enable the policy setting KDC support for claims, compound authentication and Kerberos armoring under the following path:

Computer Configuration\Policies\Administrative Templates\System\KDC

In my lab environment, the setting is enabled on default domain controller policy.

You can confirm application of the setting on domain controllers by checking the attribute msDS-SupportedEncryptionTypes of the krbtgt account. The attribute will take the value of 5000.
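A quick way to read that attribute, assuming the Active Directory PowerShell module is available on the domain controller, is:

# Read msDS-SupportedEncryptionTypes from the krbtgt account
Get-ADUser -Identity krbtgt -Properties msDS-SupportedEncryptionTypes |
    Select-Object Name, msDS-SupportedEncryptionTypes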



3) Configure client computers with the setting Kerberos client support for claims, compound authentication and Kerberos armoring.

All we need to do here is enable the policy setting Kerberos client support for claims, compound authentication and Kerberos armoring under the following path:

Computer Configuration\Policies\Administrative Templates\System\Kerberos

In my lab environment, the setting is enabled on default domain policy.

4) Configure the Authentication Policy and the Authentication Policy Silo.

Open Active Directory Administrative Center.

Under the Authentication node, right click Authentication Policies and create a new authentication policy.


Give it a name and make sure to select Enforce policy restrictions. In my example the authentication policy is named T0-Authentication-Policy.



On the User Sign On part you can optionally reduce user TGT lifetime. In my example I reduced TGT lifetime to two hours. Click OK to create the authentication policy.


Under the Authentication node, right click Authentication Policy Silos and create a new authentication policy silo.


Give it a name and make sure to select Enforce silo policies. In my example the authentication policy silo is named T0Silo.


On the Permitted Accounts part add the accounts that need to be member of the silo.


On the Authentication Policy part:

  • Select Use a single policy for all principals that belong to this authentication policy silo
  • Select T0-Authentication-Policy and click OK to create the authentication policy silo.


We will go through properties of the accounts: DC01, PAW0, Amine, Mehdi and assign them T0Silo Authentication Policy Silo. I will show the steps for DC01.

Right click DC01 and click properties.



Assign T0Silo and click OK.


If you return to the authentication policy silo T0Silo you will see the green mark indicating that accounts are assigned.


Finally, go back to T0-Authentication-Policy.

Right click and click properties.


On User Sign On part click Edit


Click Add a condition.


Put the condition as on the screenshot below and click OK.


Click OK to validate the condition.


We must restart the computers that are members of the silo T0Silo so they detect that they are in T0Silo (the restart forces computer re-authentication with the AuthenticationSilo claim).

After restarting DC01 and PAW0, the user Mehdi can log on to DC01 and PAW0 but not to MEM01 and MEM02.
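For reference, the same configuration can be scripted with the Active Directory module. The sketch below mirrors the lab above (policy, silo, membership and the AuthenticationSilo condition); treat the SDDL string as an assumption to validate against your own policy, and adjust names as needed.

Import-Module ActiveDirectory

# Authentication policy with a 120-minute user TGT lifetime, restricted to hosts in T0Silo
New-ADAuthenticationPolicy -Name 'T0-Authentication-Policy' -Enforce -UserTGTLifetimeMins 120 `
    -UserAllowedToAuthenticateFrom 'O:SYG:SYD:(XA;OICI;CR;;;WD;(@USER.ad://ext/AuthenticationSilo == "T0Silo"))'

# Silo that applies the single policy to users, computers and services
New-ADAuthenticationPolicySilo -Name 'T0Silo' -Enforce `
    -UserAuthenticationPolicy 'T0-Authentication-Policy' `
    -ComputerAuthenticationPolicy 'T0-Authentication-Policy' `
    -ServiceAuthenticationPolicy 'T0-Authentication-Policy'

# Permit the accounts in the silo, then assign the silo on each account
foreach ($account in 'Amine', 'Mehdi', 'DC01$', 'PAW0$') {
    Grant-ADAuthenticationPolicySiloAccess -Identity 'T0Silo' -Account $account
    Set-ADAccountAuthenticationPolicySilo -Identity $account -AuthenticationPolicySilo 'T0Silo'
}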



Troubleshooting

There is a specific log named AuthenticationPolicyFailures-DomainController which is disabled by default. The log is located under this path:

Application and Services Logs –> Microsoft –> Windows –> Authentication


We enabled this log on DC01; it should be enabled on all domain controllers.


As an example, below is event 105 indicating that the user Mehdi was unable to authenticate on the device PAW0.

This was before restarting the computer PAW0 🙂 . The computer doesn’t have the authenticationsilo claim yet 😉 . This was resolved by restarting PAW0.


Another thing that helps in troubleshooting is having visibility into user and device claims. This can be enabled easily by using Advanced Audit Policy Configuration. On the Default Domain Policy I enabled Audit User/Device Claims, as you can see in the screenshot below.


As an example, on PAW0 we can see the events 4626 related to User/Device claims.

Event 4626 indicating computer claims; this usually happens after a computer restart.
Account Name: PAW0$
User Claims: ad://ext/AuthenticationSilo <String>: “T0Silo”


Event 4626 indicating user claims; this usually happens after user logon.
Account Name: Mehdi
User Claims: ad://ext/AuthenticationSilo <String>: “T0Silo”


To display user claims you can simply type whoami /claims


Conclusion

Authentication policies and authentication policy silos are the strongest way to prevent your high-privilege accounts from being used on unsecure computers. I recommend protecting all domain administrators with an authentication policy except the built-in administrator. This account will give you the ability to authenticate to your domain controllers in case of problems with the authentication policy and silos. But what about the built-in administrator? Doesn’t it need protection from credential theft?! The best protection for the built-in administrator is not to use it for day-to-day administration tasks.

Thanks for reading 🙂

Manage Azure monitor with Azure Blueprint


Background

Azure resources can be deployed and configured automatically by using ARM templates, Azure Policy, PowerShell scripts, and so on. Those automation approaches have their limits in terms of allow and deny functionality, and in particular they can be configured only at the subscription level. With Azure Blueprints [Preview] you can manage policies at the management group level and assign the same policies to all of your subscriptions.

In this post I will show you how to use the Blueprints service to create a Log Analytics workspace in all of your subscriptions and, additionally, how to enable the Azure Monitor policy on all VMs in those subscriptions so that they are connected to the Log Analytics workspace you just deployed. Using these capabilities you can configure all aspects of monitoring in your environment.

Steps:

  • Add Management group and link the subscriptions.
  • Add Blueprint targeted to Management Group
  • Add Artifacts to the blueprint:
    • Artifact to add resource group.
    • Artifact to add Log Analytics Workspace in resource group that you just created.
    • Add Policy artifact with built-In policy to – “Enable Azure Monitor for VMs”.
    • Publish the blueprint.
    • Assign blueprint to subscriptions in Management Group . 

Step by Step

Azure Management Group

When you are managing multiple subscriptions, it’s recommended to use management groups. Using management groups helps you manage access, policy, and compliance by grouping multiple subscriptions together.

For example, you can create a hierarchy that applies a policy which limits VM locations to the “West Europe” region in the group called “Production”. This policy will be inherited by both EA subscriptions under that management group and will apply to all VMs under those subscriptions.

The following diagram shows an example of creating a hierarchy for governance using management groups:

https://docs.microsoft.com/en-us/azure/governance/management-groups/overview

Existing subscriptions can be linked to a new management group.

As you can see in the screen below, my management group “Monitor Team” contains my subscriptions:
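If you prefer to script this step, a rough equivalent with the Az PowerShell module is shown below. The group name and subscription ID are placeholders, and the parameter names may differ slightly between Az module versions.

# Create the management group and move an existing subscription into it
New-AzManagementGroup -GroupName 'monitor-team' -DisplayName 'Monitor Team'
New-AzManagementGroupSubscription -GroupName 'monitor-team' -SubscriptionId '00000000-0000-0000-0000-000000000000'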

Azure Blueprint [PREVIEW]

Assign policies, deploy ARM templates & Roles on Subscriptions and Management Groups

Blueprints are a declarative way to orchestrate the deployment of various resource templates and other artifacts such as:

  • Role Assignments
  • Policy Assignments
  • Azure Resource Manager templates
  • Resource Groups

In the first artifacts we will create a new resource group and a Log Analytics workspace.

Create Blank blueprint

Artifacts types

Add Resource group

Add a new resource group; the name and location can also be specified later, when the blueprint is assigned.

Similar to an ARM deployment, this means that after you assign the blueprint to a subscription, the resource group will be deployed.

Add Azure RM template

Click “+ Add Artifact” under the resource group and select Azure Resource Manager template. The artifact can contain an ARM template with any resources and parameters; the parameters can be entered manually in the parameters tab or supplied as part of the deployment.

I found an easier way to build the template: create the resource once in the Azure portal, then export the template and copy and paste it into the artifact’s template field.

For example, I created the workspace and downloaded it as a zip file.

Open the template.json file you just downloaded and remove the row with the provisioning state:

Remove the row from Template file: <“provisioningState”: “Succeeded”,>

Copy the JSON into the artifact (you can also import the template and just remove this row), then click Add.

Be aware that the workspace name must be globally unique across all Azure Monitor subscriptions; that is why you need to assign the blueprint separately to each subscription.

For example, this is a small JSON template containing the deployment of a Log Analytics workspace:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "workspaces_loganalyticstemplate_name": {
      "defaultValue": "loganalyticstemplate",
      "type": "String"
    }
  },
  "variables": {},
  "resources": [
    {
      "type": "microsoft.operationalinsights/workspaces",
      "apiVersion": "2015-11-01-preview",
      "name": "[parameters('workspaces_loganalyticstemplate_name')]",
      "location": "eastus",
      "properties": {
        "sku": {
          "name": "pergb2018"
        },
        "retentionInDays": 30
      }
    }
  ]
}

Add Policy Artifact

The second artifact enables the built-in Azure Policy initiative “Enable Azure Monitor for VMs”. This artifact should be added at the subscription level (above the resource group artifact) so that it applies to all VMs deployed in the subscription.

Select “Add Artifact”, then “Policy Assignment”; search for “monitor” in the initiative definitions, choose the “Enable Azure Monitor for VMs” initiative, and click Add:

Publish blueprint

You can assign a blueprint only after you publish it. You can also assign previous versions, but you can edit only the latest one.
Every time you publish after making changes, you need to specify a higher version: start with 1.0 and continue to 1.1 and so forth. Every version is saved, so you can also assign a previous version later.

After publishing is completed, at the top of this window you can “Assign blueprint” to subscriptions in your management group.

When you assign the blueprint, you need to supply the artifact parameters you chose earlier to provide at assignment time. In the example shown below I entered the Log Analytics workspace name (which must be unique); the same value should be used both for the workspace itself and for the workspace that the VMs should be connected to.

TIP - Run the assignment one subscription at a time so you can provide a unique Log Analytics workspace name for each.

In the Assigned blueprints blade you can follow the deployment status of this assignment; select the assignment that is currently in the deploying state.

To edit the definitions in the blueprint, go back to the “blueprint definition” blade, select “Edit blueprint”, change the settings, save the draft and publish it as a higher version.

You now have two options: reassign the same assignment that you created before, choosing the new version in the layout, or create a new assignment.
The reason for assigning with a new name is that you may need to supply new parameters for the resources you want to publish; for example, if the workspace name is not unique you need to re-enter the parameter with a new name and then assign.

Deployment Status

  • In the assignment you can see the status of the deployment.
  • Go to the Activity log of the subscriptions where you deployed the artifacts and look for success or error events.
  • In the resource group you created, go to the Deployments tab and track the status of your deployment.
  • Policies can be seen on the Policies tab of every resource group, or in the Policy resource.

System Center Configuration Manager – "Configure SQL Server Service Broker Failed"


The Issue

Recently I installed the latest Technical Preview in my lab environment, and while running an in-console update I received the error “Configure SQL Server Service Broker – Failed”.

The Investigation

A video Steve Rachui posted some time ago has some valuable tips on which logs to review and on the whole update process.

https://msit.microsoftstream.com/video/80b01841-8572-4185-a72f-1468098ba0c8

So for me the 4 most important log files were
– ConfigMgrSetup.log
– CMUpdate.log
– SMSProv.log
– DMPDownloader.log

After reviewing the CMUpdate.log file I found the below error:

SQL Error Executing a command

If you try to run this in SQL you get the below error which is more descriptive than the log. *** DO NOT JUST RUN ANY SQL COMMAND IF YOU DO NOT KNOW WHAT THE CONSEQUENCES ARE

Now as you can see the actual error is : “Route is not defined for target site with service name ConfigMgrRCM_SiteSS1”

I had recently installed a secondary site and deleted it from the console without cleaning up the role. If the Secondary site is still on the network always use the “Uninstall” option in the console to avoid issues.

So now that I had deleted it, I ran into an issue where the secondary site got stuck in the “deleting” state, as described here: https://social.technet.microsoft.com/Forums/en-US/11151893-5833-40c7-a71c-134365043c27/secondary-site-stuck-quotdeletingquot?forum=configmanagergeneral

So how do we fix that? By using the Hierarchy Maintenance Tool!
https://docs.microsoft.com/en-us/configmgr/core/servers/manage/hierarchy-maintenance-tool-preinst.exe

It is located in %<YOURsccminstalldirectory>%\bin\X64\00000409

Open up an admin command prompt, browse to that directory and run the below command

preinst.exe /DELSITE <YOURSECONDARY>

This completed the site removal and we can now rerun the in-console update.

The Solution

If we run the update now it succeeds!


I hope this information has been helpful and feel free to correct me in any steps.

Getting Started with Terraform on Azure DevOps


Introduction

As with most things, there are a number of ways to utilize Azure DevOps to orchestrate your management of Azure Resources through terraform. This post will walk through a way that I have found to be successful and relatively easy to maintain. It will not however describe the many benefits of using an Infrastructure as Code approach, as that is a much broader topic.

To follow along with the example below, please ensure you have the multistage pipelines feature enabled; it is still in preview as of the publishing of this post.

Contents

Prior to using Terraform to deploy infrastructure on Azure, there are a few setup steps. The first is to create an Azure Resource Manager service connection within Azure DevOps. From there, I recommend using a script to set up the needed variables in Key Vault, but this can be accomplished through the portal, PowerShell, or individual az cli commands.

The script I use for this creates a resource group, a key vault, and a service principal. The service principal will be used by Terraform for its interactions with Azure Resource Manager.
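The sketch below is not the script I use, but it shows the general shape of that setup with the Az PowerShell module. All names are placeholders, the service principal property names vary slightly between Az module versions, and you would still need to store the service principal secret securely.

# Resource group, key vault and service principal for Terraform (run Connect-AzAccount first)
$rg = New-AzResourceGroup -Name 'rg-terraform-demo' -Location 'eastus'
$kv = New-AzKeyVault -VaultName 'kv-terraform-demo' -ResourceGroupName $rg.ResourceGroupName -Location $rg.Location
$sp = New-AzADServicePrincipal -DisplayName 'sp-terraform-demo'

# Store the values the Terraform azurerm provider expects as Key Vault secrets
$ctx = Get-AzContext
Set-AzKeyVaultSecret -VaultName $kv.VaultName -Name 'ARM-CLIENT-ID' -SecretValue (ConvertTo-SecureString $sp.AppId -AsPlainText -Force)
Set-AzKeyVaultSecret -VaultName $kv.VaultName -Name 'ARM-TENANT-ID' -SecretValue (ConvertTo-SecureString $ctx.Tenant.Id -AsPlainText -Force)
Set-AzKeyVaultSecret -VaultName $kv.VaultName -Name 'ARM-SUBSCRIPTION-ID' -SecretValue (ConvertTo-SecureString $ctx.Subscription.Id -AsPlainText -Force)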

The KeyVault you created can then be used in Azure DevOps by creating a variable group that is linked to it.

I recommend using a consistent folder structure for your pipeline and terraform configuration. This allows you to more easily maintain your code, but also significantly improves the usability for future developers. In my case, I like to have a pipelines folder that contains the main pipeline.yml for orchestrating the overall process and a templates folder that contains my pipeline templates.

The initial section of the pipeline contains environment-independent actions that should only need to be performed once. This is similar to the build and unit test phase of a typical application deployment.

For simplicity, I am using template files for the individual steps. For the Setup phase, this includes formatting, init, and validation.

The fun part is the actual deployment. This can be separated into stages for each of the different environments you want to deploy resources to. The first job in the deployment is plan, and as you might imagine it runs terraform plan. The second job is apply, and this runs terraform apply.

Conclusion

There are many ways to deploy Azure Resources. Hopefully this post provides some ideas on how you can use an Infrastructure as Code approach to deploy using Azure DevOps and Terraform. This link shows a working example that utilizes this approach.


PowerShell: Active Directory Cleanup – Part 1


Hello World, Scott Williamson, Senior Premier Field Engineer, here. As a PFE, I frequently work with customers who ask how to clean old objects and data out of Active Directory. To help them automate cleanup I have written several PowerShell scripts, functions, and workflows, and I want to share them in this blog series.

The first of these scripts checks for and cleans up old duplicate computers. Duplicate computers are rarely seen in the newer versions of Active Directory (AD) unless you are having replication issues between domain controllers. Do you have any duplicate computers in AD? Many customers still have some and don’t know it.

# PowerShell to Report Duplicate Computers
cls
$CDate = Get-Date -format "yyyyMMdd" 
$ScriptPath = Split-Path $MyInvocation.MyCommand.Path -Parent
$ComputerPropsCustom = $("Enabled","Description","LastLogonDate","Modified","whenChanged","PasswordLastSet","OperatingSystem","OperatingSystemServicePack","IPv4Address")
$ComputerPropsSelect = $("Name","SamAccountName","Enabled","DistinguishedName",@{Name="CreatedBy";Expression={$(([ADSI]"LDAP://$($_.DistinguishedName)").psbase.ObjectSecurity.Owner)}},"LastLogonDate","Modified","whenChanged","PasswordLastSet","OperatingSystem","OperatingSystemServicePack","IPv4Address")
        
$DuplicateComputers = Get-ADComputer -Filter {SamAccountName -like "*DUPLICATE-*"} -Properties $ComputerPropsCustom | Select-Object $ComputerPropsSelect | Sort-Object Name
$DuplicateComputers | Export-Csv -Path "$ScriptPath\$($CDate)_DuplicateComputers.csv" -NoTypeInformation
$DuplicateComputers

So let me walk through this short script line by line.

  • Clear the screen with “cls”.
  • Set a variable $CDate with the current date formatted as yyyyMMdd. Example 20191213.
  • Set $ScriptPath to the location of the script we are running.
  • Set $ComputerPropsCustom to a list of custom properties we want to pull.
  • Set $ComputerPropsSelect to all the properties we want in the output in the order desired. Notice we also have a custom defined property CreatedBy which is doing an LDAP lookup on the object to find who created it.
  • Set $DuplicateComputers to the output of the Get-ADComputer cmdlet. We filter the SamAccountName for only computer objects with “DUPLICATE-” in the name. Notice where we use $ComputerPropsCustom and $ComputerPropsSelect. In addition we sort the output by Name.
  • Next we export $DuplicateComputers to a CSV file in the script directory, named as the date, an underscore, and DuplicateComputers.
  • The final line just sends the $DuplicateComputers contents to the screen for us to view.

Notice that the script only pulls information and doesn’t do any actual cleanup yet. Best practice is to do all the gathering, verify several times that you are only getting the data expected, and then add in the code to do the cleanup. Below is the final code to add to the bottom of the script above to perform the computer object removal.

# Uncomment next line to remove duplicate computers with no operating system
# $DuplicateComputers | ? {$_.OperatingSystem -eq $null} | % {Remove-ADComputer -Identity $($_.DistinguishedName)}

Let’s walk through these last couple of lines.

  • The # line is a comment. I normally comment out the action code of a script with # while I’m writing and perfecting the code. Once I’m 100% sure I’m only getting the data I expect, I remove the # from the action code. Usually I’m still a little hesitant, so I add -WhatIf to the end of the action code for one final test (see the example after this list).
  • We send or pipe $DuplicateComputers to a filter that only selects objects without an operating system then pipes that into the Remove-ADComputer cmdlet to do the cleanup.
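For that final dry run, the same pipeline can be tested with -WhatIf so nothing is actually removed:

# Dry run: shows which computer objects would be removed without deleting anything
$DuplicateComputers | ? {$_.OperatingSystem -eq $null} | % {Remove-ADComputer -Identity $($_.DistinguishedName) -WhatIf}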

So although this is a short script it has a hint of some advanced code such as:

  • Date Formatting
  • Determining Script location
  • Setting Properties Arrays
  • Custom Property
  • Cmdlet filtering
  • Object Sorting
  • Exporting to CSV
  • ? = Where
  • % = ForEach-Object

Stay tuned for the next part in this series.

Minecraft on Azure


Introduction

Despite approaching its 10th anniversary, Minecraft remains an incredibly popular game with both children and adults. There are many options to play Minecraft: locally, on a Minecraft-hosted Realm, or on a public server. In some cases, you may want to retain more control and run Minecraft on your own server. This allows you full control over who has access, as well as the ability to use a variety of community plugins for different gameplay options. This post will discuss the basics of running your own Minecraft server on Azure.

Content

The first step in getting set up to run your Minecraft server on Azure is to log in to your Azure account. If you don’t have one already, you can easily sign up for a trial. Within your Azure subscription, I recommend creating a resource group just for the Minecraft-related resources.

Within this new resource group you will then create a virtual machine. This virtual machine has very few special requirements. You’ll want to ensure you’re using the latest official Ubuntu image, providing an ssh key and allowing access to port 22.

For the most part you can accept the defaults, but you probably want to be careful with the auto-shutdown settings depending on your personal preferences on play time. After creating the virtual machine (VM), select the VM and click on the Networking blade. You’ll want to make sure that access on port 22 is limited to your IP address, and add access on port 25565. If you know the IP address range for all of the people who will play on this server, you can limit it here as well.
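If you prefer to script the network security group change, something along these lines should work with the Az PowerShell module; the resource group and NSG names are placeholders for whatever your VM deployment created.

# Open the default Minecraft port (TCP 25565) on the VM's network security group
$nsg = Get-AzNetworkSecurityGroup -ResourceGroupName 'minecraft-rg' -Name 'minecraft-vm-nsg'
$nsg | Add-AzNetworkSecurityRuleConfig -Name 'allow-minecraft' -Access Allow -Protocol Tcp `
        -Direction Inbound -Priority 1010 -SourceAddressPrefix '*' -SourcePortRange '*' `
        -DestinationAddressPrefix '*' -DestinationPortRange 25565 | Set-AzNetworkSecurityGroup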

At this point, you will probably want to go back to the main VM screen and click on Configure DNS. This allows you to add a DNS prefix, making it easier to remember and share the address of your server.

Now that the VM is up, all that remains is to download and configure minecraft. Begin by connecting via ssh to the vm. This can be done via Azure Cloud Shell or a terminal on your machine. You will need the ssh private key you setup as part of creating the vm.

Upon logging into the machine, copy the contents of this gist to the tmp directory, and then run the following commands. The minecraft server.jar can be found here


sudo su -
cp /tmp/<gist you just copied> /etc/systemd/system/minecraft.service
apt update && apt upgrade -y
apt install default-jre -y
adduser --system --home /minecraft minecraft
addgroup --system minecraft
adduser minecraft minecraft
systemctl enable minecraft.service
cd /minecraft
wget <latest minecraft server.jar>
echo eula=true > eula.txt
chown -R minecraft:minecraft ../minecraft
systemctl start minecraft
journalctl -u minecraft -f   # allows you to look at the logs

Conclusions

At this point you have a working minecraft server, and can connect to it as normal from your minecraft application. There are numerous potential plugins and options that you can consider.

Next Steps

While we do have our server up and running, there are a number of actions you’ll probably want to take next. These may be covered in a future post.

  • Add Azure Firewall / Load Balancer
  • Move server creation to Azure DevOps
  • Add backup of world files
  • Use paper and add some plugins

Field Notes: Azure AD Connect – Migrating from AD FS to Password Hash Synchronization


This is a continuation of a series on Azure AD Connect. I started off this Azure AD Connect series by going through the express installation path, where the password hash synchronization (PHS) sign-in option is selected by default. This was followed by the custom installation path where I selected pass-through authentication (PTA) as a user sign-in option. The third blog post on user sign-in was configuring federation with Active Directory Federation Service (AD FS). Links to these are provided in the summary section below.

Here, I go through migrating from AD FS to PHS. You may want to do this to reduce complexity and server footprint in your environment.

Before we begin

I am running the latest version of Azure AD Connect that I downloaded from http://aka.ms/aadconnect. At a minimum, version 1.1.819.0 is required to successfully complete the migration using the process we are going to cover. See Azure AD Connect: Version release history to keep track of the versions that have been released, and to understand what the changes are in the latest version.

Federation is currently enabled for one domain. PHS is also enabled and the required permissions for the on-premises directory are already in place as per Azure AD Connect: Accounts and permissions (Replicate Directory Changes | Replicate Directory Changes All).

We’ll be using Azure AD Connect to perform the migration, as federation was configured using it. One of the ways to confirm that AD FS was set up through Azure AD Connect is to open the federation configuration task under manage federation.

Information such as the federation service name, service account, and certificate details is shown here. Be sure to have documented your setup and to have a valid backup before you proceed in your environment.

Migrating using Azure AD Connect

The swing itself is pretty straightforward. All we do is launch Azure AD Connect and select configure. At the additional tasks page, we select change user sign-in and click next to proceed.

We then connect to Azure AD as normal by providing a Global Admin user name and password. Under user sign-in, we select password hash synchronization. We also need to confirm (by checking the box) that our intention is to convert from federated to managed authentication. Enable single sign-on is turned on by default, and we’ll leave the tick-box checked.

Azure AD domains that are currently federated will be converted to managed and user passwords will be synchronized with Azure AD. This process may take a few hours and cause login failures.

Clicking next takes us to the enable single sign-on page, where we are required to enter a domain administrator account to configure the on-premises directory for use with SSO.

If everything goes well, the cross next to the enter credentials button will change to a green icon with a check mark. The next button will also be enabled. There is a problem in our case: an error occurred while locating computer account.

This is also highlighted in the trace file (C:\ProgramData\AADConnect\trace-*.txt).

Our workaround for now is to delete the AZUREADSSOACC computer account in AD DS that was created by a previous installation. I’ll cover this case in detail in a future post.

That’s it! The conversion happens once we go through the single sign-on page. Another look at Azure AD and voila, federation is now disabled, and seamless single sign-on is enabled for idrockstar.co.za.
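If you prefer to verify the cutover from PowerShell rather than the portal, the MSOnline module (assuming it is installed and you have run Connect-MsolService) can show the authentication type per domain:

# Each previously federated domain should now report 'Managed'
Connect-MsolService
Get-MsolDomain | Select-Object Name, Authentication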

A quick test performed by accessing http://aka.ms/myapps reveals that we are no longer redirected to AD FS, but authentication takes place in Azure AD.

Summary

We have just quickly gone through the process of migrating sign-on in Azure AD from federation with AD FS to PHS. Be sure to check the deployment considerations if you plan to perform the migration in your environment.

Related posts

Till next time…

PowerShell: Active Directory Cleanup – Part 2 – Spacey Computer Names


Introduction

Hello again, Scott Williamson back with the next installment in the series “PowerShell: Active Directory Cleanup”. For this installment we’re going to take a look at a script that finds computers that have a space in their name. Per RFC 1123, DNS host names cannot contain white space (blanks). This is the most common issue I’ve found when computers are entered manually by IT administrators. When typing we get so used to adding a space between words that we accidentally do it when creating computer names. Usually the space is at the end of the computer name so it’s not easily spotted. This script searches Active Directory for computers with a space in their name, writes them to a CSV file, and displays them to the screen for review.

Find Computers with Space(s) in the Name

# Clear the Screen
cls

# This section sets the common variables for the script.
# Get the current date and format it as yyyyMMdd: the 4-digit year, 2-digit month and 2-digit day.  Example: 20191213
$CDate = Get-Date -format "yyyyMMdd" 

# Get the location this script was executed from.
$ScriptPath = Split-Path $MyInvocation.MyCommand.Path -Parent

# Set an array to the additional computer properties we need.
$ComputerPropsCustom = $("Enabled","Description","LastLogonDate","Modified","whenChanged","PasswordLastSet","OperatingSystem","OperatingSystemServicePack","IPv4Address")

# Set an array to all the computer properties we want to display.
$ComputerPropsSelect = $("Name","SamAccountName","Enabled","DistinguishedName",@{Name="CreatedBy";Expression={$(([ADSI]"LDAP://$($_.DistinguishedName)").psbase.ObjectSecurity.Owner)}},"LastLogonDate","Modified","whenChanged","PasswordLastSet","OperatingSystem","OperatingSystemServicePack","IPv4Address")

# Search Active Directory for computer objects with a space in their name and sort them by Name.
$ComputerWithSpaces = Get-ADComputer -Filter {Name -like "* *"} -Properties $ComputerPropsCustom | Select-Object $ComputerPropsSelect | Sort-Object Name

# Export the results to a CSV file for review.
$ComputerWithSpaces | Export-Csv -Path "$ScriptPath\$($CDate)_ComputersWithSpaces.csv" -NoTypeInformation

# Display the results to the screen.
$ComputerWithSpaces

I included comment lines above each step to explain what the next line is doing. When writing PowerShell scripts it’s extremely helpful to add comments so that others viewing your scripts can understand what they are doing. These comments will also help you a year or two from now when you go back to use or modify the script.

Summary

Notice the similarities between the script above and the one from Part One. They both have very similar code with the exception of the filter and result variable names. Stay tuned for Part 3 of the series.
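
The same skeleton also adapts to other naming checks by swapping just the query. As a hedged illustration (not part of the series itself), here is a variant that flags names longer than the 15-character NetBIOS limit, reusing the variables defined in the script above:

# Reuse the property arrays from above; only the search criteria change
$ComputersTooLong = Get-ADComputer -Filter * -Properties $ComputerPropsCustom |
    Where-Object { $_.Name.Length -gt 15 } |
    Select-Object $ComputerPropsSelect | Sort-Object Name
$ComputersTooLong | Export-Csv -Path "$ScriptPath\$($CDate)_ComputersTooLong.csv" -NoTypeInformation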

Series Links:

How to enable Internet and vNET connectivity for nested VMs in Azure


For a full walk-through of this setup, please watch the video at the end of this post.

Greetings readers,

Hyper-V nested virtualization in Azure has unlocked different scenarios and use cases, such as sandbox environments, running unsupported operating systems, or legacy applications that require specific features not natively supported in Azure; think, for example, of an application whose license is tied to a MAC address.

In certain scenarios you want those nested VMs to connect to the Internet or to other VMs in Azure. However, due to restrictions on the network fabric, it is not possible to create an external switch and give the VMs direct access to the host’s physical network. The solution is to configure NAT, so the nested VMs can reach the Internet through the host’s NATed public IP, plus routing to enable connectivity to other VMs in Azure. In this blog post, I will walk you through configuring nested VM networking to achieve both goals.

Build the virtual network

We will need to build a vNet with two subnets: one for the host LAN traffic (which may include other Azure VMs as well) and another for Internet traffic, where we will enable NAT.

Example:

LabVnet – 10.2.0.0/16 (Main address space)

NAT Subnet – 10.2.0.0/24

LAN Subnet – 10.2.1.0/24

Later on, we will use 10.2.2.0/24 virtual address space for the nested VMs running inside the Hyper-V host.
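
If you prefer to script this part, here is a minimal Az PowerShell sketch of the same layout. The resource group, region, and subnet names are assumptions chosen to match the rest of this example:

# Create the vNet with the NAT and LAN subnets used throughout this post
$nat = New-AzVirtualNetworkSubnetConfig -Name 'nat' -AddressPrefix '10.2.0.0/24'
$lan = New-AzVirtualNetworkSubnetConfig -Name 'lan' -AddressPrefix '10.2.1.0/24'
New-AzVirtualNetwork -Name 'labvnet' -ResourceGroupName 'nestedvm-rg' -Location 'EastUS' `
    -AddressPrefix '10.2.0.0/16' -Subnet $nat, $lan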

Build the Hyper-V Host VM

  • Create a new Azure VM that will be your Hyper-V host. Make sure you pick a size that supports nested virtualization and connect the first network adapter to the NAT subnet as you build the VM. It is important that the first adapter is connected to the NAT subnet because by default all outbound traffic is sent through the primary network interface.
  • Once the VM is provisioned, add a secondary network adapter and connect it to the LAN subnet

Configure the Hyper-V Host

Install the necessary roles for the next steps:

  • Hyper-V
  • DHCP
  • Routing (RRAS)

DHCP will be used to automatically assign IP addresses to the nested VMs and RRAS will be used to route traffic between the nested VMs and other Azure VMs as well as provide NAT for Internet access.

Install-WindowsFeature -Name Hyper-V,DHCP,Routing -IncludeManagementTools -Restart

Create a virtual switch that will be used by the nested VMs as a bridge for NAT and Routing

New-VMSwitch -Name "Nested" -SwitchType Internal
New-NetIPAddress -IPAddress 10.2.2.1 -PrefixLength 24 -InterfaceAlias "vEthernet (Nested)"

Rename the network adapters on the Hyper-V host to match the subnet names in Azure; this will make it easier to identify the networks when configuring routing. In this example, this is what the host network settings look like after creating the switch.
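
For example, a quick way to do the renaming from PowerShell. The current adapter names below are assumptions; check the Get-NetAdapter output first and adjust:

# Identify the adapters, then rename them to match the Azure subnet names
Get-NetAdapter | Format-Table Name, InterfaceDescription, MacAddress
Rename-NetAdapter -Name 'Ethernet'   -NewName 'NAT'
Rename-NetAdapter -Name 'Ethernet 2' -NewName 'LAN'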

Configure DHCP

Create a DHCP scope that will be used to automatically assign IPs to the nested VMs. Make sure you use a valid DNS server so the VMs can connect to the Internet. In this example, we are using 8.8.8.8, which is Google’s public DNS.

Add-DhcpServerV4Scope -Name "Nested" -StartRange 10.2.2.2 -EndRange 10.2.2.254 -SubnetMask 255.255.255.0
Set-DhcpServerV4OptionValue -DnsServer 8.8.8.8 -Router 10.2.2.1

Configure RRAS

First, we will enable NAT for Internet access. Open the Routing and Remote Access console and choose custom configuration, selecting NAT and Routing. Once the service has started, navigate to IPv4, right-click NAT, and select New Interface. Now select the interface that matches your NAT subnet and enable NAT as follows:

We will now configure static routes to allow traffic from the nested VMs to other VMs connected to the Azure virtual network.

Under IPv4, right-click static routes, select new static route and create routes as follows:

This route is to allow the primary interface to respond to traffic destined to it out of its own interface. This is needed to avoid an asymmetric route.

Create a second route to route traffic destined for the Azure vNet. In this case, we are using 10.2.0.0/16, which encompasses our labvnet, including the Hyper-V LAN subnet.

At this point, our host is ready to automatically assign IPs to the nested VMs, and with RRAS NATing the traffic, the VMs can now connect to the Internet.

Configure User-Defined Routes

The last step in the process is to configure UDRs in Azure to enable traffic to flow back and forth between VMs connected to the Azure vNet and nested VMs in our Hyper-V host. We do so by telling Azure to send all traffic destined for our nested VMs, 10.2.2.0/24 in this example, to the LAN IP of our Hyper-V host, where RRAS will route the traffic to the VMs via the internal switch created earlier.

#Create Route Table
$routeTableNested = New-AzRouteTable `
  -Name 'nestedroutetable' `
  -ResourceGroupName nestedvm-rg `
  -location EastUS

#Create route with nested VMs destination and Hyper-V host LAN IP as a next-hop
$routeTableNested  | Add-AzRouteConfig `
  -Name "nestedvm-route" `
  -AddressPrefix 10.2.2.0/24 `
  -NextHopType "VirtualAppliance" `
  -NextHopIpAddress 10.2.1.4 `
 | Set-AzRouteTable

#Associate the route table to the LAN subnet
 Get-AzVirtualNetwork -Name labvnet | Set-AzVirtualNetworkSubnetConfig `
 -Name 'lan' `
 -AddressPrefix 10.2.1.0/24 `
 -RouteTable $routeTableNested | `
Set-AzVirtualNetwork

After creating an additional Azure VM which we want to use to test connectivity from outside the host,  our final network topology is this:

 

Conclusion

We now have full connectivity to both the Internet and other VMs connected to the Azure vNet, and the nested VMs are reachable by devices outside the Hyper-V host.
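
A quick way to verify this from inside one of the nested VMs is a couple of Test-NetConnection calls. The target IP and port below are examples; substitute an actual Azure VM from your LAN subnet:

# Internet reachability through the host's NAT
Test-NetConnection -ComputerName 8.8.8.8
# Reachability of an Azure VM on the LAN subnet (example IP, RDP port)
Test-NetConnection -ComputerName 10.2.1.5 -Port 3389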

Refer to the video below for a full walk-through: 

Nested VMs Networking

Configuration Manager – How Updates install during a Maintenance Window.


This is a question I have had since I started with SCCM 2007. I thought I had a grasp of it until I was talking with a customer and started second guessing myself.

Why aren’t all my updates installing during the Maintenance Window?

Why do I have Servers in a Reboot Pending State after our scheduled Windows Update weekend?

I have a 3-hour Maintenance Window defined, that should be lots of time…

Customer Questions

I started doing some research to find a definitive answer to these questions, and everything I could find referenced old blog posts that don’t exist anymore or some pretty unclear information, so I set up my lab and sat down to get some concrete information…

I started with a pretty old Windows Server 2012 R2 image, so I know there are lots of updates to apply.

In the UpdatesDeployment.log we see that it is trying to install the update when the deadline hits, and since we aren’t in a Maintenance Window it will wait until it is in a Maintenance Window before attempting the install again.

 No current service window available to run updates assignment with time required = 600 UpdatesDeploymentAgent 11/29/2019 3:20:06 PM 3840 (0x0F00)
No service window available to run updates assignment UpdatesDeploymentAgent 11/29/2019 3:20:06 PM 3840 (0x0F00)
This assignment ({CDCB2B61-2743-4A16-A8B4-CA2949E85BF3}) will be retried once the service window is available. UpdatesDeploymentAgent 11/29/2019 3:20:06 PM 3840 (0x0F00)

I then deployed a Maintenance Window. When the Maintenance Window starts, we see in the ServiceWindowManager.log that, as each update attempts to install, the client checks whether there is enough time remaining in the Maintenance Window to complete the install. This check is based on the Max Run Time attribute of the software update.

If there is enough time remaining in the Maintenance Window, you will see the following entries in ServiceWindowManager.log:

OnIsServiceWindowAvailable called with: Runtime:600, Type:4    ServiceWindowManager    11/29/2019 3:30:05 PM   560 (0x0230)
No Service Windows exist for this type. Will check if the program can run in the All Programs window… ServiceWindowManager 11/29/2019 3:30:05 PM 560 (0x0230)
Biggest Active Service Window has ID = {14D90B4F-4BB8-4070-85A0-806C2800AD5D} having Starttime=11/29/19 15:30:00 ServiceWindowManager 11/29/2019 3:30:05 PM 560 (0x0230)
Duration is 0 days, 01 hours, 00 mins, 00 secs ServiceWindowManager 11/29/2019 3:30:05 PM 560 (0x0230)
ActiveServiceWindow has 3595 seconds left ServiceWindowManager 11/29/2019 3:30:05 PM 560 (0x0230)
Program can run! Setting *canProgramRun to TRUE ServiceWindowManager 11/29/2019 3:30:05 PM 560 (0x0230)

If there isn’t enough time remaining in the Maintenance Window, you will see the following entries in ServiceWindowManager.log:

OnIsServiceWindowAvailable called with: Runtime:3600, Type:4    ServiceWindowManager    11/29/2019 3:31:13 PM   2764 (0x0ACC)
No Service Windows exist for this type. Will check if the program can run in the All Programs window… ServiceWindowManager 11/29/2019 3:31:13 PM 2764 (0x0ACC)
Biggest Active Service Window has ID = {14D90B4F-4BB8-4070-85A0-806C2800AD5D} having Starttime=11/29/19 15:30:00 ServiceWindowManager 11/29/2019 3:31:13 PM 2764 (0x0ACC)
Duration is 0 days, 01 hours, 00 mins, 00 secs ServiceWindowManager 11/29/2019 3:31:13 PM 2764 (0x0ACC)
FindBiggestMergedTimeWindow called with TimeStart=11/29/19 15:31:13 and TimeEnd=11/29/19 16:30:00 ServiceWindowManager 11/29/2019 3:31:13 PM 2764 (0x0ACC)
Biggest Chainable Service Window for Type=1 not found ServiceWindowManager 11/29/2019 3:31:13 PM 2764 (0x0ACC)

Program cannot Run! Setting *canProgramRun to FALSE ServiceWindowManager 11/29/2019 3:31:13 PM 2764 (0x0ACC)
WillProgramRun called with: Runtime:3600, Type:4 ServiceWindowManager 11/29/2019 3:31:13 PM 2764 (0x0ACC)
No Service Windows of this type exist. ServiceWindowManager 11/29/2019 3:31:13 PM 2764 (0x0ACC)
There exists an All Programs window for this duration. The Program will run eventually. ServiceWindowManager 11/29/2019 3:31:13 PM 2764 (0x0ACC)

You will also see the following entries in UpdatesHandler.log:

No current service window available with time required = 3600    UpdatesHandler  11/29/2019 3:32:56 PM   2764 (0x0ACC)
Not enough service window available to run update (03a8098b-7740-40da-9082-00ea285035be) UpdatesHandler 11/29/2019 3:32:56 PM 2764 (0x0ACC)

Once everything that can be installed during the Maintenance Window is installed, the client will attempt to reboot the machine. This is where the next thing can interfere: the Computer Restart settings, specifically “Display a temporary notification to the user that indicates the interval before the user is logged off or the computer restarts (minutes)”. For a workstation this setting makes sense, but for a server it could cause the machine to overshoot its maintenance window.

Assume someone is logged onto the server and you have this set to the default, which is 90 minutes (5400 seconds). Once you are within 90 minutes of the end of your maintenance window, the machine will not reboot automatically, and you will see the following in ServiceWindowManager.log:

OnIsServiceWindowAvailable called with: Runtime:5400, Type:4    ServiceWindowManager    11/29/2019 4:16:04 PM   4072 (0x0FE8)
No Service Windows exist for this type. Will check if the program can run in the All Programs window… ServiceWindowManager 11/29/2019 4:16:04 PM 4072 (0x0FE8)
Biggest Active Service Window has ID = {14D90B4F-4BB8-4070-85A0-806C2800AD5D} having Starttime=11/29/2019 3:30:00 PM ServiceWindowManager 11/29/2019 4:16:04 PM 4072 (0x0FE8)
Duration is 0 days, 01 hours, 00 mins, 00 secs ServiceWindowManager 11/29/2019 4:16:04 PM 4072 (0x0FE8)
FindBiggestMergedTimeWindow called with TimeStart=11/29/2019 4:16:04 PM and TimeEnd=11/29/2019 4:30:00 PM ServiceWindowManager 11/29/2019 4:16:04 PM 4072 (0x0FE8)
Biggest Chainable Service Window for Type=1 not found ServiceWindowManager 11/29/2019 4:16:04 PM 4072 (0x0FE8)

Program cannot Run! Setting *canProgramRun to FALSE ServiceWindowManager 11/29/2019 4:16:04 PM 4072 (0x0FE8)

When you logon to the server you will see the “Recently installed software requires a computer restart” message, along with the Task Bar Icon.

The computer will automatically reboot during the next Maintenance Window – but this is usually too late, because by then you are attempting to install more updates.

Now, to answer the customer’s questions:

Why aren’t all my updates installing during the Maintenance Window? – If the Max Run Time is set to 120 minutes (2 hours), then once you are within 120 minutes of the end of the maintenance window there is no longer enough time to install those updates.

Why do I have Servers in a Reboot Pending State after our scheduled Windows Update weekend? – If someone is logged onto the Server (even in a disconnected state), your maintenance window is effectively reduced by the time specified in the Computer Restart setting “Display a temporary notification to the user that indicates the interval before the user is logged off or the computer restarts (minutes)”. So for your server infrastructure you may want to reduce this down to 2 minutes, with “Display a dialog box that the user cannot close, which displays the countdown interval before the user is logged off or the computer restarts (minutes)” set to 1 minute.

I have a 3-hour Maintenance Window defined, that should be lots of time… – Well that does depend on what the Max Run Time is for all deployments, along with what the Reboot Settings are if someone is logged on.
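
One way to sanity-check this on a given server is to ask the ConfigMgr client itself how much service window time it knows about. A minimal, read-only sketch using the client SDK WMI namespace, run locally on the client:

# List the maintenance (service) windows the ConfigMgr client is currently aware of
Get-CimInstance -Namespace 'root\ccm\ClientSDK' -ClassName 'CCM_ServiceWindow' |
    Select-Object ID, Type, StartTime, EndTime, Duration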

I hope I have imparted information regarding how updates and Maintenance Windows interact. I know I learned a lot doing this.

Setup Hybrid Azure AD Join – Part 1


In addition to users, device identities can be managed by Azure Active Directory as well, even if they are already managed by your on-premises network. This two-part series will walk you through the steps to allow your devices to be both on-premises and Azure Active Directory joined, otherwise known as hybrid Azure AD join. Parts 1 and 2 are listed below. This post will step you through configuring pass-through authentication.

  1. Configure Pass-through authentication
  2. Setup Hybrid Azure AD Join

Configure Pass-Through Authentication

Pass-through authentication (PTA) allows users to use the same password to sign in to their organization’s network and to Azure cloud applications. For more info on PTA click here

Prerequisites

  • Install the latest version of AD Connect (1.4.38.0)
  • Install AD Connect on Windows Server 2012 R2 or later
  • Authentication Agents need access to the following endpoints (a quick connectivity check is sketched after this list)
    • login.windows.net
    • login.microsoftonline.com
  • Whitelist connections to:
    • *.msappproxy.net
    • *.servicebus.windows.net
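
Before running the wizard, a quick sanity check that the server can reach the required endpoints over port 443 can save troubleshooting later. A minimal sketch using built-in cmdlets; only the two sign-in endpoints are tested here, since the wildcard domains vary:

# Test outbound HTTPS connectivity to the PTA sign-in endpoints
'login.windows.net', 'login.microsoftonline.com' | ForEach-Object {
    Test-NetConnection -ComputerName $_ -Port 443 |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}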

Steps to configure pass-through authentication

After installing AD Connect, the configuration screen will open; click Customize.

Accept the defaults on this page and click Install. SQL Server Express will be installed, which supports up to 100,000 directory objects. Install SQL Server 2016 or higher to support more than 100,000 objects.

Select Pass-Through Authentication

Use your Azure AD global administrator credential to sign in. Enter your username and password.

Select the first option to create a new AD account. This will require your on-premises enterprise admin account. This account will be used for periodic synchronization.

Click Add Directory for synchronization

This page lists the UPN domains present in your organization’s AD and shows whether they have been verified in Azure AD. You can also use this page to configure the attribute to use for the userPrincipalName.

Select the OUs that you would like to synchronize.

Select how users should be identified in your on-premises directories. You can leave the defaults.

Select which users and devices to synchronize.

Select optional features if desired.

On the ready to configure page, select start the synchronization process when configuration completes.

A successful configuration page.

This process will install the first authentication agent. To validate the process, log in to the Azure portal and confirm that the Sync Status is “enabled” and that pass-through authentication is “enabled”.
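
If you prefer PowerShell over the portal, the directory synchronization side can also be checked with the MSOnline module. Note this is only a partial check: it confirms sync, while the pass-through authentication status itself is shown in the portal under Azure AD Connect.

# Confirm directory synchronization is enabled and when it last ran
Connect-MsolService
Get-MsolCompanyInformation | Select-Object DirectorySynchronizationEnabled, LastDirSyncTime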


Setup Hybrid Azure AD Join – Part 2


Welcome back to the second and final post on setting up hybrid Azure AD join. Hopefully all went well with configuring pass-through authentication. Below you will find a link back to Part 1.

  1. Configure Pass-Through Authentication
  2. Setup Hybrid Azure AD Join

Setup Hybrid Azure AD Join

Consider the following prerequisites before moving forward.

Prerequisites

Steps to configure hybrid Azure AD join

Because we ran AD Connect in Part 1 to connect Active Directory to Azure AD, the initial first-run options will not be available. When AD Connect opens, click Customize.

Select “Configure device options” – This option is used to configure device registration for Hybrid Azure AD Join.

On the overview page, click Next

Connect to Azure AD by using a user with global administrator rights.

On the device options page, select “Configure Hybrid Azure AD join” then Next

Supported devices

  • Windows 10
  • Windows Server 2016
  • Windows Server 2019

Downlevel devices

  • Windows 8.1
  • Windows 7
  • Windows Server 2012
  • Windows Server 2012 R2
  • Windows Server 2008 R2

Select the appropriate option on the device operating system page based on the devices that you have in your organization

On the SCP configuration page, do the following

  • First check the box under Forest
  • Under Authentication Service – click on the drop down and select Azure Active Directory. If a federation service has been configured, select that option.
  • Click ADD to supply the enterprise admin account for the on-premises forest.

On the ready to configure page, click Configure

Confirm device registration.

Use the Get-MsolDevice cmdlet in the MSOnline module to verify the device registration state in your Azure tenant. Before you begin, you will need the deviceId of a computer that should be registered in Azure AD. Find the computer in your on-premises Active Directory, right-click the computer > Properties > Attribute Editor, scroll down to objectGUID, and use that value as the deviceId. Open PowerShell ISE and then run the code below.

# Install and import the MSOnline module
Install-Module MSOnline -Force
Import-Module MSOnline
# Connect to Azure AD with a global administrator account
$msolcred = Get-Credential
Connect-MsolService -Credential $msolcred -AzureEnvironment AzureCloud
# Look up the device by its deviceId (the on-premises objectGUID)
Get-MsolDevice -DeviceId 7q52824c-30k1-8d1c-a947-ab34643ffddc
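
As an alternative to the Attribute Editor, the objectGUID can also be pulled with the ActiveDirectory module; a minimal sketch (the computer name here is an example):

# Get the on-premises computer's objectGUID to use as the deviceId
(Get-ADComputer -Identity 'WIN10-CLIENT01').ObjectGUID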

From the results above confirm the following.

  • An object with the device id that matches the ObjectGUID on the on-premise computer must exist.
  • The value for DeviceTrustType must be Domain Joined. This is equivalent to the Hybrid Azure AD joined state on the Devices page in the Azure AD portal.
  • The value for Enabled must be True and DeviceTrustLevel must be Managed for devices that are used in conditional access.

Troubleshoot Hybrid Azure AD join:
If you are experiencing issues with completing hybrid Azure AD join for domain joined Windows devices, see:

Cleaning Up the Mess in Your Group Policy (GPO) Environment


Intro

Group Policy is a great way to enforce policies and set preferences for any user or computer in your organization.
However, anyone who has managed Group Policy knows it can become very messy over time, especially if there are a lot of administrators managing the Group Policy Objects (GPOs) in the company.

In this blog post series, we will cover some useful scripts and methods which will help you organize and maintain your GPOs, and clean up the mess in your Group Policy environment.

First Things First – Create a backup

Before removing or modifying any Group Policy Object, it is highly recommended to create a backup of the current state of your Group Policy Objects.
This can be done using the Group Policy Management Console MMC, or by using the PowerShell cmdlet “Backup-GPO”.
To back up all GPOs, run the following PowerShell command:

Backup-GPO -All -Path "C:\Backup\GPO"
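
Should you ever need to roll back, the same backup path can be used with Restore-GPO; a minimal sketch (the GPO name is an example):

# Restore a single GPO from the backup folder created above
Restore-GPO -Name "Default Domain Policy" -Path "C:\Backup\GPO"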

You can also create a scheduled task to back up Group Policy on a daily/weekly basis.
Use the following script to automatically create the backup scheduled task for you:

Function Create-GPScheduleBackup
{
    $Message = "Please enter the credentials of the user which will run the schedule task"; 
    $Credential = $Host.UI.PromptForCredential("Please enter username and password",$Message,"$env:userdomain\$env:username",$env:userdomain)
    $SchTaskUsername = $credential.UserName
    $SchTaskPassword = $credential.GetNetworkCredential().Password
    $SchTaskScriptCode = '$Date = Get-Date -Format "yyyy-MM-dd_hh-mm"
    $BackupDir = "C:\Backup\GPO\$Date"
    $BackupRootDir = "C:\Backup\GPO"
    if (-Not (Test-Path -Path $BackupDir)) {
        New-Item -ItemType Directory -Path $BackupDir
    }
    $ErrorActionPreference = "SilentlyContinue" 
    Get-ChildItem $BackupRootDir | Where-Object {$_.CreationTime -le (Get-Date).AddMonths(-3)} | Foreach-Object { Remove-Item $_.FullName -Recurse -Force}
    Backup-GPO -All -Path $BackupDir'
    $SchTaskScriptFolder = "C:\Scripts\GPO"
    $SchTaskScriptPath = "C:\Scripts\GPO\GPOBackup.ps1"
    if (-Not (Test-Path -Path $SchTaskScriptFolder)) {
        New-Item -ItemType Directory -Path $SchTaskScriptFolder
    }
    if (-Not (Test-Path -Path $SchTaskScriptPath)) {
        New-Item -ItemType File -Path $SchTaskScriptPath
    }
    $SchTaskScriptCode | Out-File $SchTaskScriptPath
    $SchTaskAction = New-ScheduledTaskAction -Execute 'PowerShell.exe' -Argument "-ExecutionPolicy Bypass $SchTaskScriptPath"
    $Frequency = "Daily","Weekly"
    $SelectedFrequnecy = $Frequency | Out-GridView -OutputMode Single -Title "Please select the required frequency"
    Switch ($SelectedFrequnecy) {
        Daily {
            $SchTaskTrigger =  New-ScheduledTaskTrigger -Daily -At 1am
        }
        Weekly {
            $Days = "Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"
            $SelectedDays = $Days | Out-GridView -OutputMode Multiple -Title "Please select the relevant days in which the schedule task will run"
            $SchTaskTrigger =  New-ScheduledTaskTrigger -Weekly -DaysOfWeek $SelectedDays -At 1am
        }
    }  
    Try {
        Register-ScheduledTask -Action $SchTaskAction -Trigger $SchTaskTrigger -TaskName "Group Policy Schedule Backup" -Description "Group Policy $SelectedFrequnecy Backup" -User $SchTaskUsername -Password $SchTaskPassword -RunLevel Highest -ErrorAction Stop
    }
    Catch {
        $ErrorMessage = $_.Exception.Message
        Write-Host "Schedule Task regisration was failed due to the following error: $ErrorMessage" -f Red
    }
}

Step 2 – Get Rid of Useless GPOs

There are probably a lot of useless GPOs in your Group Policy environment.
By useless, I mean Group Policies that are empty, disabled or not linked to any Organizational Unit (OU).

Each of the PowerShell functions below will create a report (Grid-View) with the affected GPOs (Disabled, Empty and Not-Linked), and remove those GPOs if requested by the user.

Please note that all scripts use a ‘ReadOnlyMode’ parameter, which is set to ‘True’ by default to prevent any unwanted changes to your environment.

Remove Disabled GPOs

Disabled GPOs are Group Policies configured with the GPO status “All Settings Disabled”, which means they have no effect on computer or user policy. The following PowerShell script will identify those ‘Disabled’ Group Policies and give you the option to delete selected objects from your environment.

Function Get-GPDisabledGPOs ($ReadOnlyMode = $True) {
    ""
    "Looking for disabled GPOs..."
    $DisabledGPOs = @()
    Get-GPO -All | ForEach-Object {
        if ($_.GpoStatus -eq "AllSettingsDisabled") {
            Write-Host "Group Policy " -NoNewline; Write-Host $_.DisplayName -f Yellow -NoNewline; Write-Host " is configured with 'All Settings Disabled'"
            $DisabledGPOs += $_
        }
        Else {
            Write-Host "Group Policy " -NoNewline; Write-Host $_.DisplayName -f Green -NoNewline; Write-Host " is enabled"         
        }
    }
    Write-Host "Total GPOs with 'All Settings Disabled': $($DisabledGPOs.Count)" -f Yellow
    $GPOsToRemove = $DisabledGPOs | Select Id,DisplayName,ModificationTime,GpoStatus | Out-GridView -Title "Showing disabled Group Policies. Select GPOs you would like to delete" -OutputMode Multiple
    if ($ReadOnlyMode -eq $False -and $GPOsToRemove) {
        $GPOsToRemove | ForEach-Object {Remove-GPO -Guid $_.Id -Verbose}
    }
    if ($ReadOnlyMode -eq $True -and $GPOsToRemove) {
       Write-Host "Read-Only mode in enabled. Change 'ReadOnlyMode' parameter to 'False' in order to allow the script make changes" -ForegroundColor Red 
    }
}


Remove Unlinked GPOs

Group Policies can be linked to an AD site, to a specific OU, or at the domain level.
Unlinked GPOs are Group Policies that are not linked to any of the above, and therefore have zero effect on computers and users on the domain. The following PowerShell script will identify those ‘Unlinked’ Group Policies and provide you with the option to delete selected objects from your environment.

Function Get-GPUnlinkedGPOs ($ReadOnlyMode = $True) { 
    ""
    "Looking for unlinked GPOs..."
    $UnlinkedGPOs = @()
    Get-GPO -All | ForEach-Object {
        If ($_ |Get-GPOReport -ReportType XML | Select-String -NotMatch "<LinksTo>" ) {
            Write-Host "Group Policy " -NoNewline; Write-Host $_.DisplayName -f Yellow -NoNewline; Write-Host " is not linked to any object (OU/Site/Domain)"
            $UnlinkedGPOs += $_
        }
        Else {
            Write-Host "Group Policy " -NoNewline; Write-Host $_.DisplayName -f Green -NoNewline; Write-Host " is linked"         
        }
    }
    Write-Host "Total of unlinked GPOs: $($UnlinkedGPOs.Count)" -f Yellow
    $GPOsToRemove = $UnlinkedGPOs | Select Id,DisplayName,ModificationTime | Out-GridView -Title "Showing unlinked Group Policies. Select GPOs you would like to delete" -OutputMode Multiple
    if ($ReadOnlyMode -eq $False -and $GPOsToRemove) {
        $GPOsToRemove | ForEach-Object {Remove-GPO -Guid $_.Id -Verbose}
    }
    if ($ReadOnlyMode -eq $True -and $GPOsToRemove) {
       Write-Host "Read-Only mode in enabled. Change 'ReadOnlyMode' parameter to 'False' in order to allow the script make changes" -ForegroundColor Red 
    }
}


Remove Empty GPOs

An empty GPO is a Group Policy Object that does not contain any settings.
An empty Group Policy can be identified using the User/Computer version of the GPO (when they are both equal to ‘0’), or when the Group Policy Report extension data is NULL.

The following PowerShell script will identify ‘Empty’ Group Policies using the methods described above, and provide you with the option to delete selected objects from your environment.

Function Get-GPEmptyGPOs ($ReadOnlyMode = $True) {
    ""
    "Looking for empty GPOs..."
    $EmptyGPOs = @()
    Get-GPO -All | ForEach-Object {
        $IsEmpty = $False
        If ($_.User.DSVersion -eq 0 -and $_.Computer.DSVersion -eq 0) {
            Write-Host "The Group Policy " -nonewline; Write-Host $_.DisplayName -f Yellow -NoNewline; Write-Host " is empty (no settings configured - User and Computer versions are both '0')"
            $EmptyGPOs += $_
            $IsEmpty = $True
        }
        Else {
            [xml]$Report = $_ | Get-GPOReport -ReportType Xml
            If ($Report.GPO.Computer.ExtensionData -eq $NULL -and $Report.GPO.User.ExtensionData -eq $NULL) {
                Write-Host "The Group Policy " -nonewline; Write-Host $_.DisplayName -f Yellow -NoNewline; Write-Host " is empty (no settings configured - No data exist)"
                $EmptyGPOs += $_
                $IsEmpty = $True
            }
        }
        If (-Not $IsEmpty) {
            Write-Host "Group Policy " -NoNewline; Write-Host $_.DisplayName -f Green -NoNewline; Write-Host " is not empty (contains data)"        
        }
    }
    Write-Host "Total of empty GPOs: $($EmptyGPOs.Count)" -f Yellow
    $GPOsToRemove = $EmptyGPOs | Select Id,DisplayName,ModificationTime | Out-GridView -Title "Showing empty Group Policies. Select GPOs you would like to delete" -OutputMode Multiple
    if ($ReadOnlyMode -eq $False -and $GPOsToRemove) {
        $GPOsToRemove | ForEach-Object {Remove-GPO -Guid $_.Id -Verbose}
    }
    if ($ReadOnlyMode -eq $True -and $GPOsToRemove) {
       Write-Host "Read-Only mode in enabled. Change 'ReadOnlyMode' parameter to 'False' in order to allow the script make changes" -ForegroundColor Red 
    }
}

In the next chapter, we will continue to review advanced methods and different ways of cleaning up Group Policy from unwanted GPOs. Stay tuned!
