Secure Infrastructure Blog

Field Notes: The case of buried Active Directory Account Management Security Audit Policy events


Security auditing is one of the most powerful tools that you can use to maintain the integrity of your system.  As part of your overall security strategy, you should determine the level of auditing that is appropriate for your environment.  Auditing should identify attacks (successful or not) that pose a threat to your network, and attacks against resources that you have determined to be valuable in your risk assessment.

In this blog post, I discuss a common security audit policy configuration I come across in a number of environments, with a special focus on Account Management.  I also highlight the difference between basic and advanced security audit policy settings.  Lastly, I point you to where you can obtain recommendations that help you fine-tune these policies.

Background

It may appear that events relating to user account management activities in Active Directory (AD) are not logged in the security event logs on domain controllers (DC).  This is an example of a view on one DC:

Cluttered Security Event Log

Here we see a lot of events from the Filtering Platform Packet Drop and Filtering Platform Connection subcategories – the image shows ten of these within the same second!

We see the following events on the same log about two minutes later (Directory Service Replication):

Cluttered Security Event Log

There is also an event relating to a successful Directory Service Access (DS Access) activity, but it is only one among a flood of others!

Running the following command in an elevated prompt helps in figuring out what triggers these events:

 auditpol /get /category:"DS Access,Object Access" 

The output below reveals that every subcategory in both the Object Access and DS Access categories is set to capture success and failure events.

Auditpol Output

Note: running auditpol unelevated will result in the following error:

Error 0x00000522 occurred:
A required privilege is not held by the client.

To complete the picture, this is what it looked like in the Group Policy Editor:

Basic Audit Policy Settings Group Policy Management Editor

Do we need all these security audit events?  Let us look at what some of the recommendations are.

 

Security auditing recommendations

Guidance from tools such as the Security Compliance Manager (SCM) states that if audit settings are not configured, it can be difficult or impossible to determine what occurred during a security incident.  However, if audit settings are configured so that events are generated for all activities, the security log fills up with data and becomes hard to use.  We need a good balance.

Let us take a closer look at these subcategories:

Filtering Platform Packet Drop

This subcategory reports when packets are dropped by Windows Filtering Platform (WFP).  These events can be very high in volume.  The default and recommended setting is no auditing on AD domain controllers.

Filtering Platform Connection

This subcategory reports when connections are allowed or blocked by WFP.  These events can be high in volume.  The default and recommended setting is no auditing on AD domain controllers.

Directory Service Replication

This subcategory reports when replication between two domain controllers begins and ends.  The default and recommended setting is no auditing on AD domain controllers.

These descriptions and recommendations come from SCM, but you can also use the Policy Analyzer, which is part of the Microsoft Security Compliance Toolkit, for guidance.  There’s also this document if you do not have any of these tools installed.

Tuning audit settings

Turning on everything (success and failure) is obviously not in line with security audit policy recommendations.  If you have an environment that was built on Windows Server 2008 R2 or above, the advanced audit policy configuration is available to use in Group Policy.

Important

Basic versus Advanced

Reference: https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd692792(v=ws.10)

If you already have settings configured in the basic audit policy and want to start leveraging the advanced audit policy in order to benefit from granularity offered by the latter, you need to carefully plan for the migration.

Getting Started with Advanced Audit Policy Configuration

In case you are wondering what I mean by granularity, see a comparison of the two below.

Basic Audit Policy Settings

In this example, I set the audit directory service access (DS Access) category to success:

Example of Basic Audit Policy Settings

Notice that all subcategories are affected as there is no granularity offered here (every subcategory is set to success):

Outcome of Basic Audit Policy Setting

Side note: take a look back at the Group Policy Management Editor window focusing on Audit Policy while we are here.  Notice that audit policy change is set to no auditing instead of not defined.  Here is the difference between the two:

  • Not defined means that group policy does not enforce this setting – Windows (Server) will assume the default setting
  • No auditing means that auditing is turned off – see example below

No Auditing

Advanced Audit Policy Settings

On the other hand, the advanced security audit policy does offer fine-grained control.  The example below demonstrates granularity that could be realized when using the advanced security audit policies:

Subcategory | Setting
Audit Detailed Directory Service Replication | No Auditing
Audit Directory Service Access | Success and Failure
Audit Directory Service Changes | Success
Audit Directory Service Replication | No Auditing
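
For reference, the same subcategory states could be applied or verified from an elevated command prompt with auditpol.  This is only a sketch; in a domain you would normally deploy these settings through the advanced audit policy in Group Policy rather than setting them locally:

 auditpol /set /subcategory:"Detailed Directory Service Replication" /success:disable /failure:disable
 auditpol /set /subcategory:"Directory Service Access" /success:enable /failure:enable
 auditpol /set /subcategory:"Directory Service Changes" /success:enable /failure:disable
 auditpol /set /subcategory:"Directory Service Replication" /success:disable /failure:disable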

Example of Advanced Audit Policy Settings

The output of auditpol confirms the expected result:

Outcome of Advanced Audit Policy Settings

The outcome

After turning off basic security audit policies and implementing the advanced settings based on the recommendations shared above, the security event logs start to make sense since a lot of the “noise” has been removed.  We start seeing desired events logged in the security log as depicted below:

Neat Security Event Log

Keep in mind that these events are local to each DC, and that the event logs are configured to overwrite events as needed (oldest events first) by default.  Solutions such as System Center Operations Manager Audit Collection Services can help capture, centralize and archive these events.

Till next time…


Going Serverless with Azure Functions & Powershell: SendGrid


 

In this post, we will walk through building a serverless email solution that reports your disk usage using Azure Function Apps, SendGrid and a PowerShell script that runs on your local machine.

We are going to examine this scenario in the following categories:

1- Configure SendGrid and Azure

2- Create an Azure Function App

3- Create an Azure Function – an experimental language (PowerShell) is used in this scenario

4- Create a local PowerShell script to gather your disk space and usage

5- (Optional) Add the PowerShell script to Task Scheduler and send your disk usage as an email by calling your Azure Function URL

Please keep in mind that you can customize your Azure PowerShell function and local PowerShell script based on your requirements.  This kind of logic can be useful to implement in your own solutions.

 

Configure SendGrid and Azure

In this demo, I used the free SendGrid plan, which provides 25,000 emails per month – enough to get things going.

You can find SendGrid Email Delivery in the Azure Marketplace under the Web category.  Once the SendGrid account is successfully created, you need to obtain your SendGrid API key, which will be used later during the creation of the function, so make sure you keep it in a secure place.

Annotation 2019-01-12 165233

 

Annotation 2019-01-12 165405Annotation 2019-01-12 165449

 

Once your SendGrid account is created, click Manage and you will be redirected to the SendGrid portal.  The next step is to obtain your API key, which is needed in your Function App.

Annotation 2019-01-12 165808

Select Create API Key in the top right corner.  Specify your API Key details, such as name and permissions, and then click “Create & View.”

image           Annotation 2019-01-12 165944

image

Remember that this is your only opportunity to view and copy your API Key.  For security purposes, its value will not be displayed again.


Create an Azure Function App

As a next step, we are going to create our new Function App.  As you can see in the following screenshot, we need a resource group and a storage account to store the function code and its components.

 

Annotation 2019-01-12 170335

 

Since our function will be a PowerShell script, we need to review the platform features.  Go to your previously created Function App and, under General Settings, open the Function App Settings section.  Change the runtime version from ~2 to ~1 so that PowerShell appears as a language option (with experimental language support enabled) when choosing the function template.

Annotation 2019-01-12 171430

 

image

 

Annotation 2019-01-12 171738

Since we are going to call the SendGrid API on demand, we choose the HTTP trigger template.  You can specify the function language, name and authorization level.

Annotation 2019-01-12 171759

The function HttpTriggerPowershell1 and its run.ps1 script are responsible for calling the SendGrid API.


Create and Modify an Azure Function

Provide the following variables in your run.ps1 script.  We need to define the “to” and “from” sections in the body variable.

To call the POST method we also need to populate the header variable – provide the API key obtained in the previous section.

run.ps1
# POST method: $req is the path to the incoming request payload
$requestBody = Get-Content $req -Raw | ConvertFrom-Json
$count = $requestBody.value.count
$date = $requestBody.date

# Build the plain-text body of the email
$firstline = "Id | Type | Size(GB) | FreeSpace(GB) | FreeSpace(%)" + "`n`n"
$info = $firstline

for ($i = 0; $i -lt $count; $i++) {
    $line = $requestBody.value[$i].DeviceID + "`t" + $requestBody.value[$i].DriveType + "`t" + $requestBody.value[$i].'Size (GB)' + "`t" + $requestBody.value[$i].'Free Space (GB)' + "`t" + $requestBody.value[$i].'Free Space (%)' + "`n"
    $info += $line
}

# SendGrid v3 mail/send payload - replace the TO/FROM addresses with your own
$body = @"
{"personalizations": [{"to": [{"email": "TO_EMAIL_ADDRESS"}]}],"from": {"email": "FROM_EMAIL_ADDRESS"},"subject": "Current Disk Space Status -> $date","content": [{"type": "text/plain", "value": "$info"}]}
"@

# The Authorization header carries the SendGrid API key obtained earlier
$header = @{ "Authorization" = "Bearer YOUR API KEY HERE"; "Content-Type" = "application/json" }

Invoke-RestMethod -Uri "https://api.sendgrid.com/v3/mail/send" -Method Post -Headers $header -Body $body



Create a Powershell Script to learn about your disk space and usage

 

Now we need to customize the local PowerShell script that calls our Azure Function.  Obtain the function URL from the Azure portal and place it in the -Uri parameter of Invoke-RestMethod.


 

get_disk_space.ps1
$servername = "localhost"

# Collect logical disk information and shape it into friendly columns
$diskinfo = Get-WmiObject -Class Win32_LogicalDisk -ComputerName $servername |
    Select-Object @{Name="DeviceID";Expression={$_.DeviceID}},
        @{Name="DriveType";Expression={switch ($_.DriveType){
            0 {"Unknown"}
            1 {"No Root Directory"}
            2 {"Removable Disk"}
            3 {"Local Disk"}
            4 {"Network Drive"}
            5 {"Compact Disc"}
            6 {"RAM Disk"}
        }}},
        @{Name="Size (GB)";Expression={"{0:N1}" -f ($_.Size/1GB)}},
        @{Name="Free Space (GB)";Expression={"{0:N1}" -f ($_.FreeSpace/1GB)}},
        @{Name="Free Space (%)";Expression={
            if ($_.Size -gt 0) {
                "{0:P0}" -f ($_.FreeSpace/$_.Size)
            }
            else {
                0
            }
        }}

# Wrap the results (plus a timestamp) in JSON and post them to the Azure Function
$data = @{ date = "{0:MM}/{0:dd}/{0:yyyy} {0:hh}:{0:mm}" -f (Get-Date); value = $diskinfo }
$json = $data | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri "YOUR COPIED FUNCTION URL HERE" -Body $json


 


Configure to run a Powershell Script into Task Scheduler

To report a machine’s disk usage periodically, we need to call the get_disk_space.ps1 script on a daily basis.

As an example, I configured my schedule to send an email at 8:30 pm every day with my current disk space usage.

tempsnip

In the Actions tab, we call the script that collects the disk usage info and sends it to the function URL via Invoke-RestMethod.
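
If you prefer to script the schedule itself instead of clicking through the UI, the same task can be registered with the ScheduledTasks PowerShell module.  This is only a sketch, and it assumes the script was saved to C:\Scripts\get_disk_space.ps1 (adjust the path and task name to your environment):

# Run get_disk_space.ps1 every day at 20:30
$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-ExecutionPolicy Bypass -File C:\Scripts\get_disk_space.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At "20:30"
Register-ScheduledTask -TaskName "Daily Disk Usage Report" -Action $action -Trigger $trigger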

tempsnip2

 


Here is the outcome of our scenario:

tempsnip3


 

Please let me know if you have any suggestions or questions about this post.

Thanks for your time!

References

https://sendgrid.com/docs/index.html

https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-sendgrid

https://social.technet.microsoft.com/wiki/contents/articles/38580.configure-to-run-a-powershell-script-into-task-scheduler.aspx

Windows Admin Center–Part 1 of Optimization Series


 

This is going to be the first in a series of posts on how to optimize your environment, with the tools provided by Microsoft at no cost to you.

 

Windows Admin Center, which can be downloaded here, is the natural evolution of and ultimate replacement for Server Manager.

 

It is a free, single, lightweight MSI download that uses remote PowerShell and WMI over WinRM to connect to and manage devices (Windows Server 2008 R2 and later, Windows 10) through the Windows Admin Center gateway installed on Windows Server or Windows 10.

 

It provides a single pane of glass for performing multiple tasks across a range of servers in your environment, without having to juggle multiple tools (MMC, Hyper-V Manager, etc.).

 

wac-complements

 

Today I am going to be looking at 3 features to help you as an Admin with running your infrastructure more effectively:

 

1. Managing Certificates like a Pro (easy as 1 – 2 – 3 )

2. Enabling your Nested Virtualization

3. Quickly Enable Azure Backup  ( under 5 minutes start to Finish)

Before we get started, we need to ensure that we have a list of the machines available that we want to manage

 

This can be done by either adding the Server names in manually

 

image

 

or adding a txt file with the server names in, for managing your entire environment

 

image

 

Now that we have added the servers, we connect to a Machine, and can start from there

 

1. Managing Certificates like a Pro (easy as 1 – 2 – 3 )

 

After selecting a machine to manage, select Certificates (step 1)

 

SNAGHTMLfbe2018

 

I now have an overview of the certificates that are installed on the machine, and can view the number of expired certs (or import new certs, etc.).

 

For this example, we are cleaning up expired Certs on the machine. Select Expired (Step 2)

 

SNAGHTMLfc20365

 

Once I have opened the expired certs, I can delete or request renewal of the certs (step 3).

 

SNAGHTMLfc7aad8

 

That is certificate management like a Pro.

 

From one pane of glass, I can easily manage the certs, quickly and effectively, without having to launch MMC – Certificates.

 

2. Enabling your Nested Virtualization

 

When selecting Virtual Machines, you will get a summary of the VMs running on the server/PC and their impact on the system.

 

SNAGHTML2e72695

 

Now we select Inventory, then the virtual machine we want to edit/manage.

 

SNAGHTML3003ca3

 

Now select More – Settings from the Drop down list

 

SNAGHTML3010c09

 

Note: Remember that the VM must not be in a running state, otherwise you cannot make changes to its hardware.

 

Select Processors – Enable nested virtualization

 

SNAGHTML301a6e2

 

That simple.
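
If you prefer PowerShell over the Admin Center UI, the same change can be made with the Hyper-V module on the host.  A minimal sketch – the VM name below is a placeholder, and the VM must be off before the processor setting can change:

# Stop the VM, expose virtualization extensions to it, then start it again
Stop-VM -Name "VM01"
Set-VMProcessor -VMName "VM01" -ExposeVirtualizationExtensions $true
Start-VM -Name "VM01"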

 

Signing into Azure

 

For the next step you need to have already signed the gateway in to your Azure subscription.

If you have not, the steps are listed below:

 

In Admin Center – Select the Gear Icon for Settings

 

SNAGHTML357bbdf

 

Select Azure – Register

 

image

 

Follow the steps to sign in and grant permissions to the gateway app on the subscription.

 

3. Quickly Enable Azure Backup ( under 5 Minutes)

The following video will guide you through setting up Azure Backup from scratch in under 5 minutes.

 

Quickly enable azure backup

 

I hope this helps you get familiar with and start using the new Windows Admin Center.

Please check back later for Part 2 of the Blog

Field Notes: The case of Active Directory Diagnostics – Data Collector Set Fails to Start


Performance Monitor is a great tool for collecting and analyzing performance data in Windows and Windows Server.  There are many counters available that one can look at to help understand how the system is performing.  Unfortunately, analysis of performance data may not always be straightforward for some system administrators.  Luckily, Windows Server provides a built-in Data Collector Set for Active Directory Diagnostics once the Active Directory Domain Services role is installed on a machine.  This feature makes the life of an Active Directory administrator easier, as most of the analysis is automated.

In this blog post, I briefly explain how the Active Directory Diagnostics works.  I also take you through what I see in some environments where this feature does not work due to inadequate user rights.

The Active Directory Diagnostics Report

Say you are already familiar with the Active Directory (AD) Diagnostics Data Collector Set (DCS) in Performance Monitor, or you have read this blog post and are interested in a report similar to the one below created by the default AD DCS.  In the example, we see that there is a warning indicating that the system is experiencing excessive paging.  The cause here is that available memory on the system is low.  The report also suggests that we upgrade the physical memory or reduce system load.  This report allows us to drill into desired areas of interest such as Active Directory, CPU, network, disk, memory, etc.

Diagnostics Results

The Data Collector Set Fails to Start

Unfortunately, the AD DCS may fail to start in some instances due to inadequate user rights, which I see often in the field.  Instead of starting up and showing the green play icon as depicted below, nothing happens – there is not even a pop-up dialog box with a warning or error indicating that there is a problem.  The DCS just does not start!

Running Data Collector Set

Attempting to kick off the DCS via the command line also does not help:

 logman start "System\Active Directory Diagnostics" -ets 

Behind The Scenes

Before we get into what exactly the issue is and how we would go about resolving it, let us briefly take a look at how this feature works.

Working environment

The Active Directory Diagnostics DCS leverages the Windows Task Scheduler to do its work.  I grabbed a screenshot from the Task Scheduler to help paint a picture:

Scheduled Task History

Following the sequence of events that took place (reading from bottom to top), we get an idea on what happens behind the scenes when the play button is pressed in Performance Monitor.  Here are a few informational events that stand out:

  • Event ID 100 – Task Started
  • Event ID 200 – Action Started
  • Event ID 201 – Action Completed
  • Event ID 102 – Task Completed

 

Broken environment

Looking at the task where the Data Collector Set fails to launch, we see the following:

Scheduled Task History

From the image above, we can see Event ID 101.  This event means the Task Scheduler failed to start the AD Diagnostics task for the currently logged on user.

Note: These tasks are created under Microsoft | Windows | PLA | System

Taking a look in the Event Viewer (Microsoft-Windows-TaskScheduler | Operational), there is also an Event ID 104 logged indicating that the Task Scheduler failed to log on…

Event 104

Required Rights

How do we proceed with this background information?  Taking a look back at the scheduled task, we see the following under general options.  The specified account is the currently logged on user (which is also reflected in Event ID 101):

Task User Account

You may begin to wonder at this stage, as you are logged on to the DC with an account that is in the Domain Admins group – what permissions/rights are missing?  It is the Log on as a batch job user right, which determines which accounts can log on by using a batch-queue tool such as the Task Scheduler service.
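
If you want to quickly check which principals currently hold this right on a DC, one option is to export the effective security policy with secedit and look for SeBatchLogonRight.  This is a read-only check and the export path is just an example:

secedit /export /cfg C:\Temp\secpol.inf
findstr /i "SeBatchLogonRight" C:\Temp\secpol.inf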

Default Behavior

In Windows Server 2012 R2, this setting is set to “Not Configured” in the Default Domain Controllers Policy.  Domain Controllers would then assume the default behavior, which assigns this user right to the following security groups:

  • Administrators
  • Backup Operators

 

Common Case

If you look at the policy setting (where the DCS fails to start), you would see that user accounts or groups have explicitly been granted the right.  This unfortunately overrides the default behavior – only the accounts and/or security groups listed here have this right (the explain tab lists the default groups):

User Rights Assignment

I observed something interesting when I tested on a few Windows Server 2016 machines in my lab.  Default groups are pre-populated when you modify this setting; therefore, the chances of accidentally hurting yourself are lower.

The Fix is Very Easy

Administrators and Backup Operators would have to be added over and above the IDRSfw-service account (in this example) if you still want them to have this user right, as depicted below:

User Rights Assignment

After adding the Administrators group back to the list of security principals allowed to log on as a batch job, the DCS successfully starts:

 logman query "System\Active Directory Diagnostics" -ets 

 

Running Data Collector Set (command-line)

Closure

Be careful when modifying policy settings such as User Rights Assignments, as you could end up seeing unexpected results if they are not properly configured.  In this instance, the Administrators and Backup Operators groups would have to be explicitly added alongside the IDRSfw-service account in order not to negatively impact the default behavior.  Be sure to check tools such as the Policy Analyzer and the Security Compliance Manager for guidance on the recommendations.  This is one example and there are others, such as the inability to add a new DC to an existing domain due to inadequate rights!

Till next time…

SCOM Advanced Authoring: TCP Port Monitoring


It’s been a long time since I blogged about SCOM authoring.  Following the blog post on PowerShell Discovery from CSV File, I got numerous requests from fellow techies to complete the TCP Port Monitoring MP with monitors and rules.  One friend even created an MP of his own, which he has blogged about here.

Anyways, I thought it would be good to finish what I started with detailed explanation. Better Late than Never!

When we start talking about creating Custom Monitors and Rules in SCOM, we must understand how they are structured.

Monitors: Each Monitor is based on a Monitor Type where you define the number of states (two or three) and their criteria along with necessary modules.

Rules: Rules are built on top of modules directly.

To understand about modules, please take a minute to go through the WIKI article.

Coming back to the scenario, now we must build 4 Monitors and 1 Rule.

  • TCP Unreachable Monitor
  • TCP Timeout Monitor
  • DNS Resolution Monitor
  • Connection Refused Monitor
  • Connection Time Performance Collection Rule

This means we must build 4 Monitor Types on top of which the 4 Monitors can be created.  For all 4 Monitors and the 1 Rule, the Data Source is the same (i.e., a synthetic transaction that tests port connectivity).  So, we will start by creating a Data Source, followed by the 4 Monitor Types, and finally our 4 Monitors and the Rule.

Data Source:

Below is the XML fragment for the Data Source module.  We use the System.SimpleScheduler data source module and the Microsoft.SystemCenter.SyntheticTransactions.TCPPortCheckProbe probe action module to create a composite Data Source.  As discussed in previous blog posts, we promote the customizable fields such as "IntervalSeconds", "SyncTime", "ServerName" and "Port".

<DataSourceModuleType ID="GKLab.TCP.Port.Monitoring.Monitoring.DataSource" Accessibility="Internal" Batching="false">
  <Configuration>
    <xsd:element minOccurs="1" name="IntervalSeconds" type="xsd:integer" />
    <xsd:element minOccurs="1" name="SyncTime" type="xsd:string" />
    <xsd:element minOccurs="1" name="ServerName" type="xsd:string" />
    <xsd:element minOccurs="1" name="Port" type="xsd:integer" />
  </Configuration>
  <OverrideableParameters>
    <OverrideableParameter ID="IntervalSeconds" Selector="$Config/IntervalSeconds$" ParameterType="int" />
  </OverrideableParameters>
  <ModuleImplementation Isolation="Any">
    <Composite>
      <MemberModules>
        <DataSource ID="DS" TypeID="System!System.SimpleScheduler">
          <IntervalSeconds>$Config/IntervalSeconds$</IntervalSeconds>
          <SyncTime>$Config/SyncTime$</SyncTime>
        </DataSource>
        <ProbeAction ID="Probe" TypeID="MicrosoftSystemCenterSyntheticTransactionsLibrary!Microsoft.SystemCenter.SyntheticTransactions.TCPPortCheckProbe">
          <ServerName>$Config/ServerName$</ServerName>
          <Port>$Config/Port$</Port>
        </ProbeAction>
      </MemberModules>
      <Composition>
        <Node ID="Probe">
          <Node ID="DS" />
        </Node>
      </Composition>
    </Composite>
  </ModuleImplementation>
  <OutputType>MicrosoftSystemCenterSyntheticTransactionsLibrary!Microsoft.SystemCenter.SyntheticTransactions.TCPPortCheckData</OutputType>
</DataSourceModuleType>

Monitor Type:

Next, we will create the 4 Monitor Types.  Each Monitor Type has two states.  We will use the Data Source created above and define two conditions corresponding to the two states of the Monitor Type.  Below is the code for the Connection Refused Monitor Type.  If the StatusCode from the Data Source equals 2147952461, the monitor state will be ConnectionRefusedFailure, which is mapped to the Critical health state of the monitor.  If not, the monitor state will be NoConnectionRefusedFailure, which is mapped to the Healthy (Success) state of the monitor.

<UnitMonitorType ID="GKLab.TCP.Port.Monitoring.MonitorType.ConnectionRefused" Accessibility="Internal">
  <MonitorTypeStates>
    <MonitorTypeState ID="ConnectionRefusedFailure" NoDetection="false" />
    <MonitorTypeState ID="NoConnectionRefusedFailure" NoDetection="false" />
  </MonitorTypeStates>
  <Configuration>
    <xsd:element minOccurs="1" name="IntervalSeconds" type="xsd:integer" />
    <xsd:element minOccurs="1" name="SyncTime" type="xsd:string" />
    <xsd:element minOccurs="1" name="ServerName" type="xsd:string" />
    <xsd:element minOccurs="1" name="Port" type="xsd:integer" />
    <xsd:element minOccurs="1" name="TimeWindowInSeconds" type="xsd:integer" />
    <xsd:element minOccurs="1" name="NoOfRetries" type="xsd:integer" />
  </Configuration>
  <OverrideableParameters>
    <OverrideableParameter ID="IntervalSeconds" Selector="$Config/IntervalSeconds$" ParameterType="int" />
  </OverrideableParameters>
  <MonitorImplementation>
    <MemberModules>
      <DataSource ID="DS" TypeID="GKLab.TCP.Port.Monitoring.Monitoring.DataSource">
        <IntervalSeconds>$Config/IntervalSeconds$</IntervalSeconds>
        <SyncTime>$Config/SyncTime$</SyncTime>
        <ServerName>$Config/ServerName$</ServerName>
        <Port>$Config/Port$</Port>
      </DataSource>
      <ProbeAction ID="PassThrough" TypeID="System!System.PassThroughProbe" />
      <ConditionDetection ID="ConditionOK" TypeID="System!System.ExpressionFilter">
        <Expression>
          <SimpleExpression>
            <ValueExpression>
              <XPathQuery Type="UnsignedInteger">StatusCode</XPathQuery>
            </ValueExpression>
            <Operator>NotEqual</Operator>
            <ValueExpression>
              <Value Type="UnsignedInteger">2147952461</Value>
            </ValueExpression>
          </SimpleExpression>
        </Expression>
      </ConditionDetection>
      <ConditionDetection ID="ConditionFailure" TypeID="System!System.ExpressionFilter">
        <Expression>
          <SimpleExpression>
            <ValueExpression>
              <XPathQuery Type="UnsignedInteger">StatusCode</XPathQuery>
            </ValueExpression>
            <Operator>Equal</Operator>
            <ValueExpression>
              <Value Type="UnsignedInteger">2147952461</Value>
            </ValueExpression>
          </SimpleExpression>
        </Expression>
      </ConditionDetection>
      <ConditionDetection ID="Consolidator" TypeID="System!System.ConsolidatorCondition">
        <Consolidator>
          <ConsolidationProperties />
          <TimeControl>
            <WithinTimeSchedule>
              <Interval>$Config/TimeWindowInSeconds$</Interval>
            </WithinTimeSchedule>
          </TimeControl>
          <CountingCondition>
            <Count>$Config/NoOfRetries$</Count>
            <CountMode>OnNewItemTestOutputRestart_OnTimerSlideByOne</CountMode>
          </CountingCondition>
        </Consolidator>
      </ConditionDetection>
    </MemberModules>
    <RegularDetections>
      <RegularDetection MonitorTypeStateID="ConnectionRefusedFailure">
        <Node ID="Consolidator">
          <Node ID="ConditionFailure">
            <Node ID="DS" />
          </Node>
        </Node>
      </RegularDetection>
      <RegularDetection MonitorTypeStateID="NoConnectionRefusedFailure">
        <Node ID="ConditionOK">
          <Node ID="DS" />
        </Node>
      </RegularDetection>
    </RegularDetections>
    <OnDemandDetections>
      <OnDemandDetection MonitorTypeStateID="ConnectionRefusedFailure">
        <Node ID="ConditionFailure">
          <Node ID="PassThrough" />
        </Node>
      </OnDemandDetection>
      <OnDemandDetection MonitorTypeStateID="NoConnectionRefusedFailure">
        <Node ID="ConditionOK">
          <Node ID="PassThrough" />
        </Node>
      </OnDemandDetection>
    </OnDemandDetections>
  </MonitorImplementation>
</UnitMonitorType>

Monitors:

Finally, the Monitors. The Monitor is targeted to the custom Class we created earlier – GKLab.TCP.Port.Monitoring.Class which hosts the instances from the CSV file. The Target Instance data is passed as configuration to the Monitor (refer <Configuration> tag) and the Alert parameters are defined. Notice the health state mapping with MonitorTypeStateId which we defined earlier in Monitor Types.

<UnitMonitor ID="GKLab.TCP.Port.Monitoring.Monitor.ConnectionRefused" Accessibility="Internal" Enabled="true" Target="GKLab.TCP.Port.Monitoring.Class" ParentMonitorID="Health!System.Health.AvailabilityState" Remotable="true" Priority="Normal" TypeID="GKLab.TCP.Port.Monitoring.MonitorType.ConnectionRefused" ConfirmDelivery="true">
  <Category>Custom</Category>
  <AlertSettings AlertMessage="GKLab.TCP.Port.Monitoring.Monitor.ConnectionRefused_AlertMessageResourceID">
    <AlertOnState>Error</AlertOnState>
    <AutoResolve>true</AutoResolve>
    <AlertPriority>Normal</AlertPriority>
    <AlertSeverity>Error</AlertSeverity>
    <AlertParameters>
      <AlertParameter1>$Target/Property[Type="GKLab.TCP.Port.Monitoring.Class"]/Port$</AlertParameter1>
      <AlertParameter2>$Target/Property[Type="GKLab.TCP.Port.Monitoring.Class"]/ServerName$</AlertParameter2>
      <AlertParameter3>$Target/Host/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$</AlertParameter3>
    </AlertParameters>
  </AlertSettings>
  <OperationalStates>
    <OperationalState ID="UIGeneratedOpStateIdde249d72023f429ab12b926b5bc21ca4" MonitorTypeStateID="ConnectionRefusedFailure" HealthState="Error" />
    <OperationalState ID="UIGeneratedOpStateId86f579e32c97416b824528157ecd2c71" MonitorTypeStateID="NoConnectionRefusedFailure" HealthState="Success" />
  </OperationalStates>
  <Configuration>
    <IntervalSeconds>300</IntervalSeconds>
    <SyncTime>00:00</SyncTime>
    <ServerName>$Target/Property[Type="GKLab.TCP.Port.Monitoring.Class"]/ServerName$</ServerName>
    <Port>$Target/Property[Type="GKLab.TCP.Port.Monitoring.Class"]/Port$</Port>
    <TimeWindowInSeconds>$Target/Property[Type="GKLab.TCP.Port.Monitoring.Class"]/TimeWindowInSeconds$</TimeWindowInSeconds>
    <NoOfRetries>$Target/Property[Type="GKLab.TCP.Port.Monitoring.Class"]/NoOfRetries$</NoOfRetries>
  </Configuration>
</UnitMonitor>

Rules:

Like the Monitors, we need to target the Rule to the custom class.  We define the Data Source and the relevant modules based on whether the rule is an alerting or non-alerting rule.  Since we are building a performance collection rule, we use System.Performance.DataGenericMapper to map the collected performance data, and Write Action modules to write the collected data to the Ops DB and Ops DW DB.


<Rules>
  <Rule ID="GKLab.TCP.Port.Monitoring.Rule.ConnectionTime" Enabled="true" Target="GKLab.TCP.Port.Monitoring.Class" ConfirmDelivery="true" Remotable="true" Priority="Normal" DiscardLevel="100">
    <Category>PerformanceCollection</Category>
    <DataSources>
      <DataSource ID="DS" TypeID="GKLab.TCP.Port.Monitoring.Monitoring.DataSource">
        <IntervalSeconds>300</IntervalSeconds>
        <SyncTime>00:00</SyncTime>
        <ServerName>$Target/Property[Type="GKLab.TCP.Port.Monitoring.Class"]/ServerName$</ServerName>
        <Port>$Target/Property[Type="GKLab.TCP.Port.Monitoring.Class"]/Port$</Port>
      </DataSource>
    </DataSources>
    <ConditionDetection ID="PerfMapper" TypeID="Performance!System.Performance.DataGenericMapper">
      <ObjectName>TCP Port Check</ObjectName>
      <CounterName>Connection Time</CounterName>
      <InstanceName>$Data/ServerName$:$Data/Port$</InstanceName>
      <Value>$Data/ConnectionTime$</Value>
    </ConditionDetection>
    <WriteActions>
      <WriteAction ID="WriteToDB" TypeID="SC!Microsoft.SystemCenter.CollectPerformanceData" />
      <WriteAction ID="WriteToDW" TypeID="SystemCenter!Microsoft.SystemCenter.DataWarehouse.PublishPerformanceData" />
    </WriteActions>
  </Rule>
</Rules>

Wrap Up:

Finally, add the missing XML fragments for Folders, Views, String Resources and Language Pack elements.

You can download the final XML here.

For any SCOM Monitoring requirements, please feel free to add a comment.

Happy SCOMing!

OMS Assessment : “No Data Found” Or Server Not Showing


Problem Description and Symptoms:

How many of us have activated an OMS solution and are getting “No Data Found” as an assessment result?  How many have added a server to an already assessed solution (e.g. Active Directory Assessment, Active Directory Replication or SQL Assessment) without being able to see the newly added server in the assessment results?  This blog will help you identify and solve these problems in a quick and easy way.

image          image

Solution:

Prerequisites that need to be checked:

  • Verify that the MMA agent is installed and connected properly to the workspace where the assessment is activated.

                     image

  • Ensure that .Net Framework 4 or above is installed on the machine having the issue.

                     image

If these prerequisites are met then you should proceed with the below.

Forcing the assessment workflows to be re-executed:

  • Log in to the machine having the problem and open the Registry Editor.
  • Locate the following registry key:

                    HKLM\SYSTEM\CurrentControlSet\Services\HealthService\Parameters\Management Groups\<Your MG Name >\Solutions\

                    image

  • Choose the solution you are having an issue with.

                     image

  • Delete and confirm the deletion of the “LastExecuted” key.

                     image

                     image

  • Now recycle the Microsoft Monitoring Agent service (HealthService), either from services.msc or from a CMD prompt opened as Administrator.

                     image

                     image

  • You can see that the key we deleted previously is recreated.  Wait for a few minutes (around 5 to 10 minutes) and then open the assessment again.

                     image

Voilà – here is the result: data is collected and the machine is assessed by the solution.
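
For reference, the same steps can also be scripted.  A minimal sketch, assuming LastExecuted is a value under the solution’s key (if it turns out to be a subkey in your environment, use Remove-Item instead) and using placeholder names for the management group and solution:

# Delete the LastExecuted marker for the affected solution and recycle the agent
$mg  = "<Your MG Name>"
$sol = "<Solution Name>"
Remove-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\HealthService\Parameters\Management Groups\$mg\Solutions\$sol" -Name "LastExecuted" -ErrorAction SilentlyContinue
Restart-Service HealthService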

Deploying Windows Virtual Desktop on Azure – Step by Step


On the 21/03/2019, Microsoft released Windows Virtual Desktop (aka WVD) as a public preview.

If you haven’t heard about it so far, you can think about Windows Virtual Desktop as a SaaS (Software as a Service) for Remote Desktop Services (RDS) on Azure. All the infrastructure elements like RD Gateway, Connection Brokers and RD Licensing are all provided as a service in Azure.

Windows Virtual Desktop also introduces Windows 10 multi-session, which, for the first time, allows multiple users to log into the same Windows 10 machine simultaneously using RDP.

In this post, we’ll walk through the steps required to establish Windows Virtual Desktop on your Azure tenant.

Prerequisites

Before we begin, please pay attention to the following requirements:

  • Azure AD in sync with Active Directory Domain Services (ADDS) through Azure AD Connect or Azure AD DS.
  • An Azure subscription within the Azure tenant.
  • A virtual network that either contains or is connected to the Active Directory Domain Services and is configured to use the domain controllers’ IPs as its DNS servers. This is required because your session host VMs must be joined to the domain. Note that Azure AD-joined VMs are not supported.
  • For the full list of requirements, please see official docs.

Deploying Windows Virtual Desktop

The deployment of Windows Virtual Desktop consists of the following high-level steps:

  1. Create a Windows Virtual Desktop tenant.
  2. Create a host pool and session host VMs.
  3. Test connection and manage Windows Virtual Desktop users.

We will cover each of these steps in detail, with screenshots and examples.

(1) Create a Windows Virtual Desktop tenant

In this step, we will perform the following tasks:

  • Allow Windows Virtual Desktop service to access Azure AD.
  • Assign the “TenantCreator” role to a user account (required to create the WVD tenant).
  • Create the Windows Virtual Desktop tenant itself.

Allow Windows Virtual Desktop service to access Azure AD

  1. Go to https://rdweb.wvd.microsoft.com.
  2. Select ‘Server App‘ under consent option, provide your Azure AD tenant GUID and click Submit. (to find your Azure AD GUID, go to Azure Portal, select Azure Active Directory -> Properties, and look for Directory ID).
    screen-shot-2019-04-12-at-21.33.05.png
  3. Repeat the same process while selecting ‘Client App’ under the consent option.

Assign the “TenantCreator” role to a user account

In order to assign the “TenantCreator” role to the user we are using for creating the WVD tenant,  we’ll use PowerShell.

The following script asks for credentials and assigns the “TenantCreator” role to the provided user:

Create the Windows Virtual Desktop tenant

The creation of the Windows Virtual Desktop tenant is done by using PowerShell.
The following script creates the WVD tenant with the relevant parameters:
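
The embedded script is not reproduced in this export, but a minimal sketch using the Microsoft.RDInfra.RDPowerShell module would look roughly like this (the tenant name and the two GUIDs are placeholders, and the account used with Add-RdsAccount must already hold the TenantCreator role):

# Sign in to the WVD management plane and create the tenant
Install-Module -Name Microsoft.RDInfra.RDPowerShell
Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com"
New-RdsTenant -Name "ContosoWVDTenant" -AadTenantId "<Azure AD tenant GUID>" -AzureSubscriptionId "<subscription GUID>"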

To make things easy, you can use the following PowerShell script to assign the ‘TenantCreator’ role and to create the WVD tenant. The script will ask you for your credentials, relevant subscription Id and the name you would like to give to your WVD tenant.

(2) Create a host pool and session host VMs

In this section, we will deploy a new host pool and one (or more) session host VMs in the WVD tenant we just created.

Unlike the previous steps, we will use the Azure Portal this time.
In order to start, click on Create a resource on the left sidebar, search for Windows Virtual Desktop – Provision a host pool, and click Create.

WVD_Hostpool00

A wizard will take you through the required steps. We will cover each of them in depth:

Step 1 – Basic

wvd_hostpool01.png

Hostpool Name – Choose a preferred name for the new hostpool.
Desktop type (Pooled/Personal) – For most of the cases, we will choose Pooled desktop type. Personal will be selected only if you would like a dedicated VM/session host for each user.
Default Desktop Users – These are the users who will get permission to access the hostpool.
You can select multiple users separated by comma (e.g. Omer@contoso.com,Itamar@contoso.com)
Subscription – Select the subscription where the new hostpool will be created.
Resource group – Create a new resource group or use an empty resource group you’ve created for this purpose.
Location – Select the preferred location for your hostpool.  Pay attention that during the Public Preview,  Windows Virtual Desktop service will be available only in ‘West US 2’.

Step 2 – Configure number of VMs based on profile usage

WVD_Hostpool02

Usage Profile – Lets you choose the number of users per vCPU. You can use Custom to select the number of VMs directly.
Virtual machine size – Select the VM type and size you would like to use for your session host servers.
Virtual machine name prefix – Select a prefix for your session host VMs. This can be WVD, Hostpool01 or any other prefix that associates the VMs with your Windows Virtual Desktop deployment. Note that a separator character and an index are automatically appended to your selected prefix (e.g. if you choose ‘Hostpool01’ as your prefix, the VM names will be ‘Hostpool01-0’, ‘Hostpool01-1’ and so on).

Step 3 – Configure the VMs for Azure

wvd_hostpool03.png

Image source – Select the image for your session host VMs. In this example, we choose the Gallery option, which let you select an image from the Azure Gallery.
Image OS version – When selecting the Gallery option as the image source, you can choose between Windows Server 2016 and the new Windows 10 Enterprise multi-session.
Disk Type – Choose between HDD and SSD.
AD domain join UPN – Provide the UPN of a user account (e.g. admin@contoso.com) that has permission to join machines to the domain. Usually, a Domain Admin account is used.
Pay attention that a local user account with the same name will be created on each virtual machine.
Admin Password – Provide the password corresponding to the AD domain-join account you entered. Pay attention that this password will also be used by the local user account, and is therefore required to be at least 12 characters long.
Specify domain or OU – Select ‘Yes’ if you would like to join the virtual machines to a specific domain or organizational unit (OU). When selecting ‘No’, the virtual machines will be joined to the same domain as the suffix of the ‘AD domain join UPN’, and will be created under the ‘Computers’ container in Active Directory.
Virtual network – Select or create a vnet (virtual network) that will connect your VMs with Active Directory and your domain controller/s. If the selected vnet cannot contact the domain, the VMs will not be able to join it and the whole deployment process will fail. Make sure that the selected vnet is configured with the IPs of the internal DNS servers and that it has connectivity to them.
Subnets – Select or create the subnet to host the new session host VMs.

Step 4 – Authenticate to Windows Virtual Desktop

WVD_Hostpool04

Windows Virtual Desktop tenant group  – You should keep the default value and use the ‘Default Tenant Group’ unless told otherwise.
Windows Virtual Desktop tenant name  – This should be the tenant name you chose when you created the tenant. In our example, this is the ‘$RDSTenantName’ variable.
UPN – Enter the credentials of an Azure AD account that has ‘RDS Owner’ or ‘RDS Contributor’ permissions.

Step 5 – Summary

WVD_Hostpool05

Review your configuration. Pay special attention to the following:
AD domain join UPN – An account with insufficient permissions or a wrong username/password will make the deployment fail.
Virtual network – Make sure the selected VNET has connectivity to your Active Directory Domain Services by configuring the relevant DNS servers and creating peering if needed.
Windows Virtual Desktop tenant name – Double-check that this name is the name you used when you created the tenant. You can use the command ‘Get-RdsTenant’ to get the tenant information and names.

Step 6 – Buy

WVD_Hostpool06

Here you’ll find the terms of use and links to Azure pricing calculator to help you estimate the costs for your Windows Virtual Desktop deployment.
Click Create when ready to start the deployment process.

(3) Test connection and manage Windows Virtual Desktop users

After the deployment has completed successfully, you can start using and testing it by performing the following tasks:

  1. Open your browser and go to http://aka.ms/wvdweb (alias to the full URL: https://rdweb.wvd.microsoft.com/webclient/index.html).
  2. Authenticate using the credentials of a user in the ‘Default desktop users’ you provided in step 1:WVD_Hostpool01b
  3. Select the Session Desktop and provide your credentials again if asking.
    wvd_testdesktopsession01b.png
  4. Enjoy your full desktop session with Windows Virtual Desktop!
    WVD_TestDesktopSession02

If you would like to add more users to your Windows Virtual Desktop deployment, you can use the following PowerShell script.
The script lets you select the relevant tenant and host pool (in case you have more than one), displays the current RDS users within this host pool (for the default ‘Desktop Application Group’) and gives you the ability to add additional RDS users if required.
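
The script itself is not included in this export; a minimal hedged equivalent using the WVD PowerShell module (the tenant, host pool and UPN below are placeholders) would be:

# Grant an additional user access to the default desktop application group
Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com"
Add-RdsAppGroupUser -TenantName "ContosoWVDTenant" -HostPoolName "Hostpool01" -AppGroupName "Desktop Application Group" -UserPrincipalName "user@contoso.com"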

Importance of the Microsoft Product lifecycle dashboard – Keeping your environment Supported


The following post was contributed by Meriem Jlassi, a PFE working for Microsoft

Introduction

As a Premier Field Engineer (PFE) at Microsoft, I get asked by a lot of customers whether products are still supported, whether they are close to end of life, or when upgrades need to be planned.  Before the release of the Configuration Manager Product Lifecycle Dashboard, this was a manual task of checking the Microsoft Lifecycle Policy site and confirming the end of mainstream or extended support dates, but that still didn’t give you a list of systems in your environment that are reaching end of support.

And maybe you just never knew you still had that version of software in your environment…

Solution

Beginning with version 1806, you can use the Configuration Manager product lifecycle dashboard to view the Microsoft Lifecycle Policy. The dashboard shows the state of the Microsoft Lifecycle Policy for Microsoft products installed on devices managed with Configuration Manager.

You can now start proactively planning for product upgrades, because the dashboard displays what needs to be replaced within the next 18 months.

Prerequisites

To see data in the product lifecycle dashboard, the following components are required:

  • Internet Explorer 9 or later must be installed on the computer running the Configuration Manager console.
  • A reporting services point is required for hyperlink functionality in the dashboard.
  • The asset intelligence synchronization point must be configured and synchronized. The dashboard uses the asset intelligence catalog as metadata for product titles. The metadata is compared against inventory data in your hierarchy. For more information, see Configure asset intelligence in Configuration Manager.

Configuration Manager Product Lifecycle Dashboard

Screenshot of the product lifecycle dashboard in the console

How can I tell which computers are running these older versions of SCCM, Windows or SQL Server?  I can drill through to another report by simply clicking on the hyperlinks found in the Number in environment column.  Doing this brings me to the Lifecycle 01A – Computers with a specific software product report.

There are also additional reports that can be utilized to allow customers to export the data out of SCCM:

  • Lifecycle 02A – List of machines with expired products in the organization: View computers that have expired products on them. You can filter this report by product name.
  • Lifecycle 03A – List of expired products found in the organization: View details for products in your environment that have expired lifecycle dates.
  • Lifecycle 04A – General Product Lifecycle overview: View a list of product lifecycles. Filter the list by product name and days to expiration.
  • Lifecycle 05A – Product lifecycle dashboard: Starting in version 1810, this report includes similar information as the in-console dashboard. Select a category to view the count of products in your environment, and the days of support remaining.

So what’s new since its release?

Added in the latest version of SCCM 1902 is information for installed versions of Office 2003 through Office 2016. Data shows up after the site runs the lifecycle summarization task, which is every 24 hours.

Configuration Manager Product Lifecycle Dashboard – SCCM 1902

Product LifeCycle - Office

 

Some might ask: but what if I don’t have Configuration Manager?

That’s where Azure Monitor logs (formerly named Azure Log Analytics) can be used to provide a dashboard to help with managing the supportability of your environment.

Prerequisites:

  • Azure Tenant
  • Azure Subscription
  • Log Analytics Workspace
  • Monitoring Contributor role (at least)
  • Update Management Solution Enabled (no need for Deployment schedule)
  • Microsoft Monitoring Agent:
    • Direct Agent or
    • Log Analytics Integrated with SCOM or
    • Log Analytics Gateway

This will allow you to start using the Kusto query language to find products that are out of support based on the Microsoft Lifecycle Policy information, and to create your dashboard based on specific software.

Example Query: Update | where Product contains "Windows Server 2008 R2" | distinct Computer
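
The same query can also be run outside the portal, for example from PowerShell with the Az.OperationalInsights module.  This is only a sketch; the workspace ID is a placeholder you must supply:

# Run the Kusto query against the Log Analytics workspace
$query = 'Update | where Product contains "Windows Server 2008 R2" | distinct Computer'
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace GUID>" -Query $query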

End of Life Support

Conclusion

The new product lifecycle dashboard will give you an indication of products that are past their end-of-life, products that are nearing end-of-life and also general information about the products that have been inventoried to help you manage the environment in a more proactive way and plan for upgrades.

I would have to say this is probably an underutilised capability that will help customers maintain an optimal environment.

So if you have Configuration Manager then it’s all ready to go, but if you are looking at the Azure Log Analytics option then you can start here to get going – https://docs.microsoft.com/en-us/azure/azure-monitor/log-query/log-query-overview.

If you are a Microsoft Premier customer, you can reach out to your TAM for the available delivery options!


Field Notes: Cloud Management Gateway – Failed to provision cloud service


Background

I recently had to set up the Cloud Management Gateway (CMG) at a customer and followed all the steps and requirements, but still encountered the error message below, which does not really give an exact reason for the failure.

Error

In this post I would like to take you through the steps on how it was resolved.

Solution

I navigated to the Activity log on the Resource Group for the subscription and saw the below error –  the Microsoft.ClassicCompute resource provider is not registered.

Resource error

A resource provider is a service that supplies the resources that can be deployed and managed through ARM.  Each provider has its own APIs for accessing and manipulating the service; Microsoft.ClassicCompute, for example, represents the classic (pre-ARM) virtual machine resources.

And since the Configuration Manager CMG still requires the use of this classic provider, we need to register it.


This can be done under “Resource Providers” in the subscription.

To register a resource provider on the subscription, follow these steps:

1. In the Azure portal, All Services > Subscriptions

2. Select the subscription being used

3. Click Resource Providers

4. Find Microsoft.ClassicCompute in the list of available resource providers and hit Register


register

I then deleted the failed Cloud Management Gateway, re-created it, and everything deployed successfully…


Deployed

Conclusion

So it seems that, as of recently, when creating a new Azure subscription (or at least a trial subscription), this resource provider is not automatically registered.  As the CMG requires it, we can manually register the provider as shown above, or use PowerShell or the Azure CLI as described here: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-register-provider-errors
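
For reference, registering the provider from the command line is a one-liner; shown here with the Az PowerShell module (the Azure CLI equivalent would be az provider register --namespace Microsoft.ClassicCompute):

# Register the classic compute provider on the current subscription, then confirm its state
Register-AzResourceProvider -ProviderNamespace Microsoft.ClassicCompute
Get-AzResourceProvider -ProviderNamespace Microsoft.ClassicCompute   # RegistrationState should show Registered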

Field Notes: A quick tip on DFSR Automatic Recovery while you prepare for an AD domain upgrade


With Windows Server 2008 R2 reaching end of life in January 2020, many organizations have been migrating their workloads to Windows Server 2016 or newer.  Unfortunately with Active Directory (AD), an attempt to introduce a first Windows Server 2019 or version 1709 domain controller (DC) in a domain that still uses the File Replication Service (FRS) engine to replicate SYSVOL content fails.  FRS deprecation intentionally blocks the installation.  Because of this, a migration to the Distributed File System Replication (DFSR) service is required before the AD upgrade.  My aim in this blog post is to highlight a change that was introduced in Windows Server 2008 R2 and 2012 (not R2) that may cause headaches if not catered for in your upgrade plan.  That is, how the DFSR service recovers from a dirty or unexpected shutdown (manual versus automatic).

A fellow blogger Jose Rodrigues posted useful information on the importance of the Microsoft Product Lifecycle Dashboard, which can help identify if products are no longer supported or reaching end of life and keep your environment supported.

FRS to DFSR Migration

The process of migrating from FRS to DFSR is pretty straightforward and not in the scope of this post.  However, the following series provides good guidance on the migration journey:

Test Environment

It is always recommended to test any changes in a designated, isolated test environment before rolling out in production.  In my case, I am using one of the Azure Quickstart Templates (https://azure.microsoft.com/en-us/resources/templates/) to build a lab in my Azure subscription with just a few mouse clicks:

Azure Quickstart Templates

Check them out if you have an Azure subscription as they save a lot of time and effort.  A nice place to start would be to create a free trial here if you don’t already have a subscription.

For the purpose of this blog, I created a single-domain forest with domain controllers running Windows Server 2012.

DFSR Dirty Shutdown Recovery

The Understanding DFSR Dirty (Unexpected) Shutdown Recovery blog post does a great job in explaining how DFSR recovers from an unexpected shutdown.  This document also points out a change to the DFS Replication (DFSR) service for Windows Server 2008 R2 through hotfix 2663685.  The change is that the DFSR service no longer performs automatic recovery of the Extensible Storage Engine database after the database experiences a dirty shutdown.  Instead, when the new DFSR behaviour is triggered, event ID 2213 is logged in the DFSR log.  An administrator must manually resume replication after a dirty shutdown is detected by DFSR.  This change in behaviour also applies to Windows Server 2012, but not in later versions of Windows Server.

Default Behaviour

Taking a look at a Windows Server 2012 DC using DFSR for replicating SYSVOL content, this is what would be in the registry by default – the StopReplicationOnAutoRecovery key is set to 1 (automatic recovery is turned off):

StopReplicationOnAutoRecovery: HKLM\SYSTEM\CurrentControlSet\Services\DFSR\Parameters

HKLM\SYSTEM\CurrentControlSet\Services\DFSR\Parameters

Unexpected shutdown

With this configuration in place, this is what would happen after an unexpected shutdown.  A warning event 2213 is logged in the DFSR log indicating that the DFS Replication service stopped replication on the volume.  This event contains important information on how to recover from this situation and manual intervention is required.

Event ID 2213

Manual Recovery Steps

Event 2213 suggests that the administrator performs the following actions to recover:

  1. Back up the files in all replicated folders on the volume.  Failure to do so may result in data loss due to unexpected conflict resolution during the recovery of the replicated folders.
  2. To resume the replication for this volume, use the WMI method ResumeReplication of the DfsrVolumeConfig class.  For example, from an elevated command prompt, type the following command:

wmic /namespace:\\root\microsoftdfs path dfsrVolumeConfig where volumeGuid="<GUID>" call ResumeReplication

Step 1 is self-explanatory.  For the second step, just copy and paste the command provided in event 2213 into a command prompt as follows:

wmic /namespace:\\root\microsoftdfs path dfsrVolumeConfig where volumeGuid="2AE6FE88-83DB-4DF3-B81F-049D216194FB" call ResumeReplication

The GUID is included in the warning event 2213 so there is no additional effort required here.

After this has been performed, the following event (2212) will be logged stating that the DFS Replication service has detected an unexpected shutdown on the volume.  This event further states that the service will rebuild the database if it determines it cannot reliably recover.

Event ID 2212

This will be followed by two informational events if everything went well:

  • Event 2218 – The DFS Replication service is in the second step of replication database consistency checks after an unexpected shutdown.  The database will be rebuilt if it cannot be recovered. 
  • Event 2214 – The DFS Replication service successfully recovered from an unexpected shutdown on the volume.  This can occur if the service terminated abnormally (due to a power loss, for example) or an error occurred on the volume. 

Recommendations for Domain Controllers

From this document, it is clear that the recommendation is to disable the Stop Replication functionality. 

AutoRecovery Best Practices

To enable automatic recovery, set the StopReplicationOnAutoRecovery registry value (under the key below) to zero:

Enable AutoRecovery

HKLM\SYSTEM\CurrentControlSet\Services\DFSR\Parameters
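For example, this can be done with a quick PowerShell one-liner.  This is a minimal sketch based on the value name shown above; test it in your lab before rolling it out:

# Turn automatic dirty shutdown recovery back on (0 = do not stop replication).
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\DFSR\Parameters' -Name 'StopReplicationOnAutoRecovery' -Value 0 -Type DWord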

Once this is in place, DFSR will automatically recover from unexpected shutdowns.  The same events we saw with manual recovery will be logged, but no user intervention is required with this configuration in place.

DFSR Event Logs

To sum up…

Migrating from FRS to DFSR is a straightforward process.  Just add an additional check to your migration/upgrade plan to ensure that manual dirty shutdown recovery will not cause unnecessary headaches while you proceed with the journey of retiring systems that are reaching end of life soon.

Till next time…

System Center Configuration Manager – Keep User Domain Profile after reloading


The issue:

A task sequence was used to reload a machine from Windows 7/10 to Windows 10, with user data on the D: drive and the operating system on the C: drive.  After the machine was joined to the domain, the user data was put back on the D: drive, but upon first logon a new user profile was created with the same name followed by ".DOMAIN".  The data would have had to be copied over manually.

The Investigation

Upon further investigation, we could see that the profile capture method was a batch file that exported a registry key (from a fresh, clean Windows 10 machine).  This was then added as a package in the task sequence that runs after the domain join step.

Why this will not work

Capturing a profile list from a fresh Windows install will only keep the profiles from the original clean Windows 10 machine.  It will never contain the SIDs of the domain users who have logged on to machines that have been used in the environment.

The Solution

Exporting the profile list needs to take place on a per-machine basis.

This can be achieved by running a command line that stores the machine's own ProfileList registry key in the user state location, which can then be referenced later in the task sequence after the machine has been added to the domain:

reg export "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" %_SMSTSUserStatePath%\ProfileList.REG

Later in the task sequence, call the variable and restore the original profile list onto the freshly loaded machine, as illustrated in the sketch below.
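A hypothetical "Restore ProfileList" step could look something like this (a sketch only; the step name is made up and the exact restore mechanism should be validated in your own task sequence):

# Runs after the domain join, on the newly loaded operating system.
# _SMSTSUserStatePath is the same built-in task sequence variable used in the export step above.
$tsEnv = New-Object -ComObject Microsoft.SMS.TSEnvironment
$backup = Join-Path $tsEnv.Value('_SMSTSUserStatePath') 'ProfileList.REG'
reg.exe import $backup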

Now when you log in the SID will match the user SID as it is in Active Directory.

System Center Configuration Manager – Upgrade to 1902 Error – “SQL Server Configuration for site upgrade”


The issue:

After trying to upgrade a customer's Configuration Manager environment from 1810 to 1902, the upgrade failed with the error "SQL Server Configuration for site upgrade".  The first thing that crosses your mind is that the SQL version is wrong or the memory is not configured correctly.  In my case, however, SQL Server 2016 Standard was in use, which is well supported, and memory was configured correctly.

The Investigation

Upon further investigation, look in the prerequisite log file (C:\ConfigMgrPrereq.log) and note that not all errors are highlighted.  Sometimes an error will show up a few lines before the highlighted error.

The Solution

Following guidance from a TechNet forum thread (https://social.technet.microsoft.com/Forums/en-US/53196fc0-ea14-47c6-b1ab-80f21fc6e070/1810-hotfix-rollup-kb4486457-prerequisite-check-failed-failed-sql-server-configuration-for), I ran the SQL command below to simulate the prerequisite check against the site database.

SET NOCOUNT ON

    DECLARE @dbname NVARCHAR(128)
 
    SELECT @dbname = sd.name FROM sys.sysdatabases sd WHERE sd.dbid = DB_ID()
 
    IF (@dbname = N'master' OR @dbname = N'model' OR @dbname = N'msdb' OR @dbname = N'tempdb' OR @dbname = N'distribution' ) BEGIN
    RAISERROR(N'ERROR: Script is targeting a system database.  It should be targeting the DB you created instead.', 0, 1)
    GOTO Branch_Exit;
    END ELSE
    PRINT N'INFO: Targeted database is ' + @dbname + N'.'
 
    PRINT N'INFO: Running verifications....'
 
    IF NOT EXISTS (SELECT * FROM sys.configurations c WHERE c.name = 'clr enabled' AND c.value_in_use = 1)
    PRINT N'ERROR: CLR is not enabled!'
    ELSE
    PRINT N'PASS: CLR is enabled.'
 
    DECLARE @repltable TABLE (
    name nvarchar(max),
    minimum int,
    maximum int,
    config_value int,
    run_value int )
 
    INSERT INTO @repltable
    EXEC sp_configure 'max text repl size (B)'
 
    IF NOT EXISTS(SELECT * from @repltable where config_value = 2147483647 and run_value = 2147483647 )
    PRINT N'ERROR: Max text repl size is not correct!'
    ELSE
    PRINT N'PASS: Max text repl size is correct.'
 
    IF NOT EXISTS (SELECT db.owner_sid FROM sys.databases db WHERE db.database_id = DB_ID() AND db.owner_sid = 0x01)
    PRINT N'ERROR: Database owner is not sa account!'
    ELSE
    PRINT N'PASS: Database owner is sa account.'
 
    IF NOT EXISTS( SELECT * FROM sys.databases db WHERE db.database_id = DB_ID() AND db.is_trustworthy_on = 1 )
    PRINT N'ERROR: Trustworthy bit is not on!'
    ELSE
    PRINT N'PASS: Trustworthy bit is on.'
 
    IF NOT EXISTS( SELECT * FROM sys.databases db WHERE db.database_id = DB_ID() AND db.is_broker_enabled = 1 )
    PRINT N'ERROR: Service broker is not enabled!'
    ELSE
    PRINT N'PASS: Service broker is enabled.'
 
    IF NOT EXISTS( SELECT * FROM sys.databases db WHERE db.database_id = DB_ID() AND db.is_honor_broker_priority_on = 1 )
    PRINT N'ERROR: Service broker priority is not set!'
    ELSE
    PRINT N'PASS: Service broker priority is set.'
 
    PRINT N'Done!'
    Branch_Exit:

I found that the Max text repl size was incorrect…

I changed the size from the default to 2147483647.

Default
Adjusted
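For reference, the same change can also be made with T-SQL instead of the SQL Server Management Studio GUI.  This is a sketch using Invoke-Sqlcmd; it assumes the SqlServer PowerShell module is installed, that you have rights to reconfigure the instance, and the server name is a placeholder:

# Set 'max text repl size (B)' to the value the prerequisite check expects.
Invoke-Sqlcmd -ServerInstance '<YourSiteDatabaseServer>' -Query "EXEC sp_configure 'max text repl size (B)', 2147483647; RECONFIGURE;"

Run the verification query again afterwards to confirm the check passes.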

After running the query again, you can see it passes, and the upgrade also completes successfully.

Feel free to experiment with the solution, and to add to or correct any of the steps.

Field Notes: Access denied when removing Active Directory integrated DNS Zones


With Windows Server 2008 R2 reaching end of life in January 2020, many organizations have been migrating their workloads to Windows Server 2016 or newer.  This period is also an opportunity for some to decommission and consolidate domains to reduce complexity where possible.  I posted about an upgrade blocker when the File Replication Service is still in use for replicating SYSVOL content here.  In there, I also share a link to useful information on the End of Life Dashboard that one of my colleagues blogged about.

In this post, I would like to discuss an additional checkpoint you may want to include in your upgrade plan as one of the clean-up actions after removing a domain.  This tip is also applicable when you are looking at removing Active Directory-integrated zones that are no longer required or wanted.

Repro Environment Details

I leveraged one of the Azure Quickstart Templates to help accelerate a deployment of a 3-domain forest in my Azure subscription.  In there, I have:

  • A forest root domain named forestroot.co.za
  • Two child domains named east.forestroot.co.za and west.forestroot.co.za

For demonstration, I decommission one of the child domains (west.forestroot.co.za) and go through the process of cleaning up the expired stub DNS zone that was created in the other child domain (east.forestroot.co.za).  Here’s what it initially looks like in the DNS Management Console.

West domain stub zone

Protecting DNS Zones from accidental deletion

It is recommended to have DNS zones protected from accidental deletion.  Here is an oldie but goodie with details:

In the test lab, I ran the following piece of PowerShell code to protect the west.forestroot.co.za DNS stub zone hosted in the east domain from accidental deletion:

Get-ADObject -Server EASTDC01.east.forestroot.co.za -Filter 'Name -eq "west.forestroot.co.za"' -SearchBase "DC=DomainDNSZones,DC=east,DC=forestroot,DC=co,DC=za" -Properties ProtectedFromAccidentalDeletion | Set-ADObject -ProtectedFromAccidentalDeletion $true

Protect DNS zone from accidental deletion

Decommissioning the child domain

We are now at a point where we need to decommission the west domain.  As part of the removal process, the Active Directory Domain Services configuration wizard removes the DNS zone for the west domain, as well as the DNS delegation.

ADDS domain removal options

However, the stub zone remains and traces of it are visible in the DNS Management Console after the domain removal.  The DNS server from the east domain is unable to load the zone as the transfer of zone data from the master server (the decommissioned DC) failed.

Zone not loaded by DNS Server

Additionally, there is an error (Event ID 6527) logged in the DNS Server event log stating that the zone expired before it could obtain a successful zone transfer or update from a master server acting as its source for the zone.

Expired DNS zone.

The clean-up process

With protection from accidental deletion in place, an attempt to remove this stale zone results in the following error:

The zone cannot be deleted.  Access was denied

The zone cannot be deleted.  Access was denied.

In this case, I am using an account belonging to the Enterprise Admins group.  From a group membership perspective, inadequate permissions are not an issue here, and we already have an idea that this is due to the protection we put in place earlier.

The account is a member of the Enterprise Admins group.

What I need to do here is to remove the protection flag by running the following PowerShell code:

Get-ADObject -Server EASTDC01.east.forestroot.co.za -Filter 'Name -eq "west.forestroot.co.za"' -Properties ProtectedFromAccidentalDeletion | Set-ADObject -ProtectedFromAccidentalDeletion $false

To confirm, here is the PowerShell code that can help us:

Get-ADObject -Server EASTDC01.east.forestroot.co.za -Filter 'Name -eq "west.forestroot.co.za"' -Properties ProtectedFromAccidentalDeletion

Remove the protection flag

With this flag turned off (or set to false), removal of this zone succeeds.

One more thing to watch out for…

So far, this is pretty straightforward, but I would like to leave you with one last tip.  You may have come across the warning (Event ID 4515) depicted below.  The zone west.forestroot.co.za was previously loaded from the directory partition MicrosoftDNS, but another copy of the zone has been found in the directory partition DomainDnsZones.east.forestroot.co.za.

Duplicate zone

So what do we have here?  The zone west.forestroot.co.za is stored in two partitions – one in DomainDNSZones and another in the domain partition.

Duplicate zone

If you still get access denied with everything mentioned above in place, be sure to check whether you also have a duplicate copy of the zone.  A quick way to check both locations is sketched below.
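This is a minimal sketch using the names from this lab; the search bases are assumptions you would adapt to your own domain:

# Copy stored in the DomainDnsZones application partition
Get-ADObject -Server EASTDC01.east.forestroot.co.za -Filter "Name -eq 'west.forestroot.co.za' -and ObjectClass -eq 'dnsZone'" -SearchBase "DC=DomainDnsZones,DC=east,DC=forestroot,DC=co,DC=za"

# Copy stored in the domain partition (under CN=MicrosoftDNS,CN=System)
Get-ADObject -Server EASTDC01.east.forestroot.co.za -Filter "Name -eq 'west.forestroot.co.za' -and ObjectClass -eq 'dnsZone'" -SearchBase "CN=MicrosoftDNS,CN=System,DC=east,DC=forestroot,DC=co,DC=za"

If both queries return an object, both copies need to be dealt with before the zone is really gone.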

In closing

Protect your Active Directory-Integrated DNS zones from accidental deletion, but don’t forget about this when it’s time to get rid of DNS zones you no longer require.  Also please watch out for duplicate copies of DNS zones and remember that storing zones in the domain partition is not recommended.  Happy upgrading!

Till next time…

Field Notes: Azure Active Directory Connect – Express Installation


Integrating your on-premises directories with Azure Active Directory makes your users more productive by providing a common identity for accessing both cloud and on-premises resources.  Azure AD Connect is the Microsoft tool designed to meet and accomplish your hybrid identity goals.  It provides features such as password hash synchronization, pass-through authentication, federation integration, and health monitoring. 

In this series of blog posts, I go through some of the installation and configuration options that are available for Azure AD Connect.  I begin with the express installation option, as it is the easiest and most common.  In the next parts of the series, I'll discuss some of the other options available, such as the custom installation option, and more.

Azure AD Connect splash

Express Installation

When you launch the Azure AD Connect installation wizard, you are prompted to either use express settings or customize the installation experience.

Use express settings

For the purpose of this first part of the series, we select use express settings, which will:

  • Configure synchronization of identities in the current Active Directory forest
  • Configure password hash synchronization from on-premises Active Directory to the Azure AD tenant
  • Start the initial synchronization
  • Synchronize all attributes, and
  • Enable the option to automatically upgrade

This option applies to most environments, and we will go through the custom installation in the next part of this series.

Connect to Azure AD

The express installation option presents the initial screen requesting Azure AD global administrator credentials.

Connect to Azure AD

Enter the username in the format of username@verifieddomain.co.za  or username@tenant.onmicrosoft.com, followed by the associated password.  If you hover over the blue question mark, you’ll realize that the credentials are used to configure Azure features and create a more limited account for periodic synchronization.

Connect to on-premises AD

The next screen requests an on-premises Active Directory account that is a member of the Enterprise Admins group.

Connect to the Active Directory Forest 

These credentials are used to create the local Active Directory account that is only used for synchronization and to assign the correct permissions for this account.  The format can either be username@domain.co.za or DOMAIN\username.

Azure AD sign-in configuration

To use on-premises credentials for Azure AD sign-in, UPN suffixes should match one of the verified custom domains in Azure AD. 
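A quick way to see which UPN suffixes are actually in use on-premises is sketched below (it assumes the ActiveDirectory PowerShell module is available); any suffix that is not a verified custom domain would fall back to the onmicrosoft.com address for sign-in:

# UPN suffixes registered at the forest level
Get-ADForest | Select-Object -ExpandProperty UPNSuffixes

# Count users per UPN suffix to spot non-routable ones (for example, a .local suffix)
Get-ADUser -Filter * -Properties UserPrincipalName | Group-Object { ($_.UserPrincipalName -split '@')[1] } | Select-Object Name, Count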

Azure AD sign-in configuration

In my case, I have one of the domains verified.  This can also be confirmed in Azure under custom domains of the Azure AD tenant.

Azure AD verified domain

Ready to configure

Just as the wizard promised, below is a summary of what would happen if you went ahead to install.

Ready to configure

To avoid synchronization conflicts, do not deploy more than one active server.  I’ll go through supported scenarios and options in a future post in this series.

In an environment that’s not already configured, clicking install here configures the service and starts synchronization with Azure AD.

Summary

Taking a quick look at the Azure Active Directory blade, we see that Azure AD Connect sync is enabled.

Azure Active Directory Overview

The express settings option is quick, easy and applicable in most deployments.  In the next parts of the series, I’ll cover the customized installation path and take a closer look at some of the objects that are created on-premises in Active Directory and in the Azure AD tenant. 

References

Till next time…

Changing and Maintaining Office 365 ProPlus Update Channels using ConfigMgr


Background

Following on from the blog post on Office 365 end-to-end servicing, I thought my next post would be on how to change the Office 365 ProPlus update channel, as I recently had a customer who had deployed the Monthly Channel to all users and wanted to change them to the Semi-Annual Channel to allow for more testing between releases.  This post will show you how to use the compliance settings feature in SCCM to change and manage the update channel in Office 365 ProPlus by changing the CDNBaseUrl value in the registry.  Things to consider when deciding which channel to select:
Monthly Channel: provides users with the newest features of Office as soon as they're available.  Updated with new features monthly.  Default update channel for:
  • Project Online
  • Visio Online Plan 2 (previously named Visio Pro for Office 365)
  • Office 365 Business, which is the version of Office that comes with some Office 365 plans, such as Business Premium.
Semi-Annual Channel: provides users with new features of Office only a few times a year.  Updated with new features every six months, in January and July.  Default update channel for Office 365 ProPlus.
Semi-Annual Channel (Targeted): provides pilot users and application compatibility testers the opportunity to test the next Semi-Annual Channel.  Updated with new features every six months, in March and September.  Default update channel for no products.
All the channels will receive updates for security and critical non-security issues when needed. These updates usually occur on the second Tuesday of the month.
More information can be found in Overview of update channels for Office 365 ProPlus.  Which users should get which update channel depends on several factors, including how many line-of-business applications, add-ins, or macros you need to test.  To ensure you can test new updates to Office before deploying them to your entire organization, we recommend deploying two update channels:
  • Deploy the Semi-Annual Channel (Targeted) to a targeted group of representative users who can pilot new features of Office.
  • Deploy the Semi-Annual Channel to the remaining users in your organization.  They receive feature updates every six months, four months after the users on the Semi-Annual Channel (Targeted).
With this approach, you can test new Office features in your environment, particularly with your hardware and device drivers.  For reference, there is comprehensive guidance on planning your Office 365 deployment here – https://docs.microsoft.com/en-us/DeployOffice/plan-office-365-proplus

Implementation

The first thing we need to do is create the configuration item that checks whether the CDNBaseUrl registry value exists and sets the specified channel as required.  In this example I will set it to the Semi-Annual Channel.  Before building it, you can check which channel URL a client is currently pointing at with the quick query below.
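This is a minimal sketch; the registry path is the same one used throughout this post:

# Read the channel URL that the Office Click-to-Run installation currently points at.
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Office\ClickToRun\Configuration' -Name CDNBaseUrl | Select-Object -ExpandProperty CDNBaseUrl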

Step 1 – Creating the Configuration Item:

  • In the Configuration Manager console, click Assets and Compliance > Compliance Settings > Configuration Items.
  • Click Create Configuration Item.
  • Specify a unique name and a description, and click Next.
  • Select the required operating systems and click Next.
  • On the Settings page, select New.  We now need to configure the setting required, and whether to remediate it when devices are not compliant.
  • With the setting type set to Registry Value, select Browse, connect to another machine with Office 365 ProPlus installed, and browse to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\ClickToRun\Configuration\CDNBaseUrl.
  • Select CDNBaseUrl and select "The selected registry value must exist on client devices".
  • Click Next, and on the Compliance Rules screen select New.
  • On the rule screen, select the previously created setting and change the "Equals the following value" entry to the update channel you would like to change to – in this example, Semi-Annual Channel.
  • Select "Remediate noncompliant rules when supported".
  • Select OK and Next, and complete the wizard to create the configuration item.

Step 2 – Creating the Configuration Baseline:

Now we need to create the Configuration Baseline and add the previously created Configuration Item.
  • In the Configuration Manager console, click Assets and Compliance > Compliance Settings > Configuration Baselines.
  • Click Create Configuration Baseline.
  • In the Create Configuration Baseline dialog box, enter a unique name and a description for the configuration baseline.
  • The Configuration data list displays all configuration items or configuration baselines that are included in this configuration baseline.  Click Add to add the previously created configuration item to the list.
  • Click OK.
The last step is to deploy the configuration baseline to a collection of machines that need to be changed to the Semi-Annual Channel.
  • Select the configuration baseline that you want to deploy, and then click Deploy.
  • Make sure the correct configuration baseline is selected, and select "Remediate noncompliant rules when supported" (if required, also select "Allow remediation outside the maintenance window").  Select the collection of machines that need to be changed to the Semi-Annual Channel.
  • Specify the compliance evaluation schedule for this configuration baseline.
  • Click OK to complete.
  • Machines will see this new configuration baseline on their next machine policy evaluation and evaluate it based on the schedule created.

Conclusion

Using compliance settings in SCCM will really help in maintaining the Office 365 channels in your environment and ensure that the required users are on the correct channels.  Another option to change or maintain the channel would be to use the Group Policy Administrative Template files (ADMX/ADML) for Office, as mentioned in Configure the update channel to be used by Office 365 ProPlus.  I hope this helps to give you an overview of the different Office 365 channels and how to easily maintain them in your environment.

System Center Configuration Manager – PXE Error –“Windows Failed to start Status: 0xc0000001”


The Issue

When using System Center Configuration Manager to image a machine, the download of the boot image freezes and stops with the error message "Windows Failed to start", Status: 0xc0000001.

The Investigation

If you are familiar with Configuration Manager operating system deployment and the PXE process, the concepts below will be easy to grasp.

  1. Event Viewer : Windows Deployment Services
WDS Event 4101

Researching event 4101, you will find that TFTP could be a culprit.  What is the Trivial File Transfer Protocol?  To learn more about it, check out this Wikipedia page: https://en.wikipedia.org/wiki/Trivial_File_Transfer_Protocol

2. SMSPXE.log

Let's look at the PXE log for more clues:

No errors, warnings or clues; the request just ends.

3. From DHCP Scope Options to IP-Helpers

Running out of ideas and areas to look at, let us refer back to Microsoft supported configurations.

DHCP scope options are not supported; only use scope options in testing scenarios.

The following post explains what is required:

https://blog.ctglobalservices.com/configuration-manager-sccm/rja/dhcp-guide/

So, not being a network engineer, what do I tell my network engineer to configure on the switches?  The PXE server to point the IP helpers at is the PXE-enabled SCCM distribution point.

https://social.technet.microsoft.com/Forums/systemcenter/en-US/a83237d8-55d7-4b99-bc1c-10c3bbf5aef2/pxe-os-deployment

4. MTU Sizes

Changing the MTU (Maximum Transmission Unit) size on network interface adapters could affect your TFTP download; it should be left at the default of 1500.  Follow the link below on how to correct it:

https://jabbertech.wordpress.com/2013/10/07/error-received-from-tftp-server-wds/

5. Network Trace

Using a network trace tool, let's see what is happening.

There are two clues here.  The first is that the packet is timing out, as the transfer is stuck delivering the same block.  The second is that blksize = 1456.  Why would this be the setting?

According to the page below, the specific brand of laptop we were testing only allows a maximum TFTP block size of 1456, which means we would have to increase the TFTP window size for faster delivery so that the timeout does not occur.  https://ccmexec.com/2016/09/tweaking-pxe-boot-times-in-configuration-manager-1606/

The Resolution

On the distribution point you want to PXE boot from, modify the following values:

RamDiskTFTPBlockSize = 1456 (Decimal)

RamDiskTFTPWindowSize = 16 (Decimal)

A scripted way of setting these is sketched below.
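This sketch assumes the values live under the distribution point's HKLM\SOFTWARE\Microsoft\SMS\DP key; verify that key exists on your own PXE-enabled distribution point before using it:

# Run on the PXE-enabled distribution point.
$dpKey = 'HKLM:\SOFTWARE\Microsoft\SMS\DP'
Set-ItemProperty -Path $dpKey -Name 'RamDiskTFTPBlockSize' -Value 1456 -Type DWord
Set-ItemProperty -Path $dpKey -Name 'RamDiskTFTPWindowSize' -Value 16 -Type DWord

# Restart Windows Deployment Services so the new TFTP settings take effect (assumes a WDS-based PXE responder).
Restart-Service -Name WDSServer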

System Center Configuration Manager – Powershell Query .MIF .SID and .SIC files in inboxes


The Issue

Is there a script that can 'read' through the Configuration Manager inboxes (\Microsoft Configuration Manager\inboxes\auth\sinv.box\BADSinv) and output a list of computer names that failed their software inventory?

There is a similar script that does this for hardware inventory by querying *.MIF files:

$ConfigMgrBoxPath = "C:\Program Files\Microsoft Configuration Manager\inboxes\auth\dataldr.box\BADMIFS"
Get-ChildItem -Path $ConfigMgrBoxPath -Include *.MIF -Recurse -Force -ErrorAction SilentlyContinue | ForEach-Object {
    $File = $_.FullName
    try {
        (
            Get-Content -ReadCount 1 -TotalCount 6 -Path $_.FullName -ErrorAction Stop |
            Select-String -Pattern "//KeyAttribute<NetBIOS\sName><(?<ComputerName>.*)>" -ErrorAction Stop
        ).Matches.Groups[-1].Value
    } catch {
        Write-Warning -Message "Failed for $File"
    }
} | Out-File -FilePath "c:\test\output.txt"

To read more about it click the link ( https://blogs.technet.microsoft.com/scotts-it-blog/2015/04/29/identifying-and-counting-computers-sending-badmif-files/ )

The Investigation

The issue with the above script is that Hardware Inventory (*.MIF) files are much better structured than Software Inventory files (*.SID, *.SIC)

MIF

Compared to

So I modified the original script to try to query *.SID files, but it failed, even after experimenting with string patterns and queries in regex (https://regexr.com/).

The closest I got was the attempt below, but it was still not good enough, because when there is more than one set of dashes it doesn't show the correct computer name.

The Solution

The final solution was to simplify the PowerShell script to the following.

Running it creates a text file called output.txt (make sure the output path exists):

$ConfigMgrBoxPath = "C:\Program Files\Microsoft Configuration Manager\inboxes\auth\sinv.box\BADSinv"
Get-ChildItem -Path $ConfigMgrBoxPath -Include *.SID,*.SIC -Recurse -Force -ErrorAction SilentlyContinue | ForEach-Object { (Get-Content $_).Split("0")[12] } | Out-File -FilePath "c:\test\output.txt"

How to associate an account to SCOM unit monitor


In this blog post, I'll run through an example of how to associate a Run As account with a script monitor.

In SCOM, the way to delegate permissions is by creating a Run As profile and linking an account to that profile.  We will create the profile in the management pack, attach the profile to the monitoring workflow, and then configure the account in the profile.

The Run As account is the one that holds the special permissions, for example to query a database.

1. Write the PowerShell / VBScript you wish to use for monitoring, e.g. one that monitors a SQL Server database query [an example is in the management pack attached to this article].
2. Debug the custom script on the target server (debug VBScript with the cscript command-line tool) using an account that has the required permissions, and make sure the result is fine.
3. Add a new unit monitor, then add the script and its property expressions.
4. Create a new Run As profile in this monitor's management pack.
5. Export the management pack containing the script and the new Run As profile.
6. Open the MP with your preferred editor.
7. Copy the "RunAsProfile_ID" from the SecureReference section:

<SecureReference ID="RunAsProfile_1905759fda4f4af2b2a8346fa2d7610a"

8. Add the RunAs parameter to the unit monitor line:

Unit monitor without the RunAs parameter:

<UnitMonitor ID="Unit.Monitor" Accessibility="Internal" Enabled="true" Target="Windows!Microsoft.Windows.Computer" ParentMonitorID="Health!System.Health.AvailabilityState" Remotable="true" Priority="Normal" TypeID="Custom.MyPSTransactionMonitorType.UnitMonitorType" ConfirmDelivery="false">

Unit monitor with the RunAs parameter:

<UnitMonitor ID="Unit.Monitor" Accessibility="Internal" Enabled="true" Target="Windows!Microsoft.Windows.Computer" ParentMonitorID="Health!System.Health.AvailabilityState" Remotable="true" Priority="Normal" TypeID="Custom.MyPSTransactionMonitorType.UnitMonitorType" ConfirmDelivery="false" RunAs="RunAsProfile_1905759fda4f4af2b2a8346fa2d7610a">

9. Save and import the updated Management Pack.

10. Add ‘Run as Account’ to this ‘Run as Profile’.

——————————————————————————————————–

To ensure that the process is run with the defined account:

  • Add a "write to log" function that writes the account name running the script to the agent's Operations Manager event log:

Add a "log script event" call to a VBScript monitor (the MOM.ScriptAPI object creation is included so the snippet is complete):

Set objAPI = CreateObject("MOM.ScriptAPI")
Set objNet = CreateObject("WScript.Network")
Call objAPI.LogScriptEvent("Script_Monitor.vbs", 5555, 2, objNet.UserName)

Add a "write event log" call to a PowerShell script monitor:

Write-EventLog -LogName "Operations Manager" -Source "Health Service Script" -EventId 5555 -Message "Script running under account - $(whoami)"

  • Open Task Manager on the target server and verify that a MonitoringHost process is running under this user account; a quick PowerShell alternative is sketched below.
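This is a small sketch of that check:

# List MonitoringHost processes and the accounts they run under.
Get-CimInstance -ClassName Win32_Process -Filter "Name = 'MonitoringHost.exe'" | ForEach-Object {
    $owner = Invoke-CimMethod -InputObject $_ -MethodName GetOwner
    [pscustomobject]@{ ProcessId = $_.ProcessId; Account = '{0}\{1}' -f $owner.Domain, $owner.User }
}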

How to create a new SCOM class and subclass


A SCOM administrator needs to know the basic structure of management packs, and to understand classes and objects, the differences between classes, and the implications of choosing a particular class.

Management packs provided by product vendors such as Microsoft, for Active Directory, Exchange, and so forth, do the work for us by providing the classes and discoveries for the monitoring targets.  To adopt the same good practice in our custom management packs, we have to write classes of our own.

When we create new custom monitors, we must select a target.  A target is a class that hosts objects, and the monitoring we create will apply to all objects of that class.  For example, Windows Computer contains all the computer objects; you can then override on a group or on an individual object.

When deciding which target to use, and you only need to enable the monitor on a subset of objects, it is wrong to think that every monitor can simply be targeted at the Windows Computer class (or any other existing class).  The main impact is on system performance: disabled or unused monitors loaded against all Windows Computer objects will slow the system down over time.

A side effect also shows up when we try to manage and display the state of a service: we cannot simply select the Windows Computer object in dashboards, because not all monitors on that class necessarily belong to the service, yet they would all affect its state.

Therefore, there is a need to create our own classes, identify the servers on which the services run, and define the monitoring against those classes.

Some of the tools for creating classes are MP Author and Visual Studio.

Kevin Holman has written a library that contains numerous examples of using VSAE to create class: https://gallery.technet.microsoft.com/SCOM-Management-Pack-VSAE-2c506737    

However, the discovery classes in the library are based on Local Application, which represents only one base class type: the main role on a server, not the components of that role.

——————————————————————————————————

Local Application and Application Component Base Classes types and their differences:

Windows LocalApplication  / Unix LocalApplication

Application often installed with others on the local computer

  • Hosted by Windows Computer or Unix Computer
  • Automatic Health Rollup to Computer

Windows ApplicationComponent / Unix ApplicationComponent

Component of a local application or computer role

  • Unhosted; Create your own relationship

Local Application classes represent a defined role installed on the server and are hosted by default under the Windows / Linux Computer class.  The computer therefore automatically inherits the state: when a monitor on the child goes critical, the parent is also coloured and changes its state.

When we need to monitor components of this role and want to represent them under it, we need to create sub-classes based on Application Component.  However, since this class is unhosted by definition, we are required to define a relationship between the parent class and the sub-class that we are now going to create.

Example of Parent and child class:

In the MP attached, there is a class based on Local Application that will be the main class for this example (I used the Kevin Holman fragment Class.And.Discovery.Script.PowerShell.mpx).

Now we will create an Application Component sub-class that will link to the parent Local Application class we want to associate it with.

It is important to understand that the state of the sub-class will not affect the parent unless we want it to [see the note further down].

Steps:

  1. Add a new sub-class based on Windows!Microsoft.Windows.ApplicationComponent:

<ClassType ID="DEMO.ApplicationComponent.Class" Abstract="false" Accessibility="Public" Base="Windows!Microsoft.Windows.ApplicationComponent" Hosted="true" Singleton="false">
  <Property ID="<PropertyA>" Key="false" Type="string"/>
  <Property ID="<PropertyB>" Key="false" Type="string"/>
</ClassType>

2. Create the relationship to the Local Application main class:

<RelationshipType ID="LocalApplicationHostsApplicationComponent" Base="System!System.Hosting" Accessibility="Public">
  <Source ID="LocalApplication" Type="DEMO.LocalApplication.Class"/>
  <Target ID="ApplicationComponent" Type="DEMO.ApplicationComponent.Class"/>
</RelationshipType>

3. Add a discovery targeted at the Windows Server Operating System class to discover the components [a script discovery is shown here; you can use any discovery process appropriate to the application]:

<Discovery ID="DEMO.ApplicationComponent.Class.Discovery" Target="Windows!Microsoft.Windows.Server.OperatingSystem" Enabled="true" ConfirmDelivery="false" Remotable="true" Priority="Normal">
  <Category>Discovery</Category>
  <DiscoveryTypes>
    <DiscoveryClass TypeID="DEMO.ApplicationComponent.Class">
      <Property PropertyID="PropertyA"/>
      <Property PropertyID="PropertyB"/>
    </DiscoveryClass>
  </DiscoveryTypes>
  <DataSource ID="DS" TypeID="Windows!Microsoft.Windows.TimedPowerShell.DiscoveryProvider">
    <IntervalSeconds>86400</IntervalSeconds>
    <SyncTime />
    <ScriptName>DEMO.ApplicationComponent.Class.Discovery.ps1</ScriptName>
    <ScriptBody>
      <Discovery Script body>
    </ScriptBody>
    <Parameters>
      <Parameter>
        <Name>SourceID</Name>
        <Value>$MPElement$</Value>
      </Parameter>
      <Parameter>
        <Name>ManagedEntityID</Name>
        <Value>$Target/Id$</Value>
      </Parameter>
      <Parameter>
        <Name>ComputerName</Name>
        <Value>$Target/Host/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$</Value>
      </Parameter>
    </Parameters>
    <TimeoutSeconds>120</TimeoutSeconds>
  </DataSource>
</Discovery>

4. Import the management pack.  The Local Application class and a sub-class based on Application Component are created, based on the discovery condition.

NOTE – When the child object goes unhealthy, the parent by default remains healthy.

To add the dependency, you need to add a dependency monitor and select Object (Hosting).

Now, when the child object goes unhealthy, the parent also changes to unhealthy.

Manage SCOM Alerts Using REST API


In this blog post, I will walk through how to get alerts from SCOM using REST API.

The REST API is available from version 1801 and supports a set of HTTP operations.  In this guide, I'll explain how to filter the alerts so you get only the scope you need.

The examples in the following article – https://docs.microsoft.com/en-us/rest/operationsmanager/ – only demonstrate how to make calls for use in a "custom widget" in the new HTML web console.  In this guide, I'll explain how to get the alerts through the REST API so they can be forwarded to other systems, for example with a PowerShell script.

All the available operations you can call are listed here – https://docs.microsoft.com/en-us/rest/api/operationsmanager/data

Powershell Script – output only new critical alerts:

# Set the header and the body
$scomHeaders = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$scomHeaders.Add('Content-Type','application/json; charset=utf-8')

$bodyraw = "Windows"
$Bytes = [System.Text.Encoding]::UTF8.GetBytes($bodyraw)
$EncodedText = [Convert]::ToBase64String($Bytes)
$jsonbody = $EncodedText | ConvertTo-Json

# Authenticate
$uriBase = 'http://<Your SCOM MS>/OperationsManager/authenticate'
$auth = Invoke-RestMethod -Method POST -Uri $uriBase -Headers $scomHeaders -Body $jsonbody -UseDefaultCredentials -SessionVariable websession

# Add criteria - specify the criteria (such as severity, priority, resolution state, etc.)
# Display columns - specify the columns which need to be displayed.
$query = @(@{
    "classId" = ""
    # Criteria: output the critical new alerts
    "criteria" = "((Severity = '2') AND (ResolutionState = '0'))"
    "displayColumns" = "severity","monitoringobjectdisplayname","name","age","repeatcount","lastModified"
})
$jsonquery = $query | ConvertTo-Json

$Response = Invoke-RestMethod -Uri "http://<Your SCOM MS>/OperationsManager/data/alert" -Method Post -Body $jsonquery -ContentType "application/json" -UseDefaultCredentials -WebSession $websession
$alerts = $Response.Rows
$alerts


# Using the PowerShell script above with a query that has no criteria will retrieve all alerts
$query = @(@{
    "classId" = ""
    # Get all alerts
    "displayColumns" = "severity","monitoringobjectdisplayname","name","age","repeatcount","lastModified"
})

# In the "displayColumns" value you can add any alert property, for example the alert description:
$query = @(@{
    "classId" = ""
    "criteria" = "((Severity = '2') AND (ResolutionState = '0'))"
    "displayColumns" = "id","name","description"
})

#Id, Name, and Description:
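Once the rows are back in $alerts, forwarding them to another system is just a matter of serialising the result.  A trivial sketch (the file path is only an example):

# Write the retrieved alerts out as JSON so another system can pick them up.
$alerts | ConvertTo-Json -Depth 3 | Out-File -FilePath 'C:\Temp\scom-alerts.json' -Encoding utf8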
