Secure Infrastructure Blog

Configuration Manager Advanced Dashboards – Rich view of your Configuration Manager environment



Introduction


As a Premier Field Engineer (PFE) at Microsoft, I am often asked by customers about custom dashboards and reports that are available, or can be created, for monitoring the SCCM environment and checking the status of client activity, client health, deployments, or content status to provide to support teams, SCCM administrators, and managers.

Yes, there are plenty of native built-in reports to get that data, but putting it all together into an overall view of the environment is the challenge.

Solution


The Configuration Manager Advanced Dashboards (CMAD) were created within Microsoft by a few PFEs, myself included, who form part of the development team led by Stephane Serero (@StephSlol).

The Configuration Manager Advanced Dashboards (CMAD) are designed to offer:

  • An at-a-glance view of the Configuration Manager environment
  • The ability to immediately pinpoint specific issues
  • Monitoring of ongoing activities

The CMAD solution (Configuration Manager Advanced Dashboards) delivers a data-driven reporting overview of the System Center Configuration Manager environment.

This solution consists of a rich set of dashboards designed to deliver real-time reporting of ongoing activity in your Configuration Manager environment.

Native Configuration Manager reports are not replaced by this solution; CMAD amplifies the data they show by providing additional insights.

The dashboards in this solution were created based on field experience and on customers’ needs to provide an overall view of various Configuration Manager functionality. The embedded charts and graphics provide details across the entire infrastructure.


Dashboard – Software Updates

[Screenshot: Software Updates dashboard]

Dashboard – ConfigMgr Servers Health

[Screenshot: ConfigMgr Servers Health dashboard]

Dashboard – Client Health Statistics

[Screenshot: Client Health Statistics dashboard]

Dashboard – Security Audit

[Screenshot: Security Audit dashboard]

Key Features and Benefits

The CMAD solution consists of 180+ dashboards/reports covering the following Configuration Manager topics:

  • Asset Inventory
  • Software Update Management
  • Application Deployment
  • Compliance Settings
  • Infrastructure Monitoring:
  • Site Replica
  • Content replication
  • Software Distribution
  • Clients Health
  • Servers Health
  • SCEP Technical Highlights

The CMAD is supported on Configuration Manager 2012 and later releases (including Current Branch versions). The CMAD is supported on Reporting Services 2008 R2 and later releases.


Some might ask – but SSRS is so last year!

That's why the team has also created a Power BI version, which comes with the offering “System Center Configuration Manager and Intune PowerBI Dashboard Integration”.

So now you can harness all the capabilities of Power BI to enhance the reporting experience.

[Screenshot: Power BI dashboard]


Conclusion


The introduction of this solution has allowed SCCM administrators to get a better view of the state of their SCCM environments.

So you ask, how do we get these dashboards?

If you are a Microsoft Premier customer, you can reach out to your TAM for delivery questions.


Field Notes: The case of a crashing Hyper-V VM – Integration Services Out of Date


Background

I recently had an opportunity to offer assistance on a case relating to stop errors (blue screens) experienced in a Virtual Machine (VM) running on a Hyper-V Failover Cluster.  I was advised that two attempts to increase memory on the VM did not provide positive results (I’ll explain later why the amount of memory assigned to the VM was suspect).  The only thing I could initially get my hands on was a memory dump file, and I would like to take you through how one command in WinDbg can give you clues about the cause of the issue and how it was resolved.

Quick Memory Dump Analysis

So I started to take a look at the Kernel Memory Dump that was generated during the most recent crash using the Debugging Tools for Windows (WinDbg).  WinDbg can be downloaded at https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/debugger-download-tools.  I’m not a regular debugger but I immediately made interesting discoveries when I opened the dump file.

The following are noticeable from the initial WinDbg output:

  • User address space may not be available as this is a kernel dump
  • Symbols and other information that may be useful such as product build
  • Bugcheck analysis (in case of a crash) with some good guidance on next steps

Let us get the issue of assigned memory out of the way before we look at other data.  I used the !mem command from the MEX Debugging Extension for WinDbg (https://www.microsoft.com/en-us/download/details.aspx?id=53304) to dump memory information.  As can be seen in the image below, available memory is definitely low, which explains the reason for increasing assigned memory (which was later dropped as it did not help in this case).

[Screenshot: !mem output]

The !vm command provides similar output if you don’t use the MEX extension.

I ran !analyze -v to get detailed debugging information as WinDbg suggests.

[Screenshot: !analyze -v output]

The output above shows that this was a Bug Check 0x7A: KERNEL_DATA_INPAGE_ERROR (https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/bug-check-0x7a--kernel-data-inpage-error).  More information can also be found in the WinDbg help file if you are unable to access the Internet.  Additional debug text states that the Windows Memory Manager detected corruption of a pagefile page while performing an in-page operation.  The data read from storage does not match the original data written.  This indicates the data was corrupted by the storage stack, or device hardware.  Just be careful since this is a VM and does not have direct access to hardware!

This explanation is in line with what I picked up in the stack:

[Screenshot: call stack]

How to determine the appropriate page file size for 64-bit versions of Windows provides a nice summary and guidance on paging files.

Let’s take a brief look at the !analyze window above (Bugcheck Analysis).  Here it can be seen that the BIOS date is 05/23/2012.  This is concerning as system BIOS should be kept up to date.  This also gave me a clue that we could be dealing with outdated Integration Services, which was the case.

Hyper-V Integration Services allow a virtual machine to communicate with the Hyper-V host.  Many of these services are conveniences, such as guest file copy, while others are important to the virtual machine's ability to function correctly.
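
A quick way to check this proactively from the Hyper-V host is shown in the sketch below (assuming the Hyper-V PowerShell module is available; 'SQLVM01' is a placeholder VM name):

  # List the Integration Services version and state for all VMs on this host
  Get-VM | Select-Object Name, State, IntegrationServicesVersion, IntegrationServicesState

  # Drill into the individual integration services of a specific VM
  Get-VMIntegrationService -VMName 'SQLVM01' | Select-Object Name, Enabled, PrimaryStatusDescription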

 

What’s the cause of this unexpected behavior?

You’ve guessed it! Outdated Integration Services.   Here’s what happened:

  • The VM was configured with a startup RAM of 4 GB
  • Guest physical memory dropped when the VM did not need it (memory was reclaimed by Hyper-V)
  • An attempt by the VM to reclaim this RAM later when it was required failed as it (the VM) had difficulties communicating with the host through the Dynamic Memory Integration Service

 

Our Solution

Upgrading Integration Services resolved the issue.  After monitoring for some time, the VM was stable and there was no more memory pressure – it was able to reclaim memory as it needed it.  Here is an example of what it looked like in Process Explorer’s System Information View.

[Screenshot: Process Explorer System Information view]

This document also states that Integration Services must be upgraded to the latest version and that the guest operating system (VM) must support Dynamic Memory in order for this feature to function properly.
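
A rough sketch for confirming the Dynamic Memory configuration (again assuming the Hyper-V module on the host; the VM name is a placeholder):

  # Show whether Dynamic Memory is enabled and its configured values
  Get-VMMemory -VMName 'SQLVM01' | Select-Object DynamicMemoryEnabled, Startup, Minimum, Maximum, Buffer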

Summary

I demonstrated how one command in WinDbg (!analyze -v) can help you with some clues when dealing with system crashes.  In this case, it was outdated Integration Services (BIOS date was the clue).  I would also like to highlight the importance of monitoring.  There is a lot of information on the Internet on ensuring smooth and reliable operation of Hyper-V hosts and VMs.

If WinDbg and a memory dump were all you had, this would be one of the ways to go.  Grab a free copy and have it ready on your workstation if you don’t already have it installed.

Till next time…

Understanding Volume Activation Services – Part 1 (KMS and MAK)


Windows Activation and KMS have been around for many years - and still - a lot of people don't understand the basics of Windows activation, what the differences between KMS and MAK are, and how to choose the Volume Activation method that best meets the organization’s needs.

In this blog post, we'll shed some light on these subjects and explain how to deploy and use Volume Activation services correctly.
This will be the first part in the series.

Series:

  • Part 1 - KMS and MAK
  • Part 2 - Active Directory-Based Activation
  • Part 3 - Office Considerations & License Activation Troubleshooting

So... What is KMS?

KMS, like MAK, is an activation method for Microsoft products, including Windows and Office.
KMS stands for Key Management Service. The KMS server, called 'KMS host', is installed on a server in your local network. The KMS clients connect to the KMS host for activation of both Windows and Office.

Prerequisites

A KMS host running on Windows Server 2019/2016/2012 R2 can activate all Windows versions, including Windows Server 2019 and Windows 10 all the way down to Windows Server 2008 R2 and Windows 7. Semi-Annual Channel and Long-Term Servicing Channel (LTSC) releases are both supported by KMS.

Note that Windows Server 2016 and Windows Server 2012 R2 require the following KBs to be installed in order to activate the newest Windows 10 Enterprise LTSC and Windows Server 2019:

For Windows 2016:
1. KB4132216 - Servicing stack update, May 18 (Information, Download).
2. KB4467684 - November Cumulative Update or above (Information, Download).

For Windows 2012R2:
1. KB3173424 - Servicing stack update (Information, Download).
2. KB4471320 - December 2018 Monthly Rollup (Information, Download).

In order to activate clients, the KMS uses a KMS host key. This key can be obtained from the Microsoft VLSC (Volume Licensing Service Center) website. By installing that key, you are configuring the server to act as a KMS host.
Because a KMS host key for a newer Windows version can be used to activate older Windows versions, you should only obtain and install the latest KMS host key available in VLSC.
Also, note that a KMS host key for Windows Server can be used to activate Windows clients - so you can (and should) use one KMS host key to rule them all.

Now that you understand those facts, you know you should look for the 'Windows Server 2019' version in VLSC and obtain the KMS host key for that version. Once again, this key will let you activate any Windows server and Windows client version in your environment.

Deploying the KMS host

After getting the KMS host key from VLSC, you'll need to install it. For that, we'll use the Volume Activation Tools feature, available on Windows Server 2012R2 and above.
You can install the Volume Activation Tools feature using the Server Manager (Remote Server Administration Tools -> Role Administration Tools -> Volume Activation Tools) or by using the following PowerShell command: Install-WindowsFeature RSAT-VA-Tools.

Run the tool from Server Manager -> Tools or by typing 'vmw' at a PowerShell prompt.
Volume Activation Tools lets you choose between Active Directory-Based Activation (which will be covered in the second post) and Key Management Service (KMS). For now, we'll choose the KMS activation method.

After selecting the activation method, you'll be asked to provide the KMS host key obtained from the VLSC.

Choose your preferred activation method (by phone or online using the internet) to activate the KMS host key for the selected product.

In the 'Configuration' step, pay attention to the following settings:

  1. Volume license activation interval (Hours) - determines how often the KMS client attempts activation before it is activated. The default is 2 hours.
  2. Volume license renewal interval (Days) - determines how often the KMS client attempts reactivation with KMS (after it has been activated). The default is 7 days.
    By default, Windows is activated by the KMS host for 180 days. After 7 days, when 173 days remain before the volume activation expires, the client attempts reactivation against the KMS host and receives a new 180-day activation period.
  3. KMS TCP listening port - By default, the KMS host is listening on port 1688 (TCP). You can change the port if needed using this setting.
  4. KMS firewall exceptions - Creating the relevant firewall exceptions for the Private/Domain/Public profiles.
  5. DNS Records - By selecting 'Publish', the Volume Activation Tools wizard creates the _vlmcs SRV record (e.g _vlmcs._tcp.contoso.com). Windows uses this SRV record to automatically find the KMS server address.
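
For reference, the same host-side settings can also be inspected or adjusted from an elevated prompt with slmgr.vbs. This is just a sketch; the port shown is simply the default:

  cscript //nologo slmgr.vbs /dlv       # detailed license info, including the current KMS count
  cscript //nologo slmgr.vbs /sprt 1688 # set the KMS TCP listening port (1688 is the default)
  cscript //nologo slmgr.vbs /sdns      # enable automatic publishing of the _vlmcs SRV record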

Reviewing KMS client settings

By now, you should be running a KMS host configured with a KMS host key for Windows Server 2019.

Any Windows client that is configured to use the 'KMS Client Channel' will be activated against the new KMS host automatically within 2 hours (as this is the 'KMS Activation Interval' default value).
The 'KMS Client Channel' is determined by the product key used on the client. By default, computers that are running volume-licensed editions are KMS clients with no additional configuration needed.
In case you need to convert a computer from a MAK or retail edition to a KMS client, you can override the currently installed product key and replace it with the applicable KMS client key for your Windows version. Pay attention that the selected key must exactly match the Windows version you're using, otherwise it won't work.
These KMS client keys, also known as Generic Volume License Keys (GVLKs), are public and can be found on the KMS Client Setup Keys page.

From the client perspective, you can use the slmgr.vbs script to manage and view the license configuration.
For a start, you can run 'slmgr.vbs /dli' to display the license information currently applied on the client.
You can see in the screenshot that a KMS client channel is being used.

If required, use 'slmgr.vbs /ipk PRODUCTKEY' (e.g slmgr.vbs /ipk WC2BQ-8NRM3-FDDYY-2BFGV-KHKQY) to replace the current product key with a new one (KMS client channel in this example).

To initiate an activation attempt, you can run 'slmgr.vbs /ato', which will immediately try to activate Windows.
The KMS host will respond to the activation request with the count of how many computers have already contacted the KMS host for activation. Computers that receive a count below the activation threshold are not activated.
The activation threshold is different for Windows clients and servers:

  • Clients will activate if the count is 25 or higher
  • Servers will activate if the count is 5 or higher.

You can find the full list of slmgr.vbs command-line options right here.
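
Putting it together, a typical client-side sequence looks like the sketch below (reusing the example GVLK from above; the KMS host name is a placeholder, and /skms is only needed if you want to bypass DNS auto-discovery):

  cscript //nologo slmgr.vbs /ipk WC2BQ-8NRM3-FDDYY-2BFGV-KHKQY   # install the KMS client (GVLK) key
  cscript //nologo slmgr.vbs /skms kms01.contoso.com:1688         # optional: pin a specific KMS host
  cscript //nologo slmgr.vbs /ato                                  # attempt activation immediately
  cscript //nologo slmgr.vbs /dlv                                  # verify the activation details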

When to use KMS

Compared to MAK, KMS should be your preferred activation method as long as you meet the activation threshold and the (very) basic requirements for deploying KMS (DNS and TCP/IP connectivity between the clients and the KMS host).
That said, we'll see in part 2 why Active Directory-Based Activation is actually even better than KMS for most scenarios.

What is MAK?

MAK (Multiple Activation Key) is another activation method for Microsoft products, including Windows and Office.
Unlike KMS, MAK activation is used for a one-time activation against Microsoft's hosted activation services.
This means that MAK does not require any server or services within your network - the activation request is approved by Microsoft servers either online or by phone (for isolated environments that can't reach the internet).

Just like KMS, the MAK keys can be found in your VLSC portal. Each MAK has a predefined number of allowed activations, and each activation increments the number of used activations for that MAK.
In the screenshot above, you can see that 3 activations (out of 300 allowed activations/seats) were completed using a MAK for Windows Server 2016 Standard.

How to use MAK

Using MAK for activation is very simple.
First, you'll have to go to VLSC and obtain the suitable MAK for your product (such as Windows Server 2016 Standard).
Then, open Command Prompt (cmd) in elevated mode and run the following commands:

  1. Install your MAK key using 'slmgr.vbs /ipk MAKProductKey' (e.g slmgr.vbs /ipk ABCDE-123456-ABCDE-123456-ABCDE).
  2. Activate Windows using 'slmgr.vbs /ato'. The following message should appear:
  3. To view the activation details you can use the 'slmgr /dli' command.

When to use it

MAK activation method should be used only for computers that never connect to the corporate network and for environments where the number of physical computers does not meet the KMS activation threshold and Active Directory-based activation could not be used for some reason.

Summary

In the first part of the series we learned about KMS and MAK, and we understood the purpose of each activation method.
As a rule of thumb, you should always try to stick with KMS activation as long as possible.
When KMS is not an option (usually due to lack of connectivity to the corporate network), consider using a MAK.

Remember that one KMS host key can be used to activate all of your Windows versions, including servers and clients. Grab the latest version from your VLSC and you're good to go.
If you encounter problems when trying to activate, check that your KMS server is available and running, and use the slmgr.vbs tool to get more details about your client's activation status.

Office 365 ProPlus – End to End Servicing in Configuration Manager



The following post was contributed by Cliff Jones, a Consultant working for Microsoft.

Background


Recently, a few of my customers asked me how to simplify the deployment of Office 365 ProPlus updates in their environment to stay within support while taking advantage of the latest features available with each release.

Both Windows 10 and Office 365 have adopted the servicing model for client updates. This means that new features, non-security updates, and security updates are released regularly, so your users can have the latest functionality and improvements. The servicing model also includes time for enterprise organizations to test and validate releases before adopting them.

By default, Office 365 ProPlus is set to use Semi-Annual Channel, which is also what a lot of customers deploy.

In this blogpost I will focus on the setup of the Automatic Deployment Rule that will be used for the servicing of Office 365 ProPlus configured to use the Semi-Annual Channel.

Solution


System Center Configuration Manager has the ability to manage Office 365 client updates by using the Software Update management workflow.  First we need to confirm all the requirements and prerequisites are in place to be able to deploy the O365 updates.

If you still need to create the O365 package in SCCM, you can have a read through this blog from Prajwal Desai, which covers all the required steps.


High Level steps to deploy Office 365 updates with Configuration Manager:


  1. Verify the requirements for using Configuration Manager to manage Office 365 client updates:
    • System Center Configuration Manager, update 1602 or later
    • An Office 365 client - Office 365 ProPlus, Visio Online Plan 2 (previously named Visio Pro for Office 365), Project Online Desktop Client, or Office 365 Business
    • Supported channel version for Office 365 client. For more details, see Release information for updates to Office 365 ProPlus
    • Windows Server Update Services (WSUS) 4.0

You can't use WSUS by itself to deploy these updates. You need to use WSUS in conjunction with Configuration Manager.

  • The hierarchy's top level WSUS server and the top level Configuration Manager site server must have internet access.
  • On the computers that have the Office 365 client installed, the Office COM object is enabled.
  • Configure software update points to synchronize the Office 365 client updates. Set Updates for the classification and select Office 365 Client for the product. Synchronize software updates after you configure the software update points to use the Updates classification.
  • Enable Office 365 clients to receive updates from Configuration Manager. Use Configuration Manager client settings or group policy to enable the client.

    Method 1: Beginning in Configuration Manager version 1606, you can use the Configuration Manager client setting to manage the Office 365 client agent. After you configure this setting and deploy Office 365 updates, the Configuration Manager client agent communicates with the Office 365 client agent to download the updates from a distribution point and install them. Configuration Manager takes inventory of Office 365 ProPlus Client settings.

    1. In the Configuration Manager console, click Administration > Overview > Client Settings.

    2. Open the appropriate device settings to enable the client agent. For more information about default and custom client settings, see How to configure client settings in System Center Configuration Manager.

    3. Click Software Updates and select Yes for the Enable management of the Office 365 Client Agent setting.

    Method 2: Enable Office 365 clients to receive updates from Configuration Manager by using the Office Deployment Tool or Group Policy.

  • Create Automatic Deployment Rule to deploy the updates using the below steps:


  • Step 1 – Create Office 365 ProPlus Collections


    First we will create a few collections to assist with the management of Office 365 updates. These collections include one per Office channel, the released versions of the Semi-Annual Channel, and the Semi-Annual servicing rings, which will be used for the deployments later in the post.



    Office 365 Channels

    Each Collection is defined by the CDNBaseURL which gets populated upon installation. This property is leveraged over other options as it provides the most consistent and accurate definition of the Office Channel.

    The following query rule should be used for each of the Channels. Be sure to update each with the proper CDNBaseURL value:

    select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System inner join SMS_G_System_OFFICE365PROPLUSCONFIGURATIONS on SMS_G_System_OFFICE365PROPLUSCONFIGURATIONS.ResourceID = SMS_R_System.ResourceId where SMS_G_System_OFFICE365PROPLUSCONFIGURATIONS.CDNBaseUrl = "http://officecdn.microsoft.com/pr/7ffbc6bf-bc32-4f92-8982-f9dd17fd3114"

    • Monthly Channel
      (formerly Current Channel):
      CDNBaseUrl = http://officecdn.microsoft.com/pr/492350f6-3a01-4f97-b9c0-c7c6ddf67d60

    • Semi-Annual Channel
      (formerly Deferred Channel):
      CDNBaseUrl = http://officecdn.microsoft.com/pr/7ffbc6bf-bc32-4f92-8982-f9dd17fd3114

    • Monthly Channel (Targeted)
      (formerly First Release for Current Channel):
      CDNBaseUrl = http://officecdn.microsoft.com/pr/64256afe-f5d9-4f86-8936-8840a6a4f5be

    • Semi-Annual Channel (Targeted)
      (formerly First Release for Deferred Channel):
      CDNBaseUrl = http://officecdn.microsoft.com/pr/b8f9b850-328d-4355-9145-c59439a0c4cf

    [Screenshot]
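
    If you prefer to script the collection creation, here is a minimal sketch using the ConfigMgr PowerShell module (run from a console connected to the site drive; the collection and limiting collection names are examples):

    $query = 'select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System inner join SMS_G_System_OFFICE365PROPLUSCONFIGURATIONS on SMS_G_System_OFFICE365PROPLUSCONFIGURATIONS.ResourceID = SMS_R_System.ResourceId where SMS_G_System_OFFICE365PROPLUSCONFIGURATIONS.CDNBaseUrl = "http://officecdn.microsoft.com/pr/7ffbc6bf-bc32-4f92-8982-f9dd17fd3114"'

    # Create the collection and attach the CDNBaseUrl query rule
    New-CMDeviceCollection -Name 'Office 365 - Semi-Annual Channel' -LimitingCollectionName 'All Systems' -RefreshType Periodic
    Add-CMDeviceCollectionQueryMembershipRule -CollectionName 'Office 365 - Semi-Annual Channel' -RuleName 'Semi-Annual CDNBaseUrl' -QueryExpression $query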


    Office 365 Versions

    To maintain compliance and understand current supported and unsupported clients it is recommended to keep an updated Collection based on the versions of the Semi-Annual Channels.

    When a channel reaches the unsupported time frame the Collection name is updated to reflect this. A new Collection is then created representing the new Semi-Annual release.

    Each Collection query is based on the property called VersionToReport, with the Collection limited to the All Semi-Annual Channel Clients collection created in the previous section. The build numbers can be found here. The Collection query is structured as:

    Office 365 ProPlus Semi-Annual v1708:

    select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System inner join SMS_G_System_OFFICE365PROPLUSCONFIGURATIONS on SMS_G_System_OFFICE365PROPLUSCONFIGURATIONS.ResourceID = SMS_R_System.ResourceId where SMS_G_System_OFFICE365PROPLUSCONFIGURATIONS.VersionToReport like "16.0.8431%"

    [Screenshot]

    Note: you can also take advantage of this great script to create the collections which also includes some other very useful operational and maintenance collections for SCCM.


    Semi-Annual Channel Servicing Rings

    Depending on the customer, their deployment needs, and timing, the number of rings will differ. This example showcases 3 servicing rings, each allowing 1 month of deployment availability. This provides time for an Administrator to delay a deployment if an issue is identified.

    The availability date is based on the date when the new version of Semi-Annual Channel is released (every six months, in January and July) and when the ADR is scheduled to run.

    Example servicing breakdown:

    Phase   | Identified Reason                                 | Availability Date | Install After Available Date
    --------|---------------------------------------------------|-------------------|-----------------------------
    Phase 1 | Pilot - IT Organization                           | Immediately       | 1 Month
    Phase 2 | Identified Office addon\macro Application owners  | +1 Month          | 1 Month
    Phase 3 | Remaining machines in the environment             | +2 Months         | 1 Month

    [Screenshot]


    Step 2 - Create Automatic Deployment Rule


    So the last step is to create the ADR that will be used to deploy the O365 updates.

    Unfortunately, there is no way to fully automate the creation of the required Deployments with an Automatic Deployment Rule (ADR) every time a new Semi-Annual Office Channel version is released. This just means that every 6 months an update to the ADR will be needed. This can be as simple as updating the search criteria of the rule to include the latest release version.

    This ADR will be scheduled to run every 6 months on the 3rd Wednesday of the month. This gives the IT Administrator the necessary time to update this rule to reflect the most recent version of Semi-Annual Channel released build.

    [Screenshot]


    Select the below criteria for the version to be released

    [Screenshot]


    Set the schedule to run every 6 months on the 3rd Wednesday of the month

    [Screenshot]


    For the Pilot group the updates will be available immediately, with a deadline of 1 month

    [Screenshot]


    Select ‘Display In Software Center and show all notifications’

    [Screenshot]


    Create the deployment package that will contain the O365 updates

    [Screenshot]


    Step 3 - Create Additional Deployments

    Once the rule has been created, add additional deployments for each of the required phases.


    • Office 365 ProPlus Updates Phase2 - Identified Office addon\macro Application owners

    [Screenshot]


    • Office 365 ProPlus Updates Phase3 - Remaining machines in the environment

    [Screenshot]

    And this will be the end result:

    [Screenshot]


    Conclusion


    With the increased update cadence, upgrading Office 365 ProPlus improperly is a key concern, as it could result in a customer accidentally deploying a feature update and running into unexpected issues – so proper testing is critical!

    So I hope that the above process will help to simplify the deployment of O365 updates as much as possible.

    Maybe upcoming SCCM releases will include new features to automate this completely.

    Till the next blog…

    Cheers

    System Center Configuration Manager Client Health – Toolset to identify and remediate client issues


    Introduction

    A lot of customers ask me: "How can we reduce the amount of time spent manually troubleshooting agents, correctly identify what is wrong with the systems, and quickly and automatically remediate the issues on those systems?"

    Even though we do have a built-in check for the client that runs daily, as described here, that may not be enough for most Administrators.

    The Solution

    The System Center Configuration Manager Client Health (CMCH) solution was created within Microsoft by PFEs to address the need, identified by engineers, for more expansive checks and remediation.

    The CMCH solution provides years of client health knowledge focused on proactive monitoring and automated remediation to ensure that clients are fully functional while reducing risks and increasing reliability. The framework is fully customizable and built with remediation in mind.

    The CMCH solution contains different components for System Center Configuration Manager such as:

    • Custom Collections
    • Configuration Baseline
    • Configuration Items (CIs)
    • Custom Client Health reports

    The CMCH toolset is supported on Configuration Manager Current Branch and Configuration Manager 2012/2012 R2 with the latest Service Pack.

    Key Features and Benefits of CMCH include:

    • A powerful remediation agent and approximately 26 Client Health-focused Configuration Items.

    In all, there are 30+ Client Component issues and 37+ Operating System dependency issues that are addressed.

    Detailed trending analysis identifies systems that are recently confirmed to be on the network but remain unhealthy. The service has a proven track record for scalability and is leveraged in hierarchies with over 200,000 clients.



    Technical Highlights

    • Automatic Remediation through the PFE Remediation script and various remediation programs

    • Leverage collections to easily identify issues and target resolutions

    • Trending reports and dashboards

    • Low Network and Database Footprint

    • Right-Click Tools for the Solution

     

    Right-Click tools

     

    Baseline Filtering Collections

    Here we start filtering out noise, to get the PFE Baseline that we use for all additional checks

     

    Client Components

     

    OS Dependency Rules

     

    Custom Configuration Items

     

    Customisation of the Collections/CIs is performed by the PFE during the delivery, to match the customer's environment (e.g. Contoso Antivirus).

    One Main Dashboard

    Conclusion

     

    The introduction of this solution has allowed SCCM administrators to more effectively identify, remediate, and report on client health issues.

    How can I get this into my SCCM environment?

    If you are a Microsoft Premier customer, please reach out to your TAM for delivery questions.

     

    Using ‘Scripts’ Feature in Configuration Manager for Ad Hoc WSUS Maintenance


    Background

     

    We all know how important WSUS maintenance is, and there are numerous posts on how to automate it and which scripts/queries to run.

    Have a read through this amazing blog from Meghan Stewart (Support Escalation Engineer) if you are looking at automating WSUS maintenance. It has all the required information on when, why, and how to implement your WSUS maintenance, as well as a great PowerShell script to help.

    But… have you ever just needed to kick off WSUS maintenance or a SQL defrag remotely, on multiple servers at the same time? I had this requirement a while back where a customer had 150 secondary sites with Software Update Points installed and wanted to do WSUS maintenance only when they had a change window available, and also create a log file in a central share – so NOT fully automated!

    So I decided to put the ‘Scripts’ feature in SCCM to the test and try to implement a solution to assist the customer.

     

    Solution

     

    Important Considerations

    Before we get started, it’s important that I mention a few things:

    1. Remember that when doing WSUS maintenance when you have downstream servers, you add to the WSUS servers from the top down, but remove from the bottom up. So if you are syncing/adding updates, they flow into the top (upstream WSUS server) then replicate down to the downstream servers. When you do a cleanup, you are removing things from the WSUS servers, so you should remove from the bottom of the hierarchy and allow the changes to flow up to the top.
    2. It’s important to note that this WSUS maintenance can be performed simultaneously on multiple servers in the same tier. You do however want to make sure that one tier is done before moving onto the next one when doing a cleanup. The cleanup and re-index steps I talk about below should be run on all WSUS servers regardless of whether they are a replica WSUS server or not.
    3. This is a big one. You must ensure that you do not sync your SUPs during this maintenance process as it is possible you will lose some of the work you have already done if you do. You may want to check your SUP sync schedule and set it to manual during this process.

    Step 1 – Create Software Update Point Collections

     

    The first step will be to create the collections that we will run the script against.

     

    Below are the two collections for the primary (upstream WSUS) and secondary (downstream WSUS) servers.

    [Screenshot: Software Update Point collections]

     

    Step 2 – Create Scripts in SCCM

     

    The next step is to create the scripts in SCCM which will be run against the collections created above.

    1. Create a share on a server and copy the below .ps1 and .sql files into the share
    2. Create a log file folder beneath that where the output logs will be written to

     

    • WSUS database(SUSDB) Re-index script

      PowerShell Script (SUSDB_Reindex.ps1):

      $Logfile = $env:computername + "_reindex"
      Invoke-sqlcmd -ServerInstance "localhost" -Database "SUSDB" -InputFile "\\ServerName\Share\Scripts\WSUS_Cleanup\SUSDB_reindex.sql" -Verbose *> "C:\Windows\Temp\$Logfile.log"
      cd e:
      copy-item C:\Windows\Temp\$Logfile.log -destination \\ServerName\Share\Scripts\WSUS_Cleanup\Logs\
      exit $LASTEXITCODE
      • script runs “SUSDB_reindex.sql” file against each server in the collection
      • Outputs a logfile to the specified share

      Note: Change Servername, share and e: to the drive letter where scripts are located

       

      • WSUS database(SUSDB) Cleanup

       

      PowerShell Script (SUSDB_Cleanup.ps1):

      $Logfile = $env:computername + "_cleanup"
      Invoke-sqlcmd -ServerInstance "localhost" -Database "SUSDB" -ConnectionTimeout "0" -QueryTimeout "65535" -InputFile "\\Servername\Share\Scripts\WSUS_Cleanup\SUSDB_Cleanup.sql" -Verbose *> "C:\Windows\Temp\$Logfile.log"
      cd e:
      copy-item C:\Windows\Temp\$Logfile.log -destination \\Servername\Share\Scripts\WSUS_Cleanup\Logs
      exit $LASTEXITCODE
      • script runs “SUSDB_Cleanup.sql” file against each server in the collection
      • Outputs a logfile to the specified share

      Note: Change Servername, share and e: to the drive letter where scripts are located

       

      SQL script (SUSDB_Cleanup.sql):

       

      use susdb
      DECLARE @msg nvarchar(100)
      DECLARE @NumberRecords int, @RowCount int, @var1 int
      -- Create a temporary table with an Identity column
      CREATE TABLE #results (RowID INT IDENTITY(1, 1), Col1 INT)
      -- Call the Stored Procedure to get the updates to delete & insert them into the table
      INSERT INTO #results(Col1) 
      EXEC spGetObsoleteUpdatesToCleanup 
      
      
      -- Get the number of records in the temporary table
      SET @NumberRecords = @@ROWCOUNT
      SET @RowCount = 1
      -- Show records in the temporary table
      select * from #results
      -- Loop through all records in the temporary table
      -- using the WHILE loop construct & call the Stored Procedure to delete them
      WHILE @RowCount <= @NumberRecords
      BEGIN
      SELECT @var1 = Col1 FROM #results where RowID = @rowcount
      SET @msg = 'Deleting UpdateID ' + CONVERT(varchar(10), @var1) + ', Rowcount '+ CONVERT(varchar(10), @rowcount)
                     RAISERROR(@msg,0,1) WITH NOWAIT 
       EXEC spDeleteUpdate @localUpdateID=@var1 
       SET @RowCount = @RowCount + 1
      END
       -- Drop the temporary table when completed
      DROP TABLE #results
      

       

      • WSUS Cleanup

       

      PowerShell Script (WSUS_Cleanup.ps1):

      # Use the local computer name in the log file name
      $WSUSServer = $env:computername
      Get-WsusServer -Name localhost -PortNumber 8530 | Invoke-WsusServerCleanup -CleanupObsoleteComputers -CleanupObsoleteUpdates -CleanupUnneededContentFiles -CompressUpdates -DeclineExpiredUpdates -Verbose *> "\\Servername\Share\Scripts\WSUS_Cleanup\Logs\$($WSUSServer)_WSUSCleanup.log"
      • script runs WsusServerCleanup against each server in the collection
      • Outputs a logfile to the specified share

      Note: Change Servername and share to match your environment

       

      [Screenshot]

       

      All that's left is to import them into SCCM.

      1. In the Configuration Manager console, click Software Library.
      2. In the Software Library workspace, click Scripts.
      3. On the Home tab, in the Create group, click Create Script.
      4. On the Script page of the Create Script wizard, configure the following settings:
        • Script Name - Enter a name for the script. Although you can create multiple scripts with the same name, using duplicate names makes it harder for you to find the script you need in the Configuration Manager console.
        • Script language - Currently, only PowerShell scripts are supported.
        • Import - Import a PowerShell script into the console. The script is displayed in the Script field.
        • Clear - Removes the current script from the Script field.
        • Script - Displays the currently imported script. You can edit the script in this field as necessary.
      5. Complete the wizard. The new script is displayed in the Script list with a status of Waiting for approval. Before you can run this script on client devices, you must approve it.

       

      Scripts must be approved, by the script approver role, before they can be run. To approve a script:

      1. In the Configuration Manager console, click Software Library.
      2. In the Software Library workspace, click Scripts.
      3. In the Script list, choose the script you want to approve or deny and then, on the Home tab, in the Script group, click Approve/Deny.
      4. In the Approve or deny script dialog box, select Approve, or Deny for the script. Optionally, enter a comment about your decision. If you deny a script, it cannot be run on client devices.
        [Screenshot: Approve or deny script dialog]
      5. Complete the wizard. In the Script list, you see the Approval State column change depending on the action you took.

       

      [Screenshot]

       

      Step 3 – Running The Scripts in SCCM

       

      The final step, once the scripts have been added to SCCM, is to run the scripts and wait…

       

        1. In the Configuration Manager console, click Assets and Compliance.
        2. In the Assets and Compliance workspace, click Device Collections.
        3. In the Device Collections list, click the collection of devices on which you want to run the script.
        4. Select a collection of your choice, click Run Script.
        5. On the Script page of the Run Script wizard, choose a script from the list. Only approved scripts are shown.
        6. Click Next, and then complete the wizard.

      Important

      If a script does not run, for example because a target device is turned off during the one hour time period, you must run it again.
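
      If you would rather trigger the run from PowerShell, here is a hedged sketch using the ConfigMgr module (the script and collection names are only examples):

      # Run an approved script against a collection
      $script = Get-CMScript -ScriptName 'SUSDB_Reindex'
      $collection = Get-CMCollection -Name 'Secondary Site SUP Servers'
      Invoke-CMScript -ScriptGuid $script.ScriptGuid -CollectionId $collection.CollectionID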

         

        Script Monitoring and Output LogFiles:

         

        1. In the Configuration Manager console, click Monitoring.
        2. In the Monitoring workspace, click Script Status.
        3. In the Script Status list, you view the results for each script you ran on client devices. A script exit code of 0 generally indicates that the script ran successfully.
          • Beginning in Configuration Manager 1802, script output is truncated to 4 KB to allow for a better display experience.

         

        Below is the output from the scripts run above:

        [Screenshot: SUSDB_Reindex log]

        [Screenshot: SUSDB_Cleanup log]

        [Screenshot: WSUS_Cleanup log]

         

        Conclusion

         

        In this post, I demonstrated how we can use the ‘Scripts’ feature in SCCM to initiate WSUS cleanup scripts on demand.  Hopefully this is helpful to you, and it also shows you the capability of the feature for almost anything.  Till next time…

         

        Disclaimer – All scripts and reports are provided ‘AS IS’
        These sample scripts are not supported under any Microsoft standard support program or service. These sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of these sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of these scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use these sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.

        Understanding Volume Activation Services – Part 2 (Active Directory-Based Activation)


        In the previous part of the series, we talked about KMS, MAK, and how to choose between the two when looking for the right activation method in your environment.
        Today, we are going to talk about Active Directory-based activation or ADBA in short.

        Series:

        • Part 1 - KMS and MAK
        • Part 2 - Active Directory-Based Activation
        • Part 3 - Office Considerations & License Activation Troubleshooting

         

        What is exactly ADBA and why do you need it?

        Like KMS, Active Directory-based activation (ADBA) is used to activate Windows and Office in your corporate network.
        ADBA is a more reliable and redundant solution, and it has significant advantages compared to KMS, which make it the best option for activating client machines.
        As you can guess by its name, ADBA relies on Active Directory Domain Services to store activation objects and transparently activate domain-joined computers.

        Prerequisites

        There are a few prerequisites for using Active Directory-based activation:

        • Schema version must be updated to at least Windows Server 2012.
          • There's NO need for upgrading the forest or domain functional levels.
          • Older Domain Controllers (like DCs running Windows Server 2008R2) will be able to activate clients using ADBA as long as the schema is updated.
        • Computers who would like to activate against ADBA must be:
          • Domain-joined to one of the forest domains (ADBA is a forest-wide feature).
          • Running a Windows Server 2012/Windows 8.1 and above. Older operating systems (including Windows Server 2008R2, Windows 7) are NOT supported.

        ADBA Vs. KMS

        There are some major advantages to using ADBA over KMS:

        • No thresholds - Unlike KMS, ADBA does not require any minimum threshold to start activating clients.
          Any client request for activation is immediately granted by ADBA as long as there is a suitable activation object in Active Directory.
        • Eliminates the need for an SRV record and a dedicated port - As we learned in the previous post, the KMS server listens on port 1688 for clients' activation requests.
          Clients find the KMS server based on the _VLMCS SRV record located in DNS.
          When using ADBA, clients look for activation objects in Active Directory by using LDAP, and the communication uses the default domain services ports. No dedicated ports or SRV records are needed.
        • High availability - Active Directory-based activation is, by design, a highly available activation method. Any Domain Controller which is part of the forest can be used to activate a client. You won't need to create a dedicated KMS host server anymore.

        While ADBA has significant  advantages, it also has a few drawbacks:

        • No support for older Windows versions - ADBA can only activate Windows Server 2012/Windows 8.1 and above. Therefore, as long as your environment still includes older Windows versions like Windows Server 2008 R2 and Windows 7, you'll have to keep maintaining other activation methods like KMS and MAK.
        • Domain-joined only - ADBA can activate domain-joined computers only. In other words, any workgroup machine or machine that belongs to a different AD forest cannot be activated using ADBA.

        The good news is that ADBA and KMS can live together. You can use ADBA to activate new versions of Windows and Office and maintain KMS host servers for activating older Windows and Office versions like Windows Server 2008 R2, Windows 7, and Office 2010.

        This might be a good opportunity to remind you that Windows Server 2008 R2 and Windows 7 will go out of support on January 14, 2020.

        Deploying Active Directory-based activation

        In order to deploy Active Directory-based activation, we are going to use the same Volume Activation Tools feature we used to deploy the KMS host.
        It is recommended to run the Volume Activation Tools from a management/administrative machine running Windows Server 2019. If you are running the Volume Activation Tools from Windows Server 2016 or Windows Server 2012 R2, please install the following KBs before you continue (the KBs are required for activating the newest Windows 10 Enterprise LTSC and Windows Server 2019):

        For Windows 2016:

        1. KB4132216 - Servicing stack update, May 18 (Information, Download).
        2. KB4467684 - November Cumulative Update or above (Information, Download).

        For Windows 2012R2:

        1. KB3173424 - Servicing stack update (Information, Download).
        2. KB4471320 - December 2018 Monthly Rollup (Information, Download).

        ADBA uses the KMS host key for activating clients. Yes, it's still called by that name, as the KMS host key is used for both the Active Directory-based activation and KMS activation methods.
        The KMS host key can be obtained from Microsoft VLSC.
        Remember that you should only obtain and install the latest KMS host key for Windows Server available in VLSC. This is because:

        • A KMS host key of newer Windows version can be used to activate older Windows versions.
        • A Windows server KMS host key can be used to activate Windows clients.

        When the Volume Activation Tools opens, skip the introduction phase and choose 'Active Directory-Based Activation' as your volume activation method.
        Pay attention that you must be a member of the local 'Administrators' group on the computer running the Volume Activation Tools. You also need to be a member of the 'Enterprise Admins' group, because the activation objects are created in the 'Configuration' partition in Active Directory.

        In the next step, you'll be asked to provide the KMS host key you obtained from the VLSC. Once again, this is the exact same key you used to activate the KMS host.
        It is recommended to enter a display name for your new activation object. The display name should reflect the product and its version (e.g. 'WindowsServer2019Std').

        Managing Active Directory-based activation

        To be honest, there's not much to administer and manage in ADBA.
        From time to time, you'll be required to install a new activation object for a new version of Windows or Office, but that's all.
        However, if you would like to view and delete currently installed activation objects, you can use either the Volume Activation Tools or the ADSI Edit (adsiedit.msc).

        Using the Volume Activation Tools, select 'Active Directory-Based Activation', click 'Next' and choose 'Skip to Configuration'.

        In the next screen, you can see the installed activation objects, including their display name and partial product key.
        If you would like to delete an activation object, just select the 'Delete' checkbox next to it and click 'Commit'.

        If you would like to see the activation objects in Active Directory, use adsiedit.msc to open the 'Configuration' partition, and navigate to Services\Microsoft SPP\Activation Objects.
        You can see that the object class is 'msSPP-ActivationObject', and you can identify the object easily by using the displayName value in the 'Attribute Editor'.
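
        If you prefer PowerShell over ADSI Edit, a small sketch with the ActiveDirectory module lists the same objects:

        # Enumerate ADBA activation objects from the Configuration partition
        $configNC = (Get-ADRootDSE).configurationNamingContext
        Get-ADObject -SearchBase "CN=Activation Objects,CN=Microsoft SPP,CN=Services,$configNC" -Filter 'objectClass -eq "msSPP-ActivationObject"' -Properties displayName | Select-Object displayName, Name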

        Reviewing ADBA client's settings

        After you enable ADBA and create the activation object in your Active Directory, supported client computers which are configured to use the 'KMS Client Channel' will be activated automatically against ADBA.
        The activation is granted for a 180-day period, and client machines will try to reactivate every 7 days (just like with KMS).
        If for some reason the ADBA activation fails (e.g. the activation object can't be found or does not support the client OS), the client will try to use KMS activation as an alternative.

        You can still use the slmgr.vbs script to manage and view activation settings.
        Run 'slmgr.vbs /dli' to display the activation status. Pay attention to the "AD Activation client information", which indicates that the client was activated using ADBA.

        Other slmgr.vbs commands like 'slmgr /ipk' and 'slmgr /ato' can still be used to manipulate and configure the activation settings in the client machine.

        Summary

        Active Directory-based activation should be your top priority when considering volume activation models.
        As it goes hand in hand with Active Directory, it provides you with high availability and eliminates the need for a dedicated server for activation.
        ADBA is also great for small environments, where the number of computers does not meet the KMS activation threshold.
        Remember that you can run ADBA next to KMS if you still have earlier operating systems or workgroup computers in your network.

         

        In the last post of the series, we'll talk about Office activation and how to troubleshoot activation issues in your environment.

         

        Field Notes: The case of the failed SQL Server Failover Cluster Instance – Binaries Disks Added to Cluster


        I paid a customer a visit a while ago and was requested to assist with a SQL Server Failover Cluster issue they were experiencing.  They had internally transferred the case from the SQL team to folks who look after the Windows Server platform as they could not pick up anything relating to SQL during initial troubleshooting efforts.

        My aim in this post is to:

        • explain what the issue was (adding disks meant to be local storage to the cluster)
        • provide a little bit of context on cluster disks and asymmetric storage configuration
        • discuss how the issue was resolved by removing the disks from cluster

        Issue definition and scope

        An attempt to move the SQL Server role/group from one node to another in a 2-node Failover Cluster failed.  This is what they observed:

        Failed SQL Server Group

        From the image above, it can be seen that all disk resources are online.  Would you suspect that storage is involved at this stage?  In cluster events, there was the standard Event ID 1069 confirming that the cluster resource 'SQL Server' of type 'SQL Server' in clustered role 'SQL Server (MSSQLSERVER)' failed.  Additionally, this is what was in the cluster log – “failed to start service with error 2”:

        Cluster Log

        Error code 2 means that the system cannot find the file specified:

        Net HelpMsg

        A little bit of digging around reveals that this is the image path we are failing to get to:

        Registry value

        Now that we have all this information, let’s look at how you would resolve this specific issue we were facing.  Before that however, I would like to provide a bit of context relating to cluster disks, especially on Asymmetric Storage Configuration.

        Context

        Consider a 2 node SQL Server Failover Cluster Instance running on a Windows Server 2012 R2 Failover Cluster with the following disk configuration:

        • C drive for the Operating System – each of the nodes has a direct attached disk
        • D drive for SQL binaries – each of the nodes has a dedicated “local” drive, presented from a Storage Area Network (SAN)
        • All the other drives required for SQL are shared drives presented from the SAN

        Disks in Server Manager

        Note: The 20 GB drive is presented from the SAN and is not added to the cluster at this stage.

        I used Hyper-V Virtual Machines to reproduce this issue in a lab environment.  For the SAN part, I used the iSCSI target that is built-in to Windows Server.

         

        Asymmetric Storage Configuration

        A feature enhancement in Failover Clustering for Windows Server 2012 and Windows Server 2012 R2 is that it supports an Asymmetric Storage Configuration.  In Windows Server 2012, a disk is considered clusterable if it is presented to one or more nodes, is not the boot/system disk, and does not contain a page file.  https://support.microsoft.com/en-us/help/2813005/local-sas-disks-getting-added-in-windows-server-2012-failover-cluster
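
        A quick way to see which disks the cluster currently considers eligible, and which disk resources already exist, is the following sketch using the FailoverClusters module on a cluster node:

        # Disks visible to the cluster that could be added as cluster disks
        Get-ClusterAvailableDisk

        # Existing cluster disk resources, their state and owners
        Get-ClusterResource | Where-Object { $_.ResourceType.Name -eq 'Physical Disk' } | Select-Object Name, State, OwnerNode, OwnerGroup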

         

        What happens when you Add Disks to Cluster?

        Let us first take a look at the disks node in Failover Cluster Manager (FCM) before adding the disks.

        Disks in Failover Cluster Manager

        Here’s what we have (ordered by the disk number column):

        • The Failover Cluster Witness disk (1 GB)
        • SQL Data (50 GB)
        • SQL Logs (10 GB)
        • Other Stuff (5 GB)

        The following window is presented when an attempt to add disks to a cluster operation is performed in FCM:

        Add Disks to a Cluster

        Both disks are added as cluster disks when one clicks OK at this stage.  After adding the disks (which are not presented to both nodes), we see the following:

        Disks in Failover Cluster Manager

        Nothing changed regarding the 4 disks we have already seen in FCM, and the two “local” disks are now included:

        • Cluster Disk 1 is online on node PTA-SQL11
        • Cluster Disk 2 is offline on node PTA-SQL11 as it is not physically connected to the node

        At this stage, everything still works fine as the SQL binaries volume is still available on this node.  Note that the "Available Storage” group is running on PTA-SQL11.

         

        What happens when you move the Available Storage group?

        Move Available Storage

        Let’s take a look at FCM again:

        Disks in Failover Cluster Manager

        Now we see that:

        • Cluster Disk 1 is now offline
        • Cluster Disk 2 is now online
        • The owner of the “Available Storage” group is now PTA-SQL12

        This means that PTA-SQL12 can see the SQL binaries volume and PTA-SQL11 cannot, which causes downtime.  Moving the SQL group to PTA-SQL12 works just fine as the SQL binaries drive is online on that node.  You may also want to ensure that the resources are configured to automatically recover from failures.  Below is an example of default configuration on a resource:

        Resource Properties

         

        Process People and Technology

        It may appear that the technology is at fault here, but the Failover Cluster service does its bit to protect us from shooting ourselves in the foot, and here are some examples:

        Validation

        The Failover Cluster validation report does a good job in letting you know that disks are only visible from one node.  By the way, there’s also good information here on what’s considered for a disk to be clustered.

        Validation Report

        A warning is more like a “proceed with caution” when looking at a validation report.  Failures/errors mean that the solution does not meet requirements for Microsoft support.  Also be careful when validating storage as services may be taken offline.

         

        Logic

        In the following snippet from the cluster log, we see an example of the Failover Cluster Resource Control Manager (RCM) blocking the move of the “Available Storage” group in order to prevent downtime.

        Cluster Log

        Back online and way forward

        To get the service up and running again, we had to remove both Disk 1 and Disk 2 as cluster disks and make them “local” drives again.  The cause was that an administrator had added disks that were not meant to be part of the cluster as clustered disks.

        Disks need to be made online from a tool such as the Disk Management console as they are automatically placed in an offline state to avoid possible issues that may be caused by having a non-clustered disk online on two or more nodes in a shared disk scenario.
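        If you prefer PowerShell over the GUI for this kind of cleanup, a minimal sketch could look like the following (the resource name and disk number are examples from this lab, adjust to your environment):

        # Remove the accidentally clustered disk resource (example name from this lab)
        Get-ClusterResource -Name "Cluster Disk 1" | Remove-ClusterResource

        # Bring the now non-clustered disk back online and make it writable on the local node
        Set-Disk -Number 1 -IsOffline $false
        Set-Disk -Number 1 -IsReadOnly $false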

        I got curious after this and reached out to folks who specialize in SQL Server to get their views on whether the SQL binaries drive should or should not be shared.  One of the strong views is to keep them as non-shared (non-clustered) drives, especially when it comes to SQL patching.  What happens, for example, if SQL patching fails in a shared drive scenario?

        Anyway, it would be great to hear from you through comments.

        Till next time…


        Step by step MIM PAM setup and evaluation Guide – Part 3

        $
        0
        0

        This is the third part of the series. In the previous posts we prepared the test environment for PAM deployment, created and configured all needed service accounts, installed SQL Server and prepared the PAM server for further installation. Now we have two forests – prod.contoso.com and priv.contoso.com. In PROD we have set up Certificate Services, an Exchange server and ADFS, and configured two test applications – one using Windows Integrated Authentication and the second Claims-based Authentication. In the PRIV forest we have the PAM server prepared for MIM/PAM deployment, with SQL Server ready.

        Series:

        Installing PAM Server

        1. Install SharePoint 2016
          1. a. Download SharePoint 2016 Prerequisites
          2. Please download the following binaries into a single folder (for example C:\Setup\Software\SP2016-Prerequisites) on the PRIV-PAM server

            Cumulative Update 7 (KB3092423) for Microsoft AppFabric 1.1 for Windows Server [https://www.microsoft.com/en-us/download/details.aspx?id=49171]

            Microsoft Identity Extensions [http://go.microsoft.com/fwlink/?LinkID=252368]

            Microsoft ODBC Driver 11 for SQL Server [http://www.microsoft.com/en-us/download/details.aspx?id=36434]

            Microsoft Information Protection and Control Client [http://go.microsoft.com/fwlink/?LinkID=528177]

            Microsoft SQL Server 2012 Native Client [http://go.microsoft.com/fwlink/?LinkID=239648&clcid=0x409]

            Microsoft Sync Framework Runtime v1.0 SP1 (x64) [http://www.microsoft.com/en-us/download/details.aspx?id=17616] – Open SyncSetup_en.x64.zip and extract to this folder only Synchronization.msi

            Visual C++ Redistributable Package for Visual Studio 2013 [http://www.microsoft.com/en-us/download/details.aspx?id=40784]

            Visual C++ Redistributable for Visual Studio 2015 [https://www.microsoft.com/en-us/download/details.aspx?id=48145]

            Microsoft WCF Data Services 5.0 [http://www.microsoft.com/en-us/download/details.aspx?id=29306]

            Windows Server AppFabric 1.1 [http://www.microsoft.com/en-us/download/details.aspx?id=27115]

            At the end you should have the following binaries in the selected folder:

        • AppFabric-KB3092423-x64-ENU.exe
        • MicrosoftIdentityExtensions-64.msi
        • msodbcsql.msi
        • setup_msipc_x64.msi
        • sqlncli.msi
        • Synchronization.msi
        • vcredist_x64.exe
        • vc_redist.x64.exe
        • WcfDataServices.exe
        • WindowsServerAppFabricSetup_x64.exe
      1. Install SharePoint Prerequisites
      2. Log on to PRIV-PAM as a priv\PAMAdmin (use password P@$$w0rd)

        Open PowerShell ISE as an Admin and paste the following script:

        $spPrereqBinaries = 'C:\Setup\Software\SP2016-Prerequisites'
        $sharePointBinaries = 'C:\Setup\Software\SharePoint2016'

        function Run-SystemCommand {
            Param(
                [parameter(Mandatory=$true)]
                [string]$Command,
                [parameter(Mandatory=$false)]
                [string]$Arguments = [String]::Empty,
                [parameter(Mandatory=$false)]
                [bool]$RestartIfNecessary = $false,
                [parameter(Mandatory=$false)]
                [int]$RestartResult
            )
            Process {
                try {
                    $myProcess = [Diagnostics.Process]::Start($Command, $Arguments)
                    $myProcess.WaitForExit()
                    [int]$exitCode = $myProcess.ExitCode
                    $result = ($exitCode -eq 0)
                    if ($result) { Write-Host "[OK] $Command was successful" }
                    elseif ($RestartIfNecessary -and ($exitCode -eq $RestartResult)) {
                        Write-Host "[Warning]Please rerun script after restart of the server"
                        Restart-Computer -Confirm
                    }
                    else { Write-Host "[Error] Failed to run $Command" }
                }
                catch {
                    Write-Host "[Error] Failed to run $Command"
                    Write-Host ("`t`t`t{0}" -f $_.Exception.Message)
                }
            }
        }

        $arguments = "/sqlncli:`"$spPrereqBinaries\sqlncli.msi`" "
        $arguments += "/idfx11:`"$spPrereqBinaries\MicrosoftIdentityExtensions-64.msi`" "
        $arguments += "/sync:`"$spPrereqBinaries\Synchronization.msi`" "
        $arguments += "/appfabric:`"$spPrereqBinaries\WindowsServerAppFabricSetup_x64.exe`" "
        $arguments += "/kb3092423:`"$spPrereqBinaries\AppFabric-KB3092423-x64-ENU.exe`" "
        $arguments += "/msipcclient:`"$spPrereqBinaries\setup_msipc_x64.msi`" "
        $arguments += "/wcfdataservices56:`"$spPrereqBinaries\WcfDataServices.exe`" "
        $arguments += "/odbc:`"$spPrereqBinaries\msodbcsql.msi`" "
        $arguments += "/msvcrt11:`"$spPrereqBinaries\vc_redist.x64.exe`" "
        $arguments += "/msvcrt14:`"$spPrereqBinaries\vcredist_x64.exe`""

        Run-SystemCommand -Command "$sharePointBinaries\prerequisiteinstaller.exe" -Arguments $arguments -RestartIfNecessary $true -RestartResult 3010

        Replace the $spPrereqBinaries value with the path where your prerequisite binaries are located.

        Replace $sharePointBinaries with the path to the root of your SharePoint 2016 distribution.

        Run the above script. The result should confirm a successful installation. If the server restarts, run the script again after the restart.

        Repeat until a restart is no longer needed.

        Restart PRIV-PAM server.

      3. Create SharePoint Server 2016 Installation configuration file
      4. Log on to PRIV-PAM as a priv\PAMAdmin (use password P@$$w0rd)

        In Notepad, paste the following:

        <Configuration>
          <Package Id="sts">
            <Setting Id="LAUNCHEDFROMSETUPSTS" Value="Yes" />
          </Package>
          <Package Id="spswfe">
            <Setting Id="SETUPCALLED" Value="1" />
          </Package>
          <Logging Type="verbose" Path="%temp%" Template="SharePoint Server Setup(*).log" />
          <PIDKEY Value="RTNGH-MQRV6-M3BWQ-DB748-VH7DM" />
          <Display Level="none" CompletionNotice="no" />
          <Setting Id="SERVERROLE" Value="SINGLESERVER" />
          <Setting Id="USINGUIINSTALLMODE" Value="1" />
          <Setting Id="SETUP_REBOOT" Value="Never" />
          <Setting Id="SETUPTYPE" Value="CLEAN_INSTALL" />
        </Configuration>

        In the configuration I have added the SharePoint 2016 evaluation key for the Standard edition. You are free to replace it with your own license key.

        Save the file as config.xml to a location of your choice.

      5. Install SharePoint
      6. Open PowerShell ISE as an Admin and paste the following script:

        $sharePointBinaries = 'C:\Setup\Software\SharePoint2016'
        $configPath = 'C:\Setup'

        function Run-SystemCommand {
            Param(
                [parameter(Mandatory=$true)]
                [string]$Command,
                [parameter(Mandatory=$false)]
                [string]$Arguments = [String]::Empty,
                [parameter(Mandatory=$false)]
                [bool]$RestartIfNecessary = $false,
                [parameter(Mandatory=$false)]
                [int]$RestartResult
            )
            Process {
                try {
                    $myProcess = [Diagnostics.Process]::Start($Command, $Arguments)
                    $myProcess.WaitForExit()
                    [int]$exitCode = $myProcess.ExitCode
                    $result = ($exitCode -eq 0)
                    if ($result) { Write-Host "[OK] $Command was successful" }
                    elseif ($RestartIfNecessary -and ($exitCode -eq $RestartResult)) {
                        Write-Host "[Warning]Please rerun script after restart of the server"
                        Restart-Computer -Confirm
                    }
                    else { Write-Host "[Error] Failed to run $Command" }
                }
                catch {
                    Write-Host "[Error] Failed to run $Command"
                    Write-Host ("`t`t`t{0}" -f $_.Exception.Message)
                }
            }
        }

        Run-SystemCommand -Command "$sharePointBinaries\setup.exe" -Arguments "/config $configPath\config.xml" -RestartIfNecessary $true -RestartResult 30030

        Replace the $configPath value with the path where the config file created in the previous step is located.

        Replace $sharePointBinaries with the path to the root of your SharePoint 2016 distribution.

        Run the above script and wait until it finishes – it won't display installation progress. The result should confirm a successful installation.

      7. Create SharePoint Site
        1. Request, issue and install SSL certificate
        2. Open PowerShell ISE as an Admin and paste the following script:

          $file = @"

          [NewRequest]

          Subject = "CN=pamportal.contoso.com,c=AE, s=Dubai, l=Dubai, o=Contoso, ou=Blog"

          MachineKeySet = TRUE

          KeyLength = 2048

          KeySpec=1

          Exportable = TRUE

          RequestType = PKCS10

          [RequestAttributes]

          CertificateTemplate = "WebServerV2"

          "@

          Set-Content C:\Setup\certreq.inf $file

          Invoke-Expression -Command "certreq -new C:\Setup\certreq.inf C:\Setup\certreq.req"

          (Replace C:\Setup with a folder of your choice – the request file will be saved in this folder)

          Run the above script and respond to the “Template not found. Do you wish to continue anyway?” prompt with “Yes”.

          Copy C:\Setup\certreq.req to corresponding folder on PROD-DC server.

          Log on to PROD-DC as an administrator

          Open command prompt as an admin.

          Run following command:

          certreq -submit C:\Setup\certreq.req C:\Setup\pamportal.contoso.com.cer

          Here C:\Setup is the folder where the certificate request file is placed – modify the path according to your location.

          Confirm CA when prompted

          Now we have the certificate file C:\Setup\pamportal.contoso.com.cer. Copy that file back to the PRIV-PAM server.

          Log on to PRIV-PAM as a priv\PAMAdmin (use password P@$$w0rd)

          Run PowerShell as Admin and execute the following:

          $cert = Import-Certificate -CertStoreLocation Cert:\LocalMachine\my -FilePath C:\Setup\pamportal.contoso.com.cer

          $guid = [guid]::NewGuid().ToString("B")

          $tPrint = $cert.Thumbprint

          netsh http add sslcert hostnameport="pamportal.contoso.com:443" certhash=$tPrint certstorename=MY appid="$guid"

        3. Run the script to create the SharePoint site where the PAM Portal will be placed.
        4. Open PowerShell ISE as an Admin and paste the following script:

          $Passphrase = 'Y0vW8sDXktY29'
          $password = 'P@$$w0rd'

          Add-PSSnapin Microsoft.SharePoint.PowerShell

          # Initialize values required for the script
          $SecPhassphrase = (ConvertTo-SecureString -String $Passphrase -AsPlainText -force)
          $FarmAdminUser = 'PRIV\svc_PAMFarmWSS'
          $svcMIMPool = 'PRIV\svc_PAMAppPool'

          # Create new configuration database
          $secstr = New-Object -TypeName System.Security.SecureString
          $password.ToCharArray() | ForEach-Object {$secstr.AppendChar($_)}
          $cred = new-object -typename System.Management.Automation.PSCredential -argumentlist $FarmAdminUser, $secstr
          New-SPConfigurationDatabase -DatabaseName 'MIM_SPS_Config' -DatabaseServer 'SPSSQL' -AdministrationContentDatabaseName 'MIM_SPS_Admin_Content' -Passphrase $SecPhassphrase -FarmCredentials $cred -LocalServerRole WebFrontEnd

          # Create new Central Administration site
          New-SPCentralAdministration -Port '2016' -WindowsAuthProvider "NTLM"

          # Perform the config wizard tasks
          # Install Help Collections
          Install-SPHelpCollection -All
          # Initialize security
          Initialize-SPResourceSecurity
          # Install services
          Install-SPService
          # Register features
          Install-SPFeature -AllExistingFeatures
          # Install Application Content
          Install-SPApplicationContent

          # Add managed account for Application Pool
          $cred = new-object -typename System.Management.Automation.PSCredential -argumentlist $svcMIMPool, $secstr
          New-SPManagedAccount -Credential $cred

          # Create new ApplicationPool
          New-SPServiceApplicationPool -Name PAMSPSPool -Account $svcMIMPool

          # Create new Web Application.
          # This creates a Web application that uses classic mode windows authentication.
          # Claim-based authentication is not supported by MIM
          New-SPWebApplication -Name 'PAM Portal' -Url "https://pamportal.contoso.com" -Port 443 -HostHeader 'pamportal.contoso.com' -SecureSocketsLayer:$true -ApplicationPool "PAMSPSPool" -ApplicationPoolAccount (Get-SPManagedAccount $($svcMIMPool)) -AuthenticationMethod "Kerberos" -DatabaseName "PAM_SPS_Content"

          # Create new SP Site
          New-SPSite -Name 'PAM Portal' -Url "https://pamportal.contoso.com" -CompatibilityLevel 15 -Template "STS#0" -OwnerAlias $FarmAdminUser

          # Disable server-side view state. Required by MIM
          $contentService = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
          $contentService.ViewStateOnServer = $false
          $contentService.Update()

          # Configure SSL
          Set-WebBinding -name "PAM Portal" -BindingInformation ":443:pamportal.contoso.com" -PropertyName "SslFlags" -Value 1

          # Add Secondary Site Collection Administrator
          Set-SPSite -Identity "https://pamportal.contoso.com" -SecondaryOwnerAlias "PAMAdmin"

      8. Install MIM Service, MIM Portal and PAM
      9. Open Command prompt as an Admin and run following command

        msiexec.exe /passive /i "C:\Setup\Software\MIM2016SP1RTM\Service and Portal\Service and Portal.msi" /norestart /L*v C:\Setup\PAM.LOG ADDLOCAL="CommonServices,WebPortals,PAMServices" SQMOPTINSETTING="1" SERVICEADDRESS="pamsvc.contoso.com" FIREWALL_CONF="1" SHAREPOINT_URL="https://pamportal.contoso.com" SHAREPOINTUSERS_CONF="1" SQLSERVER_SERVER="SVCSQL" SQLSERVER_DATABASE="FIMService" EXISTINGDATABASE="0" MAIL_SERVER="mail.contoso.com" MAIL_SERVER_USE_SSL="1" MAIL_SERVER_IS_EXCHANGE="1" POLL_EXCHANGE_ENABLED="1" SERVICE_ACCOUNT_NAME="svc_PAMWs" SERVICE_ACCOUNT_PASSWORD="P@$$w0rd" SERVICE_ACCOUNT_DOMAIN="PRIV" SERVICE_ACCOUNT_EMAIL="svc_PAMWs@prod.contoso.com" REQUIRE_REGISTRATION_INFO="0" REQUIRE_RESET_INFO="0" MIMPAM_REST_API_PORT="8086" PAM_MONITORING_SERVICE_ACCOUNT_DOMAIN="PRIV" PAM_MONITORING_SERVICE_ACCOUNT_NAME="svc_PAMMonitor" PAM_MONITORING_SERVICE_ACCOUNT_PASSWORD="P@$$w0rd" PAM_COMPONENT_SERVICE_ACCOUNT_DOMAIN="PRIV" PAM_COMPONENT_SERVICE_ACCOUNT_NAME="svc_PAMComponent" PAM_COMPONENT_SERVICE_ACCOUNT_PASSWORD="P@$$w0rd" PAM_REST_API_APPPOOL_ACCOUNT_DOMAIN="PRIV" PAM_REST_API_APPPOOL_ACCOUNT_NAME="svc_PAMAppPool" PAM_REST_API_APPPOOL_ACCOUNT_PASSWORD="P@$$w0rd" REGISTRATION_PORTAL_URL="http://localhost" SYNCHRONIZATION_SERVER_ACCOUNT="PRIV\svc_MIMMA" SHAREPOINTTIMEOUT="600"

        ("C:\Setup\Software\MIM2016SP1RTM\Service and Portal\Service and Portal.msi" replace with path to Service and Portal installation path, C:\Setup\PAM.LOG replace with path where installation log will be placed)

        When the installation finishes, open the C:\Setup\PAM.LOG file in Notepad and go to the end of the file. You should find the line

        … Product: Microsoft Identity Manager Service and Portal -- Installation completed successfully.
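        If you prefer to check from PowerShell instead of scrolling through Notepad, something like this should surface the same line (assuming the log path used above):

        Select-String -Path C:\Setup\PAM.LOG -Pattern "Installation completed successfully" | Select-Object -Last 1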

        Open Internet Explorer and navigate to https://pamportal.contoso.com/IdentityManagement

        Portal should be loaded:

        clip_image002

        Restart the PRIV-PAM server

      10. Configure SSL for pamapi.contoso.com
        1. Request, issue and install SSL certificate for the portal
        2. Open PowerShell ISE as an Admin and paste the following script:

          $file = @"

          [NewRequest]

          Subject = "CN=pamapi.contoso.com,c=AE, s=Dubai, l=Dubai, o=Contoso, ou=Blog"

          MachineKeySet = TRUE

          KeyLength = 2048

          KeySpec=1

          Exportable = TRUE

          RequestType = PKCS10

          [RequestAttributes]

          CertificateTemplate = "WebServerV2"

          "@

          Set-Content C:\Setup\certreq.inf $file

          Invoke-Expression -Command "certreq -new C:\Setup\certreq.inf C:\Setup\certreq.req"

          (Replace C:\Setup with a folder of your choice – the request file will be saved in this folder)

          Run the above script and respond to the message box prompts with “OK”.

          Copy C:\Setup\certreq.req to corresponding folder on PROD-DC server.

          Log on to PROD-DC as an administrator

          Open command prompt as an admin.

          Run following command:

          certreq -submit C:\Setup\certreq.req C:\Setup\pamapi.contoso.com.cer

          Here C:\Setup is the folder where the certificate request file is placed – modify the path according to your location.

          Confirm CA when prompted

          Now we have the certificate file C:\Setup\pamapi.contoso.com.cer. Copy that file back to the PRIV-PAM server.

          Log on to PRIV-PAM as a priv\PAMAdmin (use password P@$$w0rd)

          Run PowerShell as Admin and execute the following:

          $cert = Import-Certificate -CertStoreLocation Cert:\LocalMachine\my -FilePath C:\Setup\pamapi.contoso.com.cer

          $guid = [guid]::NewGuid().ToString("B")

          $tPrint = $cert.Thumbprint

          netsh http add sslcert hostnameport="pamapi.contoso.com:8086" certhash=$tPrint certstorename=MY appid="$guid"

        3. Configure SSL on pamapi.contoso.com
        4. Run PowerShell as Admin and execute the following:

          Set-WebBinding -Name 'MIM Privileged Access Management API' -BindingInformation ":8086:" -PropertyName Port -Value 8087

          New-WebBinding -Name "MIM Privileged Access Management API" -Port 8086 -Protocol https -HostHeader "pamapi.contoso.com" -SslFlags 1

          Remove-WebBinding -Name "MIM Privileged Access Management API" -BindingInformation ":8087:"

        Conclusion of Part 3

        Now we are ready for Part 4 – Installing the PAM Example Portal.

        In this exercise we went step by step through the PAM Portal setup. If you carefully followed all the steps, you now have a healthy and well-configured PAM deployment.

        We didn’t spend time on Portal customization and branding, which I leave to you for the future.

        In Part 4 we will set up the PAM Example Portal.

        Until then

        Have a great week

        Disclaimer – All scripts and reports are provided ‘AS IS’

        This sample script is not supported under any Microsoft standard support program or service. This sample script is provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of this sample script and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of this script be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use this sample script or documentation, even if Microsoft has been advised of the possibility of such damages.

        Most Common Mistakes in Active Directory and Domain Services – Part 3

        $
        0
        0

        This blog post is the third (and last) part in the 'Most Common Mistakes in Active Directory and Domain Services' series.
        In the previous parts, we covered some major mistakes, like configuring multiple password policies using GPOs and keeping the FFL/DFL at a lower version.
        The third part of the series is no exception. We'll review three additional mistakes and wrap up the series.

        Series:

        Mistake #7: Installing Additional Server Roles and Applications on a Domain Controller

        When I review a customer's Active Directory environment, I often find additional Windows Server roles (other than the default ADDS and DNS roles) installed on one or more of the Domain Controllers.

        This can be any role - from RDS Licensing, through Certificate Authority and up to DHCP Server. Besides Windows Server roles, I also find special applications and features running on the Domain Controllers, like KMS (Key Management Service) host for volume activation, or Azure AD Connect for integrating on-premises directories with Azure AD.

        There is a wide variety of roles and applications which administrators install on the Domain Controllers, but there is one thing common to all of them: Domain Controllers are NOT the place for them.

        By default, any Domain Controller in a domain provides the same functionality and features as the others, which means Active Directory Domain Services is not affected if one Domain Controller becomes unavailable.
        Even if the Domain Controller holding the FSMO roles becomes unavailable, the Domain Services will continue to work as expected in most scenarios (at least in the short term).

        When you install additional roles and applications on your Domain Controllers, two problems arise:

        1. Domain Controllers with additional roles and features become unique and different compared to other Domain Controllers. If any of these Domain Controllers is turned off or gets damaged, its roles and features might be affected and become unavailable. This, in fact, creates a dependency between ADDS and the other roles and affects the redundancy of the Active Directory Domain Services.
        2. Upgrading your Active Directory environment becomes a much more complicated task. A DHCP Server or Certificate Authority role installed on your Domain Controllers will force you to deal with it first, and only then move forward and upgrade Active Directory itself. This complexity might also affect other tasks, like restoring a Domain Controller or even putting a Domain Controller into maintenance.

        This is why putting additional roles and applications on your Domain Controllers is not recommended in most cases.
        You can use the following PowerShell script to easily get a report of the roles installed on your Domain Controllers. Note that this script works only on Windows Server 2012 and above; for Windows Server 2008, you can use a WMI query.
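        The original embedded script did not survive this export, but a minimal sketch along the same lines (assuming the ActiveDirectory and ServerManager modules are available and the remote servers run Windows Server 2012 or later) could look like this:

        Import-Module ActiveDirectory
        foreach ($dc in (Get-ADDomainController -Filter *).HostName) {
            # List every installed role on this Domain Controller
            Get-WindowsFeature -ComputerName $dc |
                Where-Object { $_.Installed -and $_.FeatureType -eq 'Role' } |
                Select-Object @{Name='DomainController';Expression={$dc}}, DisplayName, Name
        }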

         

        Bottom Line: Domain Controllers are designed to provide directory services for your users - allowing access to domain resources and responding to security authentication requests.
        Mixing Active Directory Domain Services with other roles and applications creates a dependency between the two, affects Domain Controller performance and makes administrative tasks much more complicated.

        Do It Right: Use Domain Controllers for Active Directory Domain Services only, and install additional roles (be it KMS or a DHCP server) on different servers.

        Mistake #8: Deploying Domain Controllers as a Windows Server With Desktop Experience 

        When you install Windows Server, you can choose between two installation options:

        • Windows Server with Desktop Experience - This is the standard user interface, including desktop, start menu, etc.
        • Windows Server - This is Server Core, which leaves out the standard user interface in favor of the command line.

        Although Windows Server Core has some major advantages compared to Desktop Experience, most administrators still choose to go with the full user interface, even for the most suitable and well-supported server roles like Active Directory Domain Services, Active Directory Certificate Services, and DHCP Server.

        Windows Server Core is not a new option – it has been available since Windows Server 2008. It works great for the supported Windows roles and has some great advantages compared to Windows Server with Desktop Experience. Here are the most significant ones:

        • Reduce the potential attack surface and lower the chance of user mistakes - Windows Server Core reduces the potential attack surface by eliminating binaries and features which are not required for the supported roles (Active Directory Domain Services in our case).
          For example, the Explorer shell is not installed, which of course reduces the risks and exploits that can be manipulated and used to attack the server.
          Other than that, when customers are using Windows Server with Desktop Experience for Active Directory Domain Services, they are also usually performing administrative tasks directly on their Domain Controllers using Remote Desktop.
          This is a very bad habit as it may have a significant impact on the Domain Controller's performance and functionality. It might also cause a Domain Controller to become unavailable by accidentally turning it off or by running a heavy PowerShell script which drains the server's memory.
        • Improve administrative skills while still being able to use the GUI tools - by choosing Windows Server Core, you'll probably get the chance to use some PowerShell cmdlets and improve your PowerShell and scripting skills.
          Some customers think that this is the only way to manage and administer the server and its roles, but that's not true.
          Alongside the command-line options, you'll find some useful remote management tools, including Windows Admin Center, Server Manager, and Remote Server Administration Tools (RSAT).
          In our case, the RSAT includes all the Active Directory administrative tools, like Active Directory Users and Computers (dsa.msc) and the ADSI Editor (adsiedit.msc).
          It is also important to be familiar with the 'Server Core App Compatibility Feature on Demand' (FOD), which can be used to increase Windows Server Core 2019 compatibility with other applications and to provide administrative tools for troubleshooting scenarios.
          My recommendation is to deploy an administrative server for managing all domain services roles, including Active Directory Domain Services, DNS, DHCP, Active Directory Certificate Services, Volume Activation, and others.
        • Other advantages, like reduced disk space and memory usage, also apply, but by themselves they are not the reason for using Windows Server Core.

        You should be aware that unlike Windows Server 2012 R2, you cannot convert Windows Server 2016/2019 between Server Core and Server with Desktop Experience after installation.

        Bottom Line: Windows Server Core is not a compromise. For the supported Windows Server roles, it is the official recommendation by Microsoft. Using Windows Server with Full Desktop Experience increases the chances that your Domain Controllers will get messy and will be used for administration tasks rather than providing domain services.

        Do It Right: Install your Domain Controllers as Windows Server Core, and use remote management tools to administer your domain resources and configuration. Consider deploying one of your Domain Controllers as a Windows Server with Full Desktop Experience for forest recovery scenarios.

        Mistake #9: Using Subnets Without Mapping Them to Active Directory Sites

        Active Directory uses sites for many purposes. One of them is to inform clients about the Domain Controllers available within the site closest to the client.

        For doing that, each site is associated with the relevant subnets, which correspond to the range of IP addresses in the site. You can use Active Directory Sites and Services to manage and associate your subnets. 

        When a Windows domain client is looking for the nearest Domain Controller (what's known as the DC Locator process), Active Directory (or more precisely, the Netlogon service on one of the Domain Controllers) looks for the client's IP address in its subnet-to-site association data.
        If the client's IP address is found in one of the subnets, the Domain Controller returns the relevant site information to the client, and the client uses this information to contact a Domain Controller within its site.

        When the client's IP address cannot be found, the client may connect to any Domain Controller, including ones that are physically far away from it.
        This can result in communication over slow WAN links, which will have a direct impact on the client login process.

        If you suspect that you have missing subnets in your Active Directory environment, you can look for event ID 5807 (Source: NETLOGON) on your Domain Controllers.
        The event is created when there are connections from clients whose IP addresses don't map to any of the existing AD sites.
        Those clients, along with their names and IP addresses, are listed by default in C:\Windows\debug\netlogon.log.

        You can use the following PowerShell script to create a report of all clients which are not mapped to any AD sites, based on the Netlogon.log files from all of the Domain Controllers within the domain.
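        That script is not reproduced here either, but a rough sketch of the idea (assuming the default netlogon.log location, reachable admin$ shares, and the standard NO_CLIENT_SITE line format) could be:

        Import-Module ActiveDirectory
        $report = foreach ($dc in (Get-ADDomainController -Filter *).HostName) {
            $log = "\\$dc\admin`$\debug\netlogon.log"
            if (Test-Path $log) {
                Select-String -Path $log -Pattern 'NO_CLIENT_SITE' | ForEach-Object {
                    # Typical line: <date> <time> <domain>: NO_CLIENT_SITE: <computer> <ip>
                    $parts = $_.Line -split '\s+'
                    [pscustomobject]@{
                        DomainController = $dc
                        ClientName       = $parts[-2]
                        ClientIP         = $parts[-1]
                    }
                }
            }
        }
        $report | Sort-Object ClientIP -Unique | Format-Table -AutoSize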

        The script output should look similar to this:

        Bottom Line: The association of subnets with Active Directory sites has a significant impact on client machine performance. Missing this association may lead to poor performance and unexpected logon times.

        Do It Right: Work together with your IT network team to make sure any new scope is covered and has a corresponding subnet that is associated with an Active Directory site.

        So... this was the last part of the 'Most Common Mistakes in Active Directory and Domain Services' series.
        Hope you enjoyed reading these blog posts and learned a thing or two.

        Time zone issues when copying SCOM alerts

        $
        0
        0

        Background

        When trying to copy-paste (ctrl+c, ctrl+v) alerts from the SCOM console to an Excel worksheet or just a text file, we noticed that the Created field values were different from the ones displayed in the console. There was a two-hour difference.

        1

        2

        As it turns out, the server was configured in a GMT+2 time zone, and the values got pasted in UTC. Hence the two-hour difference.

        Solution

        On each of the servers/workstations with the SCOM console installed where you want to fix this, simply create the following registry key and value:

        Key: HKEY_CURRENT_USER\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Console\ViewCopySettings\

        Value: InLocalTime (DWord)

        Data: 1

        (Where 1 means that you want to have the values in your local time, and 0 means the default behaviour of UTC)
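        If you prefer not to open the Registry Editor, the value can also be created per user with a couple of PowerShell lines (a minimal sketch):

        $key = 'HKCU:\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Console\ViewCopySettings'
        New-Item -Path $key -Force | Out-Null
        New-ItemProperty -Path $key -Name 'InLocalTime' -PropertyType DWord -Value 1 -Force | Out-Null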

        3



        Conclusion

        With some digging done by me and my colleagues using Procmon, we were able to find out that the copy mechanism tries to read a registry key and value that do not exist.

        So.. “When in doubt, run process monitor” – Mark Russinovich.


        Hope this helps,

        Oren Salzberg.

        Field Notes: The case of buried Active Directory Account Management Security Audit Policy events

        $
        0
        0

        Security auditing is one of the most powerful tools that you can use to maintain the integrity of your system.  As part of your overall security strategy, you should determine the level of auditing that is appropriate for your environment.  Auditing should identify attacks (successful or not) that pose a threat to your network, and attacks against resources that you have determined to be valuable in your risk assessment.

        In this blog post, I discuss a common security audit policy configuration I come across in a number of environments (with special focus on Account Management).  I also highlight the difference between basic and advanced security audit policy settings.  Lastly, I point you to where recommendations that can help you fine-tune these policies can be obtained.

        Background

        It may appear that events relating to user account management activities in Active Directory (AD) are not logged in the security event logs on domain controllers (DC).  This is an example of a view on one DC:

        Cluttered Security Event Log

        Here we see a lot of events from the Filtering Platform Packet Drop and Filtering Platform Connection subcategories - the image shows ten of these within the same second! 

        We see the following events on the same log about two minutes later (Directory Service Replication):

        Cluttered Security Event Log

        It can also be seen that there was an event relating to a successful Directory Service Access (DS Access) activity, but this is only one among quite a lot of noise!

        Running the following command in an elevated prompt helps in figuring out what triggers these events:

         auditpol /get /category:"DS Access,Object Access" 

        The output below reveals that every subcategory in both the Object Access and DS Access categories is set to capture success and failure events.

        Auditpol Output

        Note: running auditpol unelevated will result in the following error:

        Error 0x00000522 occurred:

        A required privilege is not held by the client.

        To complete the picture, this is what it looked like in the Group Policy Editor:

        Basic Audit Policy Settings Group Policy Management Editor

        Do we need all these security audit events?  Let us look at what some of the recommendations are.


        Security auditing recommendations

        Guidance from tools such as the Security Compliance Manager (SCM) states that if audit settings are not configured, it can be difficult or impossible to determine what occurred during a security incident.  However, if audit settings are configured so that events are generated for all activities the security log will be filled with data and hard to use.  We need a good balance. 

        Let us take a closer look at these subcategories:

        Filtering Platform Packet Drop

        This subcategory reports when packets are dropped by Windows Filtering Platform (WFP).  These events can be very high in volume.  The default and recommended setting is no auditing on AD domain controllers.

        Filtering Platform Connection

        This subcategory reports when connections are allowed or blocked by WFP.  These events can be high in volume.  The default and recommended setting is no auditing on AD domain controllers.

        Directory Service Replication

        This subcategory reports when replication between two domain controllers begins and ends.  The default and recommended setting is no auditing on AD domain controllers.

        These descriptions and recommendations are from SCM, but there is also the Policy Analyzer, part of the Microsoft Security Compliance Toolkit, which you can use for guidance.  There’s also this document if you do not have any of these tools installed.

        Tuning audit settings

        Turning on everything – successes and failures – is obviously not in line with security audit policy recommendations.  If you have an environment that was built on Windows Server 2008 R2 or above, the advanced audit policy configuration is available to use in Group Policy.

        Important

        Basic versus Advanced

        Reference: https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/dd692792(v=ws.10)

        If you already have settings configured in the basic audit policy and want to start leveraging the advanced audit policy in order to benefit from granularity offered by the latter, you need to carefully plan for the migration.

        Getting Started with Advanced Audit Policy Configuration

        In case you are wondering what I mean by granularity, see a comparison of the two below.

        Basic Audit Policy Settings

        In this example, I set the audit directory service access (DS Access) category to success:

        Example of Basic Audit Policy Settings

        Notice that all subcategories are affected as there is no granularity offered here (every subcategory is set to success):

        Outcome of Basic Audit Policy Setting

        Side note: take a look back at the Group Policy Management Editor window focusing on Audit Policy while we are here.  Notice that audit policy change is set to no auditing instead of not defined.  Here is the difference between the two:

        • Not defined means that group policy does not enforce this setting – Windows (Server) will assume the default setting
        • No auditing means that auditing is turned off – see example below

        No Auditing

        Advanced Audit Policy Settings

        On the other hand, the advanced security audit policy does offer fine-grained control.  The example below demonstrates granularity that could be realized when using the advanced security audit policies:

        Subcategory Setting
        Audit Detailed Directory Service Replication No Auditing
        Audit Directory Service Access Success and Failure
        Audit Directory Service Changes Success
        Audit Directory Service Replication No Auditing

        Example of Advanced Audit Policy Settings

        The output of auditpol confirms the expected result:

        Outcome of Advanced Audit Policy Settings
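        For a quick test in a lab (outside of Group Policy), roughly the same configuration could be applied locally from an elevated prompt with auditpol, for example:

        auditpol /set /subcategory:"Detailed Directory Service Replication" /success:disable /failure:disable
        auditpol /set /subcategory:"Directory Service Access" /success:enable /failure:enable
        auditpol /set /subcategory:"Directory Service Changes" /success:enable /failure:disable
        auditpol /set /subcategory:"Directory Service Replication" /success:disable /failure:disable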

        The outcome

        After turning off basic security audit policies and implementing the advanced settings based on the recommendations shared above, the security event logs start to make sense since a lot of the “noise” has been removed.  We start seeing desired events logged in the security log as depicted below:

        Neat Security Event Log

        Keep in mind that these events are local to each DC, and that the event logs are configured to overwrite events as needed (oldest events first) by default.  Solutions such as System Center Operations Manager Audit Collection Services can help capture, centralize and archive these events.

        Till next time…

        Going Serverless with Azure Functions & Powershell: SendGrid

        $
        0
        0


        In this post, we will discuss the process of creating a solution that uses Azure Function Apps and SendGrid to send emails based on a PowerShell script which runs on your local machine. We will see how we can build a serverless email solution that reports your disk usage with Azure Functions, SendGrid and PowerShell.

        We are going to examine this scenario in the following categories:

        1- Configure SendGrid and Azure

        2- Create an Azure Function App

        3- Create an Azure Function – an experimental language (PowerShell) is used in this scenario

        4- Create your local PowerShell script to report your disk space and usage

        5- (Optional) Run your PowerShell script from Task Scheduler and send your disk usage as an email by calling your Azure Function URL

        Please keep in mind that you can customize your Azure PowerShell function and local PowerShell script based on your requirements and needs. This kind of logic might be useful to implement in your own solutions.


        Configure SendGrid and Azure

        In this demo, I used the free SendGrid account, which provides a plan of 25,000 emails per month to get things going.

        You can find SendGrid Email Delivery in the Azure Marketplace in the Web category. Once the SendGrid account is successfully created, you need to obtain your SendGrid API key, which will be used later during the creation of the function. Make sure you keep it in a secure place.

        Annotation 2019-01-12 165233


        Annotation 2019-01-12 165405Annotation 2019-01-12 165449


        Once your SendGrid account is created, you can click Manage, and it will automatically direct you to the SendGrid portal. The next step is to obtain your API key, which will be needed in your Function App.

        Annotation 2019-01-12 165808


        Please select Create API Key in the top right corner. Specify your API key details, such as name and permissions, and then click “Create & View.”

        image           Annotation 2019-01-12 165944

        image














        Remember that this is your only opportunity to view and obtain your API key. For security purposes, the API key value will not be displayed again.




        Create an Azure Function App

        As a next step, we are going to create our new Function App service. As you see in the following screenshot, we need a resource group and a storage account to store the function code and its components.


        Annotation 2019-01-12 170335


        Since we are going to create our function as a PowerShell script, we need to review and change the platform features. Please go to your previously created Function App and, from General Settings, open the Function App Settings section. We have to change the runtime version from ~2 to ~1 so that PowerShell language support is available when choosing the function template with experimental languages enabled.

        Annotation 2019-01-12 171430


        image


        Annotation 2019-01-12 171738

        Since we are going to trigger the SendGrid API, we need to choose the HTTP Trigger template. You can specify your function language, name and authorization level.

        Annotation 2019-01-12 171759

        The function HttpTriggerPowershell1 and its run.ps1 script will be responsible for calling the SendGrid API.


        Create and Modify an Azure Function

        Please provide the following expected variables in your run.ps1 script. We need to define the “to” and “from” sections in the body variable.

        To be able to call the POST method, we need to populate the header variable – please provide the API key obtained in the previous section.

        run.ps1

        # POST method: $req
        $requestBody = Get-Content $req -Raw | ConvertFrom-Json
        $count = $requestBody.value.count
        $date = $requestBody.date
        $firstline = "Id | Type | Size(GB) | FreeSpace(GB) | FreeSpace(%)"+"\n\n\n"
        $info += $firstline

        for($i=0;$i -lt $count;$i++){
            $line = $requestBody.value[$i].DeviceID + "\t" + $requestBody.value[$i].DriveType + "\t" + $requestBody.value[$i].'Size (GB)' + "\t" + $requestBody.value[$i].'Free Space (GB)' + "\t" + $requestBody.value[$i].'Free Space (%)' + "\t\t\n\n\n"
            $info += $line
        }

        $body = @"
        {"personalizations": [{"to": [{"email": "TO_EMAIL_ADDRESS"}]}],"from": {"email": "FROM_EMAIL_ADDRESS"},"subject": "Current Disk Space Status --> $date","content": [{"type": "text/plain", "value": "$info"}]}
        "@
        $header = @{"Authorization"="Bearer YOUR API KEY HERE";"Content-Type"="application/json"}
        Invoke-RestMethod -Uri https://api.sendgrid.com/v3/mail/send -Method Post -Headers $header -Body $body


        Create a Powershell Script to learn about your disk space and usage


        Now we need to customize the local PowerShell script which calls our Azure Function App. We need to obtain the function URL from the Azure Portal and place it into the -Uri parameter of Invoke-RestMethod.

        2


        get_disk_space.ps1

        $servername = "localhost"
        $diskinfo = Get-WmiObject -Class Win32_LogicalDisk -ComputerName $servername |
        Select-Object @{Name="DeviceID";Expression={$_.DeviceID}},          @{Name="DriveType";Expression={switch ($_.DriveType){
        0 {"Unknown"}

        1 {"No Root Directory"}
        2 {"Removable Disk"}
        3 {"Local Disk"}
        4 {"Network Drive"}
        5 {"Compact Disc"}
        6 {"RAM Disk"}
        }};
        },
        @{Name="Size (GB)";Expression={"{0:N1}" -f ($_.Size/1GB)}},
        @{Name="Free Space (GB)";Expression={"{0:N1}" -f($_.FreeSpace/1GB)}},
        @{Name="Free Space (%)";
        Expression={
        if ($_.Size -gt 0){
        "{0:P0}" -f($_.FreeSpace/$_.Size)
        }
        else
        {
        0
        }
        }
        }


        $data = @{date="{0:MM}/{0:dd}/{0:yyyy} {0:hh}:{0:mm}" -f (get-date);value = $diskinfo} $json = $data | ConvertTo-Json Invoke-RestMethod -Method post -uri "YOUR COPIED FUNCTION URL HERE" -Body $json



        Configure to run a Powershell Script into Task Scheduler

        To create a periodic action which sends the specified machine's disk usage status to your recipients, we need to call the get_disk_space.ps1 script on a daily basis.

        As an example, I configured my schedule to send an email at 8:30 PM every day with my current disk space usage information.

        tempsnip

        In the Actions tab we need to call the script which collects our disk usage info and posts it to the function URL via Invoke-RestMethod.

        tempsnip2
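        If you prefer to create the schedule from PowerShell instead of the Task Scheduler UI, a minimal sketch (the script path below is just an example) could be:

        $action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\get_disk_space.ps1"'
        $trigger = New-ScheduledTaskTrigger -Daily -At '8:30 PM'
        Register-ScheduledTask -TaskName 'Daily Disk Space Report' -Action $action -Trigger $trigger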



        Here is the outcome of our scenario;

        tempsnip3



        Please let me know if you have any suggestions or questions about my post.

        Thanks for your time!

        References

        https://sendgrid.com/docs/index.html

        https://docs.microsoft.com/en-us/azure/azure-functions/functions-bindings-sendgrid

        https://social.technet.microsoft.com/wiki/contents/articles/38580.configure-to-run-a-powershell-script-into-task-scheduler.aspx

        Windows Admin Center–Part 1 of Optimization Series

        $
        0
        0

        This is going to be the first in a series of posts on how to optimize your environment, with the tools provided by Microsoft at no cost to you.

         

        Windows Admin Center, which can be downloaded here, is the natural evolution of and ultimate replacement for Server Manager.

         

        It is a free, single, lightweight MSI download that uses remote PowerShell and WMI over WinRM to connect to and manage devices (Windows Server 2008 R2 and later, Windows 10) through the Windows Admin Center gateway installed on Windows Server or Windows 10.

         

        It provides a single pane of glass for performing multiple tasks across a range of servers in your environment, without having to use multiple tools anymore (MMC, Hyper-V Manager, etc.).

         

        wac-complements

         

        Today I am going to be looking at 3 features to help you as an Admin with running your infrastructure more effectively:

         

        1. Managing Certificates like a Pro (easy as 1 – 2 – 3 )

        2. Enabling your Nested Virtualization

        3. Quickly Enable Azure Backup  ( under 5 minutes start to Finish)

        Before we get started, we need to ensure that we have a list of the machines available that we want to manage

         

        This can be done by either adding the Server names in manually

         

        image

         

        or adding a txt file with the server names in, for managing your entire environment

         

        image

         

        Now that we have added the servers, we connect to a Machine, and can start from there

         

        1. Managing Certificates like a Pro (easy as 1 – 2 – 3 )

         

        After selecting a machine to manage, select Certificates (step 1)

         

        SNAGHTMLfbe2018

         

        I now have an overview of the certs that are installed on the machine, and can view the number of expired certs (or import new certs, etc.)

         

        For this example, we are cleaning up expired Certs on the machine. Select Expired (Step 2)

         

        SNAGHTMLfc20365

         

        Once I have opened the expired certs, I can delete or request renewal of the certs (step 3)

         

        SNAGHTMLfc7aad8

         

        That is certificate management like a Pro.

         

        From one pane of glass, I can easily manage the certs, quickly and effectively, without having to launch MMC – Certificates.
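        For comparison, the same check – listing expired computer certificates – could be done with a couple of PowerShell lines, although Windows Admin Center saves you the trip:

        Get-ChildItem Cert:\LocalMachine\My |
            Where-Object { $_.NotAfter -lt (Get-Date) } |
            Select-Object Subject, NotAfter, Thumbprint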

         

        2. Enabling your Nested Virtualization

         

        When selecting Virtual Machines, you will get a summary of the VM’s running on the Server\PC and the impact of that on the system

         

        SNAGHTML2e72695

         

        Now we select Inventory, then the Virtual machine we want to edit\manage

         

        SNAGHTML3003ca3

         

        Now select More – Settings from the Drop down list

         

        SNAGHTML3010c09

         

        Note: Remember that the VM must not be in a running state, or you will not be able to make changes to its hardware.

         

        Select Processors – Enable nested virtualization

         

        SNAGHTML301a6e2

         

        That simple.
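        For reference, the same setting can also be toggled from PowerShell on the Hyper-V host while the VM is off (the VM name below is just an example):

        Set-VMProcessor -VMName 'LabVM01' -ExposeVirtualizationExtensions $true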

         

        Signing into Azure

         

        For the Next Step you need to have already signed the Gateway in to your Azure Subscription

        If you have not, the steps are listed below:

         

        In Admin Center – Select the Gear Icon for Settings

         

        SNAGHTML357bbdf

         

        Select Azure – Register

         

        image

         

        Follow the Steps to sign in, grant permissions to the Gateway App on the subscription.

         

        3. Quickly Enable Azure Backup ( under 5 Minutes)

        The following video will guide you through setting up Azure Backup from scratch in under 5 minutes

         

        Quickly enable azure backup

         

        I hope this helps you get used to and start using the new Windows Admin Center.

        Please check back later for Part 2 of the Blog

        Field Notes: The case of Active Directory Diagnostics – Data Collector Set Fails to Start

        $
        0
        0

        Performance Monitor is a great tool for collecting and analyzing performance data in Windows and Windows Server.  There are many counters available that one can look at to help understand how the system is performing.  Unfortunately analysis of performance data may not always be straightforward for some system administrators.  Luckily, there is the built-in Data Collector Set for Active Directory Diagnostics in Windows Server once the Active Directory Domain Services role is installed on a machine.  This feature makes the life of an Active Directory administrator easy as most of the analysis is automated.

        In this blog post, I briefly explain how the Active Directory Diagnostics works.  I also take you through what I see in some environments where this feature does not work due to inadequate user rights.

        The Active Directory Diagnostics Report

        Say you are already familiar with the Active Directory (AD) Diagnostics Data Collector Set (DCS) in Performance Monitor, or you have read this blog post and are interested in a report similar to the one below created by the default AD DCS.  In the example, we see that there is a warning indicating that the system is experiencing excessive paging.  The cause here is that available memory on the system is low.  The report also suggests that we upgrade the physical memory or reduce system load.  This report allows us to drill into desired areas of interest such as Active Directory, CPU, network, disk, memory, etc.

         Diagnostics Results

        The Data Collector Set Fails to Start

        Unfortunately the AD DCS may fail to start in some instances due to inadequate user rights, which I see often in the field.  Instead of starting up and visually indicating with the green play icon as depicted below, there would not even be a pop-up dialog box with a warning or error indicating that there is a problem – the DCS just does not start!

        Running Data Collector Set

        Attempting to kick off the DCS via the command line also does not help:

         logman start "System\Active Directory Diagnostics" -ets 

        Behind The Scenes

        Before we get into what exactly the issue is and how we would go about resolving it, let us briefly take a look at how this feature works. 

        Working environment

        The Active Directory Diagnostics DCS leverages the Windows Task Scheduler in order to complete what it is requested to perform.  I grabbed a screenshot from the Task Scheduler to help paint a picture:

        Scheduled Task History

        Following the sequence of events that took place (reading from bottom to top), we get an idea on what happens behind the scenes when the play button is pressed in Performance Monitor.  Here are a few informational events that stand out:

        • Event ID 100 – Task Started
        • Event ID 200 – Action Started
        • Event ID 201 – Action Completed
        • Event ID 102 – Task Completed


        Broken environment

        Looking at the task where the Data Collector Set fails to launch, we see the following:

        Scheduled Task History

        From the image above, we can see Event ID 101.  This event means the Task Scheduler failed to start the AD Diagnostics task for the currently logged on user.

        Note: These tasks are created under Microsoft | Windows | PLA |System

        Taking a look in the Event Viewer (Microsoft-Windows-TaskScheduler | Operational), there is also an Event ID 104 logged indicating that the Task Scheduler failed to log on…

        Event 104

        Required Rights

        How do we proceed with this background information?  Taking a look back at the scheduled task, we see the following under general options.  The specified account is the currently logged on user (which is also reflected in Event ID 101):

        Task User Account

        You may begin to wonder at this stage as you are currently logged on to the DC with an account that is in the Domain Admins group.  What permissions/rights are missing?  The log on as a batch job user right assignment, which determines which accounts can log on by using a batch-queue tool such as the Task Scheduler service. 

        Default Behavior

        In Windows Server 2012 R2, this setting is set to “Not Configured” in the Default Domain Controllers Policy.  Domain Controllers would then assume the default behavior, which assigns this user right to the following security groups:

        • Administrators
        • Backup Operators


        Common Case

        If you look at the policy setting (where the DCS fails to start), you would see that user accounts or groups have explicitly been granted the right.  This unfortunately overrides the default behavior – only the accounts and/or security groups listed here have this right (the explain tab lists the default groups):

        User Rights Assignment
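        A quick way to see the effective assignment on a DC is to export the local security policy and look for SeBatchLogonRight, the constant behind “Log on as a batch job” (a sketch – the output path is just an example, and the prompt must be elevated):

        secedit /export /cfg C:\Temp\secpol.cfg /areas USER_RIGHTS
        Select-String -Path C:\Temp\secpol.cfg -Pattern 'SeBatchLogonRight'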

        I observed something interesting when I tested on a few Windows Server 2016 machines in my lab.   Default groups are pre-populated when you modify this setting, therefore, chances of accidentally hurting yourself are lower.

        The Fix is Very Easy

Administrators and Backup Operators would have to be added over and above the IDRS\fw-service account (in this example) if you still want them to have this user right, as depicted below:

        User Rights Assignment

After adding the Administrators group back to the list of security principals allowed to log on as a batch job, the DCS successfully starts:

Logman Query "System\Active Directory Diagnostics" -ets 

         

        Running Data Collector Set (command-line)

        Closure

Be careful when modifying policy settings such as User Rights Assignments, as you could end up seeing unexpected results if they are not properly configured.  In this instance, the Administrators and Backup Operators groups would have to be explicitly added along with the IDRS\fw-service account in order not to negatively impact the default behavior.  Be sure to check tools such as the Policy Analyzer and the Security Compliance Manager for guidance on what the recommendations are.  This is one example and there are others, such as the inability to add a new DC to an existing domain due to inadequate rights!

        Till next time…


        SCOM Advanced Authoring: TCP Port Monitoring


It's been a long time since I blogged about SCOM Authoring. Following the blog post on PowerShell Discovery from CSV File, I got numerous requests from fellow techies to complete the TCP Port Monitoring MP with monitors and rules. One of our friends has even created an MP, which he has blogged about here.

        Anyways, I thought it would be good to finish what I started with detailed explanation. Better Late than Never!

        When we start talking about creating Custom Monitors and Rules in SCOM, we must understand how they are structured.

        Monitors: Each Monitor is based on a Monitor Type where you define the number of states (two or three) and their criteria along with necessary modules.

        Rules: Rules are built on top of modules directly.

        To understand about modules, please take a minute to go through the WIKI article.

        Coming back to the scenario, now we must build 4 Monitors and 1 Rule.

        • TCP Unreachable Monitor
        • TCP Timeout Monitor
        • DNS Resolution Monitor
        • Connection Refused Monitor
        • Connection Time Performance Collection Rule

This means we must build 4 Monitor Types, on top of which the 4 Monitors can be created. For all 4 Monitors and the 1 Rule, the Data Source is the same (i.e., a synthetic transaction to test port connectivity). So, we will start by creating a Data Source, follow with the 4 Monitor Types, and finally our 4 Monitors and a Rule.

        Data Source:

Below is the XML fragment for the Data Source Module. We use the System.SimpleScheduler Data Source module and the "Microsoft.SystemCenter.SyntheticTransactions.TCPPortCheckProbe" probe action module to create a composite Data Source. As discussed in previous blog posts, we promote the fields that are customizable, such as "IntervalSeconds", "SyncTime", "ServerName" and "Port".

        <DataSourceModuleType ID="GKLab.TCP.Port.Monitoring.Monitoring.DataSource" Accessibility="Internal" Batching="false">

        <Configuration>

        <xsd:element minOccurs="1" name="IntervalSeconds" type="xsd:integer" />

        <xsd:element minOccurs="1" name="SyncTime" type="xsd:string" />

        <xsd:element minOccurs="1" name="ServerName" type="xsd:string" />

        <xsd:element minOccurs="1" name="Port" type="xsd:integer" />

        </Configuration>

        <OverrideableParameters>

        <OverrideableParameter ID="IntervalSeconds" Selector="$Config/IntervalSeconds$" ParameterType="int" />

        </OverrideableParameters>

        <ModuleImplementation Isolation="Any">

        <Composite>

        <MemberModules>

        <DataSource ID="DS" TypeID="System!System.SimpleScheduler">

        <IntervalSeconds>$Config/IntervalSeconds$</IntervalSeconds>

        <SyncTime>$Config/SyncTime$</SyncTime>

        </DataSource>

        <ProbeAction ID="Probe" TypeID="MicrosoftSystemCenterSyntheticTransactionsLibrary!Microsoft.SystemCenter.SyntheticTransactions.TCPPortCheckProbe">

        <ServerName>$Config/ServerName$</ServerName>

        <Port>$Config/Port$</Port>

        </ProbeAction>

        </MemberModules>

        <Composition>

        <Node ID="Probe">

        <Node ID="DS" />

        </Node>

        </Composition>

        </Composite>

        </ModuleImplementation>

        <OutputType>MicrosoftSystemCenterSyntheticTransactionsLibrary!Microsoft.SystemCenter.SyntheticTransactions.TCPPortCheckData</OutputType>

        </DataSourceModuleType>

        Monitor Type:

Next, we will create the 4 monitor types. Each Monitor Type has two states. We will use the Data Source created above and define two conditions, which correspond to the two states of the Monitor Type. Below is the code for the Connection Refused Monitor Type. If the StatusCode from the Data Source equals 2147952461, the monitor state will be ConnectionRefusedFailure, which is mapped to the Critical health state of the monitor. If not, the monitor state will be NoConnectionRefusedFailure, which is mapped to the Healthy (Success) state.

        <UnitMonitorType ID="GKLab.TCP.Port.Monitoring.MonitorType.ConnectionRefused" Accessibility="Internal">

        <MonitorTypeStates>

        <MonitorTypeState ID="ConnectionRefusedFailure" NoDetection="false" />

        <MonitorTypeState ID="NoConnectionRefusedFailure" NoDetection="false" />

        </MonitorTypeStates>

        <Configuration>

        <xsd:element minOccurs="1" name="IntervalSeconds" type="xsd:integer" />

        <xsd:element minOccurs="1" name="SyncTime" type="xsd:string" />

        <xsd:element minOccurs="1" name="ServerName" type="xsd:string" />

        <xsd:element minOccurs="1" name="Port" type="xsd:integer" />

        <xsd:element minOccurs="1" name="TimeWindowInSeconds" type="xsd:integer" />

        <xsd:element minOccurs="1" name="NoOfRetries" type="xsd:integer" />

        </Configuration>

        <OverrideableParameters>

        <OverrideableParameter ID="IntervalSeconds" Selector="$Config/IntervalSeconds$" ParameterType="int" />

        </OverrideableParameters>

        <MonitorImplementation>

        <MemberModules>

        <DataSource ID="DS" TypeID="GKLab.TCP.Port.Monitoring.Monitoring.DataSource">

        <IntervalSeconds>$Config/IntervalSeconds$</IntervalSeconds>

        <SyncTime>$Config/SyncTime$</SyncTime>

        <ServerName>$Config/ServerName$</ServerName>

        <Port>$Config/Port$</Port>

        </DataSource>

        <ProbeAction ID="PassThrough" TypeID="System!System.PassThroughProbe" />

        <ConditionDetection ID="ConditionOK" TypeID="System!System.ExpressionFilter">

        <Expression>

        <SimpleExpression>

        <ValueExpression>

        <XPathQuery Type="UnsignedInteger">StatusCode</XPathQuery>

        </ValueExpression>

        <Operator>NotEqual</Operator>

        <ValueExpression>

        <Value Type="UnsignedInteger">2147952461</Value>

        </ValueExpression>

        </SimpleExpression>

        </Expression>

        </ConditionDetection>

        <ConditionDetection ID="ConditionFailure" TypeID="System!System.ExpressionFilter">

        <Expression>

        <SimpleExpression>

        <ValueExpression>

        <XPathQuery Type="UnsignedInteger">StatusCode</XPathQuery>

        </ValueExpression>

        <Operator>Equal</Operator>

        <ValueExpression>

        <Value Type="UnsignedInteger">2147952461</Value>

        </ValueExpression>

        </SimpleExpression>

        </Expression>

        </ConditionDetection>

        <ConditionDetection ID="Consolidator" TypeID="System!System.ConsolidatorCondition">

        <Consolidator>

        <ConsolidationProperties />

        <TimeControl>

        <WithinTimeSchedule>

        <Interval>$Config/TimeWindowInSeconds$</Interval>

        </WithinTimeSchedule>

        </TimeControl>

        <CountingCondition>

        <Count>$Config/NoOfRetries$</Count>

        <CountMode>OnNewItemTestOutputRestart_OnTimerSlideByOne</CountMode>

        </CountingCondition>

        </Consolidator>

        </ConditionDetection>

        </MemberModules>

        <RegularDetections>

        <RegularDetection MonitorTypeStateID="ConnectionRefusedFailure">

        <Node ID="Consolidator">

        <Node ID="ConditionFailure">

        <Node ID="DS" />

        </Node>

        </Node>

        </RegularDetection>

        <RegularDetection MonitorTypeStateID="NoConnectionRefusedFailure">

        <Node ID="ConditionOK">

        <Node ID="DS" />

        </Node>

        </RegularDetection>

        </RegularDetections>

        <OnDemandDetections>

        <OnDemandDetection MonitorTypeStateID="ConnectionRefusedFailure">

        <Node ID="ConditionFailure">

        <Node ID="PassThrough" />

        </Node>

        </OnDemandDetection>

        <OnDemandDetection MonitorTypeStateID="NoConnectionRefusedFailure">

        <Node ID="ConditionOK">

        <Node ID="PassThrough" />

        </Node>

        </OnDemandDetection>

        </OnDemandDetections>

        </MonitorImplementation>

        </UnitMonitorType>

        Monitors:

Finally, the Monitors. The Monitor is targeted at the custom class we created earlier – GKLab.TCP.Port.Monitoring.Class – which hosts the instances from the CSV file. The target instance data is passed as configuration to the Monitor (see the <Configuration> tag) and the alert parameters are defined. Notice the health state mapping with the MonitorTypeStateID we defined earlier in the Monitor Types.

        <UnitMonitor ID="GKLab.TCP.Port.Monitoring.Monitor.ConnectionRefused" Accessibility="Internal" Enabled="true" Target="GKLab.TCP.Port.Monitoring.Class" ParentMonitorID="Health!System.Health.AvailabilityState" Remotable="true" Priority="Normal" TypeID="GKLab.TCP.Port.Monitoring.MonitorType.ConnectionRefused" ConfirmDelivery="true">

        <Category>Custom</Category>

        <AlertSettings AlertMessage="GKLab.TCP.Port.Monitoring.Monitor.ConnectionRefused_AlertMessageResourceID">

        <AlertOnState>Error</AlertOnState>

        <AutoResolve>true</AutoResolve>

        <AlertPriority>Normal</AlertPriority>

        <AlertSeverity>Error</AlertSeverity>

        <AlertParameters>

        <AlertParameter1>$Target/Property[Type="GKLab.TCP.Port.Monitoring.Class"]/Port$</AlertParameter1>

        <AlertParameter2>$Target/Property[Type="GKLab.TCP.Port.Monitoring.Class"]/ServerName$</AlertParameter2>

        <AlertParameter3>$Target/Host/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$</AlertParameter3>

        </AlertParameters>

        </AlertSettings>

        <OperationalStates>

        <OperationalState ID="UIGeneratedOpStateIdde249d72023f429ab12b926b5bc21ca4" MonitorTypeStateID="ConnectionRefusedFailure" HealthState="Error" />

        <OperationalState ID="UIGeneratedOpStateId86f579e32c97416b824528157ecd2c71" MonitorTypeStateID="NoConnectionRefusedFailure" HealthState="Success" />

        </OperationalStates>


        <Configuration>

        <IntervalSeconds>300</IntervalSeconds>

        <SyncTime>00:00</SyncTime>

        <ServerName>$Target/Property[Type="GKLab.TCP.Port.Monitoring.Class"]/ServerName$</ServerName>

        <Port>$Target/Property[Type="GKLab.TCP.Port.Monitoring.Class"]/Port$</Port>

        <TimeWindowInSeconds>$Target/Property[Type="GKLab.TCP.Port.Monitoring.Class"]/TimeWindowInSeconds$</TimeWindowInSeconds>

        <NoOfRetries>$Target/Property[Type="GKLab.TCP.Port.Monitoring.Class"]/NoOfRetries$</NoOfRetries>

        </Configuration>

        </UnitMonitor>

        Rules:

Like the Monitors, we need to target the Rule to the custom class. We need to define the Data Source and the relevant modules based on whether the rule is an alerting or non-alerting rule. Since we are building a performance collection rule, we use Performance!System.Performance.DataGenericMapper to map the collected performance data, and Write Action modules to write the collected data to the Ops DB and the Ops DW DB.


        <Rules>

        <Rule ID="Virtusa.TCP.Port.Monitoring.Rule.ConnectionTime" Enabled="true" Target="Virtusa.TCP.Port.Monitoring.Class" ConfirmDelivery="true" Remotable="true" Priority="Normal" DiscardLevel="100">

        <Category>PerformanceCollection</Category>

        <DataSources>

        <DataSource ID="DS" TypeID="Virtusa.TCP.Port.Monitoring.Monitoring.DataSource">

        <IntervalSeconds>300</IntervalSeconds>

        <SyncTime>00:00</SyncTime>

<ServerName>$Target/Property[Type="GKLab.TCP.Port.Monitoring.Class"]/ServerName$</ServerName>

<Port>$Target/Property[Type="GKLab.TCP.Port.Monitoring.Class"]/Port$</Port>

        </DataSource>

        </DataSources>

        <ConditionDetection ID="PerfMapper" TypeID="Performance!System.Performance.DataGenericMapper">

        <ObjectName>TCP Port Check</ObjectName>

        <CounterName>Connection Time</CounterName>

        <InstanceName>$Data/ServerName$:$Data/Port$</InstanceName>

        <Value>$Data/ConnectionTime$</Value>

        </ConditionDetection>

        <WriteActions>

        <WriteAction ID="WriteToDB" TypeID="SC!Microsoft.SystemCenter.CollectPerformanceData" />

        <WriteAction ID="WriteToDW" TypeID="SystemCenter!Microsoft.SystemCenter.DataWarehouse.PublishPerformanceData" />

        </WriteActions>

        </Rule>

        </Rules>

        Wrap Up:

Finally, add the remaining XML fragments for Folders, Views, String Resources and Language Pack elements.

        You can download the final XML here.

         

        For any SCOM Monitoring requirements, please feel free to add a comment.

        Happy SCOMing!

        Field Notes: The case of the failed SQL Server Failover Cluster Instance – Binaries Disks Added to Cluster


        I paid a customer a visit a while ago and was requested to assist with a SQL Server Failover Cluster issue they were experiencing.  They had internally transferred the case from the SQL team to folks who look after the Windows Server platform as they could not pick up anything relating to SQL during initial troubleshooting efforts.

        My aim in this post is to:

        • explain what the issue was (adding disks meant to be local storage to the cluster)
        • provide a little bit of context on cluster disks and asymmetric storage configuration
        • discuss how the issue was resolved by removing the disks from cluster

        Issue definition and scope

        An attempt to move the SQL Server role/group from one node to another in a 2-node Failover Cluster failed.  This is what they observed:

        Failed SQL Server Group

        From the image above, it can be seen that all disk resources are online.  Would you suspect that storage is involved at this stage?  In cluster events, there was the standard Event ID 1069 confirming that the cluster resource ‘SQL Server’ of type ‘SQL Server’ in clustered role ‘SQL Server (MSSQLSERVER)’ failed.  Additionally, this is what was in the cluster log – “failed to start service with error 2”:

        Cluster Log

        Error code 2 means that the system cannot find the file specified:

        Net HelpMsg

        A little bit of digging around reveals that this is the image path we are failing to get to:

        Registry value
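For context, the error code and the service's image path can be checked quickly on the failing node (a minimal sketch; MSSQLSERVER is the default-instance service name used in this case):

net helpmsg 2
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\MSSQLSERVER' -Name ImagePath

Running Test-Path against the returned image path shows whether the binaries volume is actually reachable from that node.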

        Now that we have all this information, let’s look at how you would resolve this specific issue we were facing.  Before that however, I would like to provide a bit of context relating to cluster disks, especially on Asymmetric Storage Configuration.

        Context

        Consider a 2 node SQL Server Failover Cluster Instance running on a Windows Server 2012 R2 Failover Cluster with the following disk configuration:

        • C drive for the Operating System – each of the nodes has a direct attached disk
        • D drive for SQL binaries – each of the nodes has a dedicated “local” drive, presented from a Storage Area Network (SAN)
        • All the other drives required for SQL are shared drives presented from the SAN

        Disks in Server Manager

        Note: The 20 GB drive is presented from the SAN and is not added to the cluster at this stage.

        I used Hyper-V Virtual Machines to reproduce this issue in a lab environment.  For the SAN part, I used the iSCSI target that is built-in to Windows Server.

         

        Asymmetric Storage Configuration

A feature enhancement in Failover Clustering for Windows Server 2012 and Windows Server 2012 R2 is support for an Asymmetric Storage Configuration.  In Windows Server 2012, a disk is considered clusterable if it is presented to one or more nodes, is not the boot/system disk, and does not contain a page file.  https://support.microsoft.com/en-us/help/2813005/local-sas-disks-getting-added-in-windows-server-2012-failover-cluster
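From a cluster node, you can list the disks that the cluster currently considers eligible to be added with the FailoverClusters PowerShell module (a minimal sketch):

Import-Module FailoverClusters
# Lists disks that are visible to the cluster and eligible to become cluster disks
Get-ClusterAvailableDisk

Any disk that shows up here meets the criteria above, even if it is only visible to a single node.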

         

        What happens when you Add Disks to Cluster?

        Let us first take a look at the disks node in Failover Cluster Manager (FCM) before adding the disks.

        Disks in Failover Cluster Manager

        Here’s what we have (ordered by the disk number column):

        • The Failover Cluster Witness disk (1 GB)
        • SQL Data (50 GB)
        • SQL Logs (10 GB)
        • Other Stuff (5 GB)

        The following window is presented when an attempt to add disks to a cluster operation is performed in FCM:

        Add Disks to a Cluster

        Both disks are added as cluster disks when one clicks OK at this stage.  After adding the disks (which are not presented to both nodes), we see the following:

        Disks in Failover Cluster Manager

        Nothing changed regarding the 4 disks we have already seen in FCM, and the two “local” disks are now included:

        • Cluster Disk 1 is online on node PTA-SQL11
        • Cluster Disk 2 is offline on node PTA-SQL11 as it is not physically connected to the node

        At this stage, everything still works fine as the SQL binaries volume is still available on this node.  Note that the “Available Storage” group is running on PTA-SQL11.

         

        What happens when you move the Available Storage group?

        Move Available Storage
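The same move can be performed with PowerShell if you prefer (a minimal sketch; the group and node names match the lab environment described above):

Move-ClusterGroup -Name 'Available Storage' -Node 'PTA-SQL12'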

        Let’s take a look at FCM again:

        Disks in Failover Cluster Manager

        Now we see that:

        • Cluster Disk 1 is now offline
        • Cluster Disk 2 is now online
        • The owner of the “Available Storage” group is now PTA-SQL12

This means that PTA-SQL12 can see the SQL binaries volume and PTA-SQL11 cannot, which causes downtime.  Moving the SQL group to PTA-SQL12 works just fine as the SQL binaries drive is online on that node.  You may also want to ensure that the resources are configured to automatically recover from failures.  Below is an example of the default configuration on a resource:

        Resource Properties

         

Process, People and Technology

        It may appear that the technology is at fault here, but the Failover Cluster service does its bit to protect us from shooting ourselves in the foot, and here are some examples:

        Validation

The Failover Cluster validation report does a good job of letting you know that disks are only visible from one node.  By the way, there is also good information here on what is considered for a disk to be clustered.

        Validation Report

        A warning is more like a “proceed with caution” when looking at a validation report.  Failures/errors mean that the solution does not meet requirements for Microsoft support.  Also be careful when validating storage as services may be taken offline.
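If you want to re-run validation against a live cluster without risking the disks, the tests can be scoped (a minimal sketch; categories other than Storage do not take cluster resources offline):

Test-Cluster -Node 'PTA-SQL11','PTA-SQL12' -Include 'Inventory','Network','System Configuration'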

         

        Logic

In the following snippet from the cluster log, we see an example of the Failover Cluster Resource Control Manager (RCM) preventing the move of the “Available Storage” group to avoid downtime.

        Cluster Log

        Back online and way forward

        To get the service up and running again, we had to remove both Disk 1 and Disk 2 as cluster disks and make them “local” drives again.  The cause was that an administrator had added disks that were not meant to be part of the cluster as clustered disks.

Disks need to be brought back online using a tool such as the Disk Management console, as they are automatically placed in an offline state to avoid issues that could be caused by having a non-clustered disk online on two or more nodes in a shared-disk scenario.
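For reference, the cleanup can also be scripted (a minimal sketch; the resource names and the disk number are assumptions based on the lab above):

# Remove the two disks from the cluster (run on any node)
Get-ClusterResource -Name 'Cluster Disk 1','Cluster Disk 2' | Remove-ClusterResource
# Then, on each node, bring its now-local binaries disk back online (disk number is an assumption)
Set-Disk -Number 2 -IsOffline $false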

I got curious after this and reached out to folks who specialize in SQL Server to get their views on whether the SQL binaries drive should or should not be shared.  One of the strong views is to keep them as non-shared (non-clustered) drives, especially for SQL patching scenarios.  What happens, for example, if SQL patching fails in a shared-drive scenario?

        Anyway, it would be great to hear from you through comments.

        Till next time…

        Step by step MIM PAM setup and evaluation Guide – Part 3


        This is third part of the series. In the previous posts we have prepared test environment for PAM deployment, created and configured all needed service accounts, installed SQL Server and prepared PIM server for further installation. Now we have two forests – prod.contoso.com and priv.contoso.com. In PROD we have set up Certificate services, Exchange server, ADFS services and configured two test applications – one is using Windows Integrated Authentication and the second Claim based Authentication. In PRIV forest we have PAM server prepared for MIM/PAM deployment with SQL server ready.

        Series:

        Installing PAM Server

          1. Install SharePoint 2016
            1. a. Download SharePoint 2016 Prerequisites

Please download the following binaries into one selected folder (for example C:\Setup\Software\SP2016-Prerequisites) on the PRIV-PAM server

        Cumulative Update 7 (KB3092423) for Microsoft AppFabric 1.1 for Windows Server [https://www.microsoft.com/en-us/download/details.aspx?id=49171]

        Microsoft Identity Extensions [http://go.microsoft.com/fwlink/?LinkID=252368]

        Microsoft ODBC Driver 11 for SQL Server [http://www.microsoft.com/en-us/download/details.aspx?id=36434]

        Microsoft Information Protection and Control Client [http://go.microsoft.com/fwlink/?LinkID=528177]

        Microsoft SQL Server 2012 Native Client [http://go.microsoft.com/fwlink/?LinkID=239648&clcid=0x409]

Microsoft Sync Framework Runtime v1.0 SP1 (x64) [http://www.microsoft.com/en-us/download/details.aspx?id=17616] – open SyncSetup_en.x64.zip and extract only Synchronization.msi to this folder

        Visual C++ Redistributable Package for Visual Studio 2013 [http://www.microsoft.com/en-us/download/details.aspx?id=40784]

        Visual C++ Redistributable for Visual Studio 2015 [https://www.microsoft.com/en-us/download/details.aspx?id=48145]

        Microsoft WCF Data Services 5.0 [http://www.microsoft.com/en-us/download/details.aspx?id=29306]

        Windows Server AppFabric 1.1 [http://www.microsoft.com/en-us/download/details.aspx?id=27115]

At the end, you should have the following binaries in the selected folder:

              • AppFabric-KB3092423-x64-ENU.exe
              • MicrosoftIdentityExtensions-64.msi
              • msodbcsql.msi
              • setup_msipc_x64.msi
              • sqlncli.msi
              • Synchronization.msi
              • vcredist_x64.exe
              • vc_redist.x64.exe
              • WcfDataServices.exe
              • WindowsServerAppFabricSetup_x64.exe
            1. Install SharePoint Prerequisites

Log on to PRIV-PAM as PRIV\PAMAdmin (use password P@$$w0rd)

        Open PowerShell ISE as an Admin and paste following script:

$spPrereqBinaries = 'C:\Setup\Software\SP2016-Prerequisites'

$sharePointBinaries = 'C:\Setup\Software\SharePoint2016'

        function Run-SystemCommand {

        Param(

        [parameter(Mandatory=$true)]

        [string]$Command,

        [parameter(Mandatory=$false)]

        [string]$Arguments = [String]::Empty,

        [parameter(Mandatory=$false)]

        [bool]$RestartIfNecessary = $false,

        [parameter(Mandatory=$false)]

        [int]$RestartResult

        )

        Process {

        try{

        $myProcess = [Diagnostics.Process]::Start($Command, $Arguments)

        $myProcess.WaitForExit()

        [int]$exitCode = $myProcess.ExitCode

        $result = ($exitCode -eq 0)

if($result) { Write-Host "[OK] $Command was successful" }

        elseif ($RestartIfNecessary -and ($exitCode -eq $RestartResult)){

Write-Host "[Warning] Please rerun the script after restart of the server"

        Restart-Computer -Confirm

        }

else { Write-Host "[Error] Failed to run $Command" }

        }

        catch {

Write-Host "[Error] Failed to run $Command"

Write-Host ("`t`t`t{0}" -f $_.Exception.Message)

        }

        }

        }

$arguments = "/sqlncli:`"$spPrereqBinaries\sqlncli.msi`" "

$arguments += "/idfx11:`"$spPrereqBinaries\MicrosoftIdentityExtensions-64.msi`" "

$arguments += "/sync:`"$spPrereqBinaries\Synchronization.msi`" "

$arguments += "/appfabric:`"$spPrereqBinaries\WindowsServerAppFabricSetup_x64.exe`" "

$arguments += "/kb3092423:`"$spPrereqBinaries\AppFabric-KB3092423-x64-ENU.exe`" "

$arguments += "/msipcclient:`"$spPrereqBinaries\setup_msipc_x64.msi`" "

$arguments += "/wcfdataservices56:`"$spPrereqBinaries\WcfDataServices.exe`" "

$arguments += "/odbc:`"$spPrereqBinaries\msodbcsql.msi`" "

$arguments += "/msvcrt11:`"$spPrereqBinaries\vc_redist.x64.exe`" "

$arguments += "/msvcrt14:`"$spPrereqBinaries\vcredist_x64.exe`""

Run-SystemCommand -Command "$sharePointBinaries\prerequisiteinstaller.exe" -Arguments $arguments -RestartIfNecessary $true -RestartResult 3010

Replace the $spPrereqBinaries value with the path where your prerequisite binaries are located.

Replace $sharePointBinaries with the path to the root of your SharePoint 2016 distribution.

Run the above script. The result should confirm successful installation. If the server restarts, run the script again after the restart.

Repeat until a restart is no longer needed.

        Restart PRIV-PAM server.

            1. Create SharePoint Server 2016 Installation configuration file

Log on to PRIV-PAM as PRIV\PAMAdmin (use password P@$$w0rd)

In Notepad, paste the following:

<Configuration>

<Package Id="sts">

<Setting Id="LAUNCHEDFROMSETUPSTS" Value="Yes" />

</Package>

<Package Id="spswfe">

<Setting Id="SETUPCALLED" Value="1" />

</Package>

<Logging Type="verbose" Path="%temp%" Template="SharePoint Server Setup(*).log" />

<PIDKEY Value="RTNGH-MQRV6-M3BWQ-DB748-VH7DM" />

<Display Level="none" CompletionNotice="no" />

<Setting Id="SERVERROLE" Value="SINGLESERVER" />

<Setting Id="USINGUIINSTALLMODE" Value="1" />

<Setting Id="SETUP_REBOOT" Value="Never" />

<Setting Id="SETUPTYPE" Value="CLEAN_INSTALL" />

</Configuration>

In the configuration I have added the SharePoint 2016 evaluation key for the Standard edition. You are free to replace it with your own license key.

Save the file as config.xml to a chosen location.

            1. Install SharePoint

        Open PowerShell ISE as an Admin and paste following script:

$sharePointBinaries = 'C:\Setup\Software\SharePoint2016'

$configPath = 'C:\Setup'

        function Run-SystemCommand {

        Param(

        [parameter(Mandatory=$true)]

        [string]$Command,

        [parameter(Mandatory=$false)]

        [string]$Arguments = [String]::Empty,

        [parameter(Mandatory=$false)]

        [bool]$RestartIfNecessary = $false,

        [parameter(Mandatory=$false)]

        [int]$RestartResult

        )

        Process {

        try{

        $myProcess = [Diagnostics.Process]::Start($Command, $Arguments)

        $myProcess.WaitForExit()

        [int]$exitCode = $myProcess.ExitCode

        $result = ($exitCode -eq 0)

if($result) { Write-Host "[OK] $Command was successful" }

        elseif ($RestartIfNecessary -and ($exitCode -eq $RestartResult)){

Write-Host "[Warning] Please rerun the script after restart of the server"

        Restart-Computer -Confirm

        }

else { Write-Host "[Error] Failed to run $Command" }

        }

        catch {

Write-Host "[Error] Failed to run $Command"

Write-Host ("`t`t`t{0}" -f $_.Exception.Message)

        }

        }

        }

Run-SystemCommand -Command "$sharePointBinaries\setup.exe" -Arguments "/config $configPath\config.xml" -RestartIfNecessary $true -RestartResult 30030

Replace the $configPath value with the path where the config file created in the previous step is located.

Replace $sharePointBinaries with the path to the root of your SharePoint 2016 distribution.

Run the above script and wait until it finishes – it won't display installation progress. The result should confirm successful installation.

          1. Create SharePoint Site
            1. Request, issue and install SSL certificate

        Open PowerShell ISE as an Admin and paste following script:

$file = @"

[NewRequest]

Subject = "CN=pamportal.contoso.com,c=AE, s=Dubai, l=Dubai, o=Contoso, ou=Blog"

MachineKeySet = TRUE

KeyLength = 2048

KeySpec=1

Exportable = TRUE

RequestType = PKCS10

[RequestAttributes]

CertificateTemplate = "WebServerV2"

"@

Set-Content C:\Setup\certreq.inf $file

Invoke-Expression -Command "certreq -new C:\Setup\certreq.inf C:\Setup\certreq.req"

(Replace C:\Setup with a folder of your choice – the request file will be saved in this folder)

Run the above script and respond to the "Template not found. Do you wish to continue anyway?" prompt with "Yes".

Copy C:\Setup\certreq.req to the corresponding folder on the PROD-DC server.

        Log on to PROD-DC as an administrator

        Open command prompt as an admin.

        Run following command:

certreq -submit C:\Setup\certreq.req C:\Setup\pamportal.contoso.com.cer

Here C:\Setup is the folder where the certificate request file is placed – modify the path according to your location.

        Confirm CA when prompted

Now we have the certificate file C:\Setup\pamportal.contoso.com.cer in C:\Setup. Copy that file back to the PRIV-PAM server.

Log on to PRIV-PAM as PRIV\PAMAdmin (use password P@$$w0rd)

        Run PowerShell as Admin and execute following:

$cert = Import-Certificate -CertStoreLocation Cert:\LocalMachine\my -FilePath C:\Setup\pamportal.contoso.com.cer

$guid = [guid]::NewGuid().ToString("B")

$tPrint = $cert.Thumbprint

netsh http add sslcert hostnameport="pamportal.contoso.com:443" certhash=$tPrint certstorename=MY appid="$guid"

            1. Run script to create SharePoint Site where PAM Portal will be placed.

        Open PowerShell ISE as an Admin and paste following script:

$Passphrase = 'Y0vW8sDXktY29'

$password = 'P@$$w0rd'

        Add-PSSnapin Microsoft.SharePoint.PowerShell

        #

        #Initialize values required for the script

        $SecPhassphrase = (ConvertTo-SecureString -String $Passphrase -AsPlainText -force)

$FarmAdminUser = 'PRIV\svc_PAMFarmWSS'

$svcMIMPool = 'PRIV\svc_PAMAppPool'

        #

        #Create new configuration database

        $secstr = New-Object -TypeName System.Security.SecureString

        $password.ToCharArray() | ForEach-Object {$secstr.AppendChar($_)}

        $cred = new-object -typename System.Management.Automation.PSCredential -argumentlist $FarmAdminUser, $secstr

New-SPConfigurationDatabase -DatabaseName 'MIM_SPS_Config' -DatabaseServer 'SPSSQL' -AdministrationContentDatabaseName 'MIM_SPS_Admin_Content' -Passphrase $SecPhassphrase -FarmCredentials $cred -LocalServerRole WebFrontEnd

        #

        #Create new Central Administration site

New-SPCentralAdministration -Port '2016' -WindowsAuthProvider "NTLM"

        #

        #Perform the config wizard tasks

        #Install Help Collections

        Install-SPHelpCollection -All

        #Initialize security

        Initialize-SPResourceSecurity

        #Install services

        Install-SPService

        #Register features

        Install-SPFeature -AllExistingFeatures

        #Install Application Content

        Install-SPApplicationContent

        #

        #Add managed account for Application Pool

        $cred = new-object -typename System.Management.Automation.PSCredential -argumentlist $svcMIMPool, $secstr

        New-SPManagedAccount -Credential $cred

        #

        #Create new ApplicationPool

        New-SPServiceApplicationPool -Name PAMSPSPool -Account $svcMIMPool

        #

        #Create new Web Application.

        #This creates a Web application that uses classic mode windows authentication.

        #Claim-based authentication is not supported by MIM

New-SPWebApplication -Name 'PAM Portal' -Url "https://pamportal.contoso.com" -Port 443 -HostHeader 'pamportal.contoso.com' -SecureSocketsLayer:$true -ApplicationPool "PAMSPSPool" -ApplicationPoolAccount (Get-SPManagedAccount $($svcMIMPool)) -AuthenticationMethod "Kerberos" -DatabaseName "PAM_SPS_Content"

        #

        #Create new SP Site

New-SPSite -Name 'PAM Portal' -Url "https://pamportal.contoso.com" -CompatibilityLevel 15 -Template "STS#0" -OwnerAlias $FarmAdminUser

        #

        #Disable server-side view state. Required by MIM

        $contentService = [Microsoft.SharePoint.Administration.SPWebService]::ContentService

        $contentService.ViewStateOnServer = $false

        $contentService.Update()

        #

        #configure SSL

Set-WebBinding -Name "PAM Portal" -BindingInformation ":443:pamportal.contoso.com" -PropertyName "SslFlags" -Value 1

        #Add Secondary Site Collection Administrator

Set-SPSite -Identity "https://pamportal.contoso.com" -SecondaryOwnerAlias "PAMAdmin"

          1. Install MIM Service, MIM Portal and PAM

Open a command prompt as an Admin and run the following command:

msiexec.exe /passive /i "C:\Setup\Software\MIM2016SP1RTM\Service and Portal\Service and Portal.msi" /norestart /L*v C:\Setup\PAM.LOG ADDLOCAL="CommonServices,WebPortals,PAMServices" SQMOPTINSETTING="1" SERVICEADDRESS="pamsvc.contoso.com" FIREWALL_CONF="1" SHAREPOINT_URL="https://pamportal.contoso.com" SHAREPOINTUSERS_CONF="1" SQLSERVER_SERVER="SVCSQL" SQLSERVER_DATABASE="FIMService" EXISTINGDATABASE="0" MAIL_SERVER="mail.contoso.com" MAIL_SERVER_USE_SSL="1" MAIL_SERVER_IS_EXCHANGE="1" POLL_EXCHANGE_ENABLED="1" SERVICE_ACCOUNT_NAME="svc_PAMWs" SERVICE_ACCOUNT_PASSWORD="P@$$w0rd" SERVICE_ACCOUNT_DOMAIN="PRIV" SERVICE_ACCOUNT_EMAIL="svc_PAMWs@prod.contoso.com" REQUIRE_REGISTRATION_INFO="0" REQUIRE_RESET_INFO="0" MIMPAM_REST_API_PORT="8086" PAM_MONITORING_SERVICE_ACCOUNT_DOMAIN="PRIV" PAM_MONITORING_SERVICE_ACCOUNT_NAME="svc_PAMMonitor" PAM_MONITORING_SERVICE_ACCOUNT_PASSWORD="P@$$w0rd" PAM_COMPONENT_SERVICE_ACCOUNT_DOMAIN="PRIV" PAM_COMPONENT_SERVICE_ACCOUNT_NAME="svc_PAMComponent" PAM_COMPONENT_SERVICE_ACCOUNT_PASSWORD="P@$$w0rd" PAM_REST_API_APPPOOL_ACCOUNT_DOMAIN="PRIV" PAM_REST_API_APPPOOL_ACCOUNT_NAME="svc_PAMAppPool" PAM_REST_API_APPPOOL_ACCOUNT_PASSWORD="P@$$w0rd" REGISTRATION_PORTAL_URL="http://localhost" SYNCHRONIZATION_SERVER_ACCOUNT="PRIV\svc_MIMMA" SHAREPOINTTIMEOUT="600"

(Replace "C:\Setup\Software\MIM2016SP1RTM\Service and Portal\Service and Portal.msi" with the path to your Service and Portal installation media, and C:\Setup\PAM.LOG with the path where the installation log should be placed.)

When the installation finishes, open the C:\Setup\PAM.LOG file in Notepad and go to the end of the file. You should find the line:

        … Product: Microsoft Identity Manager Service and Portal — Installation completed successfully.
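Instead of scrolling through the verbose log manually, you can also search for that line with PowerShell (a minimal sketch; the log path matches the /L*v argument used above):

Select-String -Path 'C:\Setup\PAM.LOG' -Pattern 'Installation completed successfully' | Select-Object -Last 1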

        Open Internet Explorer and navigate to https://pamportal.contoso.com/IdentityManagement

        Portal should be loaded:

        clip_image002

        Restart the PRIV-PAM server

          1. Configure SSL for pamapi.contoso.com
            1. Request, issue and install SSL certificate for the portal

        Open PowerShell ISE as an Admin and paste following script:

$file = @"

[NewRequest]

Subject = "CN=pamapi.contoso.com,c=AE, s=Dubai, l=Dubai, o=Contoso, ou=Blog"

MachineKeySet = TRUE

KeyLength = 2048

KeySpec=1

Exportable = TRUE

RequestType = PKCS10

[RequestAttributes]

CertificateTemplate = "WebServerV2"

"@

Set-Content C:\Setup\certreq.inf $file

Invoke-Expression -Command "certreq -new C:\Setup\certreq.inf C:\Setup\certreq.req"

(Replace C:\Setup with a folder of your choice – the request file will be saved in this folder)

Run the above script and respond to the message boxes with "OK".

Copy C:\Setup\certreq.req to the corresponding folder on the PROD-DC server.

        Log on to PROD-DC as an administrator

        Open command prompt as an admin.

        Run following command:

certreq -submit C:\Setup\certreq.req C:\Setup\pamapi.contoso.com.cer

Here C:\Setup is the folder where the certificate request file is placed – modify the path according to your location.

        Confirm CA when prompted

Now we have the certificate file C:\Setup\pamapi.contoso.com.cer in C:\Setup. Copy that file back to the PRIV-PAM server.

Log on to PRIV-PAM as PRIV\PAMAdmin (use password P@$$w0rd)

        Run PowerShell as Admin and execute following:

$cert = Import-Certificate -CertStoreLocation Cert:\LocalMachine\my -FilePath C:\Setup\pamapi.contoso.com.cer

$guid = [guid]::NewGuid().ToString("B")

$tPrint = $cert.Thumbprint

netsh http add sslcert hostnameport="pamapi.contoso.com:8086" certhash=$tPrint certstorename=MY appid="$guid"

            1. Configure SSL on pamapi.contoso.com

        Run PowerShell as Admin and execute following:

Set-WebBinding -Name 'MIM Privileged Access Management API' -BindingInformation ":8086:" -PropertyName Port -Value 8087

New-WebBinding -Name "MIM Privileged Access Management API" -Port 8086 -Protocol https -HostHeader "pamapi.contoso.com" -SslFlags 1

Remove-WebBinding -Name "MIM Privileged Access Management API" -BindingInformation ":8087:"

        Conclusion of Part 3

Now we are ready for Part 4 – installing the PAM Example Portal.

In this exercise we went step by step through the PAM Portal setup. If you carefully followed all the steps, you now have a healthy and well-configured PAM deployment.

We didn't spend time on Portal customization and branding, which I leave to you for the future.

In Part 4 we will set up the PAM Example Portal.

        Until then

        Have a great week

        Disclaimer – All scripts and reports are provided ‘AS IS’

        This sample script is not supported under any Microsoft standard support program or service. This sample script is provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of this sample script and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of this script be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use this sample script or documentation, even if Microsoft has been advised of the possibility of such damages.

        Most Common Mistakes in Active Directory and Domain Services – Part 3


This blog post is the third (and last) part in the 'Most Common Mistakes in Active Directory and Domain Services' series.
In the previous parts, we covered some major mistakes like configuring multiple password policies using GPOs and keeping the FFL/DFL at a lower version.
The third part of the series is no exception – we'll go on and review three additional mistakes and summarize the series.

        Series:

        Mistake #7: Installing Additional Server Roles and Applications on a Domain Controller

        When I review a customer’s Active Directory environment, I often find additional Windows Server roles (other than the default ADDS and DNS roles) installed on one or more of the Domain Controllers.

        This can be any role – from RDS Licensing, through Certificate Authority and up to DHCP Server. Beside Windows Server roles, I also find special applications and features running on the Domain Controllers, like KMS (Key Management Service) host for volume activation, or Azure AD Connect for integrating on-premises directories with Azure AD.

        There is a wide variety of roles and applications which administrators install on the Domain Controllers, but there is one thing common to all of them: Domain Controllers are NOT the place for them.

By default, any Domain Controller in a domain provides the same functionality and features as the others, which means the Active Directory Domain Services are not affected if one Domain Controller becomes unavailable.
Even in a case where the Domain Controller holding the FSMO roles becomes unavailable, the Domain Services will continue to work as expected in most scenarios (at least in the short term).

        When you install additional roles and applications on your Domain Controllers, two problems are raised:

1. Domain Controllers with additional roles and features become unique and different compared to other Domain Controllers. If any of these Domain Controllers is turned off or gets damaged, its roles and features might be affected and become unavailable. This, in fact, creates a dependency between AD DS and other roles and affects the redundancy of the Active Directory Domain Services.
2. Upgrading your Active Directory environment becomes a much more complicated task. A DHCP Server or Certificate Authority role installed on your Domain Controllers will force you to deal with it first, and only then move forward and upgrade Active Directory itself. This complexity might also affect other tasks like restoring a Domain Controller or even putting a Domain Controller into maintenance.

This is why putting additional roles and applications on your Domain Controllers is not recommended in most cases.
You can use the following PowerShell script to easily get a report of your Domain Controllers' installed roles. Note that this script works only for Windows Server 2012 and above; for Windows Server 2008, you can use a WMI query.

        https://gist.github.com/OmerMicrosoft/2e2cdeb92743ba07865f2575f8c26037.js
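In case the embedded script above does not render, here is a minimal alternative sketch (it assumes the ActiveDirectory and ServerManager modules are available and that you can reach each Domain Controller remotely):

foreach ($dc in (Get-ADDomainController -Filter *)) {
    Get-WindowsFeature -ComputerName $dc.HostName |
        Where-Object { $_.Installed -and $_.FeatureType -eq 'Role' } |
        Select-Object @{ Name = 'DomainController'; Expression = { $dc.HostName } }, DisplayName
}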

         

Bottom Line: Domain Controllers are designed to provide directory services for your users – allowing access to domain resources and responding to security authentication requests.
Mixing Active Directory Domain Services with other roles and applications creates a dependency between the two, affects Domain Controller performance, and makes administrative tasks much more complicated.

Do It Right: Use Domain Controllers for Active Directory Domain Services only, and install additional roles (be it KMS or a DHCP server) on different servers.

        Mistake #8: Deploying Domain Controllers as a Windows Server With Desktop Experience 

        When you install Windows Server, you can choose between two installation options:

        • Windows Server with Desktop Experience – This is the standard user interface, including desktop, start menu, etc.
• Windows Server – This is Server Core, which leaves out the standard user interface in favor of the command line.

Although Windows Server Core has some major advantages compared to Desktop Experience, most administrators still choose to go with the full user interface, even for well-supported server roles like Active Directory Domain Services, Active Directory Certificate Services, and DHCP Server.

Windows Server Core is not a new option – it has been around since Windows Server 2008. It works great for the supported Windows roles and has some great advantages compared to Windows Server with Desktop Experience. Here are the most significant ones:

• Reduce the potential attack surface and lower the chance of user mistakes – Windows Server Core reduces the potential attack surface by eliminating binaries and features which are not required for the supported roles (Active Directory Domain Services in our case).
  For example, the Explorer shell is not installed, which of course reduces the risks and exploits that can be manipulated and used to attack the server.
  Other than that, when customers use Windows Server with Desktop Experience for Active Directory Domain Services, they also usually perform administrative tasks directly on their Domain Controllers using Remote Desktop.
  This is a very bad habit, as it may have a significant impact on the Domain Controllers' performance and functionality. It might also cause a Domain Controller to become unavailable by accidentally turning it off or running a heavy PowerShell script that drains the server's memory.
• Improve administrative skills while still being able to use the GUI tools – by choosing Windows Server Core, you'll probably get the chance to use some PowerShell cmdlets and improve your PowerShell and scripting skills.
  Some customers think that this is the only way to manage and administer the server and its roles, but that's not true.
  Alongside the command-line options, you'll find some useful remote management tools, including Windows Admin Center, Server Manager, and the Remote Server Administration Tools (RSAT).
  In our case, RSAT includes all the Active Directory administrative tools like Active Directory Users and Computers (dsa.msc) and the ADSI Editor (adsiedit.msc).
  It is also important to be familiar with the 'Server Core App Compatibility Feature on Demand' (FOD), which can be used to increase Windows Server Core 2019 compatibility with other applications and to provide administrative tools for troubleshooting scenarios.
  My recommendation is to deploy an administrative server for managing all domain services roles, including Active Directory Domain Services, DNS, DHCP, Active Directory Certificate Services, Volume Activation, and others.
• Other advantages, like reduced disk space and memory usage, are also here, but they by themselves are not the reason for using Windows Server Core.

You should be aware that, unlike Windows Server 2012 R2, you cannot convert Windows Server 2016/2019 between Server Core and Server with Desktop Experience after installation.

        Bottom Line: Windows Server Core is not a compromise. For the supported Windows Server roles, it is the official recommendation by Microsoft. Using Windows Server with Full Desktop Experience increases the chances that your Domain Controllers will get messy and will be used for administration tasks rather than providing domain services.

Do It Right: Install your Domain Controllers as Windows Server Core, and use remote management tools to administer your domain resources and configuration. Consider deploying one of your Domain Controllers as a Windows Server with Desktop Experience for forest recovery scenarios.

        Mistake #9: Use Subnets Without Mapping them to Active Directory sites

Active Directory uses sites for many purposes. One of them is to inform clients about the Domain Controllers available within the site closest to the client.

To do that, each site is associated with the relevant subnets, which correspond to the ranges of IP addresses in the site. You can use Active Directory Sites and Services to manage and associate your subnets.

When a Windows domain client is looking for the nearest Domain Controller (what's known as the DC Locator process), Active Directory (or, more precisely, the NetLogon service on one of the Domain Controllers) looks for the client's IP address in its subnet-to-site association data.
If the client's IP address is found in one of the subnets, the Domain Controller returns the relevant site information to the client, and the client uses this information to contact a Domain Controller within its site.

When the client's IP address cannot be found, the client may connect to any Domain Controller, including ones that are physically far away from it.
This can result in communication over slow WAN links, which has a direct impact on the client login process.

If you suspect that you have missing subnets in your Active Directory environment, you can look for event ID 5807 (Source: NETLOGON) on your Domain Controllers.
The event is created when there are connections from clients whose IP addresses don't map to any of the existing AD sites.
Those clients, along with their names and IP addresses, are listed by default in C:\Windows\debug\netlogon.log.
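For a quick spot check on a single Domain Controller, something like the following works (a minimal sketch; NETLOGON is the event provider and the log path is the default mentioned above):

Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'NETLOGON'; Id = 5807 } -MaxEvents 5 -ErrorAction SilentlyContinue
Get-Content 'C:\Windows\debug\netlogon.log' -Tail 20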

        You can use the following PowerShell script to create a report of all clients which are not mapped to any AD sites, based on the Netlogon.log files from all of the Domain Controllers within the domain.

        https://gist.github.com/OmerMicrosoft/1490e3e32e0d935c7be337bd6f5e285d.js

        The script output should look similar to this:

Bottom Line: The association of subnets to Active Directory sites has a significant impact on client machines' performance. Missing this association may lead to poor performance and unexpected login times.

Do It Right: Work together with your IT network team to make sure any new scope is covered and has a corresponding subnet associated with an Active Directory site.
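Adding a missing subnet and associating it with the right site can also be scripted (a minimal sketch using the ActiveDirectory module; the subnet and site names are examples):

New-ADReplicationSubnet -Name '10.20.0.0/16' -Site 'Branch-Site'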

        So… this was the last part of the ‘Most Common Mistakes in Active Directory and Domain Services’ series.
        Hope you enjoyed reading these blog posts and learned a thing or two.

        Time zone issues when copying SCOM alerts


        Background

When trying to copy-paste (ctrl+c, ctrl+v) alerts from the SCOM console to an Excel worksheet or just a text file, we noticed that the Created field values were different from the ones displayed in the console. There was a two-hour difference.

        1

        2

        As it turns out, the server was configured in a GMT+2 time zone, and the values got pasted in UTC. Hence the two-hour difference.

        Solution

        On each of the servers/workstations with SCOM console installed where you want to fix this, simply create the following registry key and value:

Key: HKEY_CURRENT_USER\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Console\ViewCopySettings

        Value: InLocalTime (DWord)

        Data: 1

        (Where 1 means that you want to have the values in your local time, and 0 means the default behaviour of UTC)
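If you need to roll this out to several consoles, the same key and value can be created with PowerShell (a minimal sketch; it must run in the context of each console user, since the key lives under HKCU):

$key = 'HKCU:\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Console\ViewCopySettings'
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name 'InLocalTime' -PropertyType DWord -Value 1 -Force | Out-Null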

        3

         

        Conclusion

With some digging done by me and my colleagues using Procmon, we were able to find out that the copy mechanism tries to read a registry key and value that do not exist.

        So.. “When in doubt, run process monitor” – Mark Russinovich.

         

        Hope this helps,

        Oren Salzberg.
