Secure Infrastructure Blog

Configuring SQL Reporting Services for Remote or different SCOM DW Database


This article describes how to configure SQL Server Reporting Services (SSRS) to generate reports from a remote SCOM Data Warehouse database (DW DB). It is also helpful when reconfiguring Reporting Services after a change to the SCOM DW DB name, port, or server name.

In a simple, single-server test installation of SCOM there is no need to configure these settings, since all the databases and Reporting Services reside on the same server. However, in production environments, where Reporting Services and the Data Warehouse database are hosted separately, Reporting Services must be configured to use SCOM's Data Warehouse database. This database may reside on a remote SQL failover cluster or be part of an AlwaysOn Availability Group.

The configuration below was performed in an environment where Reporting Services is installed on one of the SCOM management servers and the Data Warehouse DB resides on a remote SQL AlwaysOn Availability Group.

Steps:

Access the SQL Server Reporting Services web page and click Details View.

Click the Data Warehouse Main data source and select 'Manage' from the drop-down menu.

Under Data Source Type, select "Microsoft SQL Server".

Under Connection String, provide the connection details in the following format:

data source=<DBINSTANCE or Listener Details>;initial catalog=<DW DB Name>;Integrated Security=SSPI

e.g.:

data source=SCOMListener01,14533;initial catalog=OperationsManagerDW;Integrated Security=SSPI

(where SCOMListener01 is the SQL AlwaysOn Listener listening on port 14533 and OperationsManagerDW is the Data Warehouse Database part of the Availability Group)

Select "Windows Integrated Security" under the "Connect Using" options.

Click "Test Connection" to make sure that the connection succeeds.

Click on Apply.

 

This confirms that Reporting Services is correctly connected to the DW database.

Also, when the SCOM DW database or server is changed, or the listening ports are changed for security or maintenance reasons, the same steps can be performed by providing the new details in the connection string.


Control code execution between Load and Webtests

Requirement
As a performance tester, I would like to execute a specific plugin or block of code during a web test and skip that same code when I execute the load test that contains the web test, and vice versa.
This needs to be done without making any configuration changes when switching between web test and load test runs.
The sample code snippets below accomplish this.
********Execute specific block of code during WebTest and loadTest*********
public class MySampleWebTest : WebTest
{
    public MySampleWebTest()
    {
        this.PreAuthenticate = false;
        this.Proxy = "default";
    }

    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        if (this.Context.ContainsKey("$LoadTestUserContext"))
        {
            // This branch executes automatically when the web test runs inside a load test
            //this.Context.Add("Debug", "InLoadTestMode");
        }
        else
        {
            // This branch executes automatically when the test runs as a standalone web test
            //this.Context.Add("Debug", "InWebTestMode");
        }

        // Emit the web test requests here
        yield break;
    }
}
******** Execute a specific plugin during WebTest and LoadTest *********
// Execute a plugin that reads test data from a CSV file during a load test, and use a hard-coded value during a web test run
public class MySampleWebTest : WebTest
{
    // Plugin instance, to be wired up only during a load test, not during a web test
    private SetUniqueLoginName testPlugin0 = new SetUniqueLoginName();

    public MySampleWebTest()
    {
        this.PreAuthenticate = true;
        this.Proxy = "default";
        this.StopOnError = true;

        if (this.Context.ContainsKey("$LoadTestUserContext"))
        {
            // Executed during a load test: the plugin reads data from the CSV file
            this.PreWebTest += new EventHandler<PreWebTestEventArgs>(this.testPlugin0.PreWebTest);
        }
        else
        {
            // Executed during a web test: use a hard-coded value
            this.Context.Add("UniqueLoginName", "smoketestuser");
        }
    }
}

Read data from multiple data sources using Visual studio WebTest


Requirement

As a performance tester, I should be able to read data from any number of data sources, depending on my requirement (for example, different environments: Test, UAT, Prod).
We can accomplish this with the code snippet below.
[DataSource("TestDataSource", "System.Data.SqlClient", Constants.TestDataConnectionString,
    Microsoft.VisualStudio.TestTools.WebTesting.DataBindingAccessMethod.Sequential,
    Microsoft.VisualStudio.TestTools.WebTesting.DataBindingSelectColumns.SelectAllColumns, "Users")]
[DataBinding("TestDataSource", "Users", "userid", "TestDataSource.Users.userid")]
[DataSource("UATDataSource", "System.Data.SqlClient", Constants.UatDataConnectionString, // assumes a separate UAT connection string constant
    Microsoft.VisualStudio.TestTools.WebTesting.DataBindingAccessMethod.Sequential,
    Microsoft.VisualStudio.TestTools.WebTesting.DataBindingSelectColumns.SelectAllColumns, "Users")]
[DataBinding("UATDataSource", "Users", "userid", "UATDataSource.Users.userid")]
public class CreateUser : WebTest
{
    public CreateUser()
    {
        this.PreAuthenticate = true;
        this.Proxy = "default";
    }

    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        // 'Env' is expected to be supplied as a context/test parameter (e.g. Test, UAT, Prod)
        string env = this.Context.ContainsKey("Env") ? this.Context["Env"].ToString() : "Test";

        string userId;
        if (env == "Test")
        {
            userId = this.Context["TestDataSource.Users.userid"].ToString();
        }
        else
        {
            userId = this.Context["UATDataSource.Users.userid"].ToString();
        }

        // Use 'userId' when building the web test requests
        yield break;
    }
}

Import Database schema in Azure SQL DB from .SQL files programmatically with SQLCMD


Introduction – This blog post illustrates how you can import your database schema and tables into an empty Azure SQL DB (PaaS) programmatically. Currently, Azure SQL DB supports import from a BACPAC file through PowerShell and the portal GUI, but not from .SQL files.

Assumptions – Here we assume that you already have .SQL files generated from your on-premises database, ready to be applied to the Azure SQL DB.

Problem statement – I had a requirement to import schema and tables into an empty Azure SQL DB from .SQL files. Azure only provides BACPAC import out of the box (via PowerShell, the portal, and SQL Server Management Studio), but the requirement here was to do it programmatically every time the ARM deployment script creates a new Azure SQL DB.

 

Resolution/Workaround – Follow the steps below.

1. Install SQLCMD on the VM/desktop from which you are running the script or deployment. SQLCMD is used to deploy the SQL files into the Azure SQL DB. The ODBC driver is a prerequisite for installing SQLCMD.

ODBC driver – http://www.microsoft.com/en-in/download/details.aspx?id=36434

SQLCMD – http://www.microsoft.com/en-us/download/details.aspx?id=36433

2. Save all the SQL files into a folder on the local VM.

3. Get the public IP of your local VM/desktop using the code below.

$IP = Invoke-WebRequest checkip.dyndns.com
$IP1 = $IP.Content.Trim()
$IP2 = $IP1.Replace("<html><head><title>Current IP Check</title></head><body>Current IP Address: ","")
$FinalIP = $IP2.Replace("</body></html>","")

4. Create a new firewall rule on the SQL server to allow connections from that IP.

New-AzureRmSqlServerFirewallRule -FirewallRuleName $rulename -StartIpAddress $FinalIP -EndIpAddress $FinalIP -ServerName $SQLservername -ResourceGroupName $resourcegroupname

5. Save the SQL server's full name and the sqlcmd path into variables.

$Fullservername = $SQLservername + '.database.windows.net'
$sqlcmd = "C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\110\Tools\Binn\SQLCMD.EXE"

6. Save the SQL server credentials and the Azure SQL DB name in variables.

$username = "SQLusername"

$password = "SQLpassword"

$dbname = "databasename"

7. Run the command below for each SQL file if you want to import them sequentially.

& $sqlcmd -U $username -P $password -S $Fullservername -d $dbname -I -i "C:\SQL\file1.sql"

& $sqlcmd -U $username -P $password -S $Fullservername -d $dbname -I -i "C:\SQL\file3.sql"

& $sqlcmd -U $username -P $password -S $Fullservername -d $dbname -I -i "C:\SQL\filen.sql"
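
Alternatively, rather than listing each file by hand, you can loop over every .SQL file in the folder. A minimal sketch, assuming the same variables as above and that the files can run in alphabetical order:

# Run every .sql file in the folder against the Azure SQL DB (order follows the sorted file names)
Get-ChildItem -Path "C:\SQL" -Filter *.sql | Sort-Object Name | ForEach-Object {
    & $sqlcmd -U $username -P $password -S $Fullservername -d $dbname -I -i $_.FullName
}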

 

NOTE – You can combine all of the above code and use it in deployment scripts, along with functions and error logging.

 

Thanks folks, Hope it is useful.

happy blogging

Tool to calculate 90th percentile for common transactions on a visual studio Load test run


Requirement
As a performance tester, I need to calculate the 90th percentile for common transactions from the transaction summary of load test runs.

Problem Statement
In the Visual Studio test transaction summary report, there are scenarios where the same transaction is reported multiple times with different response times (due to load and application behavior).
Example: a Login transaction used as the first transaction in every scenario can report different response times in different scenarios.
In such cases we usually calculate the 90th percentile or average of the common Login transactions and report that to developers or the customer.
This ensures consistency and provides accurate results.

Problem Solution:
Below is a generic utility, a SQL stored procedure, that automatically calculates the 90th percentile for all common transactions based on their response times (Avg/90th percentile).

Steps to Execute the Stored Procedure
1. Connect to the LoadTest results database.
2. Execute the T-SQL below to create the stored procedure.

Create Procedure Calc90thPercentileForCommonTransactions @loadtestid int
As
Begin

---- Load the Visual Studio test results into a temp table
Select * into #TempTable
from
(select distinct LTC.TestCaseName,LTTSD.LoadTestRunId, WLTT.TransactionName, LTTSD.Percentile90,
PERCENTILE_CONT ( 0.9 ) WITHIN GROUP ( ORDER BY LTTSD.Percentile90 )
OVER ( partition by WLTT.TransactionName ) as 'CalculatedPercentile90th'
from LoadTestTransactionSummaryData LTTSD
Join WebLoadTestTransaction WLTT on LTTSD.TransactionId = WLTT.TransactionId , LoadTestCase LTC
where LTTSD.LoadTestRunId = @loadtestid
and LTTSD.LoadTestRunId = WLTT.LoadTestRunId
and LTC.TestCaseId = WLTT.TestCaseId and LTTSD.TransactionId = WLTT.TransactionId
and LTC.LoadTestRunId = @loadtestid) as result;

---- Calculate the 90th percentile for common transactions
WITH DUP
AS (
SELECT TransactionName
FROM #TempTable
GROUP BY TransactionName
HAVING COUNT(1) > 1)

SELECT t.TestCaseName,t.TransactionName, t.Percentile90 as '90thPercentileFromTestResult',t.CalculatedPercentile90th as '90thPercentileForCommonTransaction',
CASE
WHEN DUP.TransactionName IS NOT NULL
THEN 'Yes'
ELSE 'No'
END AS IsCommonTransaction,
CASE
WHEN DUP.TransactionName IS NOT NULL
THEN CalculatedPercentile90th
ELSE Percentile90
END AS Consolidated90thPercentileToReport
FROM #TempTable T
LEFT JOIN DUP ON T.TransactionName = DUP.TransactionName;

End

3. Execute the stored procedure:

Exec Calc90thPercentileForCommonTransactions @loadtestid
where @loadtestid is the run ID of the test.
Example:  Exec Calc90thPercentileForCommonTransactions 1555

4. Below is the result of the stored procedure:
TestCaseName: name of the test case
TransactionName: name of the transaction
90thPercentileFromTestResult: 90th percentile response time from the transaction summary of the test (additional info, for debugging purposes)
90thPercentileForCommonTransaction: calculated 90th percentile of the response times of all common transactions (additional info, for debugging purposes)
IsCommonTransaction: 'Yes' if it is a common transaction (present more than once), 'No' if it is unique (additional info, for debugging purposes)
Consolidated90thPercentileToReport: final response time to report to developers or the customer, which uses the calculated 90th percentile value for all common transactions


Efficient way to retrieve work item details from a Linked Query using TFS API


Requirement:
Retrieve work item details from a linked query using the TFS API.

Problem Statement:
We can run a linked query using the TFS Work Item Query Language (WIQL), but the challenge with a linked query is that it returns only the SourceId and TargetId, along with the LinkTypeId. It does not return other important fields such as title, state, or description.

In order to get those fields we would have to:
a. Get all Source and Target IDs (linked work item IDs) from the linked query into a data structure.
b. Iterate over each ID and make an API call to get the detailed information for each item.
This involves too many calls to the TFS server.

For example, to query bugs that have linked user stories, we would need to:
a. Execute a linked query to get all Source and Target IDs (linked work item IDs).
b. Execute a flat query to get the bug details (title, state, priority, etc.) for each Target ID.
If the query returns 10,000 bugs, we would have to make 10,000 calls to TFS to get the bug details for each Target ID.

Solution:
We can write one single flat query, passing all the Target IDs, and get the work item details in one go.

Below is a code snippet that retrieves work item details using a linked query (i.e., details of bugs that have linked user stories):

// Linked query that returns the Source (bug ID) and the respective TargetId (linked user story work item ID) for the project "My Project"
Query query = new Query(_store, string.Format(
    "SELECT [System.Id] FROM WorkItemLinks WHERE ([Source].[System.TeamProject] = 'My Project' AND [Source].[System.WorkItemType] = 'Bug') And ([System.Links.LinkType] <> '') And ([Target].[System.WorkItemType] = 'User Story') ORDER BY [System.Id] mode(MustContain)"));

 

// Run the linked query to get the link info (SourceId/TargetId pairs)
WorkItemLinkInfo[] wlinks = query.RunLinkQuery();

// Get the list of work item IDs for which we want detailed information such as bug title, state and description
int[] ids = (from WorkItemLinkInfo info in wlinks
             select info.TargetId).ToArray();

// Flat query that defines which fields to return for the detailed lookup (adjust the fields as needed)
string mydetails = "SELECT [System.Id], [System.Title], [System.State] FROM WorkItems";

// Use the flat query together with the ID list to get all details in a single call
var DetailedInfoQuery = new Query(_store, mydetails, ids);
WorkItemCollection workitems = DetailedInfoQuery.RunQuery();

foreach (WorkItem wi in workitems)
{
    WorkItemType worktype = wi.Type;
    string workItemType = worktype.Name;
    int id = wi.Id;
    string bugTitle = wi.Title;
}

Arabic Language Pack for SCSM Self Service Portal


Hi All,

One of the challenges we face in our region is providing users with a Self Service Portal in their native language. Since Arabic is not one of the built-in languages shipped with the Service Manager Self Service Portal, we were looking into different options, such as a 3rd-party portal, but not anymore 🙂

We spent some time looking into the files that the SSP uses and located the language resource files, which can be used not only for Arabic but for any other language that is not available in the SCSM Self Service Portal.

In this post we will show you two things. First, how to filter the languages and keep only the required ones instead of having all languages available in the portal. Second, how to configure an Arabic language pack for the System Center Service Manager Self Service Portal, as shown in the screenshot below.

First: Show preferred languages (Remove unnecessary ones)

When you click on the language settings (top-right corner) in the Self Service Portal, by default 10 or more languages appear, including Chinese, French, Japanese, etc. To make it easier for users, it is preferable to show only the languages they could actually use. Follow the procedure below to make that happen:

0- <<BACKUP BACKUP BACKUP>>

1- Browse to (C:\inetpub\wwwroot\SelfServicePortal\Views\Shared) folder

2- Edit the (_Layout.cshtml) file using Notepad or any other tool (run as administrator). Don't forget to back up the file and save it somewhere else before editing it.

3- Search the file for "<ul class=lang_menu ..."

4- Remove the lines for the unnecessary languages and keep the ones you want your users to see. Remember to remove the whole line (from <li ------- to -------- </li>).

I removed all languages except English, French and Dutch

5- Refresh your portal ...

Completed .... Now let's see how we can configure a new language pack 🙂

 


 

Second: Configure Arabic Language pack for SSP 

As mentioned before, this is not limited to Arabic; you can use it to configure any language you want, but in this example we will configure an Arabic language pack. Follow the procedure below.

1- Browse to (C:\inetpub\wwwroot\SelfServicePortal\Views\Shared) folder

2- Edit the (_Layout.cshtml) file using Notepad or any other tool (run as administrator). Don't forget to back up the file before editing it.

3- Add the following line inside the <ul class=lang_menu ... element:

<li value="ar-JO" tabindex="12">Arabic</li>

Note: "ar-JO" is the Arabic language code for Jordan. For more info about the language codes for different countries, see https://www.andiamo.co.uk/resources/iso-language-codes

 

4- Browse to folder (C:\inetpub\wwwroot\SelfServicePortal\App_GlobalResources)

5- Copy the file (SelfServicePortalResources.en.resx) to your local machine (where an Arabic keyboard is supported)

6- Rename the file to (SelfServicePortalResources.ar.resx)
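
If you prefer to script the copy and rename directly on the portal server, a minimal sketch (assuming the default installation path) could be:

# Sketch: copy the English resource file and create the Arabic one (default SSP install path assumed)
$resDir = 'C:\inetpub\wwwroot\SelfServicePortal\App_GlobalResources'
Copy-Item -Path (Join-Path $resDir 'SelfServicePortalResources.en.resx') -Destination (Join-Path $resDir 'SelfServicePortalResources.ar.resx')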

7- Edit the file using any tool (such as notepad++)

8- In the file you can find all the words used ... Translate them into Arabic ... or download this translated file SelfServicePortalResources.ar_

 

 

9- Upload the file to the folder (C:\inetpub\wwwroot\SelfServicePortal\App_GlobalResources)

 

10- Refresh your browser and select Arabic from the Language Settings menu.

 

NOTE: If you don't have any Service Offering with the (Arabic) language selected, you won't see any offerings. Create at least one Service Offering, set its language to Arabic, and then add some Request Offerings to it.

Hope this would be useful ... Thanks for reading

Mohamad Damati

Intune/EMS enrollements (ADFS scenario)


Many customers are facing problems with Intune enrollment on Android devices; the cause can be:

  1. A missing certificate: ensure that the whole certificate chain is installed on the ADFS proxy/servers (check it here: https://www.ssllabs.com/ssltest)
  2. Authentication doesn't work when enrolling in the Company Portal:
    • Check the TLS version on the ADFS proxy or your HLB
    • Check the cipher suites on your HLB
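
As a quick check of the TLS configuration on a Windows-based ADFS proxy, you could inspect the SCHANNEL registry keys. A minimal sketch (the keys may be absent, in which case the OS defaults apply):

# Sketch: check whether TLS 1.2 is explicitly enabled or disabled for incoming (server-side) connections
$path = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server'
Get-ItemProperty -Path $path -ErrorAction SilentlyContinue | Select-Object Enabled, DisabledByDefault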

Once these are addressed, the enrollment should work.

Don't forget that some devices are not compatible with Intune (as of now, 09.2018):

Resolving WSUS Performance Issues


Introduction

I have recently come across multiple customers having issues with a high IIS worker process causing their servers to flatline, so I wanted to take some time here to run you through the steps you can follow to remediate this issue.

 

 

So what is the Issue exactly?

Even though it may seem daunting trying to figure out what is causing the headaches, the issue and the solutions are quite simple.

It is important to understand that it is usually a combination of 2 things:

  1. Regular WSUS maintenance is not being done, in particular declining superseded updates
  2. Incorrect configuration on the SCCM server hosting the SUP role (we will get into that below)

Important Considerations

It is important to understand that even though the server appears to be flatlining on CPU, this is not a CPU issue. Adding more cores/processors will not resolve what you are experiencing; we need to delve deeper to resolve the underlying issue.

 

1. Regular WSUS Maintenance not being done.

This is a large part of the issue that customers are experiencing, so it is important that we understand what is meant by it.

(Have a read through This amazing Blog from Meghan Stewart | Support Escalation Engineer to help you setup your WSUS Maintenance. It has all the required information on When, Why and How to implement your WSUS Maintenance, as well as having a great PowerShell script to help..)

 

Superseded Updates – updates that have been replaced by a newer update and are no longer relevant. Yes, these updates may still report as "needed"; however, they have been replaced by a newer update and are just taking up space and CPU time.

Example:

Let us take the example of a small network of 1,000 clients, running multiple versions of server and client OS (Windows 7, Windows 8.1, Windows 10, Server 2012 R2, Server 2016), with one or more SUPs.

Each client will regularly scan against the SUP (Software Update Point) catalog to determine what updates are available, how compliant it is, and whether any updates are needed.

When clients scan against WSUS, they scan all updates that are not declined or obsolete. If, for instance, 25% of your updates are superseded, that is 25% wasted CPU time on the client machine as well as on the server it is scanning against.

So you need to ensure that you are regularly cleaning up the WSUS server as per the article above; a scripted example is sketched below.
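
For illustration, a minimal sketch that declines superseded updates through the WSUS API, run directly on the WSUS server (the cleanup guidance in the article above remains the recommended, fully supported route):

# Sketch: decline updates that are superseded and not yet declined
$wsus = Get-WsusServer
$wsus.GetUpdates() |
    Where-Object { $_.IsSuperseded -and -not $_.IsDeclined } |
    ForEach-Object { $_.Decline() }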

 

2. Incorrect Configuration on the SCCM Server hosting the SUP Role

An important step that is missed by a lot of customers is configuring the WSUS application pool (AppPool) correctly.

 

In a lot of cases the AppPool memory is left at the default 1.9 GB. This is not sufficient if you are managing a large number of clients, and it will need to be increased.

Note: This is reserved memory that you are allocating, so ensure that you have catered for it in your planning

Open your IIS Manager App - Expand Server name - Application Pools.

Right-Click on the WsusPool - Advanced Settings

The first thing you can do is change the Queue Length from 1000 to 2000 (environment depending; the queue length is the maximum number of HTTP.sys requests that will queue for the app pool before the system returns a 503 Service Unavailable error).

Secondly, the Private Memory Limit needs to be changed to a minimum of 4 GB instead of the 1.9 GB default.

Once completed, recycle the AppPool.
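
If you prefer to script these changes, a minimal sketch using the WebAdministration module (assuming the default WsusPool name; the private memory limit is expressed in KB) could be:

# Sketch: raise the queue length and private memory limit of the WSUS application pool, then recycle it
Import-Module WebAdministration
Set-ItemProperty -Path 'IIS:\AppPools\WsusPool' -Name queueLength -Value 2000
Set-ItemProperty -Path 'IIS:\AppPools\WsusPool' -Name recycling.periodicRestart.privateMemory -Value 4194304   # 4 GB expressed in KB
Restart-WebAppPool -Name 'WsusPool'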

My server may be flatlining so badly that I cannot open WSUS or run the WSUS cleanup, so what now?

The last step that you can take, in an extreme situation, is to temporarily kick the clients off the WSUS server so that you can complete the modifications to the WsusPool and perform the WSUS cleanup.

Temporarily kick the clients

We are going to be creating a new AppPool and changing the website bindings so that we can access the WSUS in order to perform the cleanup.

Note: During this step, your clients will not be able to connect to your WSUS instance.

Open your IIS Manager App - Expand Server name - Application Pools.

Right-Click on the Application Pools - Add Application Pool

Once you have created the AppPool, we need to switch the website over to the new pool first.

 

Now our Next Step is to change the Bindings and assign a different port number to the HTTP Connection for WSUS, so that the clients are unable to scan against it, thereby freeing up the memory for us.

Under IIS Manager App - Expand Server name - Sites - WSUS Administration

Right click - Edit Bindings

Now Assign a different Port Number (i.e. 1234 )

Once this is done, you will need to restart the Website

While still in IIS Manager App - Expand Server name - Sites - WSUS Administration - Restart Website

Now when you connect to WSUS, select the custom new port

That should now allow you to run the cleanup and re-index the WSUS database.

Once you have completed this, make sure to change the bindings and pool back to what they were before, so that the clients can start scanning again.

 

Conclusion

As long as the correct configuration is applied in the environment and the regular maintenance is in place, you should not have any further WSUS performance issues.

Creating symbolic links with PowerShell DSC


Background

In an Azure Windows VM you automatically get a temporary disk mapped to D: (on Linux it's mapped to /dev/sdb1). It is temporary because the storage is allocated from the local storage of the physical host, so if your VM is redeployed (due to host updates, host failures, resizing, etc.), the VM is recreated on a new host and assigned a new temporary drive there. The data on the temporary drive is not migrated, but the OS disk is of course preserved from the VHD in your storage account or managed disk.

The problem

In this specific scenario, the customer had a 3rd-party legacy application that reads and writes from two directories on the D:\ drive. The directory paths were hard-coded in the application, and the directories were a couple of gigabytes in size, so copying them to the temporary drive each time the VM was deployed would be time- and resource-consuming.

Choosing a solution

After thorough testing, of course, we decided to create two symbolic links from the D:\ drive to the real directories on the OS disk (where the directories were already present as part of the image). The symbolic links can be created with either the mklink command or the New-Item cmdlet in PowerShell 5.x.
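
For reference, a manual equivalent of what the DSC resource below automates (using the same paths that appear later in this post):

# Create a symbolic link at D:\INPUT_DIR pointing to the real directory on the OS disk
New-Item -ItemType SymbolicLink -Path 'D:\INPUT_DIR' -Target 'C:\PathTo\myLegacyApp\INPUT_DIR'
# or, with the classic tool:  cmd /c mklink /D D:\INPUT_DIR C:\PathTo\myLegacyApp\INPUT_DIR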

Of course there are other methods of overcoming this challenge, such as switching drive letters with a data disk and moving the page file to the other drive letter. But we decided that the symbolic-link approach would be faster and wouldn't require an additional data disk and, with it, additional cost.

The implementation

Since the creation of the symbolic links needs to happen every time the VM is created (and redeployed), we ended up adding a PowerShell DSC extension to the VM in the ARM template. And since there were no built-in DSC resources in the OS, nor in the DSC Resource Kit in the PowerShell Gallery, that configure symbolic links, we wrote a (quick-and-dirty) PowerShell module and resource to create them.

Creating the module structure and the psm1 and schema.mof files is pretty easy when you're using the cmdlets from the xDSCResourceDesigner module:

Install-Module -Name xDSCResourceDesigner

$ModuleName = 'myModule'
$ResourceName = 'SymbolicLink'
$ModuleFolder = "C:\Program Files\WindowsPowerShell\Modules\$ModuleName"

New-xDscResource -Name $ResourceName -Property @(
    New-xDscResourceProperty -Name Path -Type String -Attribute Key
    New-xDscResourceProperty -Name TargetPath -Type String -Attribute Write
) -Path $ModuleFolder

cd $ModuleFolder
New-ModuleManifest -Path ".\$ModuleName.psd1"

The contents of the .psm1 resource file C:\Program Files\WindowsPowerShell\Modules\myModule\DSCResources\SymbolicLink\SymbolicLink.psm1 should contain the three *-TargetResource functions (Get, Set and Test):

function Get-TargetResource {
    [CmdletBinding()]
    [OutputType([System.Collections.Hashtable])]
    param (
        [parameter(Mandatory = $true)]
        [System.String]
        $Path
    )

    Write-Verbose "Getting SymbolicLink for $Path"

    $Root = Split-Path -Path $Path -Parent
    $LinkName = Split-Path -Path $Path -Leaf
    $TargetPath = $null

    $link = Get-Item -Path (Join-Path -Path $Root -ChildPath $LinkName) -ErrorAction SilentlyContinue
    if($link -and  $link.LinkType -eq 'SymbolicLink') { $TargetPath = $link.Target[0] }
    
    @{Path = $Path; TargetPath = $TargetPath}
}


function Set-TargetResource {
    [CmdletBinding()]
    param
    (
        [parameter(Mandatory = $true)]
        [System.String]
        $Path,

        [System.String]
        $TargetPath
    )

    Write-Verbose "Creating a SymbolicLink from $Path to $TargetPath"

    $Root = Split-Path -Path $Path -Parent
    $LinkName = Split-Path -Path $Path -Leaf
    Set-Location -Path $Root
    New-Item -ItemType SymbolicLink -Name $LinkName -Target $TargetPath | Out-Null
}


function Test-TargetResource {
    [CmdletBinding()]
    [OutputType([System.Boolean])]
    param (
        [parameter(Mandatory = $true)]
        [System.String]
        $Path,

        [System.String]
        $TargetPath
    )

    Write-Verbose "Testing SymbolicLink for $Path"

    $current = Get-TargetResource -Path $Path
    return (($current.Path -eq $Path) -and ($current.TargetPath -eq $TargetPath))
}

Export-ModuleMember -Function *-TargetResource

And in the configuration document, remember to import the DSC resources from the module:

configuration Main {

    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Import-DscResource -ModuleName myModule

    node localhost {

        SymbolicLink 'INPUT_DIR' {
            Path       = 'D:\INPUT_DIR'
            TargetPath = 'C:\PathTo\myLegacyApp\INPUT_DIR'
        }
        
        SymbolicLink 'OUTPUT_DIR' {
            Path       = 'D:\OUTPUT_DIR'
            TargetPath = 'C:\PathTo\myLegacyApp\OUTPUT_DIR'
        }
    }
}

Now, to create the zip file containing the configuration document and all required modules:

# Create the zip package
Publish-AzureRmVMDscConfiguration .\myDSC.ps1 -OutputArchivePath .\myDSC.zip

And upload it to the blob container (used in the ARM template):

# Variables
$storageAccountName = 'statweb'
$resourceGroupName = 'rg-statweb'

# Login to Azure
Login-AzureRmAccount

# Get the Storage Account authentication key
$keys = Get-AzureRmStorageAccountKey -ResourceGroupName $resourceGroupName -Name $storageAccountName

# Create a Storage Authentication Context
$context = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $keys.Item(0).value

# Upload the file to the blob container
Set-AzureStorageBlobContent -Context $context -Container dsc -File .\myDSC.zip -Blob myDSC.zip

Conclusion

There are usually several methods to accomplish a single task, and you should take all aspects and constraints into consideration, because one can be more effective than another.

And if you don't already feel comfortable scripting with PowerShell, you should hurry and Start-Learning. There are a ton of excellent resources out there, but if you prefer a face-to-face in-class learning experience, and have a Premier contract, contact your Technical Account Manager (TAM) for more information on our PowerShell Workshop series.



HTH,

Martin.

Unable to start SCOM ACS collector service – Event ID 4661


Problem Description and Symptoms:

The Operations Manager Audit Collection Service (ACS) collector does not start, and the following error and event ID are logged:

Event ID 4661 Error :
AdtServer encountered the following problem during startup:
Task: Load Certificate
Failure: Certificate for SSL based authentication could not be loaded
Error:
0x80092004
Error Message:
Cannot find object or property.


Solution:

1. Ensure that the certificate exists on the management server acting as the ACS collector and that it is valid (if not, issue one for the collector and import it into the Local Computer –> Personal –> Certificates store).
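
To quickly list the computer certificates available on the collector (and check their expiry dates), a small sketch:

# Sketch: list certificates in the computer's Personal store on the ACS collector
Get-ChildItem Cert:\LocalMachine\My | Select-Object Subject, NotAfter, Thumbprint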


2. Open CMD as Administrator

3. Go to the following path: %systemroot%\system32\Security\AdtServer

4. Execute adtserver.exe -c and choose the certificate to be used (this command binds the certificate to the service)


5. Start the Audit Collection Service by executing: net start adtserver


6. Check the collector health


In which scenarios are certificates needed, and why?

ACS requires mutual authentication between the forwarder(s) and collector(s) before any information is exchanged, so that the authentication process between the two is secured and encrypted. When the forwarder and the collector reside in the same Active Directory domain, or in domains with an established trust relationship, they use the Kerberos authentication mechanisms provided by Active Directory.

But when the forwarder and collector are in different domains with no trust relationship, other mechanisms must be used to satisfy the mutual authentication requirement in a secure way. This is where certificates come in: they ensure that authentication between the two parties (forwarder and collector) can take place, so they can start exchanging information.


Monitoring Application Deployment Failures in Configuration Manager


Background

One of the key features of System Center Configuration Manager is Application deployment. Most of our enterprise customers have invested heavily in their administrative time and skills in managing the deployment of applications to thousands of machines within their environment.

 

The Scenario

With numerous applications deployed to collections, my enterprise customer found it difficult to track application failures across their environment. The issue they encountered was that most reports they tried only allowed reporting based on the application deployment creation date. This is a limitation when applications were created and deployed months ago but remain active in a large environment.

 

The Solution

After much deliberation, we concluded that the customer needed the ability to report on machines that failed their application deployment in the last week (or timeline they specified) regardless of when the application deployment was initially created.

With their objectives in mind, we put together the below solution for use in their environment, at their discretion of course:
•    Use a set of application deployment reports to support the administrators in monitoring failed application deployments.
•    Lastly, provide a means to remove direct membership of machines from the collections targeted by application deployments on a weekly basis.

 

The Implementation

 

1.    App Portal - Application deployment failure report

•    Administrators initiate their monitoring\troubleshooting of failed application deployments for their environment by reviewing this report.
•    By default, the report is configured to provide a list of machines that last attempted to install an application in the past 7 days. The timeline can be modified with the report parameters.
•    This report provides an overall view of application failures, most importantly the report provides:
1-    information on when a machine last attempted the installation of an application
2-    error code of the failed deployment
•    Top 5 Application Failures: Administrators can easily identify the top 5 applications that failed deployment and focus their remediation efforts.
•    Count of error codes: Administrators can easily identify the top 5 errors across applications.
•    List of Failed Applications: Administrators can identify the machine name, collection name, error code and most importantly the last time a machine attempted to install an application in the Enforcement Time column.


 

2.    App portal – Application deployment status

•    This is a widely available report that provides overall information on the application deployment status. The limitation with this report, however, is that it only provides information for application deployments created within the date range specified in the input parameters.

 

3.    Remove machine direct membership from collections (weekly\monthly):

The customer required a method to remove machines with direct membership from multiple collections that are targeted with applications, and decided to incorporate a PowerShell script to achieve this on a weekly/monthly basis.

The best approach for the customer was to populate an input text file in which they manage the names of the Application Portal collections they choose to target. The PowerShell script is then scheduled with Task Scheduler.

 

•    PowerShell Script to remove direct membership from collection:

#############################################################################
#
# Purpose : This script removes collection direct membership from a list
#
#############################################################################

# Load Configuration Manager PowerShell Module
Import-module ($Env:SMS_ADMIN_UI_PATH.Substring(0,$Env:SMS_ADMIN_UI_PATH.Length-5) + '\ConfigurationManager.psd1')

# Get SiteCode
$SiteCode = Get-PSDrive -PSProvider CMSITE
Set-location $SiteCode":"

# Define Input File
$script_parent = Split-Path -Parent $MyInvocation.MyCommand.Definition
$inputfile = $script_parent + "\InputFile.txt"
$list = get-content $inputfile

# Define Logfile Parameters
# Logfile time stamp
$LogTime = Get-Date -Format "dd-MM-yyyy_hh-mm-ss"
# Logfile name
$Logfile = $script_parent + "\CMRemDirectMem_"+$LogTime+".log"
Function LogWrite
{
Param ([string]$logstring)
Add-content $Logfile -value $logstring
}

# Remove Collection Direct Membership
ForEach ($CollectionName In $list)
{
Remove-CMDeviceCollectionDirectMembershipRule -CollectionName $CollectionName -ResourceName * -Force -ErrorAction SilentlyContinue
LogWrite "$LogTime | $CollectionName"
Echo "$LogTime | $CollectionName"
}

•    The PowerShell script uses an InputFile.txt to specify the application deployment collections (a sample layout is sketched below).
•    The InputFile.txt needs to be created in the same folder as the PowerShell script, as the script references its parent folder.
•    An output logfile is created as the PowerShell script executes.
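
For illustration, a minimal sketch that creates a hypothetical InputFile.txt next to the script (the collection names are placeholders only):

# Sketch: generate InputFile.txt with one collection name per line (example names)
@'
App Portal - Collection A
App Portal - Collection B
'@ | Set-Content -Path "$PSScriptRoot\InputFile.txt"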


•    Example of the Task Scheduler configuration created for the Basic Task Action tab:



Program/script:
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe

Add Arguments (optional):
"D:\SOURCES\PowerShell\Collections_RemoveDirectMembership\Collections_RemoveDirectMembership.ps1"

 

4.    App portal – List of collections modified

Once the PowerShell script has executed, the below report is scheduled to run. This report records the list of collections that were modified by the PowerShell script. I have included the Date input parameters with a default offset of 1 day.

 

5.    App portal – List of machines in collection with direct membership

This is the second report, also scheduled to run after the PowerShell script. It displays the list of machines that still have direct membership; if any appear, administrators can troubleshoot further.

 

The Conclusion

Monitoring application deployments is a time-consuming task for most organizations; creating a process that is easy to follow benefits administrators by allowing them to monitor and maintain large environments. I hope the information shared in the above scenario is helpful.

The reports can be downloaded below.

RDL

Note, I have split the reports into SSRS 2016 or later and SSRS 2014 or earlier. The SSRS 2014 reports do not contain any charts displayed in the screenshots above.

 

Disclaimer – All scripts and reports are provided ‘AS IS’
This sample script is not supported under any Microsoft standard support program or service. This sample script is provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of this sample script and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of this script be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use this sample script or documentation, even if Microsoft has been advised of the possibility of such damages.

Automating the clean-up of Configuration Manager Maintenance Collections


Background

Most organizations using System Center Configuration Manager implement collections configured for maintenance tasks. Administrators generally monitor these collections on a weekly\monthly schedule and in some instances are required to delete the machines within these collections, for example: collections containing Obsolete Clients.

Scenario

My customer was looking for a method to streamline their weekly\monthly maintenance tasks where they manually delete machines from multiple collections.

Solution

Using System Center Configuration Manager PowerShell cmdlets a script was created to:
•    Read an input text file that is populated with specific collection names.

•    Automatically delete all machines from the collections specified in the input text file.
Note: use this script with extreme caution, as machines are deleted from the SCCM database. Always ensure the correct collection names are populated in the input text file, and test the script in your lab environment to ensure it works as desired.

•    Invoke a collection membership update once machines were deleted.

•    Output a logfile that records the collection names and the machines that were deleted, respectively.

 

PowerShell Script:

#           This script performs the following actions:
#            - CAUTION, deletes machines from SCCM in specified collections
#            - Updates the collection membership
#            - Creates a Logfile with Date, Time, Collection Name and Machine Names
#
#           This script does NOT:
#            - Remove the collection rules\query

# Load Configuration Manager PowerShell Module
Import-module ($Env:SMS_ADMIN_UI_PATH.Substring(0,$Env:SMS_ADMIN_UI_PATH.Length-5) + '\ConfigurationManager.psd1')

# Get SiteCode
$SiteCode = Get-PSDrive -PSProvider CMSITE
Set-location $SiteCode":"

# Define Input File
$script_parent = Split-Path -Parent $MyInvocation.MyCommand.Definition
$inputfile = $script_parent + "\InputFile.txt"
$list = get-content $inputfile

# Define Logfile Parameters
# Logfile time stamp
$LogTime = Get-Date -Format "dd-MM-yyyy_hh-mm-ss"
# Logfile name
$Logfile = $script_parent + "\CMRemoveDevice_"+$LogTime+".log"
Function LogWrite
{
Param ([string]$logstring)
Add-content $Logfile -value $logstring
}

# Remove the machines in each collection from SCCM
ForEach ($CollectionName In $list)
{
# Capture the device names before deleting them, so they can be logged
$CMDevice = Get-CMDevice -CollectionName $CollectionName | Format-List -Property Name | Out-String
Get-CMDevice -CollectionName $CollectionName | Remove-CMDevice -Force -ErrorAction SilentlyContinue
Get-CMDeviceCollection -Name $CollectionName | Invoke-CMCollectionUpdate -ErrorAction SilentlyContinue
LogWrite "$LogTime | $CollectionName | $CMDevice"
}

 

Expected Log File Output:

 

Conclusion

System Center Configuration Manager is a product that has a broad offering of features. Administrators at times can be overwhelmed by operational activities and overlook monitoring of maintenance collections or performing maintenance tasks. Automating of tasks to run on a regular schedule can provide consistency in maintaining the health of your environment. The above script can be configured to run using Task Scheduler.

 

Hope the shared script is helpful in maintaining your environment.

 

Disclaimer – All scripts and reports are provided ‘AS IS’
This sample script is not supported under any Microsoft standard support program or service. This sample script is provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of this sample script and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of this script be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use this sample script or documentation, even if Microsoft has been advised of the possibility of such damages.

Most Common Mistakes in Active Directory and Domain Services – Part 1


As a Premier Field Engineer (PFE) at Microsoft, I encounter new challenges on a daily basis. Every customer has its own uniqueness, and each environment is different from the other.

And yet, there are several things I encounter over and over again: common mistakes that IT administrators make because of a lack of knowledge, or because of product changes they are not aware of.

This blog post is the first part of a series which will cover several of those mistakes. So… Let’s get started!

Mistake #1: Configuring Multiple Password Policies for Domain Users Using Group Policy

When reviewing Group Policy settings, I often find Group Policies Objects (GPOs) that contain ‘Password Policy’ settings.

For example, when looking into a “Servers Policy” GPO, I can see that it has Password Policy settings defined, including Maximum password age, Minimum password length and so on.

When I ask the customer about it, he tells me that this policy was built to set a different password policy for some admins accounts or any other group of users.

As you already know (or might have guessed), this is NOT the correct way to configure different Password Policies in your environment. Here’s why:

  • Password Policy settings in GPO affect computers, not users.
  • When you change your Domain User password, the password change takes place on the Domain Controllers.
  • Therefore, the Password Policy that takes effect is the one applied on your Domain Controllers, usually by the ‘Default Domain Policy” GPO.
  • More accurately, the Domain Controller that holds the PDC Emulator FSMO role is the one responsible for applying the Password Policy at the domain level.
  • In terms of Group Policy, there can be only one password policy for domain users.

Bottom Line: Configuring a GPO with password policy settings and linking it to an Organizational Unit (OU) won't change the password policy for the users within that OU.

Do It Right: Use Fine-Grained Password Policies (FGPP), as sketched below.
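
For illustration, a minimal FGPP sketch using the ActiveDirectory module (the policy name, settings and target group are placeholders; adjust them to your requirements):

# Sketch: create a fine-grained password policy and apply it to a group
New-ADFineGrainedPasswordPolicy -Name 'AdminsPSO' -Precedence 10 `
    -MinPasswordLength 14 -MaxPasswordAge '42.00:00:00' -ComplexityEnabled $true `
    -LockoutThreshold 10 -LockoutDuration '00:30:00' -LockoutObservationWindow '00:30:00'
Add-ADFineGrainedPasswordPolicySubject -Identity 'AdminsPSO' -Subjects 'Domain Admins'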

Mistake #2: Removing “Authenticated Users” from the Group Policy Object Security Filtering

In June 2016, Microsoft released a security update that changes the security context with which user group policies are retrieved.

Before that update, user group policies were retrieved by using the user’s security context. After installing the update, user group policies are retrieved by using the computer's security context.

Therefore, you should always make sure that any Group Policy in your environment could be retrieved by the relevant computer accounts.

Because a lot of people are not aware of this change, I often find Group Policies with missing permissions that are not being applied at all.

When changing the Group Policy Security Filtering scope from "Authenticated Users" to any other group, "Authenticated Users" (which includes computer accounts as well) is removed from the Group Policy Delegation tab. As a result, computer accounts don't have the "Read" permission necessary to access and retrieve the group policy.

In recent versions of Group Policy Management, a warning message appears when removing the default “Authenticated Users” from the “Security Filtering” tab:

That is why you must validate that every Group Policy has the "Authenticated Users" or "Domain Computers" group with "Read" permission. Make sure that you grant the "Read" permission only, without selecting the "Apply group policy" permission (otherwise every user or computer will apply this Group Policy).

A PowerShell function along the lines of the following sketch can help you identify GPOs with missing permissions (missing both 'Authenticated Users' and 'Domain Computers'):
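
(The sketch below assumes the GroupPolicy RSAT module; adjust the group names if your domain uses localized names.)

function Get-GPOMissingPermission {
    Get-GPO -All | ForEach-Object {
        $perms = Get-GPPermission -Guid $_.Id -All |
                 Where-Object { $_.Trustee.Name -in 'Authenticated Users', 'Domain Computers' }
        if (-not $perms) {
            # Neither group has any permission entry on this GPO
            $_ | Select-Object DisplayName, Id
        }
    }
}

# Example: list the affected GPOs
Get-GPOMissingPermission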

Bottom Line: Group Policies with missing permissions for computer accounts ("Authenticated Users", "Domain Computers", or any other group that includes the relevant computers) will NOT be applied.

Do It Right: When changing Group Policy Security Filtering, make sure you add the “Authenticated Users” group in the delegation tab and provide it with “Read” permission only.
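
For example, this can also be done with PowerShell (the GPO name below is a placeholder):

# Grant 'Authenticated Users' Read (but not Apply) on a GPO that uses security filtering
Set-GPPermission -Name 'Servers Policy' -TargetName 'Authenticated Users' -TargetType Group -PermissionLevel GpoRead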

Mistake #3: Creating a DNS Conditional Forwarder as a Non-Active Directory Integrated Zones

When creating a DNS conditional forwarder using the DNS management console (GUI), it’s created, by default, as a non-Active Directory integrated zone, meaning that it’s saved locally in the server’s registry.

Creating a non-Active Directory integrated zone raises a few problems:

  • Non-Active Directory zones do NOT replicate between the Active Directory integrated DNS servers; therefore, these zones might become out of sync when configured on two or more DNS servers.
  • Non-Active Directory zones can be easily forgotten and abandoned when replacing Domain Controllers as part of upgrade or restore procedures.
  • In many cases, non-Active Directory conditional forwarder zones are defined on a single server, which causes inconsistent DNS resolution behavior between servers.

You can easily change this and create the zone as an Active Directory integrated zone by selecting the option “Store this conditional forwarder in Active Directory”.

Using PowerShell, you can specify the parameter ‘ReplicationScope’ with either ‘Forest’ or ‘Domain’ scope to store the conditional forwarder zone in Active Directory:
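
A minimal example (the zone name and master server IP below are placeholders):

Add-DnsServerConditionalForwarderZone -Name 'partner.contoso.com' -MasterServers 10.10.10.10 -ReplicationScope 'Forest'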

Bottom Line: Avoid using non-Active Directory integrated zones unless you have a really good reason.

Do It Right: When creating conditional forwarder using either PowerShell or the GUI, make sure to create it as an Active Directory-integrated forwarder.

 

Continue reading part 2 of the series.

Most Common Mistakes in Active Directory and Domain Services – Part 2


In this blog post, we will continue to explore some of the most common mistakes in Active Directory and Domain Services.
Part 1 of the series covered the first three mistakes, and today we'll go over another three interesting issues. Enjoy your reading 🙂

Mistake #4: Keeping the Forest and Domain Functional Levels at a Lower Version

For various reasons, customers are afraid of dealing with the Forest and Domain Functional Levels (FFL and DFL in short).

Because the FFL and DFL purpose and impact are not always clear, people avoid changing them and sometimes maintain a very old functional level like Windows Server 2008 or even Windows Server 2003.

The Forest and Domain Functional Levels reflect the lowest Domain Controller version within the forest and the domain.
In other words, this attribute is telling the Domain Controllers that all DCs in the Domain or Forest are running an OS equal to or higher than the functional level. For example, a functional level of Windows Server 2012 R2 means that all DCs are running Windows Server 2012 R2 or a later OS.

The functional level is used by the Active Directory to understand whether it’s possible to take advantage of new features that require the Domain Controllers to be at a minimum OS version.
The FFL and DFL are also used to prevent promoting an old Domain Controller version in the domain, as it might, theoretically, affect the usability of new AD features being used by newer OS versions.

Old Forest/Domain Functional Levels may prevent you from using some very useful Active Directory features such as the Active Directory Recycle Bin, Domain-Based DFS Namespaces, DFS Replication for SYSVOL and Fine-Grained Password Policies.
In this link, you can find the full list of Active Directory Features in each functional level.

It's also worth mention that you can roll back the FFL and DFL all the way down to Windows Server 2008R2 using the Set-ADForestMode and Set-ADDomainMode PowerShell cmdlets. See the example below:

Bottom Line: Forest and Domain Functional Levels are used internally by the Domain Controllers and don't affect which operating systems can be used by clients (workstations and servers).
Older functionality and features are still supported in newer functional levels, so you shouldn't notice any difference; everything is expected to continue working as before.
If (for some reason) you still have concerns about certain applications, contact the vendor for clarification.

Do It Right: Backup your AD environment (using Windows Server Backup or any other solution you've got), upgrade the FFL and the DFL in your test environment and then in production.

Mistake #5: Using DNS as an Archive by Disabling DNS Scavenging

DNS is one of the most important services in any environment. It should be running smoothly and kept up to date so it can resolve names to IP addresses correctly and without issues.

Yet, there are some cases when customers think about DNS as an archive for old and unused server names and IP addresses. In those cases, administrators disable the DNS Scavenging option to prevent old DNS records from being deleted. This is a bad habit because it can easily lead to a messy DNS with duplicated and irrelevant records, where A records point to IP addresses that no longer exist and PTR records refer to computers that were deleted a long time ago.

For those of you who don't know, DNS Scavenging is a DNS feature responsible for cleaning up old and unused DNS records that are no longer relevant, based on their timestamp.
When a DNS record is updated or refreshed by a DNS client, its timestamp is updated with the current date and time.
DNS Scavenging is designed to delete records whose timestamp is older than the 'No-Refresh' + 'Refresh' intervals (which are configured in the DNS zone settings). Note that static DNS records are not scavenged at all.

If DNS Scavenging has been disabled in your environment for a while, I suggest running the PowerShell script below before enabling it, in order to better understand which records are going to be removed as part of the scavenging process.
The script checks every dynamic DNS record and decides whether it's:
• A stale record that responds to ping.
• A stale record that doesn't respond to ping.
• An updated record (not stale).
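
A simplified sketch of that idea (the zone name and DNS server name below are placeholders, and only A records are handled):

$zone        = 'contoso.com'
$dnsServer   = 'DC01'
$aging       = Get-DnsServerZoneAging -Name $zone -ComputerName $dnsServer
$staleBefore = (Get-Date) - ($aging.NoRefreshInterval + $aging.RefreshInterval)

Get-DnsServerResourceRecord -ZoneName $zone -ComputerName $dnsServer -RRType A |
    Where-Object { $_.TimeStamp } |    # dynamic records only (static records have no timestamp)
    ForEach-Object {
        $isStale = $_.TimeStamp -lt $staleBefore
        $alive   = Test-Connection -ComputerName "$($_.HostName).$zone" -Count 1 -Quiet -ErrorAction SilentlyContinue
        [PSCustomObject]@{
            HostName  = $_.HostName
            IPAddress = $_.RecordData.IPv4Address
            TimeStamp = $_.TimeStamp
            Status    = if (-not $isStale) { 'Updated (not stale)' }
                        elseif ($alive)    { 'Stale, responds to ping' }
                        else               { 'Stale, no response to ping' }
        }
    } | Format-Table -AutoSize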

The script’s output should look like this:

Bottom Line: DNS Scavenging is NOT the place to save all your ancient names and IP addresses. If you are required to keep this information, use a CMDB tool or any other platform designed for this purpose. DNS is an operational service that should respond quickly and reliably with correct and relevant values only.

Do It Right: Enable DNS scavenging and get rid of those old and unused records.
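
For example, scavenging can be enabled with PowerShell (the 7-day interval below is just a common example value):

# Enable scavenging on the DNS server and apply the aging settings to all Active Directory-integrated zones
Set-DnsServerScavenging -ScavengingState $true -ScavengingInterval 7.00:00:00 -ApplyOnAllZones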

Mistake #6: Using a DHCP Failover Without Configuring DDNS Update Credentials

DHCP Failover is a well-known feature that was released back in September 2012 with Windows Server 2012. DHCP Failover provides a high-availability mechanism by entering two DHCP servers into a failover relationship.

When the option "Always dynamically update DNS records" in the DHCP properties is selected, the DHCP server updates the DNS with A and PTR records of DHCP clients using its own computer credentials (e.g. ‘DHCP01’ computer object).

When a DHCP Failover is configured, this can become an issue:
When the first DHCP server (e.g. DHCP01) in a DHCP Failover registers a DNS record, it becomes its owner and gets the relevant permissions to update the record when needed.
If the second DHCP server (e.g. DHCP02) in the DHCP Failover tries to update the same record (because DHCP01 is unavailable at the moment), the update will fail because it doesn't have the required permissions to update the record.
Note that if your DNS zones are configured with "Nonsecure and secure" dynamic updates (which goes against best practices), security permissions on DNS records are not enforced at all, and records can be updated by any client, including your DHCP servers.

To resolve this, you can configure DDNS update credentials and enter the username and password of a dedicated user account you created for this purpose (e.g. SrvcDHCP).
In general, no special permissions are required.
The DHCP servers will always use these credentials when registering and updating DNS records.
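
A minimal example (the account and server names below are placeholders); run it against both members of the failover relationship, or set the same credentials in the DHCP console under IPv4 > Properties > Advanced > Credentials:

# A dedicated, low-privileged account used only for dynamic DNS registrations
$cred = Get-Credential -UserName 'CONTOSO\SrvcDHCP' -Message 'DDNS update account'
Set-DhcpServerDnsCredential -Credential $cred -ComputerName 'DHCP01'
Set-DhcpServerDnsCredential -Credential $cred -ComputerName 'DHCP02'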

Before changing the DNS dynamic update credentials, you may consider changing the ownership and the permissions of existing DNS records to include the new user account, especially if your DHCP environment has been running for a long time.

In order to complete this, you can use the PowerShell script below.
The script examines each DNS record and displays a table with records that meet all of the following conditions:

  1. The DNS record is a dynamic record.
  2. Record’s current owner is a DHCP server.
  3. Record’s type is A or PTR.

If approved by the user, the script updates the selected records with the new owner and adds the user account to the records' ACL with "Full Control" permission.
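
As a rough, simplified sketch of the identification part only (the zone name, partition and DHCP computer accounts are placeholder assumptions; PTR records and the actual ownership/ACL changes are not shown):

Import-Module ActiveDirectory
Import-Module DnsServer

$zone       = 'contoso.com'
$dhcpOwners = @('CONTOSO\DHCP01$', 'CONTOSO\DHCP02$')
# Assumes the zone is stored in the DomainDnsZones application partition
$zoneDN     = "DC=$zone,CN=MicrosoftDNS,DC=DomainDnsZones,$((Get-ADDomain).DistinguishedName)"

# Dynamic A records only (static records have no timestamp)
Get-DnsServerResourceRecord -ZoneName $zone -RRType A |
    Where-Object { $_.TimeStamp } |
    ForEach-Object {
        $acl = Get-Acl -Path "AD:\DC=$($_.HostName),$zoneDN" -ErrorAction SilentlyContinue
        if ($acl -and $dhcpOwners -contains $acl.Owner) {
            [PSCustomObject]@{ Record = $_.HostName; Owner = $acl.Owner }
        }
    } | Format-Table -AutoSize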

Bottom Line: Using a DHCP Failover without configuring DNS dynamic update credentials will result in DNS update failures when one DHCP server tries to update records that were registered by the other.

Do It Right: If you are using DHCP Failover, configure DNS dynamic update credentials on both DHCP servers.

 

In the next (and last) blog post we'll talk about a few more issues and wrap up this series.

Understanding and using the Pending Restart Feature in SCCM Current Branch


On a daily basis, I get asked a lot about new features in System Center Configuration Manager and how they can be used to simplify life for customers.

Microsoft's mission is to empower every person and every organization on the planet to achieve more, so how can ConfigMgr help with that?

 

With the release of ConfigMgr 1710, a new feature was added called "Pending Restart".

This allows administrators to quickly identify, straight out of the console, which machines need a restart and the reason the restart is required.

 

This blog post is going to guide you through how we can use WQL and SQL queries to create reports and collections that simplify management and reporting to the business, as well as how to use this to schedule your mass restarts and ensure that your devices remain compliant.

So… Let’s get started!

 

ClientState

 

Firstly, we need to understand that the "Pending Restart" tab in a collection uses the ClientState information from the v_CombinedDeviceResources view in the database.

 

[Screenshot 1: the Pending Restart tab in the console]

 

The ClientState information is what lets us know if there is a reboot pending.

There are five main states:

0 = No reboot Pending
1 = Configuration Manager
2 = File Rename
4 = Windows Update
8 = Add or Remove Feature

A computer may have more than one of these states applying at the same time; in that case the state number is the sum of the applicable values (for example, 5 = 1 + 4, meaning Configuration Manager plus Windows Update).

1 – Configuration Manager
2 – File Rename
3 – Configuration Manager, File Rename
4 – Windows Update
5 – Configuration Manager, Windows Update
6 – File Rename, Windows Update
7 – Configuration Manager, File Rename, Windows Update
8 – Add or Remove Feature
9 – Configuration Manager, Add or Remove Feature
10 – File Rename, Add or Remove Feature
11 – Configuration Manager, File Rename, Add or Remove Feature
12 – Windows Update, Add or Remove Feature
13 – Configuration Manager, Windows Update, Add or Remove Feature
14 – File Rename, Windows Update, Add or Remove Feature
15 – Configuration Manager, File Rename, Windows Update, Add or Remove Feature

 

By Querying the SCCM DB, we can see what state a machine is in.
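
A query along these lines (a simplified example against the combined device resources view, not necessarily the exact query in the screenshot) returns each machine and its state value:

SELECT Name, ClientState
FROM   dbo.vSMS_CombinedDeviceResources
WHERE  ClientState <> 0   -- only machines that require a reboot
ORDER BY Name;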

 

[Screenshot 2: query results from the SCCM database]

Note we are only looking here for machines that DO require a reboot.

So far, we have identified that there are machines in our environment that require restarts, and we have looked at the different states a machine can report.

Restarting Machines

So how do I go about Restarting a machine?

There are two main ways:

1. Straight out of the console

[Screenshot 3: the Client Notification – Restart option in the console]

The first and easiest way, suited to a small number of machines, is to select one (or more) machines – Right-click – Client Notification – Restart.

This will cause a popup notification to appear on the users' machines.

The user will have two options: Restart or Hide.

 

2. Create a Collection that will list all the machines that require a restart.

This is the option to use when machines need to be targeted for a restart en masse.

Whether it is users who do not restart their machines, or a restart required for applying an Out of Band update, this is a quick way to group machines together and schedule a restart Task Sequence.

 

[Screenshot 4: a collection of machines with a pending restart]

 

WQL Query for Collection

select SMS_R_System.ResourceID, SMS_R_System.ResourceType, SMS_R_System.Name,
       SMS_R_System.SMSUniqueIdentifier, SMS_R_System.ResourceDomainORWorkgroup, SMS_R_System.Client
from SMS_R_System
join SMS_CombinedDeviceResources on SMS_CombinedDeviceResources.ResourceID = SMS_R_System.ResourceID
where SMS_CombinedDeviceResources.ClientState != 0

 

SQL Query for Report

 

We now have a Collection of all the machines that require restarts.

I still need to be able to report to the business WHERE those machines are, WHO is using them, and WHAT operating system they are running.

 

This is where you can create a report very easily using the query below.

This will list the machine names, operating systems, state, state meaning, last logged-on user, and last active time for each machine.

 

SELECT        Name AS [Pending restart Clients], ADSiteName, ClientState,
          (SELECT CASE [ClientState] 
WHEN '1' THEN 'Configuration Manager' WHEN '2' THEN 'File Rename' WHEN '3' THEN 'Configuration Manager, File Rename' WHEN '4' THEN 'Windows Update'
WHEN '5' THEN 'Configuration Manager, Windows Update' WHEN '6' THEN 'File Rename, Windows Update' WHEN '7' THEN 'Configuration Manager, File Rename, Windows Update' 
WHEN '8' THEN 'Add or Remove Feature' WHEN '9' THEN 'Configuration Manager, Add or Remove Feature' WHEN '10' THEN 'File Rename, Add or Remove Feature'
WHEN '11' THEN 'Configuration Manager, File Rename, Add or Remove Feature' WHEN '12' THEN 'Windows Update, Add or Remove Feature' 
WHEN '13' THEN 'Configuration Manager, Windows Update, Add or Remove Feature' WHEN '14' THEN 'File Rename, Windows Update, Add or Remove Feature'
WHEN '15' THEN 'Configuration Manager, File Rename, Windows Update, Add or Remove Feature' ELSE 'Unknown' END AS Expr1) AS [Client State Detail],
          (SELECT CASE WHEN DeviceOS LIKE '%Workstation 5.0%' THEN 'Microsoft Windows 2000' WHEN DeviceOS LIKE '%Workstation 5.1%' THEN 'Microsoft Windows XP' 
WHEN DeviceOS LIKE '%Workstation 5.2%' THEN 'Microsoft Windows XP 64bit' WHEN DeviceOS LIKE '%Server 5.2%' THEN 'Microsoft Server Windows Server 2003' 
WHEN DeviceOS LIKE '%Workstation 6.0%' THEN 'Microsoft Windows Vista' WHEN DeviceOS LIKE '%Server 6.0%' THEN 'Microsoft Server Windows Server 2008'
WHEN DeviceOS LIKE '%Server 6.1%' THEN 'Microsoft Server Windows Server 2008 R2' WHEN DeviceOS LIKE '%Workstation 6.1%' THEN 'Microsoft Windows 7' 
WHEN DeviceOS LIKE '%server 6.3%' THEN 'Microsoft Server Windows Server 2012 R2' WHEN DeviceOS LIKE '%server 6.2%' THEN 'Microsoft Server Windows Server 2012'
WHEN DeviceOS LIKE '%Workstation 6.2%' THEN 'Microsoft Windows 8' WHEN DeviceOS LIKE '%Workstation 6.3%' THEN 'Microsoft Windows 8.1' 
WHEN DeviceOS LIKE '%Workstation 10%' THEN 'Microsoft Windows 10' WHEN DeviceOS LIKE '%server 10%' THEN 'Microsoft Windows Server 2016'
ELSE 'N/A' END AS Expr1) AS [Operating System], LastLogonUser, LastActiveTime
FROM     dbo.vSMS_CombinedDeviceResources
WHERE    (ClientState > 0) AND (ClientActiveStatus = 1)

In Conclusion

The introduction of ClientState reporting into the console allows us, as administrators, to get a view of which machines need a reboot and why.

I hope that these queries will help guide you and simplify your daily administration.
