
Download files from Blob storage account


 

Introduction

For every test run, we had a requirement to download the application log files from Azure blob storage, parse them, and generate performance metrics (response times).

 

Problem statement/Challenge

Our performance tests run for long durations (3 hours/24 hours/5 days) and write thousands, sometimes hundreds of thousands, of log files to blob storage.

We have to filter and download logs based on different criteria, such as:

  • Download only logs written after a particular timestamp (UTC).
  • Download all logs.
  • Download only those logs whose file name contains a given string.
  • Download only those log files with a specific extension.

 

Solution

I wrote the C# code snippet below, which automatically downloads logs based on these criteria. The snippet can be further customized and reused as needed.

Code snippet, with explanations in the comments

App.Config

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <appSettings>
        <add key="StorageAccountConnectionString"
            value="DefaultEndpointsProtocol=https;AccountName=testaccount;AccountKey=testaccountkey+sy5AthQhOBRw==" />
        <add key="ContainerName" value="testcontainername" />
        <add key="TestExecutionStartTimeinIST" value="9/18/2015 10:30" />
        <!-- To download all files, use * as the value
        Example: add key="NameContains" value="*"
        To download files whose name contains a given string, use that string
        Example: add key="NameContains" value="Application"
        To download files with a given extension, use the extension
        Example: add key="NameContains" value=".txt"
        -->
        <add key="NameContains" value="Application" />
        <add key="FileExtention" value=".txt" />
    </appSettings>
    <startup>
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
    </startup>
</configuration>

Program.cs

// Required namespaces: System, System.Configuration, Microsoft.WindowsAzure.Storage, Microsoft.WindowsAzure.Storage.Blob

// Read configuration from app.config
string strgConnString = ConfigurationManager.AppSettings["StorageAccountConnectionString"];
string containerName = ConfigurationManager.AppSettings["ContainerName"];
string fileNameMatch = ConfigurationManager.AppSettings["NameContains"];
string fileExtension = ConfigurationManager.AppSettings["FileExtention"];
string dateInputinUTC = ConfigurationManager.AppSettings["TestExecutionStartTimeinIST"];

// Parse the storage account connection string
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(strgConnString);

// Create the blob client
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

// Get a reference to the container
CloudBlobContainer container = blobClient.GetContainerReference(containerName);

string filename = string.Empty;

// Convert the configured time to UTC
DateTime configtime = DateTime.Parse(dateInputinUTC).ToUniversalTime();
Console.WriteLine("*******************Downloading Files from blob after :{0}", configtime);

// Make sure the local download folder exists
System.IO.Directory.CreateDirectory("DownloadFolder");

// Traverse and download each item matching the filter criteria
foreach (IListBlobItem item in container.ListBlobs(null, false))
{
    if (item.GetType() == typeof(CloudBlockBlob))
    {
        CloudBlockBlob blob = (CloudBlockBlob)item;
        // Only consider blobs modified on or after the configured date
        if (blob.Properties.LastModified.Value.Date >= configtime.Date)
        {
            if (blob.Properties.LastModified.Value.TimeOfDay >= configtime.TimeOfDay)
            {
                // Download all files
                if (fileNameMatch == "*")
                {
                    Console.WriteLine(blob.Uri.AbsolutePath);
                    filename = blob.Uri.AbsolutePath.Split('/')[2];
                    string path = @"DownloadFolder\" + filename;
                    using (var fileStream = System.IO.File.Create(path))
                    {
                        blob.DownloadToStream(fileStream);
                    }
                }
                // Download all files whose name contains the given string
                else if (blob.Name.Contains(fileNameMatch))
                {
                    Console.WriteLine(blob.Uri.AbsolutePath);
                    filename = blob.Uri.AbsolutePath.Split('/')[2];
                    string path = @"DownloadFolder\" + filename;
                    using (var fileStream = System.IO.File.Create(path))
                    {
                        blob.DownloadToStream(fileStream);
                    }
                }
                // Download all files with the given extension
                else if (blob.Name.EndsWith(fileExtension))
                {
                    Console.WriteLine(blob.Uri.AbsolutePath);
                    filename = blob.Uri.AbsolutePath.Split('/')[2];
                    string path = @"DownloadFolder\" + filename;
                    using (var fileStream = System.IO.File.Create(path))
                    {
                        blob.DownloadToStream(fileStream);
                    }
                }
            }
        }
    }
}

Hope this helps. Suggestions and feedback are welcome.

Happy Coding!

 

References

https://azure.microsoft.com/en-in/documentation/articles/storage-dotnet-how-to-use-blobs/


How to Wrap an Android LOB application using MS AWT and deploy on MS Intune as Managed Application.


 

Before wrapping, an application should fulfill the criteria mentioned below:

1.  The application should be a valid Android application package with .apk extension.

2. Must not be already wrapped by any wrapping tool.

3. Should be written for Android 4.0 or above.

4. Generic applications downloaded from the Google Play Store by Microsoft, Google and other vendors cannot be wrapped, e.g. Cortana, YouTube, etc.

 

Pre-requisites:

1. Java Development Kit (1.7 or 1.8) by Oracle. You can download the JDK from this link:

http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html

2. Application Wrapping Tool for Android by Microsoft. Download the latest App Wrapping Tool from Microsoft; at the time of writing, the latest version is available here:

https://www.microsoft.com/en-us/download/details.aspx?id=47267.

3. Before uploading the application to Intune, it should be signed. For signing, Java Keytool can be used. A keystore should be created for this first.

 

Procedure to be followed:

1. Install the JDK environment.

2. Create a keystore.

3. Install the App Wrapping Tool.

4. Wrap the application.

5. Sign the wrapped application.

6. Upload the signed apk.

7. Create a Mobile Management Policy in MS Intune portal.

8. Deploy the application.

 

How to Create a Keystore:

1. Open an elevated PowerShell prompt.

2. To create the keystore, navigate to “C:\Program Files\Java\jdk1.7.0_79\bin” (the default JDK installation location on x64).

3. Run the following command:

.\keytool.exe -genkey -v -keystore MSAWT.keystore -alias MSAWT -validity 10000

Provide the details like organization, country, etc. and press Enter. Provide the passwords for the keystore when prompted.

Now the keystore is ready and can be used to sign the applications.
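
To confirm the key was created, the keystore contents can be listed with keytool from the same bin folder (a quick optional check):

.\keytool.exe -list -v -keystore MSAWT.keystore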

 

How to Install the Application Wrapping Tool:

Application Wrapping Tool is a command line tool. Install the tool by double-clicking the file.

clip_image002

clip_image004

clip_image006

 

How to Wrap the Application:

1. Run the following command in elevated PowerShell module to import AWT module:

PS> Import-Module "C:\Program Files (x86)\Microsoft Intune Mobile Application Management\Android\App Wrapping Tool\IntuneAppWrappingTool.psm1"

(“C:\Program Files (x86)\Microsoft Intune Mobile Application Management\Android\App Wrapping Tool” is the Default location of AWT installation on x64 machine)

2.  Next run the following command to wrap the app:

Invoke-AppWrappingTool -InputPath "Path to your application.apk" -OutputPath "Path to your output wrapped application.apk"

3. It will take a minute or two to wrap the application, provided the apk meets all the criteria mentioned above.

(For reference: https://technet.microsoft.com/en-in/library/mt147413.aspx)

 

How to Sign the wrapped application:

You can sign the application while wrapping the application by modifying the command, or it can be signed later.

To sign the application at the time of wrapping, modify the command in the elevated PowerShell session as below:

Invoke-AppWrappingTool -InputPath "Path to your application.apk" -OutputPath "Path to your output wrapped application.apk" -KeyStorePath "C:\Program Files\Java\jdk1.7.0_79\bin\MSAWT.keystore" -KeyAlias MSAWT -Verbose

Provide the keystore password when prompted.

Or

To sign the application separately, run the following commands:

1. Navigate to “C:\Program Files\Java\jdk1.7.0_79\bin” in the elevated PowerShell session:

2. PS C:\Program Files\Java\jdk1.7.0_79\bin> .\jarsigner.exe -verbose -keystore .\MSAWT.keystore "Path of your wrapped application.apk" MSAWT

(With jarsigner, the key alias is passed as the final positional argument after the apk path.)

Provide the keystore password when prompted.

3. To check if the file is signed, run

PS C:\Program Files\Java\jdk1.7.0_79\bin> .\jarsigner.exe -verify "Path of your wrapped and signed application.apk"

The application package app.apk is now wrapped and signed. It can be uploaded to the MS Intune portal and deployed as a managed application.

 

How to create a mobile management policy:

To create a mobile management policy, please refer to the link:

https://technet.microsoft.com/en-us/library/dn878026.aspx

How to Deploy the application:

To deploy a managed application, please refer to the link below:

https://technet.microsoft.com/en-us/library/dn646972.aspx


AAD Connect: The Three Forests Story.


I am writing on a grey area that I recently encountered while working on an AAD Connect project for a customer. The customer had one primary forest (Forest A) and a secondary forest (Forest B), which essentially had the same users represented twice (both in the enabled state), plus a third forest (Forest C), an extranet forest with a disparate set of users. A quick look at the AAD Connect supported topologies shows that this is a supported topology. (https://azure.microsoft.com/en-us/documentation/articles/active-directory-aadconnect-topologies/)

My topology would look something like this. 

MultiForestSingleDirectory

Some users would match (between Forests A and B) while some won't (Forest C).

MultiForestFullMesh

I ended up choosing the following settings for matching users, Source Anchor and UPN.

image

Now, if both user objects (matched by UPN) are enabled in Forests A and B, by default AAD Connect can project from either of the objects and join the leftover one. This wasn't acceptable: ADFS is based on Forest A, so the AAD user objects must carry the Forest A objectGUID in sourceAnchor.

The solution to this problem is to explicitly allow only Forest A to project and force Forest B to only join. This can be done by tweaking the default AAD Connect configuration as follows.

1. Open Synchronization rules editor:

image

2. Open the User Join Inbound Sync Rule for ForestB.com and edit it.

image

Note: Normally we create a custom rule and should avoid editing the OOTB rules. However, in this case we must edit the default rule.

3. Change the link type to Join instead of provision.

image

Now Forest B objects will only join to Forest A objects that have already been projected. Voila, we have everything working as expected. No other changes are needed to the Forest A or Forest B join rules.

Run two PowerShell scripts on a same VM through custom script extension at different stage of Deployment in ARM


Introduction – This blog post illustrates a method to run two different PowerShell scripts on the same VM through the custom script extension at different stages/times of an ARM deployment. Currently, it is not possible to run two custom scripts performing two different tasks on the same VM through the custom script extension.

Assumptions – Here we assume that you are familiar with the basics of deploying resources in the Azure preview portal in ARM mode and with the construction and use of JSON templates.

Problem statement – I had a requirement to deploy an IaaS infrastructure in ARM through a PowerShell orchestration script and a JSON template. This included the creation of IaaS VMs such as domain controllers and SQL VMs. At one point I had to create AD domain users on the domain controller through a PowerShell script, and at the final stage of deployment (after the SQL VM configuration) I had to push a group policy to the same domain controller through another PowerShell script. After adding a resource block to the JSON for the second PowerShell script, I ran the complete deployment. As expected, at the final stage, while pushing the GPO PowerShell script to the DC, the deployment failed with the following error:

New-AzureResourceGroup : 11:12:54 AM – Resource Microsoft.Compute/virtualMachines/extensions 'adgptst02/ADGPO' failed with message 'Multiple

VMExtensions per handler not supported for OS type 'Windows'. VMExtension 'ADGPO' with handler 'Microsoft.Compute.CustomScriptExtension' already

added or specified in input.'

This was expected, as the VM already had a custom script extension injected earlier in the deployment for creating the domain users.

Resolution/Workaround – Follow the steps below to resolve this:

1. Remove the resource block for the second PowerShell script from the JSON template for now.

2. In the PowerShell orchestration script from which you run the deployment, add the command below to remove the custom script extension once the deployment is done. This command goes right after the New-AzureRmResourceGroupDeployment command that performs the deployment:

Remove-AzureRmVMCustomScriptExtension -ResourceGroupName $ResourceGroupName -VMName $CustVMname -Name $customscriptname -Force

Note – Replace the variables with actual values. This will remove the custom script extension from the VM and will not have any effect on the configuration done earlier by the custom PowerShell script.

3. Create a new JSON template (from the same template you are using for the deployment) that contains only one resource block, for the second custom script extension (delete all the other resource blocks, as those tasks will already be completed). You don't need to make any changes to the parameters and variables sections of the template, as most of the values will not be used and will have no effect; you also don't need to change the template parameter file. Below is how the resource block for the second custom script extension will look:

"resources": [

       {
           "type": "Microsoft.Compute/virtualMachines/extensions",
           "name": "[concat(parameters('ADVirtualMachine'),'/ADGPO')]",
           "apiVersion": "2015-06-15",
           "location": "[parameters('location')]",
           "properties": {
               "publisher": "Microsoft.Compute",
               "type": "CustomScriptExtension",
               "typeHandlerVersion": "1.4",
               "settings": {
                   "fileUris": [
                       "[variables('ADGPOScriptFileUri')]"
                   ],
                   "commandToExecute": "[variables('ADGPOToExecute')]"
               },
               "protected Settings": {
                   "storageAccountName": "[variables('ADcustomScriptStorageAccountName')]",
                   "storageAccountKey": "[listKeys(variables('ADaccountid'),'2015-05-01-preview').key1]"
               }

 

           }
       }


   ]

4. Save the JSON template with a different name, alongside the parent template for the deployment.

5. In the PowerShell orchestration script, after Remove-AzureRmVMCustomScriptExtension, run New-AzureRmResourceGroupDeployment once again, this time with the new JSON template. In short, the process is to remove the custom script extension first and then add it again with the required script.
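
Putting the sequence together, the orchestration script ends up looking roughly like the sketch below. The template file names are placeholders, not the actual templates from this deployment.

# 1. Main deployment - includes the first custom script extension (domain users)
New-AzureRmResourceGroupDeployment -ResourceGroupName $ResourceGroupName -TemplateFile ".\azuredeploy.json" -TemplateParameterFile ".\azuredeploy.parameters.json"

# 2. Remove the custom script extension left on the DC by the first deployment
Remove-AzureRmVMCustomScriptExtension -ResourceGroupName $ResourceGroupName -VMName $CustVMname -Name $customscriptname -Force

# 3. Second deployment - the trimmed template containing only the 'ADGPO' extension
New-AzureRmResourceGroupDeployment -ResourceGroupName $ResourceGroupName -TemplateFile ".\azuredeploy-adgpo.json" -TemplateParameterFile ".\azuredeploy.parameters.json"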

 

Thanks folks , hope it is useful

Happy blogging

Windows Automatic Services Monitoring using SCOM


Monitoring services on Windows computers is available out of the box in SCOM through the service monitoring template. But in a large enterprise with thousands of Windows computers and hundreds of applications, it is difficult to list out all the services that need to be monitored on each computer and create the monitoring using the template. Monitoring, on average, 30 services on 1,000 computers would add 30,000 instances to the SCOM DB. This creates numerous classes and discoveries and bloats the instance space, which makes SCOM less responsive.

Also, we cannot create a monitor for each service and target it across all computers, as each service may be present on some computers and not on others. Targeting everything blindly would result in false alarms, and again we may need 30+ Windows service monitors targeted at all Windows computers, which creates overhead on the agents and thus on the computers running them.

So, What is the solution?

The optimal solution is to create a single rule that monitors all automatic services on each computer and alerts on those that are not running. This can be accomplished using a PowerShell script with property bag output.

The rule runs on each computer at a specific interval and creates a property bag for each service that is set to automatic but not running; an alert is generated for each property bag.

A catch in this monitoring scenario is that we should not alert on services that are stopped only for a moment. To overcome this, we will use a consolidator condition, so we alert only if the service is found stopped for ‘n’ consecutive samples.

This solution, though optimal, poses another challenge – what if we do not want to monitor a service that is set to automatic on one or a few computers?

This can be handled using a centrally located file with the details of the services and computers to be excluded from monitoring.

We will see how to construct the Management Pack XML to accomplish this. You can also create the MP using Visual Studio, MP Studio or the Authoring Console.

Step 1:

Add references to the Management pack.

<ManagementPack ContentReadable="true" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <Manifest>
    <Identity>
      <ID>GKLab.Windows.Automatic.Service.Monitoring</ID>
      <Version>1.0.0.0</Version>
    </Identity>
    <Name>GKLab Windows Automatic Service Monitoring</Name>
    <References>
      <Reference Alias="SC">
        <ID>Microsoft.SystemCenter.Library</ID>
        <Version>6.1.7221.0</Version>
        <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
      </Reference>
      <Reference Alias="Windows">
        <ID>Microsoft.Windows.Library</ID>
        <Version>6.1.7221.0</Version>
        <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
      </Reference>
      <Reference Alias="Health">
        <ID>System.Health.Library</ID>
        <Version>6.1.7221.0</Version>
        <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
      </Reference>
      <Reference Alias="System">
        <ID>System.Library</ID>
        <Version>6.1.7221.0</Version>
        <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
      </Reference>
      <Reference Alias="Performance">
        <ID>System.Performance.Library</ID>
        <Version>6.1.7221.0</Version>
        <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
      </Reference>
    </References>
  </Manifest>

Step 2:

Now create a PowerShell property bag probe script. The PowerShell script fetches the list of all services that are set to start automatically and checks their current status. For each service that is set to Automatic but not running, a property bag is created.

To exclude some services from being monitored, a centrally located CSV file is used and the path of the file is passed as a parameter to the script. The script reads the list of services to be excluded from the CSV file and compares it with the list of services on the target computer. Property bags are not created for excluded services.

param (
    [string] $excludeservicelist
)
if (test-path $excludeservicelist) {
    write-eventlog -logname "Operations Manager" -Source "Health Service Script" -EventID 776 -Message "WindowsAutomaticServiceMonitoring.ps1 - Accessing Exclusion List CSV" -EntryType Information
    $contents = Import-Csv $excludeservicelist
}
$TargetComputer = hostname
$api = New-Object -comObject 'MOM.ScriptAPI'
$auto_services = Get-WmiObject -Class Win32_Service -Filter "StartMode='Auto'"
foreach ($service in $auto_services)
{
    $isExcluded = 0
    $state = $service.state
    $name = $service.DisplayName
    If ($Contents){
        $contents | ForEach-Object{
            $ExcludeServiceDisplayName = $_.ServiceToExclude
            $ExcludeComputerName = $_.ComputersToExclude
            if (($name -match $ExcludeServiceDisplayName) -and (($TargetComputer -match $ExcludeComputerName) -or ($ExcludeComputerName -match "ALL_COMPUTERS"))){
                $isExcluded = 1
                #write-eventlog -logname "Operations Manager" -Source "Health Service Script" -EventID 777 -Message "WindowsAutomaticServiceMonitoring.ps1 - Excluded Service Name - $ExcludeServiceDisplayName, Excluded Computer Name - $ExcludeComputerName" -EntryType Information
            }
        }
    }
    if (($isExcluded -eq 0) -and ($state -eq "Stopped")){
        #write-eventlog -logname "Operations Manager" -Source "Health Service Script" -EventID 778 -Message "WindowsAutomaticServiceMonitoring.ps1 - Windows Service set to Automatic but Not Running - $name" -EntryType Information
        $bag = $api.CreatePropertyBag()
        $bag.AddValue("ServiceName", $name)
        $bag.AddValue("Status", $state)
        $bag
    }
}

Step 3:

Create a data source module type incorporating the PowerShell script written above. We will use a consolidator condition, as discussed in the solution section, to alert only on valid service failures.

<TypeDefinitions>
  <ModuleTypes>
    <DataSourceModuleType ID="GKLab.Windows.Auto.Service.Monitoring.DataSource" Accessibility="Internal" Batching="false">
      <Configuration>
        <xsd:element minOccurs="1" name="ExcludeServiceList" type="xsd:string" />
        <xsd:element minOccurs="1" name="IntervalSeconds" type="xsd:integer" />
        <xsd:element minOccurs="1" name="ConsolidationInterval" type="xsd:integer" />
        <xsd:element minOccurs="1" name="Count" type="xsd:integer" />
      </Configuration>
      <OverrideableParameters>
        <OverrideableParameter ID="IntervalSeconds" Selector="$Config/IntervalSeconds$" ParameterType="int" />
        <OverrideableParameter ID="Count" Selector="$Config/Count$" ParameterType="int" />
        <OverrideableParameter ID="ConsolidationInterval" Selector="$Config/ConsolidationInterval$" ParameterType="int" />
      </OverrideableParameters>
      <ModuleImplementation Isolation="Any">
        <Composite>
          <MemberModules>
            <DataSource ID="Trigger" TypeID="System!System.SimpleScheduler">
              <IntervalSeconds>$Config/IntervalSeconds$</IntervalSeconds>
              <SyncTime>00:00</SyncTime>
            </DataSource>
            <ProbeAction ID="Probe" TypeID="Windows!Microsoft.Windows.PowerShellPropertyBagProbe">
              <ScriptName>WindowsAutomaticServicesMonitoring.ps1</ScriptName>
              <ScriptBody><![CDATA[
param (
    [string] $excludeservicelist
)
if (test-path $excludeservicelist) {
    write-eventlog -logname "Operations Manager" -Source "Health Service Script" -EventID 776 -Message "WindowsAutomaticServiceMonitoring.ps1 - Accessing Exclusion List CSV" -EntryType Information
    $contents = Import-Csv $excludeservicelist
}
$TargetComputer = hostname
$api = New-Object -comObject 'MOM.ScriptAPI'
$auto_services = Get-WmiObject -Class Win32_Service -Filter "StartMode='Auto'"
foreach ($service in $auto_services)
{
    $isExcluded = 0
    $state = $service.state
    $name = $service.DisplayName
    If ($Contents){
        $contents | ForEach-Object{
            $ExcludeServiceDisplayName = $_.ServiceToExclude
            $ExcludeComputerName = $_.ComputersToExclude
            if (($name -match $ExcludeServiceDisplayName) -and (($TargetComputer -match $ExcludeComputerName) -or ($ExcludeComputerName -match "ALL_COMPUTERS"))){
                $isExcluded = 1
                write-eventlog -logname "Operations Manager" -Source "Health Service Script" -EventID 777 -Message "WindowsAutomaticServiceMonitoring.ps1 - Excluded Service Name - $ExcludeServiceDisplayName, Excluded Computer Name - $ExcludeComputerName" -EntryType Information
            }
        }
    }
    if (($isExcluded -eq 0) -and ($state -eq "Stopped")){
        write-eventlog -logname "Operations Manager" -Source "Health Service Script" -EventID 778 -Message "WindowsAutomaticServiceMonitoring.ps1 - Windows Service set to Automatic but Not Running - $name" -EntryType Information
        $bag = $api.CreatePropertyBag()
        $bag.AddValue("ServiceName", $name)
        $bag.AddValue("Status", $state)
        $bag
    }
}
]]></ScriptBody>
              <Parameters>
                <Parameter>
                  <Name>ExcludeServiceList</Name>
                  <Value>$Config/ExcludeServiceList$</Value>
                </Parameter>
              </Parameters>
              <TimeoutSeconds>300</TimeoutSeconds>
            </ProbeAction>
            <ConditionDetection ID="Consolidator" TypeID="System!System.ConsolidatorCondition">
              <Consolidator>
                <ConsolidationProperties>
                  <PropertyXPathQuery>$Target/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$</PropertyXPathQuery>
                  <PropertyXPathQuery>Property[@Name='ServiceName']</PropertyXPathQuery>
                </ConsolidationProperties>
                <TimeControl>
                  <WithinTimeSchedule>
                    <Interval>$Config/ConsolidationInterval$</Interval>
                  </WithinTimeSchedule>
                </TimeControl>
                <CountingCondition>
                  <Count>$Config/Count$</Count>
                  <CountMode>OnNewItemTestOutputRestart_OnTimerSlideByOne</CountMode>
                </CountingCondition>
              </Consolidator>
            </ConditionDetection>
          </MemberModules>
          <Composition>
            <Node ID="Consolidator">
              <Node ID="Probe">
                <Node ID="Trigger" />
              </Node>
            </Node>
          </Composition>
        </Composite>
      </ModuleImplementation>
      <OutputType>System!System.ConsolidatorData</OutputType>
    </DataSourceModuleType>
  </ModuleTypes>
</TypeDefinitions>

Step 4:

Next we will create a rule using the data source. The configuration below needs to be customized to your needs.

ExcludeServiceList – the UNC path of the excluded services list file (in CSV format). A sample CSV is provided below.

The CSV has two headers. “ServiceToExclude” is the display name of the service.

“ComputersToExclude” is the NetBIOS name of the computer. Two or more computers can be specified as individual entries or using regular expression syntax. To exclude a service on all computers, the value should be “ALL_Computers”.

ServiceToExclude,ComputersToExclude
Distributed Transaction Coordinator,SCOM2012R2
Windows Audio,Win2k12-DC
Remote Registry,ALL_Computers
Software Protection,SCOM2012R2|Win2k12-DC

IntervalSeconds – the polling interval in seconds.

Count – the number of consecutive polls in which the service must be found stopped before alerting (minimum 2).

ConsolidationInterval – the interval within which the service status must fail ‘n’ times to generate an alert. (Minimum value = (n-1) * IntervalSeconds, where n = Count.)

<Monitoring>
  <Rules>
    <Rule ID="GKLab.Windows.AutomaticService.Monitoring.Rule" Enabled="true" Target="Windows!Microsoft.Windows.Computer" ConfirmDelivery="true" Remotable="true" Priority="Normal" DiscardLevel="100">
      <Category>Alert</Category>
      <DataSources>
        <DataSource ID="DS" TypeID="GKLab.Windows.Auto.Service.Monitoring.DataSource">
          <ExcludeServiceList>\\SCOM2012R2\Configs\WindowsAutomaticServiceMonitoringExclusionList.csv</ExcludeServiceList>
          <IntervalSeconds>300</IntervalSeconds>
          <ConsolidationInterval>600</ConsolidationInterval>
          <Count>2</Count>
        </DataSource>
      </DataSources>
      <WriteActions>
        <WriteAction ID="Alert" TypeID="Health!System.Health.GenerateAlert">
          <Priority>1</Priority>
          <Severity>2</Severity>
          <AlertMessageId>$MPElement[Name="GKLab.Windows.AutomaticService.Monitoring.Rule.AlertMessage"]$</AlertMessageId>
          <AlertParameters>
            <AlertParameter1>$Data/Context/DataItem/Property[@Name='ServiceName']$</AlertParameter1>
          </AlertParameters>
          <Suppression>
            <SuppressionValue>$Target/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$</SuppressionValue>
            <SuppressionValue>$Data/Context/DataItem/Property[@Name='ServiceName']$</SuppressionValue>
          </Suppression>
        </WriteAction>
      </WriteActions>
    </Rule>
  </Rules>
</Monitoring>

Step 5:

The final step is to construct the XML for the presentation and language packs. Ensure you close the <ManagementPack> tag.

<Presentation>
  <StringResources>
    <StringResource ID="GKLab.Windows.AutomaticService.Monitoring.Rule.AlertMessage" />
  </StringResources>
</Presentation>
<LanguagePacks>
  <LanguagePack ID="ENU" IsDefault="true">
    <DisplayStrings>
      <DisplayString ElementID="GKLab.Windows.Automatic.Service.Monitoring">
        <Name>GKLab Windows Automatic Service Monitoring</Name>
        <Description>GKLab Windows Automatic Service Monitoring Management Pack</Description>
      </DisplayString>
      <DisplayString ElementID="GKLab.Windows.Auto.Service.Monitoring.DataSource">
        <Name>GKLab Windows Automatic Service Monitoring Data Source</Name>
        <Description>GKLab Windows Automatic Service Monitoring Data Source</Description>
      </DisplayString>
      <DisplayString ElementID="GKLab.Windows.AutomaticService.Monitoring.Rule">
        <Name>Windows Automatic Services Monitoring Rule</Name>
        <Description>Windows Automatic Services Monitoring Rule</Description>
      </DisplayString>
      <DisplayString ElementID="GKLab.Windows.AutomaticService.Monitoring.Rule" SubElementID="Alert">
        <Name>Alert</Name>
      </DisplayString>
      <DisplayString ElementID="GKLab.Windows.AutomaticService.Monitoring.Rule" SubElementID="DS">
        <Name>GKLab Windows Automatic Service Monitoring Data Source</Name>
      </DisplayString>
      <DisplayString ElementID="GKLab.Windows.AutomaticService.Monitoring.Rule.AlertMessage">
        <Name>Windows Automatic Services Monitoring Alert</Name>
        <Description>Windows Service {0} is set to auto-start but is currently not running.</Description>
      </DisplayString>
    </DisplayStrings>
  </LanguagePack>
</LanguagePacks>
</ManagementPack>

Step 6:

Deploy the MP in lab and check for alerts.

image

 

I have attached a copy of the XML, which you can import into any authoring tool. Customize it as per your needs and have fun.

Happy SCOMing…

Installing MIM CM 2016 for Multiple Forests–Part 1


Howdy folks. MIM 2016 went GA some time ago, and one of the new features of the Certificate Management component is support for cross-forest issuance of certificates\smart cards. Though most enterprises comprise a single forest, in times of mergers and acquisitions many enterprises consist of multiple forests in an account\resource forest configuration with a trust, or even multiple forests in one enterprise.

Today I will walk you through the requirements and additional configuration to enable cross-forest issuance of certificates\smart cards between two forests in a lab environment. This blog assumes that an environment consisting of two forests with a two-way trust has already been set up. The resource forest has a certificate authority, a SQL server and a MIM CM server.

Servers: certificate authority, SQL and MIM CM server in the resource forest.

Step 1 – Schema extension in both forests.

Execute the file below on the schema master of the resource forest.

C:\MIM\Certificate Management\x64\Schema\resourceForestModifySchema.vbs

Execute the file below on the schema master of the account forest.

C:\MIM\Certificate Management\x64\Schema\userForestModifySchema.vbs

A schema change is typically a one-way operation and requires a forest recovery to roll back, so make sure you have the necessary backups.
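
As a rough illustration (the paths are the ones from the media layout above), the schema scripts can be run from an elevated prompt with cscript:

cscript.exe "C:\MIM\Certificate Management\x64\Schema\resourceForestModifySchema.vbs"
cscript.exe "C:\MIM\Certificate Management\x64\Schema\userForestModifySchema.vbs"

Run the first on the resource forest schema master and the second on the account forest schema master, as described above.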

Step 2 – Prepare the certificate templates.

Prepare three certificate templates for the MIM CM agent accounts as per the guidelines in the article below.

Prepare the MIM CM Agent Certificate Templates.

Step 3 – Install MIM CM on the Certificate Authority.

Browse to \MIM\Certificate Management\x64\ and execute setup.exe. Make sure the MIM CM CA Files option is enabled while running the wizard on the certificate authority, as shown in the image below.

image

Step 4 – Install IIS on the MIM CM Portal server.

Install the Web Server (IIS) role from Server Manager.

In the Role Services section of the wizard, select the options below in addition to the options that are enabled by default when installing IIS.

a. Common HTTP Features – HTTP Redirection

b. Health and Diagnostics – Request Monitor

c. Performance – Dynamic Content Compression

d. Security – Basic Authentication, Windows Authentication

e. Application Development – .NET Extensibility 4.5, ASP, ASP.NET 4.5, ISAPI Extensions.

f. Management Tools – IIS Management Console, IIS 6 Management Compatibility (All)

Step 5 – Install CM component on the MIM CM server.

Browse to \MIM\Certificate Management\x64\ and execute setup.exe. Make sure the MIM CM Portal option is enabled while running the wizard on the server, as shown in the image below.

image

Below is the virtual folder for your MIM CM portal. You can add a custom name if you’d like. Make sure you have the same name if installing multiple MIM CM portal servers.

image

Step 6 – Configure MIM CM.

Click Start and you will see the Certificate Management configuration wizard under newly installed applications. Execute it as an administrator. When running the configuration wizard, make sure you run it with an account that has permissions to write to the configuration and domain partitions of the resource forest. An enterprise admin is recommended.

image

You can use multiple CAs to issue certificates using MIM CM. Select one CA which will be the first CA and you can add the rest later.

image

Enter the name of the SQL server and credentials that have rights to create the database.

image

Select the database name. You can use the default name or a friendly name. Again, make sure you are using the same name for the database if installing multiple MIM CM servers.

image

Since we have a two-way trust, we will see the trusted forest. Once we click the checkbox next to the forest name, it shows green as below. It will fail if there are issues with the trust or DNS, or if the schema is not extended. You can also change the Service Connection Point name to reflect a common name if you have two servers, by clicking Change and setting the common name.

image

Select Windows Integrated Authentication.

image

Select the agent accounts to be used. You can create custom accounts and add them here by unchecking ‘Use the FIM CM default settings’ and clicking on custom accounts, or you can let the MIM CM configuration wizard create the accounts automatically. If installing multiple MIM CM servers, we recommend creating the accounts beforehand.

image

Select the corresponding templates created in step 2.

image

Specify the name of SMTP server you want to use for email registration.

image

Click the configure button to start the configuration.

image

It will give a popup about requiring SSL. This can be done later by binding a certificate in IIS.

image

Click on the finish button to complete the configuration.

image

Once the above steps are complete, we need to perform the post-installation tasks, as was done for FIM CM. Refer to the article below to complete the post-installation tasks for MIM CM.

Post-installation tasks

Your MIM CM server is configured for cross forest enrollment but we still have some more configuration to do on the Certificate Authority and Active Directory before we can issue the certificates\smart cards across the forest. That will be part 2 of this blog.

Lishweth KM

Import Database schema in Azure SQL DB from .SQL files programmatically with SQLCMD


Introduction – This blog post illustrates a method to import your database schema and tables into an empty Azure SQL DB (PaaS) programmatically. Currently, Azure SQL DB supports import from a BACPAC file via PowerShell and the GUI, but not from .SQL files.

Assumptions – Here we assume that you already have .SQL files generated from the on-premises database and ready to upload to Azure SQL DB.

Problem statement – I had a requirement to import the schema and tables into an empty Azure SQL DB from .SQL files. Currently Azure only provides import of BACPAC files out of the box from PowerShell, the GUI and SQL Management Studio, but the requirement here was to do it programmatically every time the ARM deployment script creates a new Azure SQL DB.

 

Resolution/Workaround – Follow the steps below.

1. Install SQLCMD on the VM/desktop from where you are running the script or deployment. The SQLCMD utility is used to deploy the SQL files into Azure SQL DB. The ODBC driver is required for installing SQLCMD.

ODBC driver: http://www.microsoft.com/en-in/download/details.aspx?id=36434

SQLCMD: http://www.microsoft.com/en-us/download/details.aspx?id=36433

2. Save all the SQL files into a folder on the local VM.

3. Get the public IP of your local VM/desktop using the code below.

$IP = Invoke-WebRequest checkip.dyndns.com
$IP1 = $IP.Content.Trim()
$IP2 = $IP1.Replace("<html><head><title>Current IP Check</title></head><body>Current IP Address: ","")
$FinalIP = $IP2.Replace("</body></html>","")

4. Create a new firewall rule to connect to the SQL server.

New-AzureRmSqlServerFirewallRule -FirewallRuleName $rulename -StartIpAddress $FinalIP -EndIpAddress $FinalIP -ServerName $SQLservername -ResourceGroupName $resourcegroupname

5. Save SQL server full name and sqlcmd path into a variable.

$Fullservername = $SQLservername + '.database.windows.net'
$sqlcmd = "C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\110\Tools\Binn\SQLCMD.EXE"

6. Save the SQL server credentials and the Azure SQL DB name in variables.

$username = "SQLusername"

$password = "SQLpassword"

$dbname = "databasename"

7. Run the command below for each SQL file if you want to import them sequentially.

& $sqlcmd -U $username -P $password -S $Fullservername -d $dbname -I -i "C:\SQL\file1.sql"

& $sqlcmd -U $username -P $password -S $Fullservername -d $dbname -I -i "C:\SQL\file3.sql"

& $sqlcmd -U $username -P $password -S $Fullservername -d $dbname -I -i "C:\SQL\filen.sql"
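
If there are many files, the same command can also be run in a loop; a small sketch reusing the variables above (this assumes the files sort into the order in which they should be applied):

# Run every .sql file in the folder, in name order
Get-ChildItem -Path "C:\SQL" -Filter *.sql | Sort-Object Name | ForEach-Object {
    & $sqlcmd -U $username -P $password -S $Fullservername -d $dbname -I -i $_.FullName
}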

 

NOTE – You can combine all of the code above and use it in deployment scripts along with functions and error logging.

 

Thanks folks, Hope it is useful.

happy blogging


Connecting Power BI to on Premises SSAS Tabular without Active Directory Sync (Effective username mismatch Problem)


 

One of the great new features of Power BI is its ability to connect to on-premises SSAS Tabular data models directly with the use of “Power BI AS connector”.

This connector allows you to directly query on-premises Tabular models without caching or refreshing your data; it is simply a pass-through connector between on-premises and Azure.

This is quite beneficial when the data is updated on a regular basis (near real time) or when the data is sensitive and cannot leave on-premises servers. The Power BI AS connector does not cache the data in any way; it only passes the query at the time of user login, and user permissions and security roles apply to the data on the fly with every refresh.

You can find details on how to setup PBI AS Connector here.

However, if your Power BI service account domain is not synced locally, you will receive the error message: “Power BI user name ‘xxx@xxx.xx’ cannot connect to ‘servername’ analysis Services Server due to an effective username mismatch.”

image

This error suggests that your Power BI username does not match your on-premises username. A proper solution would be to perform directory sync of your on-prem Active Directory with your Azure Active Directory; however, if all you need is a demo or development environment, or if for practical reasons you cannot set up directory sync, a nice trick is to add an “Alternative UPN Suffix” to your on-prem Active Directory that matches your Azure AD/Power BI UPN.

For example, if I create my service account under my Microsoft ID (myEmail@microsoft.com) and I log in to my SQL Analysis Services server using (myEmail@contoso.com), then the effective username error above will appear.

To help resolve this issue if you are doing development or demo work, you can create the same usernames between your Azure and on-prem AD and then add alternative UPN Suffix to your on-prem AD that matches the one from your PowerBI service account.

So the username will have to match, and the domain can be faked.

image

As mentioned earlier, for production or a proper solution, on-prem Active Directory and Azure must be federated; that is the proper way ahead. For testing, development or demos, however, we may need to use this workaround.

This workaround is possible because the Power BI Analysis Services connector does not use or store user passwords when connecting to SSAS Tabular models.

Happy Power BI everyone!

Migrating Performance Point To New SharePoint Site with different path


Introduction

When you migrate a SharePoint site to a different location with a different site structure and path, the content itself is migrated, but some components will not work as expected because the site URL structure is different.

One of these components is Performance Point content.

PerformancePoint content contains links to the PerformancePoint connections in the Connections library. Because the path has changed, these links are no longer valid. The dashboards also have reference links to other PerformancePoint content such as KPIs, reports, scorecards, etc. These reference links will not be valid either.

This causes the PerformancePoint web parts on the dashboard pages to show an error saying that the data source does not exist or that you don’t have permission.

Migration Steps

The migration procedure consists of two major steps:

1. Export the PerformancePoint content and connections to a Dashboard Designer Workspace (ddwx) file from the source environment.

2. Import the ddwx file into the destination environment.

Export Dashboard Designer Workspace (ddwx) file from Source environment

1. Launch PerformancePoint dashboard designer.

2. Click on the PerformancePoint content list.

3. Select all the items in the list (Ctrl-A)

Export Content_1

 

4. Click on “Add Items” button on the ribbon under workspace section.

Export Content_2

 

5. All Items should be Added to the workspace area as shown in the image below

Export Content_3

6. Apply the steps 2 to 5 for all the Performance Point Content Lists and Connections Lists

7. Save the workspace by clicking the Office button and then Save Workspace As.

        Export Content_4

 

Import Dashboard Designer Workspace (ddwx) file in the destination environment

In this step you will need to import the ddwx file to your destination environment.

1. Launch PerformancePoint dashboard designer.

2. Click on Import Items

Import Content_1

 

3. Map Performance Point Items to the corresponding item in your destination environment.

Import Content_2

Import Content_3

 

4. Select “Import data sources that already exists in the destination” and click Next.

Import Content_4

 

5. Wait until the import is completed. Make sure that all items are updated with no errors.

Import Content_6

Issues

I faced an issue once after finishing the migration process: some reports and KPIs still had uncorrected connection links. I figured out that there was more than one PerformancePoint report or KPI with the same name in the same content list. In that case, one of the reports with the duplicate name was updated and the other was not; the same happened with the KPIs.

In this case I had to recreate the reports and KPIs that were not updated.

 

 

Issue with passing data from PowerShell to data bus in SCO, check this out !! this might help you


Hello readers! PowerShell scripts executed within a System Center Orchestrator runbook use the built-in “Run .NET Script” activity, which by default uses PowerShell version 2.0. Often we need PowerShell scripts to be executed in version 3.0, and one way to do this is by executing PowerShell.exe from the “Run .NET Script” activity. A script can be run in a PowerShell version 3.0, 64-bit environment using C:\Windows\sysnative\WindowsPowerShell\v1.0\powershell.exe { <your code> }.

As part of a System Center Orchestrator runbook workflow, data from the “Run .NET Script” activity might need to be published to the data bus, and in that kind of scenario a PowerShell custom object has to be created inside the PowerShell version 3 block. The example below shows how this can be done using the “Run .NET Script” activity.

[sourcecode language='powershell' ]
$inputobjs1 = C:\Windows\sysnative\WindowsPowerShell\v1.0\powershell.exe {

$SvchostPID = get-process | where { $_.ProcessName -eq 'svchost'} | select -ExpandProperty id
$NotepadPID = get-process | where { $_.ProcessName -eq 'notepad'} | select -ExpandProperty id

New-Object pscustomobject -Property @{
SvchostPID_OP = $SvchostPID
NotepadPID_OP = $NotepadPID
}

}

$SvchostPID =$inputobjs1.SvchostPID_OP
$NotepadPID = $inputobjs1.NotepadPID_OP
[/sourcecode]

 

Using similar code, I was working on a script in which I was not able to retrieve the data stored in the PS custom object out of the PowerShell version 3.0 block, and after troubleshooting for hours I was able to identify the issue.

Let me explain the issue using a sample script to provide a better understanding. The script below connects to the SCVMM server and gets the number of vCPUs, the memory and the generation of a specific virtual machine.

[sourcecode language='powershell' ]
$inputobjs2 = .$env:windir\sysnative\windowspowershell\v1.0\Powershell.exe{

import-module virtualmachinemanager
Get-SCVMMServer -ComputerName scvmm2012R2
$vmvalues = get-vm testvm3 | select -Property cpucount, memory, generation

New-Object pscustomobject -Property @{
CPUCount_OP = $vmvalues.cpucount
Mem_OP = $vmvalues.Memory
Generation_OP = $vmvalues.Generation
}

}

$cpucount = $inputobjs2.CPUCount_OP
$Mem = $inputobjs2.Mem_OP
$generation = $inputobjs2.Generation_OP
[/sourcecode]

This script will not be able to pass the values to the Orchestrator data bus using the PS custom object. To make it work, the Get-SCVMMServer line has to be updated to store the SCVMM connection in a variable.

$session = Get-SCVMMServer -ComputerName scvmm2012R2

This is required because whenever we connect to the SCVMM server, another shell is invoked and the rest of the script is executed in that shell, due to which the PS custom object created cannot be retrieved to the Orchestrator data bus.

The updated script is as below:

[sourcecode language='powershell' ]
$inputobjs2 = .$env:windir\sysnative\windowspowershell\v1.0\Powershell.exe{

import-module virtualmachinemanager
$session = Get-SCVMMServer -ComputerName scvmm2012R2
$vmvalues = get-vm testvm3 | select -Property cpucount, memory, generation
$session.disconnect()

New-Object pscustomobject -Property @{
CPUCount_OP = $vmvalues.cpucount
Mem_OP = $vmvalues.Memory
Generation_OP = $vmvalues.Generation
}

}

$cpucount = $inputobjs2.CPUCount_OP
$Mem = $inputobjs2.Mem_OP
$generation = $inputobjs2.Generation_OP
[/sourcecode]

So folks, whenever you are connecting to an application through PS cmdlets that invoke their own shell and you need data to be passed back to Orchestrator, remember to save the connection to a variable.

Cheers !! :)

DHCP Pool creation in SCVMM 2012R2


 

Hello readers! I thought I would put down some notes on how to create a DHCP pool in System Center Virtual Machine Manager 2012 R2 and show you how simply this can be done.

To start with, creating a DHCP pool requires fabric administrator access in SCVMM.

The DHCP pool is created in the Fabric pane, under Networking, Logical Networks. The Logical Networks section lists all the networks defined in SCVMM, which should be designed so that they map to the actual physical network structure in the environment.

On the menu bar, select Create IP Pool. A window as shown in Figure 1 pops up; provide the name and description for the pool you want to create, and select the logical network under which the pool has to be created.

 clip_image001

Under the network site section, select the “Use an existing network site” option if the network site is already defined in SCVMM, or select the “Create a network site” option if a network site has to be defined. Select the “Create a multicast IP address pool” option when you want to use multicast or broadcast with the subnet, which is a new feature introduced in SCVMM 2012 R2 / SCVMM 2012 SP1.

 clip_image002

In the next pane, provide the range of IP addresses that are to be part of the DHCP pool by specifying the starting and ending IP addresses.

You can explicitly specify the IP addresses within the selected range that are to be reserved for load balancer VIPs in the section “IP addresses reserved for load balancer VIPs”, and any IP addresses within the range that are to be reserved or used for some other purpose can be set aside in the section “IP addresses to be reserved for other users”.

 clip_image003

In the next section, specify the gateway of the subnet, as shown in the figure below.

clip_image004

 

In the next section, provide the DNS server and DNS suffix details to be used for the subnet.

clip_image005

 

Review the settings on the summary page and click Finish.

clip_image006

A job is triggered in SCVMM to create the IP pool, and once it completes the pool will be visible under the logical network in which it was created.

clip_image007
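
If you prefer to script it, the same pool can be created with the VMM PowerShell cmdlets (where it is called a static IP address pool). A minimal sketch, with placeholder names and addresses:

# Placeholder names/addresses - adjust to your own logical network and site
$ln  = Get-SCLogicalNetwork -Name "Contoso-LN"
$def = Get-SCLogicalNetworkDefinition -LogicalNetwork $ln -Name "Contoso-Site"
$gw  = New-SCDefaultGateway -IPAddress "10.10.10.1" -Automatic
New-SCStaticIPAddressPool -Name "Contoso-Pool" -LogicalNetworkDefinition $def -Subnet "10.10.10.0/24" -IPAddressRangeStart "10.10.10.50" -IPAddressRangeEnd "10.10.10.200" -DefaultGateway $gw -DNSServer "10.10.10.5" -DNSSuffix "contoso.com"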

That’s all! It is very simple and easy to create a DHCP pool in SCVMM, and managing IPs is much more automated with SCVMM compared with a normal Windows server holding the DHCP role.

Export/Import native windows teaming configuration to xml for disaster recovery


This article describes how to export the native Windows teaming configuration to an XML file and import it back to recreate the team configuration.

Configuring NIC teaming manually is a tedious task for the administrator, who has to identify the correct network adapters, team them, create additional team interfaces and assign VLAN IDs to them. The inbuilt PowerShell commands for creating and configuring NIC teams can be used for deployments; however, when it comes to managing a dynamic environment, the deployment scripts also need to be kept up to date to capture any changes in the environment.

Also, there is currently no built-in mechanism in Windows Server to export the teaming configuration so that it can be used for a complete NIC team restore in case of a disaster, OS reinstallation, etc. This is where the scripts in this article become useful, as they can export the native Windows teaming configuration to an XML file and import it back to recreate the team configuration at any time. The scripts can be used on any Windows Server 2012 or Windows Server 2012 R2 operating system, whether it is a physical or virtual server.

Following are the details captured in the xml file for the export and import of configuration.

· Team names and number of teams

· Load balancing Algorithm for each team

· Team Members of each team

· Teaming Mode for each Team

· Standby Adapters per team

· Team interfaces per team

· VLAN ID for each team interface

The above details are exported to a structured XML file, which can later be used by ImportNICTeam.ps1 to restore the configuration.

Assumption: It is assumed that the same physical NICs are present in the server while doing the NIC team configuration export/import. The script exports the NIC names to the xml and hence the same NIC names should be present in the server while doing an import if the environment has customized NIC names. If the NIC teams or interfaces in the environment have special characters that interfere with XML tags, then the script needs to be modified with appropriate escape characters. Currently the script handles only whitespaces in the xml values.

1. Export the Team configuration

Now let's have a high-level overview of ExportNICTeam.ps1. The following steps are commented at the appropriate places in the PowerShell script to make it easy to follow.

Step1 – Specify the output xml file name to include the computername and get the number of teams.

Step2 – Create the xml Tags along with the Team Count information to be written to the xml file:

Based on the number of teams available in the system, the following step will be performed in a loop (Loop1)

Step3 – Get the details of each team into variables and append them to the xml file

Based on the number of VLAN interfaces available in each team, the following step will be performed in a loop (Loop2)

Step4 – List down the details of the VLAN interfaces for the team and append them to the xml file

Step5 – Close the XML tags to mark the end of file.
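
As a rough illustration of the export idea (a sketch, not the attached ExportNICTeam.ps1), the key cmdlets are Get-NetLbfoTeam, Get-NetLbfoTeamMember and Get-NetLbfoTeamNic; the collected objects can then be serialized, for example with Export-Clixml:

# Sketch: capture each team, its members and its team NICs, then serialize to XML
$computer = $env:COMPUTERNAME
$config = foreach ($team in Get-NetLbfoTeam) {
    [pscustomobject]@{
        Name                   = $team.Name
        TeamingMode            = $team.TeamingMode
        LoadBalancingAlgorithm = $team.LoadBalancingAlgorithm
        Members                = Get-NetLbfoTeamMember -Team $team.Name | Select-Object Name, AdministrativeMode
        TeamNics               = Get-NetLbfoTeamNic -Team $team.Name | Select-Object Name, VlanID, Primary
    }
}
$config | Export-Clixml -Path "C:\test\$computer-NicTeamConfig.xml"

The attached script builds the XML structure itself, but the information captured is the same.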

2. Import the Team configuration

The import script reads the values from the xml file and uses them as parameters for the PowerShell cmdlets to create team and modify the team properties.

Now let’s have a high level overview of the ImportNICTeam.ps1 script

Step1 – Specify the input xml file and load it to a variable

Step2 – Get the number of teams from the xml

Based on the number of teams available in the xml, the following step will be performed in a loop (Loop1)

Step3 – Collect all the attributes for creating a new team from the xml and identify the primary interface and its VLAN ID

Step4 – Create the new team with the primary interface details

Step5 – Set the standby NICs for the new team if present in the xml

Based on the number of VLAN interfaces available in each team, the following step will be performed in a loop (Loop2)

Step6 – Create VLAN interfaces for the non-primary Team Interfaces

The above steps 3 -6 will be repeated based on the number of teams in the configuration file.
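
Again as a rough sketch (not the attached ImportNICTeam.ps1), restoring from the serialized configuration above would look something like this:

# Sketch: recreate each team, its standby members and its non-primary VLAN interfaces
$config = Import-Clixml -Path "C:\test\$env:COMPUTERNAME-NicTeamConfig.xml"
foreach ($team in $config) {
    # Recreate the team with its original teaming mode and load-balancing algorithm
    New-NetLbfoTeam -Name $team.Name -TeamMembers ($team.Members | Select-Object -ExpandProperty Name) -TeamingMode $team.TeamingMode -LoadBalancingAlgorithm $team.LoadBalancingAlgorithm -Confirm:$false

    # Restore standby members
    $team.Members | Where-Object { $_.AdministrativeMode -eq 'Standby' } | ForEach-Object {
        Set-NetLbfoTeamMember -Name $_.Name -Team $team.Name -AdministrativeMode Standby -Confirm:$false
    }

    # Recreate the non-primary team interfaces with their VLAN IDs
    $team.TeamNics | Where-Object { -not $_.Primary } | ForEach-Object {
        Add-NetLbfoTeamNic -Team $team.Name -VlanID $_.VlanID -Confirm:$false
    }
}

Handling of the primary interface's VLAN ID (step 3 of the import overview) is left out of this sketch.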

The complete ExportNICTeam.ps1 and ImportNICTeam.ps1 scripts are shared here.

Additional tip: The export script can be modified to point the output xml file to a shared location instead of the default “C:\test\”. The script can then be executed remotely on multiple machines to collect the teaming details of all the servers in the environment in a single location, which can be used later for recovery in case of a disaster.

Enhancement: The script can be further enhanced to include additional information, such as the TCP/IP configuration of the team interfaces.

Using Powershell to integrate SCOM Management Group to Operations Management Suite


Environment: SCOM 2012 R2 with Update Rollup 11

 

Register SCOM Management Group:

On-premises SCOM management groups can be integrated with Operations Management Suite (OMS) from the SCOM console under the Administration tab. Here, I am demonstrating a way to integrate SCOM with OMS via PowerShell cmdlets, using the cmdlet “Register-SCAdvisor“.

This cmdlet needs a certificate .pfx file from the OMS workspace (not the certificate from the Azure portal). To obtain this certificate, append “DownloadCertificate.ashx” to your OMS workspace URL root, so the new URL will look like:

https://<workspacename>.portal.mms.microsoft.com/DownloadCertificate.ashx

Eg: in case of the workspace I am using for this demo, it will be: https://deepuomsdemo.portal.mms.microsoft.com/DownloadCertificate.ashx

Once you enter this URL in a browser after logging in to your OMS portal, you will be prompted to download the .pfx file. Save it to a folder on your SCOM management server. Before proceeding with the integration, make sure that the necessary management packs for the OMS integration are imported into the management server. For Update Rollup 11 (UR11), the following management packs (version 7.1.10226.1239) under the location “C:\Program Files\Microsoft System Center 2012 R2\Operations Manager\Server\Management Packs for Update Rollups” need to be imported:

  • Microsoft.SystemCenter.Advisor.Internal.mpb
  • Microsoft.SystemCenter.Advisor.mpb
  • Microsoft.SystemCenter.Advisor.Resources.ENU.mpb

You can use the Import-SCOMManagementPack cmdlet to import the above management packs, as shown below. To see the changes brought in by these management packs, close and reopen the SCOM console.
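
A minimal sketch, assuming the Operations Manager Shell is open on the management server and the MPB files sit in the UR11 location quoted above:

# Import the three Advisor/OMS management packs shipped with UR11
$mpPath = 'C:\Program Files\Microsoft System Center 2012 R2\Operations Manager\Server\Management Packs for Update Rollups'
Import-SCOMManagementPack -Fullname "$mpPath\Microsoft.SystemCenter.Advisor.Internal.mpb"
Import-SCOMManagementPack -Fullname "$mpPath\Microsoft.SystemCenter.Advisor.mpb"
Import-SCOMManagementPack -Fullname "$mpPath\Microsoft.SystemCenter.Advisor.Resources.ENU.mpb"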


In most cases, the SCOM server will communicate with the internet/OMS via a proxy. In that case, we can configure the proxy settings in the SCOM console under Administration -> Operations Management Suite -> Connection -> Configure Proxy Server, or via PowerShell with the Set-SCAdvisorProxy cmdlet as shown below. For proxy servers that need authentication, the “System Center Advisor Run As Profile Proxy” profile needs to be configured with a Run As account.
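
A minimal sketch, assuming the -ProxyUrl parameter name and a placeholder proxy address; verify both against your environment:

# Point the management group's OMS/Advisor connection at the web proxy
Set-SCAdvisorProxy -ProxyUrl "http://proxy.contoso.com:8080"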

 


Now open the Operations Manager Shell and run the following command:

Register-SCAdvisor -CertificatePath <path of pfx file>

The above command will return “True” if the registration with OMS is successful. Eg:


Note that the SCOM UR11 adds support to register the Operations Manager Management group to OMS workspaces in regions other than Eastern US by using an additional optional parameter (-SettingServiceUrl), which is the URL for setting the service in the region of the workspace.

Tip: If SettingServiceUrl is not specified, the workspace is assumed to be in the Eastern US. You will get the following error message if the workspace is in a region other than Eastern US and the URL is not explicitly specified in the command.
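
A hedged example; the certificate path is a placeholder, and the -SettingServiceUrl value must be the setting service URL for your workspace's region:

# Register a management group against a workspace outside Eastern US (UR11 or later)
Register-SCAdvisor -CertificatePath "C:\OMS\MyWorkspace.pfx" -SettingServiceUrl "<setting service URL for your region>"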


Now the SCOM Management Group is registered with the OMS Portal. However, OMS will not collect the monitoring data from the SCOM agents unless they are added as Managed Computers in the Administration Tab of SCOM Console under the Operations Management Suite container. We can add the agents to the OMS via the PowerShell cmdlet “Add-AdvisorAgent”. Eg:
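
A hedged sketch; the exact cmdlet and parameter names vary slightly across update rollups, so list what your Operations Manager Shell exposes before running it (the FQDN below is a placeholder):

# Discover the OMS/Advisor agent cmdlets available in this shell
Get-Command -Noun *AdvisorAgent*
# Add an agent-managed computer to OMS (assumption: -Name takes the agent's FQDN)
Add-AdvisorAgent -Name "server01.contoso.com"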


Once the agents are added, they will start syncing with OMS, and we can then verify the integration.

 

Verification:

We can verify the integration after a couple of hours by the following methods.

  1. Via PowerShell: Run the cmdlet Get-SCAdvisorAgent to get the list of servers monitored by SCOM that are, in turn, syncing data with the OMS portal (a minimal count check is sketched after this list). E.g.:


     

  2. Via OMS Portal: Open the OMS workspace and go to Settings -> Connected Sources -> System Center. You will see the SCOM management group listed as below, with the name of the management group, the number of SCOM agents connected to OMS, and the last data update time.


     

  3. Via SCOM Console: Navigate to Monitoring -> Operations Management Suite -> Health State. Here we can see the health status of the SCOM management servers connected to the OMS portal. Eg:
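
Referring to method 1 above, a minimal count check (hedged: the properties returned by the cmdlet vary by update rollup, so only a simple count is shown):

# Count the agent-managed computers currently syncing with OMS
Get-SCAdvisorAgent | Measure-Object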


 

Unregister SCOM from OMS.

To unregister a SCOM Management Group from OMS, please refer to this excellent blog by Kevin:

https://blogs.technet.microsoft.com/kevinholman/2016/03/26/how-to-remove-oms-and-advisor-management-packs/

 

Cheers.

Decommission of Exchange Server 2010


To decommission Exchange Server 2010, follow the procedure below. The account you use must be delegated membership in the Exchange Full Administrator role on the Exchange servers.

  1. Move all legacy Exchange 2010 mailboxes to newly deployed Exchange server 2013/2016 in the organization.
  2. Move all content from the public folder database on the Exchange 2010 server to a public folder database on an Exchange 2013/2016 server in the organization.
  3. Remove all replicas of PFs on the 2010 side using the 2010 management tools so that all PFs in the 2010 hierarchy have only one replica. This should be doable even though you migrated the PFs to 2013/2016.
  4. Remove the public folder mailbox and stores on the Exchange 2010 server
  5. On Exchange 2010 servers, for each offline address book (OAB), move the generation process to an Exchange 2013/2016 server. Ensure 2013/2016 is the one generating/serving OABs for users.
  6. Remove all added DB copies of mailbox DBs so each DB has a single copy in Exchange Server 2010.
  7. Remove all nodes from any existing Exchange Server 2010 Database Availability Group
  8. Delete the Exchange Server 2010 Database Availability Group
  9. Optional: Set the RpcClientAccessServer value of all 2010 DBs to the FQDN of their server
  10. Optional: Remove the CAS Array Object(s)
  11. Check the SMTP logs to see if any outside systems are still sending SMTP traffic to the servers via hard coded names.
  12. Start removing the mailbox databases, first ensuring that no arbitration mailboxes still exist on the Exchange 2010 servers (a hedged check is sketched after this list).
  13. Verify that Internet mail flow is configured to route through your Exchange 2013/2016 transport servers
  14. Verify that all inbound protocol services (Microsoft Exchange ActiveSync, Microsoft Office Outlook Web App, Outlook Anywhere, POP3, IMAP4, Auto discover service, and any other Exchange Web service) are configured for Exchange 2013/2016.
  15. Start uninstalling Exchange Server 2010 and reboot the server.
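
As an illustration of steps 1 and 12 above, here is a minimal, hedged sketch from the Exchange Management Shell; all server and database names are placeholders:

# Step 1 - move remaining Exchange 2010 mailboxes to an Exchange 2013/2016 database
Get-Mailbox -Server "EX2010-MBX01" -ResultSize Unlimited | New-MoveRequest -TargetDatabase "EX2016-DB01"

# Step 12 - before removing a 2010 mailbox database, confirm no arbitration mailboxes remain on it
Get-Mailbox -Database "EX2010-DB01" -Arbitration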

 


AAD Connect: The Three Forests Story.


I am writing about a grey area that I recently encountered while working on an AAD Connect project for a customer. The customer had one primary forest (Forest A) and a secondary forest (Forest B), which essentially had the same users represented twice (both in an enabled state). In addition, a third forest (Forest C) is an extranet users forest with a disparate set of users. A quick look at the AAD Connect supported topologies shows that this is a supported topology (https://azure.microsoft.com/en-us/documentation/articles/active-directory-aadconnect-topologies/).

My topology would look something like this. 

[Image: MultiForestSingleDirectory topology diagram]

Some users would match (between Forests A and B) while some won't (Forest C).

[Image: MultiForestFullMesh topology diagram]

I ended up choosing the following settings for matching users, Source Anchor and UPN.

[Screenshot: user matching, source anchor, and UPN settings]

Now, if both user objects (matched by UPN) are enabled in Forests A and B, by default AAD Connect can project from either of the objects and join the leftover one. This wasn't acceptable because ADFS is based on Forest A; hence, AAD user objects must have the objectGUID value from Forest A in sourceAnchor.

The solution to this problem is to explicitly allow only Forest A to project and allow Forest B only to join. This can be done by tweaking the default AAD Connect configuration as follows:

1. Open the Synchronization Rules Editor:

[Screenshot: Synchronization Rules Editor]

2. Open the User Join Inbound Sync Rule for ForestB.com and edit it.

[Screenshot: editing the User Join inbound sync rule for ForestB.com]

Note: Normally we create a custom rule and should avoid editing the OOTB rules. However, in this case we must edit the default rule.

3. Change the link type to Join instead of Provision.

[Screenshot: link type set to Join]

Now Forest B objects will only join to Forest A objects that have already been projected. Voila, we have everything working as expected. No other changes are needed to the Forest A or Forest B join rules.

Docker – Fails to create container on Windows 10 – Error response from daemon container..encountered an error during start


I thought I would share the findings that came up while fixing the issue below. Maybe it can help someone working with Docker on Windows 10.

Issue: Unable to create a Docker container on Windows 10 Version 1607

Error response from daemon container..encountered an error during start

Workaround: docker run -it --rm --net=none microsoft/nanoserver cmd

 Finding & Cause:

Finding & cause: Gather a network trace using the command netsh trace start globallevel=7 provider=Microsoft-Windows-Host-Network-Service report=di. On viewing the logs, we found the message “HNS failed to create vmswitch port with error ‘0x80070003’, switch id = ‘c502a850-2f21-4d55-9879-14cc66f69193’, port id = ‘e2e3b5ba-1de9-4650-a0e0-50276c0e2cb8’ and type = ‘Value_3’”.

We checked the VM switches and found that the NAT switch was missing (in this lab, VM switches are routinely deleted and re-created based on the Hyper-V VMs' requirements):

Get-VMSwitch

We then checked the container networks and found that the NAT network was listed second:

Get-ContainerNetwork

Solution: Follow the steps below to get rid of the error.

Get-ContainerNetwork | Remove-ContainerNetwork -Force

Restart-Service hns

Restart-Service docker

Get-ContainerNetwork

Get-VMSwitch

Get-NetNat

Finally, we created the container again and it worked successfully.

Lesson learnt: Whenever you change the VM switches in Hyper-V, it can also impact Docker.

Configuring SQL Reporting Services for Remote or different SCOM DW Database


This article describes how to configure SQL Reporting Services to generate reports from a remote SCOM Data Warehouse database (DW DB). It will also help when reconfiguring Reporting Services after a change in the SCOM DW DB, port, or server name.

In a simple, single-server test SCOM installation there is no need to configure these settings, since all the databases and the reporting services reside on the same server. However, in production environments, where the Reporting Services and the Data Warehouse database are deployed separately, we need to configure the reporting services to use SCOM's Data Warehouse database. This database may reside on a remote SQL failover cluster or be part of an AlwaysOn Availability Group.

The below configuration is done on an environment where the Reporting services is installed on one of the SCOM Management Servers and the Data Warehouse DB resides on a remote SQL AlwaysOn Availability Group.

Steps:

Access the SQL Server Reporting Services Web Page and click on Details View.

Now, click on Data Warehouse Main and select “Manage” from the drop-down menu.

Under Data Source Type, select “Microsoft SQL Server”.

Under Connection String, provide the connection string details in the following format:

data source=<DBINSTANCE or Listener Details>;initial catalog=<DW DB Name>;Integrated Security=SSPI

e.g.:

data source=SCOMListener01,14533;initial catalog=OperationsManagerDW;Integrated Security=SSPI

(where SCOMListener01 is the SQL AlwaysOn Listener listening on port 14533 and OperationsManagerDW is the Data Warehouse Database part of the Availability Group)

Select “Windows Integrated Security” under “Connect Using” options:

Click on “Test Connection” to make sure that the connection is successful.

Click on Apply.

 

This will make sure that the Reporting Services is connected correctly to DW Database.

Also, when the SCOM database or server changes, or the listening ports change for security or maintenance reasons, the above steps can be repeated with the new details in the connection string.

Difference between Azure Service Manager and Azure Resource Manager


A side-by-side comparison of the two models:

ASM: The old portal, which provides cloud services for IaaS workloads and a few specific PaaS workloads.
ARM: The new portal, which provides services for all IaaS and PaaS workloads.

ASM: Accessed at https://manage.windowsazure.com, termed the V1 portal.
ARM: Accessed at https://portal.azure.com, termed the V2 portal, with a blade-style portal design.

ASM: Azure Service Manager is an XML-driven REST API.
ARM: Azure Resource Manager is a JSON-driven REST API.

ASM: Had a concept of Affinity Groups, which has been deprecated.
ARM: Has a container concept called the Resource Group, a logical set of correlated cloud resources that can span multiple regions and services.

ASM: A private Azure portal can be built using Windows Azure Pack.
ARM: A private Azure portal can be built using Azure Stack.

ASM: Removal or deletion is not as easy as in Azure Resource Manager.
ARM: Removal of resources is easier; deleting a resource group (RSG) deletes all the resources it contains.

ASM: Deployment can be performed using PowerShell scripts.
ARM: Deployment can be performed using ARM templates, which provide simple orchestration and rollback and have their own PowerShell module.

ASM: Feature not available.
ARM: Role Based Access Control (RBAC) is present.

ASM: Feature not available.
ARM: Resources can be moved between resource groups within the same region.

ASM: Feature not available.
ARM: Resource tagging – name/value pairs assigned to resources and resource groups, with up to 15 tags per resource.

ASM: Feature not available.
ARM: Massive, parallel deployment of VMs is possible with asynchronous operations.

ASM: Feature not available.
ARM: Custom policies can be created to restrict the operations that can be performed.

ASM: Feature not available.
ARM: Azure Resource Explorer (https://resources.azure.com/), which helps in understanding resources and with deployment.

ASM: Feature not available.
ARM: Resource Locks provide a policy to enforce a lock level that prevents accidental deletion.

For more details, refer to: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-deployment-model
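
As a hedged illustration of the deployment rows in the comparison above, the classic (ASM) experience is imperative PowerShell against the V1 endpoints, while ARM pairs a resource group with a declarative template deployment. The cmdlets are from the classic Azure and AzureRM modules of that era; all names, files, and credentials are placeholders.

# Classic (ASM) deployment - imperative cmdlets
Add-AzureAccount
New-AzureQuickVM -Windows -ServiceName "demo-svc" -Name "demo-vm" -ImageName "<classic image name>" -AdminUsername "azureadmin" -Password "<password>" -Location "East US"

# ARM deployment - resource group plus template, with orchestration handled by ARM
Login-AzureRmAccount
New-AzureRmResourceGroup -Name "demo-rg" -Location "East US"
New-AzureRmResourceGroupDeployment -ResourceGroupName "demo-rg" -TemplateFile ".\azuredeploy.json"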

Windows Automatic Services Monitoring using SCOM


Monitoring services on Windows computers is available out of the box in SCOM through the service monitoring template. But in a large enterprise with thousands of Windows computers and hundreds of applications, it is difficult to list all the services that need to be monitored on each computer and create monitoring with the template. Monitoring, on average, 30 services across 1,000 computers would add 30,000 instances to the SCOM DB. This would create numerous classes and discoveries and bloat the instance space, making SCOM less responsive.

Also, we cannot create a monitor for each service and target it across all computers, as each service may be present on some computers and not on others. Targeting indiscriminately would result in false alarms, and again we would need 30+ Windows service monitors targeted at all Windows computers, creating overhead on the agents and thus on the computers running them.

So, what is the solution?

The optimal solution is to create a single rule that monitors all automatic services on each computer and alerts on those that are not running. This can be accomplished using a PowerShell script with property bag output.

The rule runs on each computer at a specific time interval and creates a property bag for each service that is set to Automatic but not running; an alert is generated for each property bag.

One catch in this monitoring scenario is that we should not alert on services that are stopped only for a moment. To overcome this, we will use a consolidator condition: an alert is raised only if the service is found stopped for ‘n’ consecutive samples.

This solution, though optimal, poses another challenge: what if we do not want to monitor a service that is set to Automatic on one or a few computers?

This can be handled using a centrally located file with the details of the services and the computers to be excluded from monitoring.

We will see how to construct the management pack XML to accomplish this. You can also create the MP using Visual Studio, MP Studio, or the Authoring Console.

Step 1:

Add references to the Management pack.

<ManagementPack ContentReadable="true" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <Manifest>
    <Identity>
      <ID>GKLab.Windows.Automatic.Service.Monitoring</ID>
      <Version>1.0.0.0</Version>
    </Identity>
    <Name>GKLab Windows Automatic Service Monitoring</Name>
    <References>
      <Reference Alias="SC">
        <ID>Microsoft.SystemCenter.Library</ID>
        <Version>6.1.7221.0</Version>
        <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
      </Reference>
      <Reference Alias="Windows">
        <ID>Microsoft.Windows.Library</ID>
        <Version>6.1.7221.0</Version>
        <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
      </Reference>
      <Reference Alias="Health">
        <ID>System.Health.Library</ID>
        <Version>6.1.7221.0</Version>
        <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
      </Reference>
      <Reference Alias="System">
        <ID>System.Library</ID>
        <Version>6.1.7221.0</Version>
        <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
      </Reference>
      <Reference Alias="Performance">
        <ID>System.Performance.Library</ID>
        <Version>6.1.7221.0</Version>
        <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
      </Reference>
    </References>
  </Manifest>

Step 2:

Now create a PowerShell property bag probe script. The script fetches the list of all services that are set to start automatically and checks their current status. For each service that is set to Automatic but not running, a property bag is created.

To exclude some services from being monitored, a centrally located CSV file is used, and the path of the file is passed as a parameter to the script. The script reads the list of services to be excluded from the CSV file and compares it with the list of services on the target computer. Property bags are not created for excluded services.

param (
    [string] $excludeservicelist
)
if (test-path $excludeservicelist) {
    write-eventlog -logname "Operations Manager" -Source "Health Service Script" -EventID 776 -Message "WindowsAutomaticServiceMonitoring.ps1 - Accessing Exclusion List CSV" -EntryType Information
    $contents = Import-Csv $excludeservicelist
}
$TargetComputer = hostname
$api = New-Object -comObject 'MOM.ScriptAPI'
$auto_services = Get-WmiObject -Class Win32_Service -Filter "StartMode='Auto'"
foreach ($service in $auto_services)
{
    $isExcluded = 0
    $state = $service.state
    $name = $service.DisplayName
    If ($Contents) {
        $contents | ForEach-Object {
            $ExcludeServiceDisplayName = $_.ServiceToExclude
            $ExcludeComputerName = $_.ComputersToExclude
            if (($name -match $ExcludeServiceDisplayName) -and (($TargetComputer -match $ExcludeComputerName) -or ($ExcludeComputerName -match "ALL_COMPUTERS"))) {
                $isExcluded = 1
                #write-eventlog -logname "Operations Manager" -Source "Health Service Script" -EventID 777 -Message "WindowsAutomaticServiceMonitoring.ps1 - Excluded Service Name - $ExcludeServiceDisplayName, Excluded Computer Name - $ExcludeComputerName" -EntryType Information
            }
        }
    }
    if (($isExcluded -eq 0) -and ($state -eq "Stopped")) {
        #write-eventlog -logname "Operations Manager" -Source "Health Service Script" -EventID 778 -Message "WindowsAutomaticServiceMonitoring.ps1 - Windows Service set to Automatic but Not Running - $name" -EntryType Information
        $bag = $api.CreatePropertyBag()
        $bag.AddValue("ServiceName", $name)
        $bag.AddValue("Status", $state)
        $bag
    }
}

Step 3:

Create a data source module incorporating the PowerShell script written above. We will use the consolidator condition discussed in the solution section to alert only on valid service failures.

<TypeDefinitions>
  <ModuleTypes>
    <DataSourceModuleType ID="GKLab.Windows.Auto.Service.Monitoring.DataSource" Accessibility="Internal" Batching="false">
      <Configuration>
        <xsd:element minOccurs="1" name="ExcludeServiceList" type="xsd:string" />
        <xsd:element minOccurs="1" name="IntervalSeconds" type="xsd:integer" />
        <xsd:element minOccurs="1" name="ConsolidationInterval" type="xsd:integer" />
        <xsd:element minOccurs="1" name="Count" type="xsd:integer" />
      </Configuration>
      <OverrideableParameters>
        <OverrideableParameter ID="IntervalSeconds" Selector="$Config/IntervalSeconds$" ParameterType="int" />
        <OverrideableParameter ID="Count" Selector="$Config/Count$" ParameterType="int" />
        <OverrideableParameter ID="ConsolidationInterval" Selector="$Config/ConsolidationInterval$" ParameterType="int" />
      </OverrideableParameters>
      <ModuleImplementation Isolation="Any">
        <Composite>
          <MemberModules>
            <DataSource ID="Trigger" TypeID="System!System.SimpleScheduler">
              <IntervalSeconds>$Config/IntervalSeconds$</IntervalSeconds>
              <SyncTime>00:00</SyncTime>
            </DataSource>
            <ProbeAction ID="Probe" TypeID="Windows!Microsoft.Windows.PowerShellPropertyBagProbe">
              <ScriptName>WindowsAutomaticServicesMonitoring.ps1</ScriptName>
              <ScriptBody><![CDATA[
                param (
                    [string] $excludeservicelist
                )
                if (test-path $excludeservicelist) {
                    write-eventlog -logname "Operations Manager" -Source "Health Service Script" -EventID 776 -Message "WindowsAutomaticServiceMonitoring.ps1 - Accessing Exclusion List CSV" -EntryType Information
                    $contents = Import-Csv $excludeservicelist
                }
                $TargetComputer = hostname
                $api = New-Object -comObject 'MOM.ScriptAPI'
                $auto_services = Get-WmiObject -Class Win32_Service -Filter "StartMode='Auto'"
                foreach ($service in $auto_services)
                {
                    $isExcluded = 0
                    $state = $service.state
                    $name = $service.DisplayName
                    If ($Contents) {
                        $contents | ForEach-Object {
                            $ExcludeServiceDisplayName = $_.ServiceToExclude
                            $ExcludeComputerName = $_.ComputersToExclude
                            if (($name -match $ExcludeServiceDisplayName) -and (($TargetComputer -match $ExcludeComputerName) -or ($ExcludeComputerName -match "ALL_COMPUTERS"))) {
                                $isExcluded = 1
                                write-eventlog -logname "Operations Manager" -Source "Health Service Script" -EventID 777 -Message "WindowsAutomaticServiceMonitoring.ps1 - Excluded Service Name - $ExcludeServiceDisplayName, Excluded Computer Name - $ExcludeComputerName" -EntryType Information
                            }
                        }
                    }
                    if (($isExcluded -eq 0) -and ($state -eq "Stopped")) {
                        write-eventlog -logname "Operations Manager" -Source "Health Service Script" -EventID 778 -Message "WindowsAutomaticServiceMonitoring.ps1 - Windows Service set to Automatic but Not Running - $name" -EntryType Information
                        $bag = $api.CreatePropertyBag()
                        $bag.AddValue("ServiceName", $name)
                        $bag.AddValue("Status", $state)
                        $bag
                    }
                }
              ]]></ScriptBody>
              <Parameters>
                <Parameter>
                  <Name>ExcludeServiceList</Name>
                  <Value>$Config/ExcludeServiceList$</Value>
                </Parameter>
              </Parameters>
              <TimeoutSeconds>300</TimeoutSeconds>
            </ProbeAction>
            <ConditionDetection ID="Consolidator" TypeID="System!System.ConsolidatorCondition">
              <Consolidator>
                <ConsolidationProperties>
                  <PropertyXPathQuery>$Target/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$</PropertyXPathQuery>
                  <PropertyXPathQuery>Property[@Name='ServiceName']</PropertyXPathQuery>
                </ConsolidationProperties>
                <TimeControl>
                  <WithinTimeSchedule>
                    <Interval>$Config/ConsolidationInterval$</Interval>
                  </WithinTimeSchedule>
                </TimeControl>
                <CountingCondition>
                  <Count>$Config/Count$</Count>
                  <CountMode>OnNewItemTestOutputRestart_OnTimerSlideByOne</CountMode>
                </CountingCondition>
              </Consolidator>
            </ConditionDetection>
          </MemberModules>
          <Composition>
            <Node ID="Consolidator">
              <Node ID="Probe">
                <Node ID="Trigger" />
              </Node>
            </Node>
          </Composition>
        </Composite>
      </ModuleImplementation>
      <OutputType>System!System.ConsolidatorData</OutputType>
    </DataSourceModuleType>
  </ModuleTypes>
</TypeDefinitions>

Step 4:

Next, we will create a rule using the data source. The configuration below needs to be customized to your environment.

ExcludeServiceList – the UNC path of the excluded-services list file (in CSV format). A sample CSV is provided below.

The CSV has two headers: “ServiceToExclude”, which is the display name of the service, and “ComputersToExclude”, which is the NetBIOS name of the computer. Two or more computers can be specified as individual entries or using regular expression syntax. To exclude a service on all computers, use the value “ALL_Computers”.

ServiceToExclude,ComputersToExclude
Distributed Transaction Coordinator,SCOM2012R2
Windows Audio,Win2k12-DC
Remote Registry,ALL_Computers
Software Protection,SCOM2012R2|Win2k12-DC

IntervalSeconds – Polling Interval in Seconds

Count – the number of consecutive polls for which the service must be found stopped before alerting (minimum 2).

ConsolidationInterval – the time window within which the service status must fail ‘n’ times to generate an alert (minimum value = (n-1) * IntervalSeconds, where n = Count).

<Monitoring>
  <Rules>
    <Rule ID="GKLab.Windows.AutomaticService.Monitoring.Rule" Enabled="true" Target="Windows!Microsoft.Windows.Computer" ConfirmDelivery="true" Remotable="true" Priority="Normal" DiscardLevel="100">
      <Category>Alert</Category>
      <DataSources>
        <DataSource ID="DS" TypeID="GKLab.Windows.Auto.Service.Monitoring.DataSource">
          <ExcludeServiceList>\\SCOM2012R2\Configs\WindowsAutomaticServiceMonitoringExclusionList.csv</ExcludeServiceList>
          <IntervalSeconds>300</IntervalSeconds>
          <ConsolidationInterval>600</ConsolidationInterval>
          <Count>2</Count>
        </DataSource>
      </DataSources>
      <WriteActions>
        <WriteAction ID="Alert" TypeID="Health!System.Health.GenerateAlert">
          <Priority>1</Priority>
          <Severity>2</Severity>
          <AlertMessageId>$MPElement[Name="GKLab.Windows.AutomaticService.Monitoring.Rule.AlertMessage"]$</AlertMessageId>
          <AlertParameters>
            <AlertParameter1>$Data/Context/DataItem/Property[@Name='ServiceName']$</AlertParameter1>
          </AlertParameters>
          <Suppression>
            <SuppressionValue>$Target/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$</SuppressionValue>
            <SuppressionValue>$Data/Context/DataItem/Property[@Name='ServiceName']$</SuppressionValue>
          </Suppression>
        </WriteAction>
      </WriteActions>
    </Rule>
  </Rules>
</Monitoring>

Step 5:

The final step is to construct the XML for the presentation and language packs. Ensure you close the <ManagementPack> tag.

<Presentation>
  <StringResources>
    <StringResource ID="GKLab.Windows.AutomaticService.Monitoring.Rule.AlertMessage" />
  </StringResources>
</Presentation>
<LanguagePacks>
  <LanguagePack ID="ENU" IsDefault="true">
    <DisplayStrings>
      <DisplayString ElementID="GKLab.Windows.Automatic.Service.Monitoring">
        <Name>GKLab Windows Automatic Service Monitoring</Name>
        <Description>GKLab Windows Automatic Service Monitoring Management Pack</Description>
      </DisplayString>
      <DisplayString ElementID="GKLab.Windows.Auto.Service.Monitoring.DataSource">
        <Name>GKLab Windows Automatic Service Monitoring Data Source</Name>
        <Description>GKLab Windows Automatic Service Monitoring Data Source</Description>
      </DisplayString>
      <DisplayString ElementID="GKLab.Windows.AutomaticService.Monitoring.Rule">
        <Name>Windows Automatic Services Monitoring Rule</Name>
        <Description>Windows Automatic Services Monitoring Rule</Description>
      </DisplayString>
      <DisplayString ElementID="GKLab.Windows.AutomaticService.Monitoring.Rule" SubElementID="Alert">
        <Name>Alert</Name>
      </DisplayString>
      <DisplayString ElementID="GKLab.Windows.AutomaticService.Monitoring.Rule" SubElementID="DS">
        <Name>GKLab Windows Automatic Service Monitoring Data Source</Name>
      </DisplayString>
      <DisplayString ElementID="GKLab.Windows.AutomaticService.Monitoring.Rule.AlertMessage">
        <Name>Windows Automatic Services Monitoring Alert</Name>
        <Description>Windows Service {0} is set to auto-start but is currently not running.</Description>
      </DisplayString>
    </DisplayStrings>
  </LanguagePack>
</LanguagePacks>
</ManagementPack>

Step 6:

Deploy the MP in lab and check for alerts.

[Screenshot: sample alert generated by the rule]

 

I have attached a copy of the XML, which you can import into any authoring tool. Customize it as per your needs and have fun.

Happy SCOMing…
