Powershell Needful Things: put that in your pipeline

4 Aug 2010

Check free space on volume mount points

Posted by Jean Louw

Wow! It’s been a while since I have posted any scripts! This is mainly due to the fact that I am rather busy at work, and also working hard at completing my MCITP.

A while back, a client of mine asked if there was an easy way to check the free space of mount points from one computer. This was a real problem for them, as the administrators would come in every morning, manually log on to each server, and use Disk Management to check the free space.

I was certain that there had to be a WMI object for mount points, so after a little digging, I came up with the following script:

$TotalGB = @{Name="Capacity(GB)";expression={[math]::round(($_.Capacity/ 1073741824),2)}}
$FreeGB = @{Name="FreeSpace(GB)";expression={[math]::round(($_.FreeSpace / 1073741824),2)}}
$FreePerc = @{Name="Free(%)";expression={[math]::round(((($_.FreeSpace / 1073741824)/($_.Capacity / 1073741824)) * 100),0)}}

function get-mountpoints {
    param($server)

    # Mount points are volumes without a drive letter
    $volumes = Get-WmiObject -computer $server win32_volume | Where-Object {$_.DriveLetter -eq $null}
    $volumes | Select SystemName, Label, $TotalGB, $FreeGB, $FreePerc | Format-Table -AutoSize
}

$servers = (Get-Content .\servers.txt)

foreach ($server in $servers){
    get-mountpoints -server $server
}

The script is written to collect server names from a text file, but you could use any other method to supply your server names.
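For example, if you would rather pull the server names straight from Active Directory than maintain a text file, something along these lines should work (the LDAP filter is only an illustration and assumes the default domain):

# Pull server names from Active Directory instead of a text file (illustrative)
$searcher = New-Object System.DirectoryServices.DirectorySearcher
$searcher.Filter = "(&(objectCategory=computer)(operatingSystem=*Server*))"
$searcher.PageSize = 1000
$servers = $searcher.FindAll() | ForEach-Object { $_.Properties["name"][0] }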

Hope this helps someone else!

17 Sep 2009

A very basic queue monitor

Posted by Jean Louw

At my office we recently needed a method to quickly know if the queues on any of the Exchange servers were building up. We have monitoring in place, but it can sometimes miss a build-up, which leaves us to deal with the problem.

As a very rudimentary solution, I compiled the following script.

In a nutshell, it enumerates the message count of all the queues on all Exchange servers in the Org. This includes Exchange 2003 and 2007. The script then measures the sum total of all messages. If it exceeds a predetermined amount, 1000 in my case, it will send a notification message to the administrators.

This is really a catch-22 if the server with the queue build-up happens to be your relay host. As a workaround you could do a NET SEND message, or relay the notification through a second SMTP server. An alternative is to send an SMS to the administrators if you have the facility.

Initially the script waited with a while loop and polled the queues every 5 minutes. I have opted to change that, and launch the script with Task Scheduler instead, which means I don’t have to actually be logged onto the console for the script to run.

The script writes out the date and message count to a log. This log cycles daily.

I know this is very basic, but it gets the job done in terms of what we needed as an interim solution.
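Here is a minimal sketch of the idea, covering only the Exchange 2007 side with the Get-Queue cmdlet (the 2003 servers would need their own check); the threshold, relay host, addresses and log path below are placeholders:

# Sum the message count across all queues on the Exchange 2007 transport servers (sketch only)
$threshold = 1000
$total = 0
foreach ($server in (Get-ExchangeServer | Where-Object {$_.IsHubTransportServer -or $_.IsEdgeServer})) {
    $total += (Get-Queue -Server $server.Name | Measure-Object MessageCount -Sum).Sum
}

# Write the date and message count to a daily log
$log = "C:\Scripts\QueueMonitor_{0:yyyyMMdd}.log" -f (Get-Date)
Add-Content $log ("{0} - Total messages queued: {1}" -f (Get-Date), $total)

# Notify the administrators if the total exceeds the threshold
if ($total -gt $threshold) {
    $smtp = New-Object System.Net.Mail.SmtpClient "relay.acme.com"
    $smtp.Send("queues@acme.com", "admins@acme.com", "Queue build-up detected", "Total queued messages: $total")
}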

You can download the script here:

12 Jun 2009

What's going on here?

Posted by Jean Louw

As part of the Exchange audit scripts, I recently changed the 2007 version of the script to use .NET instead of WMI to collect the event logs. Virtu-Al made an interesting suggestion: which of these methods is actually quicker at collecting the logs? To find out, I needed to set up a race.
This race would basically involve the two methods of retrieval collecting a large list of events from a selected server. The basic commands to accomplish this are as follows. For WMI one would simply use:
$wmi = Get-WmiObject -computer SERVER1 Win32_NTLogEvent  
Using .NET, GetEventLogs() returns the event log objects themselves, so the entries have to be enumerated with a quick bit of code:

$eventLogs = [System.Diagnostics.EventLog]::GetEventLogs('SERVER1')
ForEach ($eventLog in $eventLogs){ $dotNet += ($eventLog.Entries) }
In both cases, SERVER1 is the name of the remote server you need to collect the events from.

Now, to make sure there was no cheating, I had to count how many objects each method returned. This is done by simply saving the collection to a variable and counting the total. In this scenario, .NET would return approximately 56000 items and WMI would return fewer, about 500+ fewer every time.

From here I went down a crazy path of checking date and time formats, and in the end I came to the conclusion that it had to be the Security log. Either entries were being written into the Security log so quickly that, by the time the second script ran, the number of entries had changed, or it was the special permissions needed to read certain Security log entries. Or so I thought.

So I decided to exclude the Security log from my collection. This was easy enough, but the totals were still inconsistent. In an effort to narrow down where the problem could be, I decided to include only one log at a time, starting with the Application log. Here is the script used to collect the Application log from a remote server using WMI:
$d1 = get-date

$wmiDate = [System.Management.ManagementDateTimeConverter]::ToDmtfDateTime([DateTime]::Now.AddDays(-1))
$WMI = Get-WmiObject -computer SERVER1 -query ("Select * from Win32_NTLogEvent Where Logfile = 'Application' and TimeWritten >='" + $WmiDate + "'")

$wmiCount = ($WMI).Count

$wmiDT = [System.Management.ManagementDateTimeConverter]::ToDateTime($wmiDate)
Write-Host From Date $wmiDT
Write-Host Total $wmiCount
$d2 = Get-Date
$d2 - $d1
WMI script results:

From Date 09/06/2009 01:28:49 PM
Total 317

Here is the script used to collect the same event log entries from the same server, using .NET instead:
$d1 = get-date
$dotNetDate = ([DateTime]::Now.AddDays(-1))
$dotNet = @()
$eventLogs = [System.Diagnostics.EventLog]::GetEventLogs('SERVER1') | where {$_.LogDisplayName -eq "Application"}
ForEach ($eventLog in $eventLogs){
    $dotNet += ($eventLog.Entries) | where {($_.TimeWritten -ge $dotNetDate)}
}

$dotnetCount = ($dotNet).count

Write-Host From Date $dotNetDate
Write-Host Total $dotnetCount
$d2 = Get-Date
$d2 - $d1
.NET Script Results:
From Date 09/06/2009 01:28:49 PM
Total 650
This was still very confusing, so to see exactly where the two collections diverged, I had both scripts display the record number of the first and last record in each respective collection, by adding the following to each script.

For the .NET script:

$dotNet | Select-Object -First 1
$dotNet | Select-Object -Last 1

For the WMI script:

$WMI | Select-Object RecordNumber, TimeWritten, Type, SourceName, EventCode -First 1
$WMI | Select-Object RecordNumber, TimeWritten, Type, SourceName, EventCode -Last 1
Now I could see that at least they were starting at the same record, but for some odd reason WMI was quitting before the job was done. .NET record results:
Index Time         Type Source               EventID
----- ----         ---- ------               -------
   51 Jun 09 14:55 Warn MSExchange Availa...    4004
  705 Jun 10 14:51 Warn MSExchange Active...    1008

WMI results:

RecordNumber TimeWritten             Type    SourceName
------------ -----------             ----    ----------
         353 20090610012624.00000... Warning MSExchange ActiveSync

RecordNumber TimeWritten             Type    SourceName
------------ -----------             ----    ----------
          51 20090609145522.00000... Warning MSExchange Availability
To make sure this problem wasn't specific to the current server, I started collecting logs from other servers to record the results. I also did an Add-Member on the WMI script to convert the time and date back to something readable, with the following addition:

$WMI | ForEach-Object { Add-Member -InputObject $_ -Name myTime -MemberType NoteProperty -Value ([System.Management.ManagementDateTimeConverter]::ToDateTime($_.TimeWritten)) -Force -PassThru }
Over a number of servers this still made no difference; WMI still did not return all the results. The problem seems to be specific to the Application and Security logs, and could well be related to the WMI impersonation or authentication options that will be available in PowerShell version 2. I have not had time to investigate that yet.

I decided to re-write the WMI script to collect all results and then filter out the unwanted events with Where-Object. At this point I also changed the selected log to the System event log, as someone had cleared the Application logs on the selected servers.

This worked great for most of the servers, and finally I was getting similar results from both scripts. I did however find that servers with large numbers of events generate a WMI Quota Violation, which seems to imply that there are too many items in the list, which is yet another blow to WMI.
This could also explain the incomplete results from previous attempts. The Quota Violation is a known problem and there is a resolution for it posted here: http://support.microsoft.com/kb/828653. To get around this problem, I changed the script again to filter with a WMI query instead. Now that we were getting consistent results, it was time to start testing the speed of each method.
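To illustrate the difference between the two approaches: filtering with Where-Object pulls every record back before discarding the unwanted ones, while the WQL query lets the remote WMI provider do the filtering (the log name and the $wmiDate variable follow the earlier examples):

# Client-side filtering: every record is retrieved, then filtered locally (can hit the quota violation)
$all = Get-WmiObject -computer SERVER1 Win32_NTLogEvent | Where-Object {$_.Logfile -eq "System"}

# Server-side filtering: the WQL query is evaluated by the remote WMI provider
$WMI = Get-WmiObject -computer SERVER1 -query ("Select * from Win32_NTLogEvent Where Logfile = 'System' and TimeWritten >='" + $wmiDate + "'")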
I decided to test the speed against three different servers, incrementing the number of records retrieved until I could not collect any more, or up to a maximum of 240 days' worth of events.
I also decided to give each method an average read time over three attempts.
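A small helper along these lines can do the averaging; the function name and structure here are mine and not taken from the original scripts:

# Average the elapsed time of a script block over a number of runs (hypothetical helper)
function Measure-Average {
    param([scriptblock]$script, [int]$runs = 3)
    $total = [TimeSpan]::Zero
    for ($i = 0; $i -lt $runs; $i++) { $total += (Measure-Command $script) }
    [TimeSpan]::FromTicks([long]($total.Ticks / $runs))
}

# $wmiQuery stands for the WQL query string built in the WMI script above
Measure-Average { Get-WmiObject -computer SERVER1 -query $wmiQuery }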
Here are some of the results:

As the number of days, or number of records, increases, the read speed of WMI starts decreasing.

In summary, WMI scales nicely when using a WMI query directly in the Get-WmiObject command. It does however lose speed as the number of records to retrieve increases.
It has to be mentioned that WMI slows to a crawl if all records are retrieved and the result is filtered with Where-Object.
Although WMI is faster with fewer records, I am going to base all my event log queries on .NET for now, as WMI proved to be inconsistent and erroneous in what it retrieves, or at least it did in my testing.
I hope that this problem is related to impersonation, and that it is resolved in Powershell v2. The final scripts I used to retrieve the information can be downloaded from here:
 
5 Jun 2009

Update: Exchange 2007 audit script

Posted by Jean Louw

In an attempt to resolve some issues with regards to the event logs, I have made a few updates to the Exchange 2007 audit script:
* I now use [System.Diagnostics.EventLog]::GetEventLogs() to collect the remote event logs and entries instead of WMI
* The output to the host displays exactly which event log it is busy reading.
* The date range seems more accurate now when the event log contains a large amount of data.
* The physical memory in the basic server information section is now displayed in GB and is neatly rounded (see the sketch after this list).
* The Mailbox stores are sorted in alphabetical order by Store Name.
* Added more verbose output to the console while the script runs, to give a better indication of what the script is busy with.
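The GB conversion is nothing fancy; it boils down to something like this (Win32_ComputerSystem is the standard WMI class, the rounding to two decimals is just illustrative):

# Total physical memory of the remote server, rounded to GB (illustrative)
$memory = Get-WmiObject -computer $server Win32_ComputerSystem
$memoryGB = [math]::Round($memory.TotalPhysicalMemory / 1073741824, 2)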
I hope this resolves most of the problems for now; comments and suggestions are always welcome. The script can be downloaded from here:

Complete version and download information can be found here:
http://www.powershellneedfulthings.com/?page_id=276

21 May 2009

Exchange 2007 Audit Report

Posted by Jean Louw

I had some extra time this week to complete the Exchange 2007 version of the audit script, as I am going on leave for a week and needed to have the process automated while I am gone.
This version of the script still uses WMI for some of the items on the report, but uses the Exchange 2007 cmdlets for most of the Exchange related information.

The one tricky bit of information to retrieve was the installed Exchange rollups. These are not available via WMI or any other method I could find. I did however find a very effective solution on flaphead.com. This little piece of magic locates the installed patches in the remote registry and loops through the keys to find and list the installed rollups.
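The general shape of that remote-registry loop is roughly the following; the exact key path and value name come from the flaphead.com post, so treat $patchesPath and DisplayName here as placeholders and check the original for the real details:

# Enumerate installed patches/rollups from the remote registry (sketch; $patchesPath is a placeholder)
$patchesPath = "SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Products\<ExchangeProductCode>\Patches"
$baseKey = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $server)
$patchesKey = $baseKey.OpenSubKey($patchesPath)
foreach ($patch in $patchesKey.GetSubKeyNames()) {
    # Each patch subkey carries a value describing the rollup (value name assumed here)
    $patchesKey.OpenSubKey($patch).GetValue('DisplayName')
}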


Unlike Exchange 2003, Exchange 2007 servers are installed with specific roles. This plays a part when checking things like queues and mailbox stores; for instance, there is no point in checking a pure Hub Transport server for mailbox stores. I initially built in a check that compared the ServerRole property of the server against a specific role, forgetting that one server could hold multiple roles. I now match the role anywhere in the property string with this if statement:

if ($exServer.ServerRole -notlike "*Mailbox*")

This will skip the mailbox related checks if the word "Mailbox" cannot be located anywhere in the string.

To automate the running of the checks on a daily basis, I set up a scheduled task on one of my Exchange 2007 servers, as the script requires the Exchange cmdlets.

I really had no idea how to get the scheduled task to run in the Exchange Management Shell, so as a test I used the following command:

C:\WINDOWS\system32\windowspowershell\v1.0\powershell.exe -PSConsoleFile "D:\Program Files\Microsoft\Exchange Server\bin\exshell.psc1" c:\scripts\ExchangeAudit2k7.ps1 .\servers.txt

This did the trick, and the entire check process now runs and completes before I even get to work. My version of the script also creates an HTML menu and moves the reports to our departmental web server for my managers' viewing pleasure. The mailbox stores now also indicate the last backup time, as we have had issues before where the backups aren't completed and we don't find out until it's too late.
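The last backup time comes straight from the database objects when you ask for their status; a minimal example (Get-MailboxDatabase with -Status and the LastFullBackup property are standard Exchange 2007, the sorting and selection are just how I like to see it):

# List mailbox databases with mount state and last full backup time
Get-MailboxDatabase -Status | Sort-Object Name |
    Select-Object Name, ServerName, Mounted, LastFullBackup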

I am busy working on a little piece of code, which will connect to the OWA site and simply test if the site is available, but that will have to wait until I am back from leave.
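In case anyone wants to experiment in the meantime, the basic idea is just a web request against the OWA URL; a rough sketch (the URL is a placeholder, certificate and forms-based authentication quirks are not handled, and try/catch needs PowerShell v2):

# Very basic OWA availability probe (sketch; URL is a placeholder)
$owaUrl = "https://mail.acme.com/owa"
$client = New-Object System.Net.WebClient
$client.Credentials = [System.Net.CredentialCache]::DefaultCredentials

try {
    $null = $client.DownloadString($owaUrl)
    Write-Host "$owaUrl is responding" -foreground GREEN
}
catch {
    Write-Host "$owaUrl is NOT responding:" $_.Exception.Message -foreground RED
}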

Complete version and download information can be found here:
http://www.powershellneedfulthings.com/?page_id=276

Filed under: exchange, remote, script, wmi
12 May 2009

Update: Powershell Remote WMI Defrag

Posted by Jean Louw

As with most things in life, people are only happy with limited features for a little while, and then the enhancement requests pour in.
The administration guys at the office have been using the remote defrag script for a couple of weeks, and soon realised that there was no way for them to show off the results of their labour to management. So inevitably, they requested that I add some sort of reporting to the script which they can send to management.

Initially I had all the results written out to a text file for each volume, but this became a mess to manage once you defrag hundreds of servers with multiple volumes. Having recently completed the Exchange 2003 audit script with Virtu-Al's HTML template, I figured it would be possible to report the defrag results in a similar format.

The script runs through a list of servers contained in servers.txt and starts a remote defrag using WMI. It waits for the process to complete and then moves on to the next volume. The script will also check whether dfrgntfs.exe is already running on the remote host, and if so, skip that server.
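The dfrgntfs.exe check is a simple WMI process query, roughly along these lines (the skip message is illustrative):

# Skip the server if a defrag is already in progress (illustrative)
$defrag = Get-WmiObject -computer $server Win32_Process -Filter "Name = 'dfrgntfs.exe'"
if ($defrag) {
    Write-Host "Defrag is already running on $server - skipping" -foreground YELLOW
}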

The script changes the colour of the drive on the report based on whether a defrag was actually run or not: green means the volume was skipped, orange that a defrag was already running, and red that it was defragged.
Finally, at the bottom of each drive's report the script gives you a quick before and after result.

The script can be downloaded from my Skydrive:

7 May 2009

Exchange WMI Audit

Posted by Jean Louw

I recently needed to automate my Exchange 2003 server daily checks. I have done some basic work on this before, but I really needed to automate the process and write the results to HTML to make it more “manager friendly”.

While searching the web for something I could use as a basic starting point, I came across an awesome script on Virtu-Al. This script uses WMI to audit a list of remote computers and reports in a very neat HTML format. It was exactly the platform I needed, and it meant not having to re-invent the wheel.

I did however have some trouble with WMI and the mailbox stores, specifically finding a method for reporting the number of users and whether the store is mounted or not. I managed to find a workaround for the number of users, but it seems that checking the store status would have to be done with CDOEXM. That felt like too much effort, as we are in the middle of our migration to Exchange 2007.
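A count of mailboxes per store can be had from the Exchange 2003 WMI provider with something like the following (namespace, class and property names as I recall them; verify against your own environment):

# Count mailboxes per store using the Exchange 2003 WMI provider (verify class/property names)
Get-WmiObject -computer $server -namespace "root\MicrosoftExchangeV2" -class Exchange_Mailbox |
    Group-Object StoreName |
    Select-Object Name, Count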

Speaking of Exchange 2007: the script cannot be used against Exchange 2007 servers, as Exchange 2007 no longer includes the Exchange WMI providers.
I am however working on an Exchange 2007 version, or an Exchange version check, for this script. All credit for the HTML template and the original script goes to Alan Renouf; I merely took a great script and adapted it for use with Exchange.

The script will show only Exchange related information on the report; this includes hotfixes, services and event log entries. The version of the script which I use myself creates an HTML menu with a list of all of the servers processed and links to their individual reports. It also moves the files to a web server, which makes the whole thing much more automated. Comments and suggestions are always welcome.

This script is not displayed in a code window, but can be downloaded from here:

Filed under: exchange, remote, wmi
17 Apr 2009

Update Network Interface Card parameters using WMI

Posted by Jean Louw

The following little function can be used if you need to manually override DNS and WINS addresses on a list of remote computers that may have already obtained addresses from a DHCP server. The code gets a list of IP enabled NICs from a remote computer using WMI; you can list the servers in a servers.txt file in the same folder.

The script updates the DNS server search order to add two manual entries and also adds two manual WINS servers. I had some fun with the SetWINSServer method, as it only accepts the variable as an array. Finally, the script modifies the registry to create a DNS suffix search list. Although this script only modifies limited parameters, it can easily be adapted to update any of the other parameters.
function updateNIC {
    param($server)

    # Get all IP enabled network adapters on the remote computer
    $NICs = gwmi -computer $server Win32_NetworkAdapterConfiguration | where {$_.IPEnabled -eq $true}

    foreach ($NIC in $NICs) {

        $DNS = ("1.1.1.1","2.2.2.2")
        $WINS = @("1.1.1.1","2.2.2.2")
        $DOMAIN = "acme.com"

        # Override the DHCP-supplied DNS and WINS settings
        $NIC.SetDNSServerSearchOrder($DNS)
        $NIC.SetDynamicDNSRegistration($true)
        $NIC.SetWINSServer($WINS[0],$WINS[1])
        $NIC.SetDNSDomain($DOMAIN)

        # Create the DNS suffix search list in the remote registry
        $baseKey = [Microsoft.Win32.RegistryKey]::OpenRemoteBaseKey('LocalMachine', $server)
        $subKey = $baseKey.OpenSubKey("SYSTEM\CurrentControlSet\Services\Tcpip\Parameters",$true)
        $subKey.SetValue('SearchList','acme.local,acme.com')
    }
}

foreach ($server in (gc .\servers.txt)){
    updateNIC -server $server
}
Here are some images of the Advanced TCP/IP Settings page after running the script, including the WINS tab.
10 Mar 2009

Remote Defrag using WMI

Posted by Jean Louw

This is a script I created to analyze and defrag Windows 2003 server volumes using the WMI Win32_Volume Defrag method. The script collects all volumes on a list of remote servers using WMI. Each volume is then analyzed for fragmentation using the FilePercentFragmentation property. If the fragmentation is higher than 10 percent, the script initiates a remote defrag on the volume. You should see a process called dfrgntfs.exe running on the remote server while the defrag is in progress. Sadly, I have not found a method to track the progress of the defrag itself.

You can adjust the fragmentation percentage threshold at which a defrag is initiated by editing the value in the if statement. Replace "SERVER1", "SERVER2" with your server names. Comments or suggestions are always welcome.

$servers="SERVER1", "SERVER2"
foreach( $server in $servers){
Write-Host ""
$v=(gwmi win32_volume -computer $server)
"CURRENT SERVER: {0} " -f $server
"NUMBER VOLUMES: {0} " -f $v.length

foreach( $volume in $v){
Write-Host ""
write-host "Analyzing fragmentation on" ($volume.DriveLetter) "..."
$frag=($volume.defraganalysis().defraganalysis).FilePercentFragmentation
if ($frag -gt "10") {
write-host "Drive" ($volume.DriveLetter) "is currently" $frag "% fragmented." -foreground RED
write-host "Starting remote defrag..."
$volume.defrag($true)
}
else {
write-host "Drive" ($volume.DriveLetter) "is not fragmented" -foreground GREEN
Write-Host ""
}
}
}