Sunday, December 18, 2011

Delete files based on date – ksh bash

Following up on an earlier blog post on how to delete log files or DR plans using PowerShell, here is the same idea for another TSM server, this one running on AIX. It’s a new server, so I don’t have many DRM plans in the directory yet. It’s also a server I use for my internal testing and development, so it isn’t heavily used by others.

Here is the directory listing before I wrote and ran the script:

[screenshot: directory listing before running the script]

After I wrote and ran the script:

[screenshot: directory listing after running the script]

I set it to keep only 7 days’ worth of files; now all I need to do is schedule it to run from my crontab every day…

Here’s the script – Cheers!


#!/usr/bin/ksh
# Remove TSM DR plan files older than the retention period

DRM_Directory=/opt/tivoli/storage/drm/drplan
DaysToKeep=7

# Delete any regular file in the DRM directory older than $DaysToKeep days
find "$DRM_Directory" -type f -mtime +$DaysToKeep -exec rm -f {} \;

Friday, December 16, 2011

Email Files with PowerShell

When dealing with customers and their disaster recovery plans produced by Tivoli Storage Manager (TSM), I need to get those files offsite on a regular basis, normally every day at about the same time. It’s a great idea to email them to yourself; it’s not such a great idea if that email lives on the server you may need to recover. In most cases I recommend that people get an external email account (Gmail, Live, Yahoo, etc.) and have the disaster recovery plans sent there. That way they are more likely to be able to retrieve them than if the plans were sitting on the Exchange or Lotus Notes server (yes, people still use Notes for email) in the datacenter that just imploded.
You need to update a few things to make this work:
  • $SMTPServer – Make it your SMTP server
  • $DistributionList – I did it this way so you (or someone else) don’t need to edit the script when the recipients change
  • $SendingAddress – Who is this email going to be coming from?
  • $DataDirectory – What directory are the files kept in that need to be sent?
  • $RequiredFiles – The file names that need to be sent
In this instance the DR plan itself is the last file to be created and has a different name every day, so I compare file creation times to find the newest file and add it to the list of files that are needed.

# Author: Thomas Wimprine
# Creation Date: Dec 14, 2011
# Filename: SendDRPlan.ps1
# Description: Collect the files needed for TSM DR recovery and email them to a distribution list
 
Function SendEmail {
    param (
        $FilesArray
    )
    $SMTPServer = "mail.domain.com"
    $DistributionList = "DRPlanRecipiants@domain.com"
    $SendingAddress = "TSM@domain.com"
    
    # Create our mail message objects
    $ToAddress = New-Object System.Net.Mail.MailAddress $DistributionList
    $FromAddress = New-object System.Net.Mail.MailAddress $SendingAddress
    $Message = New-Object System.Net.Mail.MailMessage $FromAddress, $ToAddress
    
    $Date = Get-Date
    $Date = $Date.ToShortDateString()
    $Message.Subject = "TSM DR Plan for $Date"
    $Message.Body = "This is the daily DR plan as created by TSM with the required files to recover. Retain this message with attachments until it is no longer needed."
    
    # Add the attachments we need to the message
    foreach ($File in $FilesArray) {
        $Attachment = New-Object System.Net.Mail.Attachment($File,'text/plain')
        $Message.Attachments.Add($Attachment)
    }
    
    $Client = New-Object System.Net.Mail.SMTPClient $SMTPServer
    
    $Client.Send($Message)
    
}
 
Function GetLatestDRPlan {
    # Return the name of the most recently created file in the directory -
    # the DR plan gets a new name every day, so we grab the newest file
    param ($Directory)
    foreach ($File in Get-ChildItem $Directory) {
        if ($NewestFile.CreationTime -lt $File.CreationTime) {
            $NewestFile = $File
        }
    }
    $NewestFile.Name
}
 
$DataDirectory = "D:\DRPlanDir"
$RequiredFiles = "devconfig","volhist"
$RequiredFiles += GetLatestDRPlan($DataDirectory)
 
$AttachFiles = @()
foreach ($File in $RequiredFiles) {
    $AttachFiles += "$DataDirectory\$File"
}
 
SendEmail($AttachFiles)
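
As an aside, if you are running PowerShell 2.0 or later, the Send-MailMessage cmdlet can replace the System.Net.Mail plumbing in the SendEmail function. This is only a sketch using the same placeholder server and addresses as above:

# Rough equivalent of SendEmail using Send-MailMessage (PowerShell 2.0+);
# the server, addresses and $AttachFiles list are the same placeholders used above
$Date = (Get-Date).ToShortDateString()
Send-MailMessage -SmtpServer "mail.domain.com" `
    -From "TSM@domain.com" `
    -To "DRPlanRecipiants@domain.com" `
    -Subject "TSM DR Plan for $Date" `
    -Body "This is the daily DR plan as created by TSM with the required files to recover." `
    -Attachments $AttachFiles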

Thursday, December 15, 2011

PowerShell–Remove old log files

Have you ever had an application running on a server, and everything was fine until you realized your drive space was slowly getting chewed up? You do a little investigating and find that the application has been writing log files since it was installed and never does any cleanup. With today’s 3 TB hard drives this may not be as big a problem as it used to be, but if the system has been in production for a few years and forgotten about, it can become an issue.

This isn’t limited to log files; some applications create files for you to use and leave them on the drive, assuming (correctly, in some cases) that you will deal with them. One application that does this, and that I support, is IBM’s Tivoli Storage Manager (TSM). It is data protection/backup/archive/management software that, when configured properly, creates disaster recovery plans on a daily basis. These plans are created and placed in a predetermined location for you to do whatever you need to get them offsite, so you can recover your systems and data in the event of a disaster.

I recently went to a customer’s location where this system had been running for many years. The server was sitting in the rack as happy as could be, but it had data on the drives that went back… a long time… Unfortunately, this was history files and disaster recovery plans from forever ago. They didn’t need to keep them beyond their useful life, which in this case is really only a few days. This is one of the scripts I wrote for them, using PowerShell, to clean out the log files and DRM plans that are no longer needed.

This gets put in a file, “C:\Scripts\RemoveExpiredLogs.ps1”, that is scheduled to run daily (see the scheduling example after the script).

# Author: Thomas Wimprine
# Creation Date: Dec 14, 2011
# FileName: RemoveExpiredLogs.ps1
# Description: Delete logfiles from a specified directory based on age

$RetainDays = 10 # Number of days you want to keep logs
$LogDirectory = "D:\LogDir\" # In this instance I need the trailing slash

$DeleteDate = [DateTime]::Today.AddDays(-$RetainDays)

foreach ($file in Get-ChildItem $LogDirectory) {
    if ($file.CreationTime -lt $DeleteDate) {
        Remove-Item $LogDirectory$file
    }
}

Simple and effective – I find myself rewriting this more often than I thought I would, so I hope you enjoy it…
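
For the daily scheduling, one option is a basic scheduled task. Here is a minimal sketch using schtasks.exe from an elevated PowerShell prompt; the task name, start time and SYSTEM account are placeholders, so adjust them for your environment:

schtasks /Create /TN "RemoveExpiredLogs" /SC DAILY /ST 02:00 /RU SYSTEM /TR "powershell.exe -NoProfile -ExecutionPolicy Bypass -File C:\Scripts\RemoveExpiredLogs.ps1"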

Friday, December 9, 2011

You need Watson in your business!!!

Honestly – I just want someone to need this so I can work on it. This is one of those projects that would be amazing to be involved in and I want to be involved!

Quoting from the YouTube comments:

“Watson is designed to further the science of natural language processing through advances in question and answer technology. This "first of its kind" workload optimized system has applicability in your day-to-day business analytics challenges as well.
Solving these challenges requires many of the same architectural elements as Watson. Power Systems with POWER7 processor technology is uniquely positioned to deliver these capabilities.”

Friday, September 23, 2011

Automated nmon collection

I’m working with a customer and I get this question
“My system is running slow, do you know why?”
“How do you know it’s running slow?”, I ask.
“I don't know it just seems like it’s taking longer to do <whatever>”, they say.
“Do you have any trending that we can look at?”
“No – it was running fine. Why would I need that if everything was fine?”, they ask.
I shake my head and die a little inside…
As a result of a few conversations like this I decided that rather than leaving it up to the customer to collect performance statistics on a regular basis, I need to do it for them. This gives me a few bits of information to use when they finally start coming to me with the inevitable performance problems and/or questions:
  • We have a baseline to work from!!!
  • We can determine what changed, if anything
  • It may only be in your head – we need to prove it’s not! (Some crazy people work on computers)
  • If there really is a problem, where do we even start looking?
I whipped up this simple script to make it a lot easier to collect that baseline. Nmon is now part of the AIX operating system and there is really no reason why you shouldn’t be using it to collect data. It’s a very good “whole system health” type monitoring tool that you can get down and dirty with if needed.
USAGE:
  1. Change the variables to valid values for you!
    1. REPORT_RCPT – Who is going to receive this analysis?
    2. COMPANY – If you are collecting this for multiple companies (I am) put the company name here so the reports make sense when you get them.
    3. REPORTS_TO_KEEP – How many are we going to keep on the local system (not in your email!)
  2. Schedule the script in cron.
    1. A lot of people like to schedule this for midnight, however that is also when a lot of people schedule maintenance or backups, and it splits those periods of high activity across two reports. Consider scheduling it for a regular “slow” period instead, such as after people leave work (17:00 to 19:00) or around 06:00, before users come in and after the nightly processing has finished.
  3. Collect the reports it sends to you – DON’T DELETE THEM! Copy them to your system if you need to get them out of email. Remember, this is data collection, not collect and toss!!!
  4. Analyze the reports periodically so you have a quantitative idea of what your system is doing – how often depends on your environment.
  5. When you have a performance problem later – reference earlier reports to determine what changed.

#!/usr/bin/sh

export DATE=`date +%m%d%Y`
export CURRENT_DIR=`pwd`

export REPORT_RCPT="your@mail.com"
export COMPANY="MyCompany"
export REPORTS_TO_KEEP=7


# if the directory doesn't exist - create it

if [[ ! -d "/tmp/nmon" ]]; then
    mkdir /tmp/nmon
fi
cd /tmp/nmon


# Now let's grab yesterday's nmon files and email them to where they need to be
export NMON_FILES=`ls -ctl | awk '{print $9}' | grep nmon | grep -v gz`
for i in $NMON_FILES; do
 export NEW_FILENAME=`echo $COMPANY ${i} | awk '{print $1 "_" $2}'`
 sort $i > $NEW_FILENAME
 tar cvf - $NEW_FILENAME | gzip -c > $NEW_FILENAME.tar.gz
 
 # we have the file we need - now let's email it to whoever needs it
 uuencode $NEW_FILENAME.tar.gz $NEW_FILENAME.tar.gz | mail -s "$COMPANY nmon report for $DATE" $REPORT_RCPT
 
 # Cleanup just a bit
 rm -f $i
 rm -f $NEW_FILENAME
done


find /tmp/nmon -type f -mtime +$REPORTS_TO_KEEP -exec rm -f {} \;


# Start NMON for the next day!
nmon -f -s 60 -c 1440

# Just in case you run it interactively - return to where you started
cd $CURRENT_DIR


Cheers

Wednesday, July 27, 2011

Automated creation of mksysb and copy to NFS Server

I have a bunch of systems that we are working on, but we don’t have the tape library connected yet. It makes me very nervous not to have any backups, so I put this together. It creates a mksysb on the local system and copies it to the NFS export. In my case I created the export on my NIM server, so if I need to create a SPOT and reinstall, or cut the image to tape, it’s already available there.
It’s pretty much ready to use, so you only need to do a few things:
  1. Update the variables
  2. Copy it to each system
  3. Schedule with cron
#!/usr/bin/ksh

HOSTNAME=`uname -a | awk '{print $2}'`
DATE=$(date +%m%d%Y)
FILENAME=$HOSTNAME.$DATE
RETAIN_LOCAL_BACKUPS=1
RETAIN_NFS_BACKUPS=7
BACKUPDIR="/mksysb/nfs_mksysb"
NFSSERVER="NFSServer"

# Check to make sure the directory we need to mount is created
if [ ! -e "/mksysb/nfs_mksysb" ]; then
    mkdir -p /mksysb/nfs_mksysb
fi

# Mount the NFS share if needed, unless this host is the server serving the NFS mount
if [ "$NFSSERVER" != "$HOSTNAME" ]; then
    mount | grep nfs_mksysb || mount "$NFSSERVER:/mksysb/nfs_mksysb" "$BACKUPDIR"
    mount | grep nfs_mksysb && MOUNTED=1
fi

# Ensure the directory for the system is created
if [ ! -e "$BACKUPDIR/$HOSTNAME" ]; then
    mkdir -p "$BACKUPDIR/$HOSTNAME"
fi

# Everything is mounted and ready to go - let's create our backup
/usr/bin/mksysb -e -i "/mksysb/$FILENAME"

# The mksysb has completed, so let's copy it over to the NFS share
cp "/mksysb/$FILENAME" "$BACKUPDIR/$HOSTNAME/"

# Clean up local directories
find /mksysb \( ! -name mksysb -prune \) -name "$HOSTNAME.*" -mtime +$RETAIN_LOCAL_BACKUPS -exec rm {} \;

# Clean up NFS share
find "${BACKUPDIR}/${HOSTNAME}" -name "${HOSTNAME}.*" -mtime +${RETAIN_NFS_BACKUPS} -exec rm {} \;

# We are finished, so let's unmount the share
[ ! -z "$MOUNTED" ] && umount /mksysb/nfs_mksysb

Friday, July 15, 2011

Collect WWN from AIX Systems

I have a need to collect all the WWNs from my running AIX systems. Unfortunately, I’m inheriting this environment from someone who didn’t keep records, and I’m really not interested in the other method of pulling them from the HMC.
I have a file called “hostfile” in the current directory that lists all the system names or IP addresses I need to query – you can parse this out of any file you have with just a bit more script-fu if you need to.
for x in `cat hostfile | sort`
do
        echo $x
        ssh root@$x "for i in \`lsdev -Cc adapter | grep fcs | awk '{print \$1}'\`; do \
                lscfg -vpl \$i | grep 'Network Address'; done"
done



Running this will produce output similar to the following (with the actual WWNs, of course):


server01
        Network Address.............C0xxxxxxxxxxxxxx
        Network Address.............C0xxxxxxxxxxxxxx
        Network Address.............C0xxxxxxxxxxxxxx
        Network Address.............C0xxxxxxxxxxxxxx
server02
        Network Address.............C0xxxxxxxxxxxxxx
        Network Address.............C0xxxxxxxxxxxxxx
        Network Address.............C0xxxxxxxxxxxxxx
        Network Address.............C0xxxxxxxxxxxxxx
server03
        Network Address.............C0xxxxxxxxxxxxxx
        Network Address.............C0xxxxxxxxxxxxxx
        Network Address.............C0xxxxxxxxxxxxxx
        Network Address.............C0xxxxxxxxxxxxxx

Tuesday, July 12, 2011

Blog Name Change

I hope there are a few people who subscribe to my blog feed. Unfortunately, for those of you who subscribed for my technical Windows or PowerShell content, I may have to disappoint you.

I have recently changed employers. I was the only person in the shop with AIX experience, and they had an overabundance of Windows administrators, which led me to a decision that I’m hoping will pay off and not haunt me later: I’ve moved over to handle the IBM pSeries side of the shop. This includes working on anything the pSeries will support as an installed operating system, interface with, or be managed by.

While I’m not completely new to various flavors of UNIX, this will certainly be a learning experience for me, and I will be posting technical content, tips & tricks, and other items as I learn them.

Tuesday, February 22, 2011

CertLog Consuming Large Amounts of Disk Space

Problem

Yesterday we had an issue where our certificate server stopped responding. The OS was responsive; however, the CA had stopped servicing requests and there were a fair number of errors in the Application log similar to this one:
[screenshot: ESENT error from the Application log]
When we looked in the directory we found files that looked like this:
[screenshot: ESENT log files in the CertLog directory]
People who are familiar with Exchange will recognize that ESENT is a Jet database; the log files and the edb.log and edb.chk files also look very familiar. The problem was that we had 7 GB of log files filling up this drive, and Certificate Services couldn’t write its log files due to a lack of free space. A simple search turned up a fair number of results explaining how to stop the services and delete the log files; however, that didn’t seem like the correct course of action, since this is a database. There is no way I would just delete the log files on my Exchange server, so why would I do it on my certificate server? I would back up my Exchange server, and that would truncate all my log files.

Solution

Another search on “Backup certutil” sent me to TechNet and the article explaining how to back up a certificate authority. The command “certutil -p P@ssw0rd -backup D:\CertBackup” performs a full backup of the database and truncates the log files, returning all of the used drive space. It creates the directory “CertBackup” on the D: drive if it doesn’t exist and populates it with a certificate file, “ServerName.p12”, and another directory called DataBase containing the actual edb file and a dat file.
[screenshot: contents of the CertBackup directory]
After the backup completes, all the log files will be truncated and the services, if stopped, can be restarted. We will be running this periodically to make sure we don’t have this problem again. One issue with the scripted approach is that it will not overwrite the previous backup, so you must delete or rename the previous one or create a new path for each backup – which isn’t hard if you are a Scripting Guy (a sketch of that follows).
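
Here is a minimal sketch of that scripted approach. It reuses the password and backup root from the command above as placeholders; the date-stamped subfolder is my own addition so that every run gets a fresh path:

# Sketch: run certutil against a new, date-stamped backup path each time
# (the password and backup root are placeholders - change them for your CA)
$BackupRoot = "D:\CertBackup"
$BackupPath = Join-Path $BackupRoot (Get-Date -Format "yyyy-MM-dd")
if (-not (Test-Path $BackupPath)) {
    New-Item -ItemType Directory -Path $BackupPath | Out-Null
}
certutil -p "P@ssw0rd" -backup $BackupPath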

Wednesday, February 16, 2011

Get MAC Addresses through PowerShell

$Servers = Import-Csv c:\Temp\servers.csv

foreach ($Server in $Servers) {
    $NetAdapter = Get-WmiObject -Class Win32_NetworkAdapterConfiguration -ComputerName $Server.Name -Filter "IpEnabled = TRUE" 
    foreach ($Adapter in $NetAdapter) {
        $Name = $Server.Name
        $MAC = $Adapter.MacAddress
        Write-Host "$Name - $MAC"
    }
}

We needed to get the MAC addresses for the network team and I didn’t have a script for it in my library. We already had the list of servers we needed, so I used that to query WMI and return the MAC addresses of the adapters that would be connected to the network.
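
If the network team would rather have a file than console output, a small variation does the trick. This is just a sketch, assuming the same servers.csv with a Name column; the output path is a placeholder:

# Sketch: same WMI query, but collected into objects and exported to CSV
$Servers = Import-Csv C:\Temp\servers.csv
$Results = foreach ($Server in $Servers) {
    Get-WmiObject -Class Win32_NetworkAdapterConfiguration -ComputerName $Server.Name -Filter "IpEnabled = TRUE" |
        Select-Object @{Name="Server";Expression={$Server.Name}}, Description, MacAddress
}
$Results | Export-Csv C:\Temp\server_macs.csv -NoTypeInformation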

Thursday, February 3, 2011

Protect Organizational Units from Deletion

While this is a really simple script, it could save you from a lot of problems later. Have you (or any of your users) ever accidentally right-clicked on a folder, moved it somewhere, and then just couldn’t find it afterwards? Imagine that happening to your AD because an administrator made a mistake moving the mouse. It’s even worse when you do it yourself and notice half a second too late.

In Windows Server 2008 R2 new OUs are protected by default; however, if an OU had to be moved, or came in through an upgrade, the flag may not be set.

This simply searches every OU in Active Directory and, if the ‘ProtectedFromAccidentalDeletion’ flag is not set to TRUE, sets it – no matter how deeply the OU is buried.

Import-Module ActiveDirectory

$OU = Get-ADOrganizationalUnit -Filter {Name -like "*"} -property ProtectedFromAccidentalDeletion | Where-Object {$_.ProtectedFromAccidentalDeletion -eq $False} 

foreach ($UNIT in $OU) {
    Set-ADOrganizationalUnit $UNIT -ProtectedFromAccidentalDeletion $true
}


Hope this saves at least one person a late evening of stress and heartache!

Friday, January 21, 2011

Update Computer Description from Active Directory

We try to keep everything in our domain synchronized; however, it’s not always easy when the data is kept in three or more independent locations. I created this script a few years ago with the Quest cmdlets, but since I can now do it natively with the AD cmdlets, I figured I would update it and repost it.

$Servers = Get-ADComputer -Filter {OperatingSystem -like "*Server*"} | Select-Object Name, Description | Sort-Object Name 
foreach ($Server in $Servers) {
    $ServerWMI=get-wmiobject -class win32_operatingsystem  -computername $Server.Name
    $ServerWMI.Description = $Server.Description
    $ServerWMI.Put()
}


This script simply queries AD for all the servers and gets each computer name and description. It then connects to each server and updates its description. Pretty straightforward and simple…

Thomas