Essential PowerShell: Define default properties for custom objects

After posting my blog entry about naming your custom object types on Thursday, Hal Rottenberg left me a comment saying how it’s a shame that you have to manually create ps1xml files to store your type data and format data extensions.  Hal’s right.  Having to create ps1xml files to accompany each script you make that generates custom objects is too much work for the script author, and the script consumer has more files to download each time.  But then I thought twice about what Hal said, and asked myself: Is creating the ps1xml files to get the output you want from custom objects really necessary?

Fortunately the answer to this is “No”.  But why not?  Before I answer that, I should give a short explanation about how PowerShell determines what properties it will display when an object is output and in what format that object will be displayed.

How PowerShell determines the default format for an object

Let’s use the WMI service object for the Windows Update service as an example.  You can get this object in PowerShell using this command:

$object = gwmi Win32_Service -Filter 'name="wuauserv"'

This command calls Get-WmiObject (using the gwmi alias) and requests the Win32_Service object that has the name “wuauserv”.  The result object is stored in the $object variable.

Now that the object is stored, if you want to view it you simply need to enter “$object” (minus the quotes) in your PowerShell console and you’ll see the default representation of that object.  It looks like this:

[Screenshot: default table output of $object showing ExitCode, Name, ProcessId, StartMode, State and Status]

Here you can see the ExitCode, Name, ProcessId, StartMode, State and Status properties for the Win32_Service object.  There are many more properties than these six, though.  You can type "$object | Format-List *" in PowerShell for yourself to see them all…it is quite a long list.  So how did PowerShell decide that these six properties were the ones to display by default?  In this case, the object itself contained the list of properties to display by default.  In PowerShell, any object may store the set of properties that will be output by default in a member on the object itself.  It's not kept in the most obvious of locations, but you can find it when you need to.

For any object in PowerShell, you can access its PSStandardMembers.DefaultDisplayPropertySet property to see the default properties for that object, if they are defined.  They won't be defined for all objects, but in our example they are.  Here are the results of the command that retrieves the default property set:

[Screenshot: output of $object.PSStandardMembers.DefaultDisplayPropertySet]

In this output window, the ReferencedPropertyNames property contains the list of properties that are displayed in PowerShell by default (ExitCode, Name, ProcessId, …).
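To reproduce that output yourself, the command is just a property access on the object (shown end to end here so it stands alone; this assumes you are on Windows with WMI available):

```powershell
# Retrieve the service object, then ask it for its default display property set
$object = gwmi Win32_Service -Filter 'name="wuauserv"'
$object.PSStandardMembers.DefaultDisplayPropertySet
```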

What caused our object to display the properties in list format, though?  Assuming the default properties are set on an object, when you output that object to the console without any Format-verb cmdlets at the end of the pipeline, the PowerShell formatting engine looks in the format data it has loaded and finds the first format data specification whose object type matches one of the types in the object's type hierarchy, which can be found by accessing the PSObject.TypeNames property.  It doesn't matter if it is for a list format, a table format, a wide format, or a custom format: the first one found is used.  The types in the object hierarchy are looked up starting with the most derived type (index 0 in that collection), then the next (index 1), and so on until the base type is reached.

For our Win32_Service object, there is no format data specification for any of the object types, so PowerShell applies a default formatting rule: if there are four or fewer properties in the list of properties to display (the default properties if they were assigned, all properties if they were not), display the object in table format; otherwise, display it in list format.  For other types of objects there may be format data found, and when that happens the default properties are ignored and the default format is derived from the first matching format data specification.  This is a simplified view of the whole process, but it should give you an idea how it works.
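You can watch this rule flip from table to list with a throwaway object; this is just a sketch to demonstrate the four-property threshold:

```powershell
# Four note properties: the default output is a one-row table
$sample = New-Object PSObject
$sample | Add-Member NoteProperty A 1
$sample | Add-Member NoteProperty B 2
$sample | Add-Member NoteProperty C 3
$sample | Add-Member NoteProperty D 4
$sample                                  # displays in table format

# Add a fifth property and the same object falls back to list format
$sample | Add-Member NoteProperty E 5
$sample                                  # displays in list format
```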

How to define the default properties for any object

Now that you have a general idea how formatting works, do you see the shortcut to defining the default output for custom objects you create?  Here’s a hint: it isn’t through the creation of a ps1xml file.

The easiest way to define the default output for a custom object is to add a PSStandardMembers member to the object and set the default properties in that member.  For objects that don’t have their default properties defined in a type data file, this is very easy to do.  Assume you have a script that generates and returns one or more custom objects with six properties: Name, Property1, Property2, Property3, Property4 and Property5.  Here’s a script to create one such object:

$myObject = New-Object PSObject
$myObject | Add-Member NoteProperty Name 'My Object'
$myObject | Add-Member NoteProperty Property1 1
$myObject | Add-Member NoteProperty Property2 2
$myObject | Add-Member NoteProperty Property3 3
$myObject | Add-Member NoteProperty Property4 4
$myObject | Add-Member NoteProperty Property5 5

To view this object, enter $myObject in the console.  Here is the default output for that object:

[Screenshot: default list output of $myObject showing all six properties]

There are no default properties assigned for this object, and there is no type data or format data for the object type, so PowerShell resorts to determining the output format based on the number of properties the object has.  This is fine for one object, like we have here, but if your script returns a collection of these objects, list format does not give users a very good experience: it is very hard to find information in a long series of objects output as lists.  Users can pipe the objects to the Format-Table cmdlet and specify the properties to show, but they shouldn't have to do that.  Instead, you can update your script so that it specifies the default properties when these objects are created.  Here are the additional commands required to specify the default properties for our sample object:

$defaultProperties = @('Name','Property2','Property4')

$defaultDisplayPropertySet = New-Object System.Management.Automation.PSPropertySet('DefaultDisplayPropertySet',[string[]]$defaultProperties)

$PSStandardMembers = [System.Management.Automation.PSMemberInfo[]]@($defaultDisplayPropertySet)

$myObject | Add-Member MemberSet PSStandardMembers $PSStandardMembers

The first command defines the list of default property names.  The second creates a property set containing those default properties.  The third wraps that property set in the collection of member info objects we need.  The last adds that collection to our object as its PSStandardMembers member set.

After you have done this your custom object will display the default properties you specified when you output it without any Format-verb cmdlets.  Here’s our $myObject default output after running the commands shown above:

[Screenshot: $myObject now displayed as a table showing only Name, Property2 and Property4]

That’s more like it!  Now we have a custom object with a predefined default property set, ready for users to start using, and our custom object still contains all of the properties for the object so that users can get additional fields if they want them!
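If you find yourself doing this in several scripts, the four commands can be rolled into a small helper.  This is a sketch; the function name is my own invention, not a built-in cmdlet:

```powershell
function Set-DefaultDisplayProperty {
    param(
        [PSObject]$InputObject,
        [string[]]$Property
    )
    # Build a DefaultDisplayPropertySet from the requested property names
    $propertySet = New-Object System.Management.Automation.PSPropertySet('DefaultDisplayPropertySet',$Property)
    # Wrap it in a member info collection and attach it as PSStandardMembers
    $standardMembers = [System.Management.Automation.PSMemberInfo[]]@($propertySet)
    $InputObject | Add-Member MemberSet PSStandardMembers $standardMembers
}

# Usage: limit the default display of $myObject to Name and Property2
# Set-DefaultDisplayProperty $myObject Name,Property2
```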

Taking it further

Armed with this knowledge, you should be able to specify the default properties on any custom objects you create without using ps1xml files.  What if you wanted to take this further?  What functionality would be useful to have for this in a generic script so that you could get even more use out of it?

Using the knowledge derived above I was able to put together a script called Select-Member.ps1 that provides rich support for selecting the default properties for an object if none exist, or overriding the default properties for an object if they were defined in a type data file.  Select-Member can be used in a pipeline against a collection, or it can be used by itself, passing the objects to process into the -inputObject parameter.

Note that when you download this script, if you haven’t already you will also need to download Get-PSResourceString.ps1.  This is a simple utility script used to look up localized error messages, and it is used by Select-Member.

Here are the syntaxes supported by Select-Member:

Select-Member [-include] <string[]> [-exclude <string[]>] [[-inputObject] <psobject>]
Select-Member -exclude <string[]> [[-inputObject] <psobject>]
Select-Member -reset [[-inputObject] <psobject>]

The first variation allows you to specify which properties you want to include and, optionally, which properties you want to exclude.  If you exclude properties, those will be removed only after the list of properties to include has been processed.  Both the include and the exclude parameters support wildcards as well.

The second variation allows you to specify which properties you want to exclude from the default set without specifying any properties to include.

The last variation allows you to reset the object so that it will use the default property set as defined by the type data files and throw away any default property set that was added with Select-Member.

Here’s a screenshot showing some cool things you can do with this script and WMI objects:

[Screenshot: Select-Member used to reshape the default output of WMI objects]

Here’s another screenshot showing how you can get better formatting on ADSI objects:

[Screenshot: Select-Member used to get better formatting on ADSI objects]

Note that in both these examples, the value isn't simply in being able to specify the defaults ad hoc like this; Format-Table already allows you to specify which properties you see.  The real value lies in writing scripts or functions that output objects already formatted a certain way.  You could have a script that sets the default properties for a set of WMI objects you use, or a script that creates its own objects and outputs them with default properties defined.  There are other opportunities with this script as well, but this should get you started.

What’s missing from this?

The Select-Member script doesn't yet support specifying the property sort order, nor does it support specifying the single default property that is used in wide format.  These could easily be added in the future, and I will look into that as time permits.

Give feedback!

As with all scripts I write, I’d love to hear what you think.  Whether you use the simple solution to specify default properties for custom objects or the more advanced Select-Member script, let me know how well it works out for you.

Thanks,

Kirk out.


Essential PowerShell: Name your custom object types

PowerShell is a very flexible scripting language that allows users to dynamically create and/or extend objects with additional methods and properties.  This is very useful when you’re trying to build up a rich data set with all of the properties or methods you need.  One important thing that is often overlooked when people are writing scripts that do this is that they can also give those objects a type name.  Why is this type name important?  Three reasons:

  1. If you want to further extend those types automatically via a type data file, you’ll want a unique type name so that only the appropriate objects are extended.
  2. If you want to apply specific default custom formatting to those types via a format data file, you’ll want a unique name so that only the appropriate objects are formatted this way.
  3. If you want to associate specific links and actions with your custom object type in PowerGUI, you’ll want a unique name so that you don’t get links and actions associated with other types.

In practice there are only two use cases where I need to create a custom object type name, and I apply different names depending on the scenario I’m working with at the time.

If I have created a brand new generic PSObject, then I apply a name appropriate to the object.  In this case, after I created my custom object and added the properties, I would do the following:

$myObject.PSObject.TypeNames.Insert(0,'MyObjectTypeName')

Alternatively, if I am extending an object of a particular type, then I apply an extended type name for that object to the modified version.  In this case, after I created my custom object and added the properties, I would do the following:

$derivedTypeName = $myObject.PSObject.TypeNames[0]
$myObject.PSObject.TypeNames.Insert(0,"$derivedTypeName#MyExtensionName")

Then if you create type or format data files, you simply need to use your new type name in the appropriate XML attribute to set up the association.  Or if you're adding functionality to PowerGUI, any links and actions you create will automatically be associated with the most derived object type, which will be the type name you applied to the object before outputting it in the PowerGUI data grid.

You can add as many type names as you want to your objects, so if you want to create a virtual object hierarchy with your custom object types, and then associate format or type data specifications with derived or base object types, you can do that as well through multiple calls to the Insert method on the TypeNames collection.  This can be useful if you want to share a certain set of functionality between two types of objects you are creating but in addition you want some functionality to be specific to each type and you don’t want to duplicate code.
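As a sketch, here is a two-level virtual hierarchy built with two Insert calls (the type names are made up for illustration):

```powershell
$report = New-Object PSObject
$report | Add-Member NoteProperty Name 'Disk usage'

# Insert the base name first, then the derived name, so the most derived
# type ends up at index 0 of the TypeNames collection
$report.PSObject.TypeNames.Insert(0,'MyCompany.Report')
$report.PSObject.TypeNames.Insert(0,'MyCompany.Report.DiskUsage')

$report.PSObject.TypeNames[0]   # MyCompany.Report.DiskUsage
```

Format or type data associated with either MyCompany.Report.DiskUsage or MyCompany.Report will now apply to this object, with the more derived name winning.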

Hopefully this will encourage you to name your custom object types and define their default properties in scripts that you share with others.

If you’re working with custom objects you should also check out the follow-up post to this that resulted from Hal’s comments on this post, titled "Essential PowerShell: Define default properties for custom objects".

Enjoy!

Kirk out.


PowerGUI hits 100,000 downloads!

Last week marked a major milestone in the history of PowerGUI.  Since the first beta was released to the web last year, PowerGUI has been downloaded 100,000 times!  Thanks to everyone for their support and participation in the forums!

Still not using PowerGUI as part of your PowerShell toolbelt?  Download the latest version from www.powergui.org and see what it’s all about!

Kirk out.


PowerShell Quick Tip: Use the command argument last when calling PowerShell.exe

Have you ever tried calling PowerShell.exe with the -NoExit argument and wondered why PowerShell still exits when it’s done running your script?  For example, if you want to quickly launch a new, clean PowerShell session that immediately runs a script, you might run this:

PowerShell.exe -Command “your script” -NoExit

When you do this, PowerShell still exits after it is done with your script.  If you were paying attention, you’ll also notice that it reports an error referencing "NoExit".  This happens because PowerShell.exe treats all arguments after -Command as part of the command you want to execute in your new PowerShell session.

The solution to this problem is simple.  Always make -Command the last named argument in your argument list.  Looking at the above example, that means rearranging the command to look like this instead:

PowerShell.exe -NoExit -Command "your script"

This will execute your script and leave PowerShell open for you to work with it afterwards.

Kirk out.


Get-ChildItem -ne dir

For the past little while I’ve had many opportunities to speak in front of a lot of IT professionals (both administrators and developers) about PowerShell.  This has included touring with the EnergizeIT Certification Bootcamp User Group Tour and presenting at various local user group events here in Ottawa.  It also included helping out a little at the PowerShell demo station at the IT Pro week of TechEd 2008.  All of this has been great fun because I really love being able to talk face-to-face with IT professionals and help them with their problems (I guess sitting in a cubicle for 10 years writing code must have gotten to me).

During these presentations, one (of several) common messages that I have been delivering is this:

If you go into cmd.exe to run some command, do it in PowerShell instead.

In general, this recommendation works well because you can draw from skills you’ve acquired while using cmd.exe and apply those directly in PowerShell.  These commands don’t work exactly the same way they did outside of PowerShell, but in most cases you won’t notice and you can start getting comfortable using the PowerShell console instead of cmd.exe.

What about the cases where you will notice that one of the commands that you know and use doesn’t function the same way in PowerShell?  Let’s look at a very common cmd.exe command that in some cases doesn’t work in PowerShell without modification and in others doesn’t give you the results you were looking for.  The command I’m talking about is the dir command.  Dir is a good example to use to illustrate this for several reasons:

  1. It is often the first command that anyone will try in PowerShell when they open it for the first time if they haven’t seen demonstrations showing all of the cool commands like Get-Service, Get-Process, Get-QADUser, etc. (aside from help, but what else are you going to try in a command shell you’re not familiar with where your current directory is somewhere in the file system?).
  2. It has a ton of command-line arguments to allow you to do some really complex directory searches on the file system with a very pithy command syntax.
  3. It outputs some useful details and aggregate information that is not part of the list of objects returned.
  4. It isn’t a standalone application that can be simply run from PowerShell.

Assume for example that you want to perform a recursive directory search for all files with a certain extension.  In cmd.exe, you would use a command syntax similar to this (varying on the extension, of course):

dir /s *.ps1

Running this command as is inside of PowerShell results in an error because the Get-ChildItem command (for which dir is an alias) doesn’t have a /s parameter.  All parameters in PowerShell are prefixed with a dash (-) instead of a slash (/), and the parameter for recursion in Get-ChildItem is -recurse (or -r), something that /s doesn’t necessarily translate to very intuitively.  Still, the help documentation you get from help dir is comprehensive, so you can quickly discover the -recurse parameter that way.  So now we have this:

dir -r *.ps1

When you run this in PowerShell, you won’t get an error anymore, but you will not likely get the results you expect either.  Personally, I know this has caused me to do some head-scratching a number of times.  The only ps1 files that will be listed are those that are in your current directory and those that are in a subdirectory (recursive) that ends with “.ps1”.  In other words, you most likely won’t get any results from this command.  Another visit to the help file and looking at the examples reveals that you actually meant to pass *.ps1 as the filter, not the path, so you need to do this:

dir -r -fi *.ps1

That’s not quite the same syntax as dir /s *.ps1 and it will definitely take some getting used to.  Also, if you were looking for the total amount of space consumed on the hard drive by these sorts of files like you would get in the results from cmd.exe, you won’t find that in PowerShell either.  This can be quite frustrating, especially to the newcomer.
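One piece of cmd.exe's footer you can recover fairly easily is the total size, by piping the results through Measure-Object; a sketch:

```powershell
# Sum the sizes of the matching files, similar to the totals in dir /s output
$stats = dir -r -fi *.ps1 | Where-Object { -not $_.PSIsContainer } |
    Measure-Object -Property Length -Sum
'{0} File(s)  {1:N0} bytes' -f $stats.Count, $stats.Sum
```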

Here’s another scenario: what if you want to do something simple like view all files in the current directory that are hidden?  It should be simple, right?  Here’s the cmd.exe syntax for that command:

dir /a:h

This command will also raise an error in PowerShell.  Once you get past the error and learn the correct syntax, you’ll discover that the syntax in PowerShell is a little more complicated, like this:

dir -fo | ? { $_.Attributes -band 2 }

Please note that the value of 2 is actually the integer value of [System.IO.FileAttributes]::Hidden, and ? is an alias for the Where-Object cmdlet.  In practice I would use the [System.IO.FileAttributes]::Hidden enumeration instead of the value 2 because I don’t have the attribute values memorized, but here I am keeping the commands as short as I can to illustrate a point.
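Spelled out with the enumeration, this is how I would write it in a script meant for others to read:

```powershell
# -Force (the full name behind -fo) includes hidden and system items in the output
dir -Force | Where-Object {
    $_.Attributes -band [System.IO.FileAttributes]::Hidden
}
```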

In both of these cases, as well as every other case I could come up with using dir, the old cmd.exe dir command syntax is shorter and it produces more information.  Plus, like many PowerShell users who have been using cmd.exe for a while, I already know the switches that are available in dir today.  There are a ton of cases like this because dir has so many switches that it supports, and these switches can be combined in many different ways to produce very useful results.

There are other issues with applying your knowledge of the dir command within PowerShell as well.  For example, with Windows Vista and Windows Server 2008, the dir command had some new switches added to it:

  • /r to show alternate file streams
  • /a:i to show files that are not content indexed
  • /a:l to show reparse points

Each of these switches works in Windows Vista and Windows Server 2008 but they don’t work in Windows XP or Windows Server 2003.  And by default, they don’t work in PowerShell either.

What do you do when facing differences like this that make it harder for you to get what you need from PowerShell?  Well, there are three options to choose from:

  1. Run back to cmd.exe when working with the file system because it already works (if it ain’t broke, don’t fix it).
  2. Translate the dir commands you use into PowerShell dir (or Get-ChildItem or gci) syntax and use them.
  3. Find a Poshoholic and ask him to help.

Personally I recommend the third option. 🙂

If you choose the first option you’re not learning PowerShell, you’re going back to oldschool cmd.exe (yuck!), you’re not getting rich .NET objects that you can work with in a pipeline, and you’re not getting support for the newer dir switches in downlevel operating systems like Windows XP and Windows Server 2003.  If you choose the second option, while that’s an admirable choice, there may be more work there than you think.  Really, the third option is the best.  And if you go with the third option, here are the sort of results you are going to get:

[Screenshot: dir with tutorial in PowerShell]

Over the last little while I’ve slowly put together a set of PowerShell extensions that will give you full dir support in PowerShell, like what is shown above, on Windows XP, Windows Vista, Windows Server 2003 and Windows Server 2008, whether you’re running 32-bit or 64-bit.  This includes support for the following:

  1. Writing the PowerShell pipeline equivalent of dir to the host, regardless of which switches are used (this can be disabled).
  2. Full alternate file stream (AFS) support on demand; this includes retrieving alternate file streams, blocking or unblocking files, and adding or removing alternate data streams on any file or folder.  Even on Windows XP and Windows Server 2003 where dir in cmd.exe doesn’t support retrieving this information!
  3. Filtering options for files that are not content indexed and showing reparse points (not to mention any other attribute filters that are supported by dir) on all platforms supported by PowerShell.
  4. Recognition of and support for the dircmd environment variable.
  5. Support for every dir switch that is available today.
  6. Outputting detailed header and footer information just like you get from dir in cmd.exe.

See that green block of text in the screenshot?  That’s tutorial information, showing you the dir command you typed as well as the equivalent PowerShell command.  This is output no matter what your dir command is, unless you disable it by running “$PSTutorialDisabled = $true”.  Want the tutorial back?  Just remove the $PSTutorialDisabled variable or set it to $false.  Also, as you may have noticed, the screenshot output includes detailed header and footer information plus integrated alternate file stream details like you get from dir.  That output is controlled by a script I wrote called Process-FileSystemChildItem (alias: pfsci), which you can see being called in the equivalent PowerShell script generated by the tutorial.  This modular support for outputting header and footer details as well as alternate file streams allows you to use Get-ChildItem and pipe the results to Process-FileSystemChildItem, selecting which output options are important to you.

What if you want to work with alternate file streams?  Looking at that list of files in the output above, removing the alternate file streams and showing header and footer information with the results is as simple as running this script:

gci *.ps1 `
    | % { if ($_.IsBlocked) { $_.Unblock() } } `
    | pfsci -showAlternateFileStreams

Want to block a file or folder?  Just do this:

(Get-Item $myItem).Block()

How about setting your own alternate file stream on a file or folder?  Here’s that syntax:

(Get-Item $myItem).WriteAlternateFileStream('Secret message','Poshoholic was here.')

To retrieve the contents of that stream, it’s as simple as this:

(Get-Item $myItem).AFS['Secret message'].GetContents()

If either the true dir support or the AFS support interests you, you’ll need to download a few files and import them into PowerShell.  The required files along with their details for importing them into PowerShell are as follows:

ImportDirCmd.ps1
Requirements: dir.ps1, Process-FileSystemChildItem.ps1, Get-PSResourceString.ps1, FileStreams.types.ps1xml, and FileStreams.format.ps1xml

This file contains the required commands to load all of this functionality into PowerShell, provided that all items are in the same folder.  To use this functionality in PowerShell, simply dot-source this file after you have downloaded it and all required files.  This is the easiest way to extend PowerShell using this functionality.

dir.ps1
Requirements: Process-FileSystemChildItem.ps1

This file contains the script that processes the dir command you enter, parses it, generates the equivalent Get-ChildItem pipeline, outputs the tutorial information to the host and executes the command.  To use this command, simply invoke the script file directly using the call operator or dot-source the file and use the dir alias it creates.

Process-FileSystemChildItem.ps1
Requirements: FileStreams.types.ps1xml, FileStreams.format.ps1xml, and Get-PSResourceString.ps1

This file contains a script that supports outputting detailed header and footer information with pipelined file system items as well as alternate file stream information.  The required type data and format data files are automatically loaded into the PowerShell session if they haven’t been loaded already.  To use this command, simply invoke the script file directly using the call operator or dot-source the file and use either the Process-FileSystemChildItem or the pfsci alias it creates.

Get-PSResourceString.ps1
Requirements: None

This file contains a script that supports loading localized resources to be used in PowerShell scripts.  This is a utility command often used when making parameters required, and therefore this file must either be placed in your path or it must be dot-sourced to create the Get-PSResourceString and grs aliases.  I use this internally when reporting certain errors back to the host, and I always call it by name without a path.

FileStreams.types.ps1xml
Requirements: None

This file contains the xml definitions for extending the PowerShell types to include types for alternate file streams, as well as extensions for FileInfo and DirectoryInfo objects so that they support file streams.  To use these extensions, import the file as follows:

Update-TypeData .\FileStreams.types.ps1xml

If you don’t update the type data manually, the type data will be updated automatically when you call Process-FileSystemChildItem, as long as the file is in the path or in the same directory as the Process-FileSystemChildItem.ps1 script.

FileStreams.format.ps1xml
Requirements: None

This file contains the xml definitions for formatting alternate file streams in PowerShell.  To use this formatting, import the file as follows:

Update-FormatData -PrependPath .\FileStreams.format.ps1xml

You must use -PrependPath to import this file correctly.  If you don’t update the format data manually, the format data will be updated automatically when you call Process-FileSystemChildItem, as long as the file is in the path or in the same directory as the Process-FileSystemChildItem.ps1 script.

Once you’ve downloaded these files, unblocked them and imported them into PowerShell, dir should work as well for you (better in some cases) as it does in cmd.exe, and you should have additional management support for blocked files and alternate file streams in general through PowerShell itself.  And hopefully, you’ll learn more about PowerShell along the way!

Thanks for reading,

Kirk out.

P.S.  Thanks to /\/\o\/\/ (Marc van Orsouw) for the work he did with alternate file streams in this post and for the related files he included in PowerTab.  This work was my starting point for the support for alternate file streams.  My extension here differs slightly from his in that I don’t use a dll (this allows me to support both 32-bit and 64-bit platforms), my method signatures are slightly different, and I added support for folders since they can also have streams added to them, but his work gave me a big head start on this.


PowerPack Challenge now open!

I mentioned earlier that there is a PowerGUI contest going on this summer called the PowerPack Challenge.  Well as of July 1st (Happy Canada Day everyone!), this contest is officially open and participants can start submitting their entries!

Interested in the details?  Read all about it here!

I look forward to seeing your contest entries!

Kirk out.


Book Review: Windows PowerShell Pocket Reference

While attending the IT Pro week of TechEd 2008, I won my pick of a book from a table of 30 or so to choose from.  I noticed a few copies of Lee Holmes’ new PowerShell book, ‘Windows PowerShell Pocket Reference’, tucked in the middle and decided to pick one up.  Sure, it would only have cost me $12.99 Cdn to buy it, and the other books I had to choose from were a $60 value or more, but I’m a Poshoholic, what else can I say?

To put it simply, I’m thrilled that I decided to pick up this book off the table.  Good things come in small packages.  I’ve only had Lee’s book for a little over a week and it has already provided me with enough value that I would even be satisfied if I had paid more than two times the list price.  Here are a few key reasons why I just love this book:

  1. It’s literally a pocket book, about the size of a short novel.  It’s lightweight and doesn’t take much space in my laptop bag, so I’ll be carrying this book with me everywhere for quite a while.
  2. It only briefly (19 short pages) introduces PowerShell and the rest is all meaty reference material that complements the documentation that is baked into PowerShell very nicely.
  3. Within 1 week I’ve already been able to use this book to quickly look up some things that either aren’t included in the baked in documentation or that aren’t detailed enough in the baked in documentation, and I’ve already discovered a few things that I wasn’t aware of that I could do with PowerShell because of this book.
  4. I can find what I’m looking for in the book very quickly by just flipping through the pages.

I should note that you shouldn’t look to this book to give you all the help you need on every cmdlet.  There’s no need for that because the PowerShell documentation already has lots of information on that front, and there are already a number of books that cover most cmdlets in detail.  But if you’re like me you don’t want that information in a book like this anyway, because that would make it too heavy and reduce its usefulness as a quick reference.  If you are looking for fast access to cool things like PowerShell regular expression syntax, statements, operators, .NET string formatting options, variables, and details on certain really useful cmdlets like Add-Member and the Format-* cmdlets and useful .NET and WMI classes, then this book is for you.

Right now this is absolutely my favorite book format.  I don’t have time to do very much reading these days, and this book is great in that regard because it cuts right to the chase and gives me what I want in very little time, which fits my schedule quite nicely.  I’d love to have a book in a similar format for using PowerShell to manage and automate Exchange 2007, System Center Operations Manager 2007, System Center Virtual Machine Manager 2007/8, SQL Server 2008, or any other platform that has PowerShell support.  Skip the introductory portion (or minimize it), throw out all of the unnecessary details, and give me the meat in a small package that I can carry with me.  Modularity in books, like you can find in well-written software.  Sure, it won’t cover everything, but it should cover enough so that I’ll get valuable information that I can use in the little time that I have, and it should be portable.  That’s what I’m after.

Aside: Ever see Japanese technical books?  They are much smaller than what we find here in North America.  Everything is still made too big over here, books included.

If you’re in the market for a new PowerShell book and you already use PowerShell, I seriously recommend you take a look at this book.  If you’re new to PowerShell and plan to use it a lot I think it’s worthwhile for you too, but if you like to learn directly from books with lots of details and descriptive examples you may want another book as well.  Then you can keep this book with you and leave the heavier one in your office.

Kirk out.


Scripting/Sysadmin meme

Andy Schneider called me out on the Scripting/Sysadmin meme that is circulating right now.  The questions are interesting and worth sharing to readers who want to get to know me better, so count me in!

How old were you when you started using computers?

9

What was your first machine?

A TRS-80.  Oh how I long for the days of fast-forwarding the tape drive until the analog counter reached the position of the program (game) I wanted to run.  The only game that I still remember is Scarfman.  After that it was an IBM PCjr, where I discovered King’s Quest (on a cartridge even, before it was labelled King’s Quest I).

What was the first real script you wrote?

I started learning Basic by writing one-liners that I would learn about in magazines like Compute!  These weren’t quite the same as the PowerShell one-liners you find today though.  Then I would evolve those one-liners into more complex scripts as I expanded my knowledge of Basic.  If I recall correctly, my first real Basic script beyond the one-liner (or several-liner) was one that displayed a menu with a list of song titles and played the song you selected (which I had encoded into the script by hand), while changing background colors as the song played.

What languages have you used?

Basic, C, Pascal, Smalltalk, Lisp, C++, Delphi, Fortran, Visual Basic, C#, Java, VBScript and PowerShell.

What was your first professional Sysadmin gig?

Well, technically I’ve never been hired as a Sysadmin.  My first related full-time job was while I finished University.  I started working at FastLane Technologies Inc., where I worked on FINAL, a scripting language for Sysadmins.  You can read more about that here.

If you knew then what you know now, would you have started in IT?

Honestly it’s hard to say.  Over the past 10 years while working in IT I’ve developed a strong passion for the environment, so while I love PowerShell I’d definitely be investigating both IT and environmentally friendly career alternatives.  Plus I love presenting, so maybe I’d be going after some combination of the three: presenting green IT solutions to sysadmins and showing them how to automate them using PowerShell.  Actually, I did a presentation about that last week as part of the Speaker Idol competition at the TechEd IT Pro week (minus the PowerShell part…it was a 5-minute presentation, after all).  I’ll blog more about that soon.

If there is one thing you learned along the way that you would tell new Sysadmins, what would it be?

Participate in the community!  Kai Axford and I talked about this to attendees of the EnergizeIT Certification Bootcamp User Group Tour last month…if you’re not participating in the community online and offline, attending your local user group meetings, and meeting other like-minded individuals around you, then you’re missing out on a huge opportunity.

What’s the most fun you’ve ever had scripting?

I’d have to say teaching myself PowerShell was by far the most fun I’ve ever had scripting.  There is just so much you can do with PowerShell; every day I was encountering multiple “Wow” moments as I grew my knowledge of it.

Who are you calling out?

Dmitry Sotnikov

Don Jones

Jeffery Hicks

Kai Axford

Thanks Andy for calling me out on this.  It was fun.

Kirk out.



PowerGUI wins Best of Tech Ed 2008 IT Professional Breakthrough Product award!

Well, color me a happy camper!  Windows IT Pro just announced that PowerGUI has won the Best of Tech Ed 2008 IT Professional Breakthrough Product award!  Breakthrough Product is described by Windows IT Pro as the best single product of the Tech Ed 2008 IT Professionals conference, and it can come from any IT Pro award sub-category.  Do yourself a favor and download PowerGUI today!  Even better, use PowerGUI to enter the PowerPack Challenge contest!

Kirk out.


Why isn’t my PowerPack showing up in PowerGUI?

Here’s an interesting question that came up recently:

“I just installed PowerGUI with the Exchange PowerPack and then I installed the Exchange 2007 Management Console.  The Exchange folder isn’t showing up in the PowerGUI Console.  Can you help?”

Yes, I can help.  Actually I came across this very same scenario myself not too long ago when working with a new VM.

If you install PowerGUI today, you can let it pick which of the core PowerPacks to install.  By default, PowerGUI will pick the PowerPacks for which you have the required PowerShell snapin(s) already installed.  For example, this means that the Exchange 2007 PowerPack will be installed automatically if you have the Exchange Management Console installed on your local machine, and the Active Directory PowerPack will be installed automatically if you have the Quest AD Cmdlets installed on your local machine.
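If you’re not sure which snapins are actually registered on your machine, you can check from a PowerShell console before troubleshooting any further.  This is just a quick sketch; the Exchange snapin name filter below is an assumption based on a typical Exchange 2007 Management Console install, so adjust it for whatever snapin your PowerPack requires:

```powershell
# List every PowerShell snapin registered on this machine,
# whether or not it is loaded in the current session
Get-PSSnapin -Registered

# Narrow the list down to a specific snapin, e.g. the Exchange 2007
# management snapin (the name pattern here is an assumption)
Get-PSSnapin -Registered | Where-Object { $_.Name -like '*Exchange*' }
```

If the snapin you need doesn’t appear in the registered list, installing the corresponding management tools is the first step before the PowerPack will load.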

Alternatively, you can also pick which PowerPacks you want to install yourself whether you have the required snapin(s) or not by using the custom install option.  I usually take this approach when I install PowerGUI, picking all PowerPacks so that I get all of the functionality that it comes with.  Most of the time this works fine when I have already installed all of the snapins I need, but what happens if you are missing one or more of the required snapins?  When you open PowerGUI for the first time your PowerGUI tree will automatically load all of the PowerPacks you installed for which you have the required snapins.  If you installed PowerPacks for which you don’t have the required snapins, they won’t show up in the PowerGUI tree.

Sounds like an easy enough problem to solve, right?  You can just install the missing snapin and then come back into PowerGUI and…um…notice that the PowerPack still isn’t there.  This is because the automatic loading of the PowerGUI tree is only done once (at the moment; this will likely change in the future).  If you don’t have the required snapin installed when that happens, the PowerPack doesn’t get loaded into the PowerGUI tree.  Herein lies the problem for the individual who asked the question above.

Fortunately the solution is straightforward once you know where the missing PowerPack is stored.  When PowerGUI is installed, all PowerPacks that are installed with it are placed in the PowerPacks subfolder of the PowerGUI installation folder.  To load a PowerPack that is missing because the prerequisite snapin wasn’t installed earlier, all you need to do is select File | PowerPack Management, click Import, browse to that folder, and select the PowerPack that you want to import.  Once PowerGUI has verified that you have the required snapin(s) for that PowerPack installed, it will import the PowerPack and your missing folder will be available in the PowerGUI tree.  And in case you weren’t aware, this is also how you extend PowerGUI with other PowerPacks that you download from the library: download the PowerPack file, right-click the root node, select Import, and then select the PowerPack file you downloaded.

Hopefully this will help as you start using more snapins to manage and automate more with PowerShell and PowerGUI.

Kirk out.
