Essential PowerShell: Use non-portable console customizations sparingly

Isn’t it odd how issues with software often come up in groups?  I’ve been helping people use software for a long time, and I find it uncanny how, when I come across an issue once, two or three other occurrences of the same issue are bound to be brought to my attention shortly afterwards.  Maybe it just seems that way because once I’m on track to discovering the issue I notice other occurrences of it more easily.  All I know is that this happens all the time.

Recently one such issue came to my attention quite a few times, and it needs to be talked about.  Those facing the issue asked questions like these:

  • Why doesn’t this command work in that PowerShell console?
  • Why did my PowerShell script work when I ran it here but it doesn’t work when I run it there?
  • Why can’t I find command commandName?  It worked fine when I used it in the other console.
  • I used to be able to use drive driveName, but I can’t anymore.  Why?

The answer to these questions lies in recognizing one area where PowerShell is not very consistent: customized PowerShell consoles.

Since PowerShell has now been out for over a year, many product teams provide PowerShell snapins for their products.  This includes Microsoft products like Exchange, System Center Operations Manager, System Center Virtual Machine Manager, SQL Server and IIS (among others) as well as ISV products like VMware ESX and Server (the VI Toolkit) and Quest ActiveRoles Server (the QAD cmdlets).  And there are others who provide PowerShell snapins for a particular business need, like SDM Software’s Group Policy Management Console (GPMC) Cmdlets, /n software’s NetCmdlets, and SoftwareFx’s PowerGadgets.  Many (the majority, in fact) of these snapins come with their own customized PowerShell console.  These customized consoles are designed to do one or more of the following:

  1. Display welcome text with help.
  2. Show a tip of the day.
  3. Run in elevated mode on Windows Vista and Windows Server 2008.
  4. Load the PowerShell snapin(s) relevant to the product that the shell customization came with.
  5. Change the current location to a provider that was included with the snapin(s) or a drive that was created within the customized shell.
  6. Create custom commands (functions and aliases) to make it easier to use the snapin(s).
  7. Prompt the user for connection-related information to establish a connection required for the cmdlets to work.

There are definitely other possibilities for how these customized consoles might be used, but this list gives you the general idea.  Most of these customizations are helpful because they give the PowerShell newcomer a starting point; however, more than half of them can give users the wrong impression and cause them to ask the questions listed above when they use other PowerShell consoles.  Let’s look at some examples.

One thing in common among each of the customized consoles is that they load the PowerShell snapin(s) relevant to the product that the shell customization came with.  The Exchange Management Shell loads the two snapins that come with Exchange, so that users don’t have to do this to use Exchange cmdlets:

Add-PSSnapin -Name `
Microsoft.Exchange.Management.PowerShell.Admin
Add-PSSnapin -Name `
Microsoft.Exchange.Management.PowerShell.Support

This might not seem like a big issue; however, in practice it seems to give users the false impression that they can simply call the cmdlets they need from any PowerShell script or console, which ultimately results in head-scratching when commands or a script don’t work somewhere else.  And as indicated, this is common among the customized consoles, so the IIS PowerShell Management Console, System Center Operations Manager Command Shell, Windows PowerShell with PowerGadgets, and others all do the same thing, loading their respective snapins automatically.
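
If you want a script that uses these cmdlets to be portable, one approach is to have the script load the snapin it needs when that snapin isn’t already present.  Here’s a minimal sketch of that idea for the Exchange admin snapin (Get-Mailbox is just an example Exchange cmdlet; adjust the snapin and cmdlet names for the product you are scripting against):

if (-not (Get-PSSnapin -Name Microsoft.Exchange.Management.PowerShell.Admin `
        -ErrorAction SilentlyContinue)) {
    Add-PSSnapin -Name Microsoft.Exchange.Management.PowerShell.Admin
}
# Exchange cmdlets such as Get-Mailbox can now be called safely below this point.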

Another customization that seems to be common is for consoles to provide custom commands (aliases and functions) that are only available in that particular console.  VMware does this in their VMware VI Toolkit (for Windows).  When you open that console, you are presented with a message that shows you four commands as useful starting points, three of which only work in the VMware VI Toolkit (for Windows) shell (FYI, the commands I’m referring to are Get-VICommand, Get-VC and Get-ESX).  You can imagine the confusion that causes when someone tries to use one of these commands in another console.  Of course VMware isn’t the only one that does this.  The Exchange Management Shell creates three commands that are only available by default in that console (Get-ExCommand, quickref and Get-ExBlog), and the System Center Operations Manager Command Shell creates 10 commands that are only available by default in that console.  I won’t bother listing all of those here because I’m sure you get the picture by now.  Trying to use these commands in other consoles without adding them to your profile or explicitly creating them results in an error indicating that the command was not found, and invariably some head scratching for the individual trying to run them.

A third type of customization is to check for the presence of a connection and prompt users for connection information if a local connection is not detected.  The System Center Operations Manager 2007 Command Shell provides connection management like this in its console.  The connection that is established is not usable outside of that console, and this isn’t clearly documented, so users need to be aware that their scripts will have to include commands to make the required connections in order to work in any PowerShell environment.

There are surely going to be other examples of this as different teams customize their console environments to meet their needs.  As PowerShell end users, while these customized consoles are very convenient, what can we do to make sure we’re aware of the customizations being made?

Fortunately the console customizations are easy to discover.  Every customized console uses PowerShell’s command-line parameters to perform the customizations.  This means you can see what customizations are being performed by viewing the properties of the shortcut used to launch a customized console and examining the command-line parameters passed to powershell.exe.  The customizations you need to look at are the PowerShell console file that is used (identified by the -psconsolefile argument) and the script that is executed (identified by the -command argument).  The PowerShell console file (psc1 file) is an XML document that defines which snapins should be automatically loaded by PowerShell when it starts.  The snapins identified in this file are silently added to the PowerShell session when it is opened.  The command argument identifies the PowerShell script or PS1 file that will be run after the snapins are loaded.  This script is used to customize the look of the console, create custom commands, manage connections, etc.
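
For example, the Target field of such a shortcut typically looks something like the following (the product name and paths here are invented for illustration):

powershell.exe -PSConsoleFile "C:\Program Files\SomeProduct\SomeProduct.psc1" -NoExit -Command ". 'C:\Program Files\SomeProduct\InitializeShell.ps1'"

The psc1 file it references is plain XML, so you can open it in any text editor (or dump it with Get-Content) to see exactly which snapins it loads, and you can open the referenced ps1 file the same way to see what else the console sets up.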

Now that you have this information, all you need to do is make sure you are aware of the customizations in the console you are using, particularly those that are not portable to other consoles, so that you don’t make incorrect assumptions when you write your scripts.

Before I close this off, I have a request that I’d like to put out there for snapin developers.  If you’re creating a customized PowerShell console when your snapin is installed, please make an effort to make those customizations self-documenting.  I don’t want you to hurt the end-user experience you’re after, but I think a bit of carefully worded output that identifies your console customizations as being specific to that console would go a long way toward educating beginner PowerShell users about how consoles can be customized and what they need to be aware of when switching from one console to the next.  Without that information, users simply aren’t getting what they need to use your commands in the other consoles they might use.  And of course, if you are making custom functions that are only available in your customized console, ask yourself: should those functions only be available in one console, or should they be available all the time?  Often I bet the answer is the latter, so please consider making cmdlets for those functions you feel are necessary for the right experience when using your snapin.  Otherwise you’re just making it more difficult for your users to have the experience that you want them to have.

Thanks for reading!

Kirk out.


How to navigate in a PowerShell provider without a PSDrive

Here’s an interesting PowerShell trick that I hadn’t come across before.  I may be mistaken, but I don’t believe this is documented anywhere either.

In PowerShell, you can create PowerShell drives (PSDrives) to provide you with fast access to specific locations in the PowerShell providers that you are using.  That is very convenient, but if you are writing scripts that use the drives you create and that you share with others, you need to make sure the PSDrive is created for your script to work.  Or do you?

In PowerShell, you can set the current location to any location on a provider using the following syntax:

Set-Location PSDriveName:RelativePath

In this syntax, RelativePath refers to the path relative to the root of the PSDrive identified by PSDriveName.

Alternatively, you can also set the current location to any location on a provider using the following syntax:

Set-Location PSProviderName::AbsolutePath

In this syntax, AbsolutePath refers to the absolute path to the location on the respective drive (which includes the root if necessary) and PSProviderName refers to the name of the provider, with or without the PowerShell snapin name prefix.

Let’s look at a few specific examples.

If you wanted to set the current location to C:\Windows, you could do any of the following:

  1. Set-Location C:\Windows
  2. Set-Location FileSystem::C:\Windows
  3. Set-Location `
       Microsoft.PowerShell.Core\FileSystem::C:\Windows

How about the registry?  Let’s say you wanted to browse HKEY_USERS in your registry so that you could work with the default user configuration.  To do this, you could do any of the following:

  1. New-PSDrive HKU Registry HKEY_USERS
    Set-Location HKU:\.DEFAULT
  2. Set-Location Registry::HKEY_USERS\.Default
  3. Set-Location `
       Microsoft.PowerShell.Core\Registry::HKEY_USERS\.Default

What if you wanted to view the root of the registry?  I have no idea how to do that using a PSDrive, but you can do it using the provider path like this:

Set-Location Registry::

The syntax looks a little unusual, but it can be useful.  Executing Get-ChildItem from there gives you all of the hives that are available in the registry.  If you want to do a search across an entire registry, this seems to be the right place to start.
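
For example, here’s a rough sketch of what such a search might look like, finding every key whose name contains the word PowerShell (searching the whole registry this way is slow, and -ErrorAction SilentlyContinue is there to skip keys you don’t have permission to read):

Set-Location Registry::
Get-ChildItem -Recurse -ErrorAction SilentlyContinue |
    Where-Object { $_.Name -match 'PowerShell' }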

So far all of these examples have been for the providers that are included in the PowerShell 1.0 RTM release.  But the syntax described above works for providers from registered snapins as well.  Using the IIS 7.0 PowerShell Provider Tech Preview 1 we can do the same sort of thing.  For example, I can browse the application pools like this:

Set-Location WebAdministration::\\POSHOHOLIC\AppPools

Note that in this example, POSHOHOLIC is the name of the computer (and it must be in uppercase for it to work in Tech Preview 1 of this provider).  And yes, I could have used IIsProviderSnapin\WebAdministration for the provider name here as well.

One last example from a registered snapin is from the SQL Server 2008 CTP.  You can navigate to the root of this provider like this:

Set-Location SqlServer::SQLSERVER:

This puts you in the same location as if you set the location to SQLSERVER:.  Note again, in this example, the second SQLSERVER must be in uppercase for it to work in the February 2008 CTP version of this provider.  And the syntax here is a little odd as well…the second SQLSERVER really shouldn’t be necessary IMHO.  But it’s there, so we have to use it.

I should also mention that these provider-based paths should work wherever you can use a path.  I tested out a few of them and was satisfied enough to feel confident that the paths will work anywhere.
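
To give a rough idea, here are a few sketches that use provider-qualified paths with -Path parameters instead of with Set-Location (these use only the built-in Registry and FileSystem providers):

Test-Path -Path Registry::HKEY_USERS\.DEFAULT
Get-ItemProperty -Path Registry::HKEY_CURRENT_USER\Environment
Get-ChildItem -Path Microsoft.PowerShell.Core\FileSystem::C:\Windows -Filter *.exe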

By now you most likely get the idea.  You can continue to write your scripts to create PSDrives and access everything by drive name, or you can work directly with provider-based paths so that you don’t have to create PSDrives and so that you can access provider roots that are otherwise unavailable.

Kirk out.


Use PowerGUI to manage SQL Server 2008

SQL Server 2008 marks the first release for SQL Server that includes PowerShell support.  This is just the beginning of a trend for all Microsoft Server products now that PowerShell is part of their Common Engineering Criteria beginning in fiscal year 2009.  I just spent the past week or so experimenting with PowerShell and SQL Server, first using SMO directly and then using the snapins that are part of SQL Server 2008.

I’m still testing the waters in many places but so far I’m pretty happy with the PowerShell support in SQL Server 2008.  Back when they first announced support, it didn’t sound all that impressive but now that I’ve dug in and started using it myself I’ve found that it is much more than I thought it would be.  SQL Server 2008 is still in CTP, so there are still bugs and still changes coming, but overall this looks like a nice addition to PowerShell, and one that should get even better through service packs as time goes on.

While working with the SQL provider and cmdlets I put together my first attempt at a SQL Server PowerPack for PowerGUI.  This PowerPack is pretty lightweight at this point, allowing you to browse through the SQL Server instances you have, add connections to other servers, open tables and views and view their contents, as well as a few other miscellaneous things.  It requires the SQL Server 2008 client tools; however, it seems to work fine with SQL Server 2005 (and presumably SQL Server 2000, since it uses SMO and WMI under the covers) once you have the SQL Server 2008 client tools installed.  You can download the PowerPack here.

Over the next little while I will be continuing to enhance this PowerPack, so if you work with SQL Server and PowerShell and have any feedback or enhancement requests for this PowerPack, please let me know through comments, email (see my about page), or the PowerGUI forums.

Thanks,

Kirk out.


Public beta of Quest AD cmdlets v1.1 now available

Quest Software (my employer, for the record) has just released the first public beta build of version 1.1 of the ActiveRoles Management Shell for Active Directory (aka Quest AD cmdlets).  If you haven’t looked at these cmdlets yet, they fulfill the scripting needs of AD administrators using PowerShell today by providing them with cmdlets to facilitate management of Active Directory.

You’ll quickly notice once you download the beta that the Quest AD cmdlet team has been hard at work too, with 40 cmdlets available in this beta, now including support for security and permission management!  More fun commands to play with!

If you want to download the latest beta, you can find it here.  And feedback is welcome and appreciated on the PowerGUI community site in the AD forums.

Kirk out.


How to create a PowerPack

A little while back Marco Shaw invited me to present at one of the PowerShell Virtual User Group meetings he runs regularly.  I was quite looking forward to presenting, and I was going to demonstrate how you can extend the PowerGUI administration console as well as how you can share these extensions by exporting them in PowerPacks and making them available to the PowerShell community.  Creating PowerPacks is a large part of what I do at work every day, and I get a lot of questions about how to do it, so I was looking forward to being able to answer those questions in my demonstration.

Unfortunately I had some challenges in front of me at the time and I ended up cancelling my presentation (sorry Marco!).  Still, I really wanted to show how PowerGUI can be extended and how PowerPacks are made, so I recently recorded a screencast that contains pretty much everything I was hoping to show off in my presentation.  Are you interested in learning how you can extend PowerGUI and how you can create your own PowerPacks?  You can check out the screencast/tutorial I made here.

Are there other screencasts/tutorials you would like to see for PowerShell and/or PowerGUI?  Let me know.  If comments don’t work for you, you can find my contact information in my about page.

And lastly, are there things you would like to see in the PowerPacks that come with PowerGUI?  Are there PowerPacks that you would like to see that aren’t published yet?  Let me know that as well!

Kirk out.


whence in PowerShell

[Update: Thanks to Joel Bennett for the comments; my original version didn’t account for external scripts and applications having the same precedence, but this revised version does]

Recently a fellow MVP pointed out that PowerShell doesn’t have a whence command readily available.  whence is a command from the Korn Shell.  When a name is provided to the whence command, it returns the way in which that name will be interpreted by the shell.

While PowerShell doesn’t have that functionality out of the box today, you can easily add it with the addition of a simple function to your PowerShell profiles.  Here’s my interpretation of the whence command in a PowerShell function:

function whence {
    param(
        [string[]]$command,
        [Switch]$ReturnAll
    )

    # Put any additional arguments in $command
    if ($args.Count -gt 0) {
        $command += [string[]]$args
    }

    # Read the current path environment variable
    $path = $env:Path.Trim(';').Split(';')

    # Store an array of all command types that have equal precedence
    $equalPrecedence = 'ExternalScript','Application'

    # Set the command precedence expression for sorting
    $cmdPrecedence = {
        if ($equalPrecedence -contains $_.CommandType) {
            [Management.Automation.CommandTypes]$equalPrecedence[0]
        } else {
            $_.CommandType
        }
    }

    # Set the path precedence expression for sorting
    $pathPrecedence = {
        if ($equalPrecedence -contains $_.CommandType) {
            [Array]::IndexOf(
                $path,
                [IO.Path]::GetDirectoryName($_.Definition)
            ) + 1
        } else {
            1
        }
    }

    # Filter out all but the first command if appropriate
    foreach ($item in $command) {
        if ($ReturnAll) {
            Get-Command -Name $item |
                Sort-Object -Property $cmdPrecedence,$pathPrecedence
        } else {
            Get-Command -Name $item |
                Sort-Object -Property $cmdPrecedence,$pathPrecedence |
                Select-Object -First 1
        }
    }
}

This function supports retrieving multiple commands at once, whether they are passed in as an array or not.  For example, you can retrieve the commands for Get-Help and Get-Command by calling either of the following:

  1. whence Get-Help,Get-Command
  2. whence Get-Help Get-Command

It also supports retrieving all results for a specific command so that you can determine why the command you want isn’t executing when you try to call it by name.  Here’s an exaggerated example, to illustrate the value here:

PS C:\> whence Get-Help -ReturnAll

CommandType    Name         Definition
-----------    ----         ----------
Alias          get-help     get-help
Function       Get-Help     begin {...
Cmdlet         Get-Help     Get-Help [[-Na...
ExternalScript Get-Help.ps1 C:\Get-Help.ps1
Application    get-help.exe C:\get-help.exe

If I had multiple executable applications, such as get-help.exe and get-help.cmd, this would return them in their precedence order.  If I were trying to execute any command here other than the alias, this would help me figure out why it wasn’t executing when I simply called get-help.

To use this command in your PowerShell environment, simply copy the function into your PowerShell profile and it will be available the next time your profile is run.

Enjoy!

Kirk out.


2008 Scripting Games Statistics

Now that the 2008 Scripting Games are over, I was wondering how the various scripting languages broke down in terms of individual participation.  I contacted fellow MVP Marco Shaw about this a few weeks ago because last year he wrote a script that would generate a nice chart using PowerGadgets showing the breakdown of the 2007 Scripting Games participation by division for each country.  He had been working on running his old script against this year’s results, and was kind enough to let me have his work in progress to experiment with myself (thanks Marco!).

After tweaking the script off and on (more off than on) over the past few weeks I’ve managed to get the results I was looking for.  The following screenshot shows two charts from the results of each of the last two years of the Scripting Games, all generated using PowerGadgets.  The charts on the left show the breakdown of individual participation by country for the top 10 countries (where the top 10 countries are defined by those with the most unique participants across all divisions), sorted alphabetically.  The charts on the right show the number of unique participants in each division.  The 2007 results are on the top, and the 2008 results are on the bottom.

[Screenshot: Scripting Games statistics dashboard charts generated with PowerGadgets]

The results are pretty interesting.  Not surprisingly, the charts show that PowerShell is growing in popularity.  Last year there were 1/3 as many participants in the PowerShell categories as in the VBScript categories.  This year that gap has narrowed, with PowerShell participation climbing to just under 1/2 of the VBScript participation.  The charts also show that there were only two changes in the top 10 participating countries since last year, and that VBScript wasn’t the scripting language of choice in all top 10 countries in either year.

In addition to the charts that are output, my updated version of Marco’s script also outputs some general statistical information for the years that it is being run against.  From this I can see that the number of individual participants has increased from 510 in 2007 to 709 in 2008, with the number of active participants (where an active participant is defined as one that participated in 5 or more events) increasing from 378 in 2007 to 563 in 2008.

The script used to generate these results can be found here.

All in all, the Scripting Games seem to be increasing in popularity year over year which is likely a trend that will continue as PowerShell and other scripting languages continue to gain traction.  It will be interesting to see how things pan out next year!

Kirk out.


P.S. One of the many things I was involved in while I wasn’t blogging during the month of February was the 2008 Scripting Games.  A while back Scripting Guy Greg Stemp invited me to be a guest commentator for this year’s games (thanks Greg!) and I was assigned Advanced Windows PowerShell Event #5.  While I unfortunately didn’t have time to participate in the other events this year, I did find some spare time during a train trip to Toronto, so I wrote my solution for the event on the train.  The games are all done for this year, but if you’re interested in my solution, it can be found here.

P.P.S. I’m trying out using Windows Live SkyDrive as the site from which to share ps1 files.  If you have any problem viewing the script file I’ve linked to in this article, please let me know.

Learn about PowerShell at Ottawa Code Camp 2008!

2008 marks the first year that Ottawa will be hosting a Code Camp event.  A code camp is a free one-day event by developers, for developers.  It’s a great place to spend a Saturday learning about developer-related material from your peers.  This year’s event takes place on April 5, 2008 at Algonquin College on the corner of Baseline and Woodroffe.

I’ll be presenting a PowerShell session at the Ottawa Code Camp 2008 event, titled “What is PowerShell and what opportunities does it provide to a developer?”.  It will run about 60-70 minutes, which isn’t much time considering what I want to present, so it will likely be a fun presentation as I try to pack a lot of information into a little bit of time.  I’m hoping to whet a PowerShell appetite you didn’t know you had while I show you what PowerShell is and how you can use PowerShell for rapid prototyping of .NET code, test-driven development, and support purposes.

Of course there are many other sessions worth attending too.  All of the sessions at the Code Camp will be great places to start learning new technologies and to ask questions.

You can find out more about the event, the speakers, the sessions, and how to register on the official Ottawa Code Camp site.

I hope to see you there!

Kirk out.


PowerShell Deep Dive: Using $MyInvocation and Invoke-Expression to support dot-sourcing and direct invocation in shared PowerShell scripts

When creating PowerShell script (ps1) files to share with the community, there are a few different ways you can configure their intended use.  You can configure a ps1 file so that it contains one or more functions and/or variables that must be loaded into PowerShell before they can be used.  This loading is done via a technique called dot sourcing.  Alternatively you can make the body of the ps1 file be the script itself that you want to share with the community without encapsulating it in a function.  Using this configuration, your script consumers will be required to invoke the script using the absolute or relative path to your ps1 file, prefixing it with the call operator (&) and wrapping it in quotation marks if the path contains a space.  Let’s look at each of these in more detail and some advantages to each approach.

Dot-sourcing a ps1 file is like running the PowerShell script it contains inline in the current scope.  You can pass in parameters when you dot-source a ps1 file, or you can dot-source it by itself.  To dot-source a ps1 file you must use the full absolute or relative path to that file.  Aside from the handling of any parameters, the PowerShell script inside the ps1 file is run as if you had typed it manually into the current scope.  An advantage to this approach is that the variables and functions within the ps1 file that use the default scope will be declared in the current scope, and therefore they will be available afterwards without requiring users to know the location of the script file.  This allows users to dot-source a ps1 file in their profile and have the functions and/or variables it contains available to them in every PowerShell session they open.  If you had a ps1 file with the path ‘C:\My Scripts\MyScript.ps1’, you would dot-source it like this:

. 'C:\My Scripts\MyScript.ps1'

Before I get to invoking scripts directly, I need to make an important note about dot-sourcing script files.  Be careful here: it is possible to dot-source a script that was intended to be invoked, pass it parameters, and have it appear to run the same as if you had invoked it, but this is not a good practice.  Only dot-source ps1 files containing functions and variables you want available in your current session.  If the ps1 file you are using was intended to be invoked and not dot-sourced, steer clear of the dot-source operator.  Otherwise you risk leaving crumbs (variables and functions) of the script files you dot-source behind in your current session, some of which may have been intended to be deleted when they went out of scope (secure strings used to temporarily store passwords, for example).  Since the current scope is the root scope, these won’t go out of scope until you close PowerShell.  I have seen users dot-source ps1 files while passing parameters many times in the online community, and those users should be using the call operator instead.
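
As a hypothetical illustration, suppose Connect-Something.ps1 (an invented name for a script that was meant to be invoked) stores a credential in a variable while it runs.  Dot-sourcing it leaves that variable sitting in your session:

. 'C:\My Scripts\Connect-Something.ps1'   # dot-sourced by mistake

# The script ran in the current (root) scope, so its working variables (such
# as $credential in this invented example) linger until you close PowerShell:
Get-Variable -Name credential -ErrorAction SilentlyContinue

Now back to invoking scripts directly…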

Invoking a script directly is akin to calling a function.  You can pass in parameters when you invoke a ps1 file, or you can invoke the ps1 file by itself.  To invoke a ps1 file you must use the full absolute or relative path to that file.  If that path contains one or more spaces, it must be wrapped in quotation marks and the call operator (&) must be used; otherwise it will just be treated as a string and output to the console (note: it is a good practice to always use the call operator when invoking a script this way, so that it works whether or not the path contains spaces).  When you invoke a ps1 file, a child scope is created and the contents of that ps1 file are executed within that child scope.  An advantage to this approach is that the script file doesn’t leave anything behind after it is run unless it explicitly declares a function or variable as global.  This keeps the PowerShell environment clean.  If you had a ps1 file with the path ‘C:\My Scripts\MyScript.ps1’, you would call it like this:

& 'C:\My Scripts\MyScript.ps1'

Between these two approaches, there is no best practice indicating which is the right one to use.  It seems to simply be a matter of preference.  Unfortunately, for the most part it is the script author’s preference, not the script consumer’s.  For script consumers to get ps1 files they find in the online community working the way they want, they may have to modify the file to get it to dot-source correctly, or to run correctly when invoked using the call operator, or they may just copy and paste the script into their own ps1 file or profile to get it running the way they like.  The end result is that each time a ps1 file is updated by its author, the script consumer may have manual steps to take to get that update into their own environment.

What if ps1 files could be created so that they supported both of these configuration approaches?  What if they always worked as expected whether they were dot-sourced or invoked directly?  And what if you want the functionality that the ps1 file provides to work inside of a pipeline, whether you dot-source it and use a function call or invoke it directly inside your pipeline?  Fortunately, PowerShell is a rich enough scripting language to allow you to do just that.

The first thing you need to do to make this work is to determine how the script file was used.  PowerShell includes a built-in variable called $MyInvocation that allows your script to look at the way it was used.  Among other things, $MyInvocation includes two properties you’ll need to understand when making this work: InvocationName and MyCommand.  InvocationName contains the name of the command that was used to invoke the script.  If you dot-sourced the script, this will contain ‘.’.  If you invoked the script using the call operator, this will contain ‘&’.  If you invoked the script using the path to the script itself, this will contain the exact path you entered, whether it was relative or absolute, UNC or local.  MyCommand contains information that describes the script file itself: the path under which it was found, the name of the script file, and the type of the command (always ExternalScript for ps1 files).  These two pieces of information can be used together to determine how the script was used.  For example, consider a script file called Test-Invocation.ps1 at the root of C on a computer PoShRocks that contains the following script:

if ($MyInvocation.InvocationName -eq '&') {
    'Called using operator'
}
elseif ($MyInvocation.InvocationName -eq '.') {
    'Dot sourced'
}
elseif ((Resolve-Path -Path $MyInvocation.InvocationName).ProviderPath -eq `
    $MyInvocation.MyCommand.Path) {
    "Called using path $($MyInvocation.InvocationName)"
}

Regardless of whether you dot-source Test-Invocation.ps1 or invoke it directly, and regardless of whether you use a relative local path, an absolute local path, or an absolute remote (UNC) path, this script will output how it was used.  Here are a few examples of how you might use this script, with the associated output:

PS C:\> . .\Test-Invocation.ps1
Dot sourced
PS C:\> . C:\Test-Invocation.ps1
Dot sourced
PS C:\> . \\PoShRocks\c$\Test-Invocation.ps1
Dot sourced
PS C:\> & .\Test-Invocation.ps1
Called using operator
PS C:\> & C:\Test-Invocation.ps1
Called using operator
PS C:\> & \\PoShRocks\c$\Test-Invocation.ps1
Called using operator
PS C:\> .\Test-Invocation.ps1
Called using path .\Test-Invocation.ps1
PS C:\> C:\Test-Invocation.ps1
Called using path C:\Test-Invocation.ps1
PS C:\> \\PoShRocks\c$\Test-Invocation.ps1
Called using path \\PoShRocks\c$\Test-Invocation.ps1

As you can see, each time our script knows exactly how it was used, so we can use that to make it behave appropriately in any situation.

Now that we’re armed with that knowledge, let’s add a function to our script that will do something simple, like output the definition of another PowerShell function.  First, we’ll need to write our function:

function Get-Function {
    param(
        [string]$name = $(throw 'Name is required')
    )
    if (-not $name) { throw 'Name cannot be empty' }
    if ($name -match '[^a-z0-9-]') {
        Write-Error 'Unsupported character found.'
    }
    elseif ($function = Get-Item -LiteralPath function:$name) {
        "function $name {
`t$($function.Definition)
}"
    }
}

This function is pretty straightforward.  You call it passing in the name of a function and it outputs the function definition, including the name, to the console.
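
For instance, once Get-Function is defined in your session, asking it for the built-in prompt function produces output along these lines (the exact body will vary with your profile):

PS C:\> Get-Function -name prompt
function prompt {
        'PS ' + $(Get-Location) + $(if ($nestedpromptlevel -ge 1) { '>>' }) + '> '
}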

The next step is to follow up that function definition with a slightly modified version of our Test-Invocation.ps1 script.  Basically we just want to know if the file was invoked or dot-sourced.  If it was invoked, we want to automatically call our Get-Function function and pass the parameters used during the invocation directly through to the Get-Function function call.  If it was dot-sourced, we don’t want to do any additional work because the function will be imported into the current session so that we can use it without the script file, as intended.  This has the added benefit of preventing users from executing script through dot-sourcing that wasn’t intended to be executed.  Here’s the start of the additional script that we’ll need to put after our Get-Function definition:

if ($MyInvocation.InvocationName -ne '.') {
    Get-Function # How do we pass arguments here?
}

This additional piece of script uses a simple if statement to compare $MyInvocation.InvocationName against the dot-source operator.  If they are equal, this portion of the script does nothing, allowing the function to be dot-sourced into the current session without invoking it.  If they are not equal, we know that the script was invoked either directly or using the call operator, so we need to call Get-Function so that the invocation uses the internal function automatically.  But as noted in the comment in the snippet above, how do we pass the arguments that were used during the invocation into the internal function?  There are two possible approaches that I can think of to resolve this.  We could use the param statement at the top of the script to identify the same parameters that are in the Get-Function function.  The problem with this approach is that it duplicates code unnecessarily, and I really don’t like duplicating code.  Another approach is to use Invoke-Expression inside of our if statement to pass the parameters received from the invocation of the script directly into the internal function.  The only special trick required in this approach is to only evaluate parameters that start with ‘-’.  This is necessary so that the parameters of the internal function can be used by name, just like they could if you dot-sourced the script first and then invoked the function.  I think that’s a much better approach, so here’s our updated if statement:

if ($MyInvocation.InvocationName -ne '.') {
    Invoke-Expression "Get-Function $($passThruArgs = $args; for ($i = 0; $i -lt $passThruArgs.Count; $i++) {if ($passThruArgs[$i] -match '^-') {$passThruArgs[$i]} else {`"`$passThruArgs[$i]`"}})"
}

In this implementation, if the script file was invoked, Invoke-Expression is used to invoke the Get-Function function, passing arguments received by the script directly through to Get-Function.  And as just mentioned, I use the -match operator to determine whether a given argument starts with -, in which case I evaluate it so that I end up calling Get-Function using named parameters.  This is a trick that I find applies nicely to quite a few situations in my PowerShell scripting.

At this point, we have a complete script file that can be invoked to execute the internal function directly or dot-sourced to import the internal function into PowerShell, all with a little help from $MyInvocation and Invoke-Expression.  This script can be seen below.

Get-Function.ps1 listing #1:

function Get-Function {
    param(
        [string]$name = $(throw 'Name is required')
    )
    if (-not $name) { throw 'Name cannot be empty' }
    if ($name -match '[^a-z0-9-]') {
        Write-Error "Unsupported character found in $name."
    }
    elseif ($function = Get-Item -LiteralPath function:$name) {
        "function $name {
`t$($function.Definition)
}"
    }
}
if ($MyInvocation.InvocationName -ne '.') {
    Invoke-Expression "Get-Function $($passThruArgs = $args; for ($i = 0; $i -lt $passThruArgs.Count; $i++) {if ($passThruArgs[$i] -match '^-') {$passThruArgs[$i]} else {`"`$passThruArgs[$i]`"}})"
}

Now, I’m sure you’re thinking that’s great, flexible, etc., but where’s the pipeline support that you mentioned would work as well?  Well, as mentioned earlier, this is also possible in PowerShell although it adds another layer of complexity to the script.  The nice part though is that it will work whether it is used in a pipeline as an invoked ps1 file or as an invoked function that was previously imported by dot-sourcing the ps1 file.  The trick is to use the Begin, Process and End blocks and the $_ variable both in the ps1 file at the root level and in the internal Get-Function function.

At the root scope of the script file, the Begin block is used to declare any functions and variables used in the script.  The Process block actually calls the function that is being exposed through the script (in a pipeline if appropriate), and the End block is used for cleanup (although we don’t have any cleanup to do).  Similarly, inside the Get-Function function, the Begin block is used to check parameters that don’t support pipeline input, the Process block is used to check the state of some parameters and actually do the work (using the objects coming down the pipeline if appropriate), and the End block is used for cleanup (although again, we don’t have any).  The end result of adding these to our script and making a few modifications so that users can invoke the script file or the function with -? and get the syntax can be found in Get-Function.ps1 listing #2.

Get-Function.ps1 listing #2:

BEGIN {
  function Get-Function {
    param(
      [string]$name = $null
    )
    BEGIN {
      if (($name -contains '-?') -or ($args -contains '-?')) {
        'SYNTAX' | Write-Host
        "Get-Function [-name] <string>" | Write-Host
        break
      }
    }
    PROCESS {
      if ($name -and $_) {
        throw 'Ambiguous parameter set'
      }
      elseif ($name) {
        $name | Get-Function
      }
      elseif ($_) {
        if ($_ -match '[^a-z0-9-]') {
          throw 'Unsupported character found.'
        }
        elseif ($function = Get-Item -LiteralPath function:$_) {
          "function $_ {
`t$($function.Definition)
}"
        }
      }
      else {
        throw 'Name cannot be null or empty'
      }
    }
    END {
    }
  }
}
PROCESS {
  if ($MyInvocation.InvocationName -ne '.') {
    if ($_) {
      Invoke-Expression "`$_ | Get-Function $($passThruArgs = $args; for ($i = 0; $i -lt $passThruArgs.Count; $i++) {if ($passThruArgs[$i] -match '^-') {$passThruArgs[$i]} else {`"`$passThruArgs[$i]`"}})"
    }
    else {
      Invoke-Expression "Get-Function $($passThruArgs = $args; for ($i = 0; $i -lt $passThruArgs.Count; $i++) {if ($passThruArgs[$i] -match '^-') {$passThruArgs[$i]} else {`"`$passThruArgs[$i]`"}})"
    }
  }
}
END {
}
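
To give a sense of how the finished script behaves, here are a few usage sketches, assuming listing #2 was saved as ‘C:\My Scripts\Get-Function.ps1’ (the path is just an example):

# Invoked directly with a named parameter:
& 'C:\My Scripts\Get-Function.ps1' -name prompt

# Invoked directly inside a pipeline:
'prompt','more' | & 'C:\My Scripts\Get-Function.ps1'

# Dot-sourced once, then used as a regular function in a pipeline:
. 'C:\My Scripts\Get-Function.ps1'
'prompt','more' | Get-Function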

And there you have it.  Now you know how to create versatile ps1 files that you can share with the community that:

  1. Automatically discourage unrecommended usage (such as executing internal code and processing parameters by dot-sourcing a script file that was not meant to be dot-sourced).
  2. Support importing functions and variables via dot-sourcing.
  3. Support direct invocation via the path and the call operator (if necessary).
  4. Output syntax when called with -?.
  5. Work in the pipeline as both a ps1 file and an imported function.

This all may seem very complicated at first, but once you learn how it works it’s really not that complicated at all.  And hopefully the consumers of your script will thank you for all of your hard work in making it possible.

Thanks for reading!

Kirk out.


Quality, not quantity, for the most part

When I started this blog last year I set a personal goal to post something at least once a week.  I wasn’t that busy at the time and it seemed like a reasonable thing to do.  Well, it’s now been about 6 weeks since I last published anything on this blog, but not for lack of wanting.  Life simply became extraordinarily busy for all of February and the first part of March, and there were too many higher priorities taking every minute of free time I could muster for me to justify spending any of it writing something for my blog.

It’s not that writing a blog post is that complicated.  It’s just that I didn’t want to post just anything.  I tend to prefer posts that are a little less frequent but that hopefully offer a little more value to the reader than just reposting what’s already out there simply because I don’t have time to do anything else.  Quality, not quantity.  That reflects how I look at many things in life.  Perfectionism at its best.  It’s a gift…and a curse. 🙂

Well, I think my preference for quality over quantity got the better of me, and I’m sure I’ll be crazy busy like I was in February again in the future, so it’s time to rethink my approach to blogging.  I have lots of ideas on how to approach this, but I’ll need to experiment a little to see what works best.  Essentially I’m going to try to find a better balance between the meatier posts I like to write to share the results of my PowerShell research with you, and lighter, shorter posts about what’s going on in the PowerShell space and about the cool things I’m working on in PowerGUI, to maintain better blog continuity going forward.  Hopefully you won’t see a break in posts like this happen again in the future.

If you stuck around, waiting for an update from me, thanks.  I’m going to do my best to make you happy that you did.

Kirk out.
