Archive

Posts Tagged ‘PowerShell Deep Dive’

This April is “Learn More About PowerShell” Month with the 2012 Scripting Games, the 2012 Microsoft Management Summit, and the 2012 North American PowerShell Deep Dive!

March 29, 2012

It’s hard to believe that April is almost here already.  Last week we had record high temperatures reaching 31°C (that’s 87.8°F for those of you living south of the border), and the night before last it was -16°C (or 3.2°F).  What wonderful consistency.  Maybe that’s why I like PowerShell so much, because it provides great consistency that just isn’t apparent in so many other places in life (that’s a swell tagline: “Use PowerShell, because it’s more consistent than the weather”).  Anyway, I digress…back to the topic at hand.

This April is “Learn More About PowerShell” month!  Ok, so it’s not official (it’s not like I’m a mayor or anything), but with all of the opportunities to learn about Windows PowerShell in April, it seems like a fitting title, so I’m declaring it that anyway.  Now, where to begin.

2012 Scripting Games

The first Monday in April (that’s Monday, April 2, next week) marks the official opening of the 2012 Scripting Games!  The Scripting Games are a great event because they provide opportunities for beginner and advanced scripters alike to learn more about Windows PowerShell.  There are beginner and advanced divisions, with 10 events in each division.  To participate, visit the official 2012 Scripting Games page starting on Monday, April 2 to see the events that have been published so far; for each event that you want to enter, you have one week to submit a solution by publishing a script to the 2012 Scripting Games page on PoshCode.  Note that at the time of this writing, the 2012 Scripting Games page on PoshCode shows information related to the 2011 Scripting Games, so for now just put a reminder in your calendar to check these two links on April 2.

Once you submit a solution, you can move on to the next event if it is available.  All solutions will be judged by a great panel of expert judges, and once the events close there will be expert commentaries published so that you can learn how different community experts solve these problems with PowerShell scripts.  Watch for my expert commentary to Beginner Event 3 once that event has closed for submissions.

The 2012 Scripting Games will run until April 13, 2012, but since you’ll have 7 days from the day that each event is posted, there will still be some time after that date to compete and get your entries in.  There are many prizes to be won, including grand prizes of full conference passes for TechEd North America 2012 (another great opportunity to learn more about PowerShell), software licenses for products like PowerWF, and more!  Also, don’t delay in getting your entries in, because you’ll barely have time once you’re done to pack your bags for the 2012 Microsoft Management Summit in Las Vegas if you’re going to that conference!

2012 Microsoft Management Summit

Just 2½ weeks from now, the 2012 Microsoft Management Summit (MMS) will start, and it’s going to be an amazing conference this year.  With the upcoming Microsoft System Center 2012 release, and with Windows 8 currently available as a Consumer Preview in both client and server varieties (both of which include the pre-release version of PowerShell version 3), there are plenty of new opportunities to scale up your PowerShell prowess and scale out your scripting capabilities while learning how to get the most out of these new products and platforms by leveraging PowerShell automation.

At the MMS 2012 conference, there are a total of 13 breakout sessions, 3 instructor-led labs, and 5 self-paced labs where you can learn more about Windows PowerShell.  There is also a PowerShell booth that will be staffed by members of the Windows PowerShell team and a few PowerShell MVPs.  I’ll be working the PowerShell booth, as will Aleksandar Nikolic, so please come see us and ask any questions you have.  There will also be booths for products like the Microsoft System Center 2012 release, which comes with even more PowerShell capabilities than before.  Additionally, there are many companies in the Expo hall that leverage PowerShell in their products and/or provide cmdlets to facilitate automation in their environments, such as NetApp, Veeam, Splunk, and Devfarm Software (the company that I work for), to name but a few.  I’ll be working the Devfarm booth when I’m not in the PowerShell booth, so if you look around a little you’ll have a good chance of finding me.

If you’re going to MMS 2012, and you want to learn more about PowerShell, make sure you take advantage of these resources while you’re there.  The knowledge passed on to you through one breakout session, lab, or discussion with someone in the learning center or expo hall takes many, many hours to put together, and getting that knowledge firsthand can be a huge timesaver for you in the long run!

PowerShell-related Content at MMS 2012

The following list identifies all of the PowerShell-related sessions and resources that have been announced so far for the MMS 2012 conference.  To get the most value out of your conference, make sure you add the sessions, labs, and other items of interest to your schedule so that you don’t miss out on these great learning opportunities.

Type and Level | Title | Speaker(s) | Coordinates
Instructor-led Lab (300/Advanced) | SV-IL306 Introduction to Windows PowerShell Fundamentals | Dan Reger | Monday, April 16, 12:00 PM to 1:15 PM, Venetian Ballroom A
Breakout Session (300/Advanced) | SV-B317 Top 10 Things Every Systems Admin Needs to Know about Windows Server 2008 R2 SP1 | Dan Stolts | Monday, April 16, 3:00 PM to 4:15 PM, Venetian Ballroom G
Instructor-led Lab (300/Advanced) | SV-IL307 What’s New in Windows PowerShell 3.0 | Lucio Silveira | Monday, April 16, 4:30 PM to 5:45 PM, Venetian Ballroom A
Breakout Session (300/Advanced) | CD-B334 Understanding Console Extension for Configuration Manager 2007 and 2012 | Matthew Hudson | Tuesday, April 17, 10:15 AM to 11:30 AM, Venetian Ballroom G
Breakout Session (400/Expert) | CD-B406 Configuration Manager 2012 and PowerShell: Better Together | Greg Ramsey | Tuesday, April 17, 11:45 AM to 1:00 PM, Venetian Ballroom G
Instructor-led Lab (300/Advanced) | SV-IL304 Managing Windows Server “8” with Server Manager and PowerShell 3.0 | Michael Leworthy | Tuesday, April 17, 11:45 AM to 1:00 PM, Venetian Ballroom A
Instructor-led Lab (300/Advanced) | SV-IL307 What’s New in Windows PowerShell 3.0 | Lucio Silveira | Tuesday, April 17, 2:15 PM to 3:30 PM, Venetian Ballroom A
Breakout Session (300/Advanced) | SV-B319 Windows PowerShell for Beginners | Jeffrey Snover, Travis Jones | Tuesday, April 17, 4:00 PM to 5:15 PM, Murano 3301
Breakout Session (200/Intermediate) | SV-B205 Overview of Server Management Technologies in Windows Server “8” | Erin Chapple, Jeffrey Snover | Wednesday, April 18, 10:15 AM to 11:30 AM, Murano 3301
Breakout Session (200/Intermediate) | SV-B291 Manage Cisco UCS with System Center 2012 and PowerShell | Chakri Avala | Wednesday, April 18, 2:15 PM to 3:30 PM, Titian 2203
Instructor-led Lab (300/Advanced) | SV-IL306 Introduction to Windows PowerShell Fundamentals | Dan Reger | Wednesday, April 18, 2:15 PM to 3:30 PM, Venetian Ballroom A
Breakout Session (300/Advanced) | SV-B313 Windows Server 2008 R2 Hyper-V FAQs, Tips, and Tricks | Janssen Jones | Wednesday, April 18, 4:00 PM to 5:15 PM, Murano 3301
Instructor-led Lab (300/Advanced) | SV-IL304 Managing Windows Server “8” with Server Manager and PowerShell 3.0 | Michael Leworthy | Thursday, April 19, 8:30 AM to 9:45 AM, Venetian Ballroom A
Breakout Session (400/Expert) | SV-B405 Advanced Automation Using Windows PowerShell 2.0 | Jeffrey Snover, Travis Jones | Thursday, April 19, 10:15 AM to 11:30 AM, Veronese 2401
Breakout Session (300/Advanced) | AM-B315 SharePoint as a Workload in a Private Cloud | Adam Hall, Michael Frank | Thursday, April 19, 10:15 AM to 11:30 AM, Titian 2206
Breakout Session (300/Advanced) | SV-B312 Don Jones’ Windows PowerShell Crash Course | Don Jones | Thursday, April 19, 11:45 AM to 1:00 PM, Venetian Ballroom G
Breakout Session (300/Advanced) | SV-B315 Managing Group Policy Using PowerShell | Darren Mar-Elia | Thursday, April 19, 11:45 AM to 1:00 PM, Murano 3301
Breakout Session (300/Advanced) | FI-B322 Virtual Machine Manager 2012: PowerShell is your Friend, and Here’s Why | Hector Linares, Susan Hill | Thursday, April 19, 11:45 AM to 1:00 PM, Titian 2206
Breakout Session (400/Expert) | SV-B406 PowerShell Remoting in Depth | Don Jones | Friday, April 20, 8:30 AM to 9:45 AM, Bellini 2001
Hands-on Lab (300/Advanced) | SV-L302 Active Directory Deployment and Management Enhancements | N/A | Self-paced, available in the HOL area
Hands-on Lab (300/Advanced) | SV-L304 Managing Windows Server “8” with Server Manager and Windows PowerShell 3.0 | N/A | Self-paced, available in the HOL area
Hands-on Lab (300/Advanced) | SV-L305 Managing Network Infrastructure with Windows Server “8” | N/A | Self-paced, available in the HOL area
Hands-on Lab (300/Advanced) | SV-L306 Introduction to Windows PowerShell Fundamentals | N/A | Self-paced, available in the HOL area
Hands-on Lab (300/Advanced) | SV-L307 What’s New in Windows PowerShell 3.0 | N/A | Self-paced, available in the HOL area

2012 North America PowerShell Deep Dive

As if all of these PowerShell learning opportunities weren’t already enough, there’s even more you can do in “Learn More About PowerShell” month.  At the end of April, a week after MMS is finished, the 2nd annual North American PowerShell Deep Dive conference will start.  This conference is second to none when it comes to learning more about PowerShell.  The sessions are fantastic, and the conversations perhaps even more so.  What makes this conference unique is the focus on shorter, 35-minute sessions that quickly drill into a specific topic and give you a ton of information on that topic.  There are also short, 5-minute lightning rounds that give speakers an opportunity to quickly show off one of their favorite aspects of PowerShell.  The 35-minute format, the 5-minute lightning rounds, and the depth of the content in these sessions are unique to this conference, and you won’t get the same value for PowerShell content anywhere else.  Add to that the evening script club-style events and it’s really a remarkable experience.  I highly recommend attending if you’re already using PowerShell and want to take your skills to new heights.  You can still register for this great event on the registration page for The Experts Conference (TEC).

This conference takes place in sunny San Diego from April 29th until May 2nd, and it gives you 3 days of 100% PowerShell content.  I’m fortunate enough to be attending this conference as well, and I’ll be giving sessions about proxy functions and about WMI and PowerShell.  If you do attend, please make a point to say hello and introduce yourself if I haven’t met you already.

Here’s a quick look at the content that is being presented at the PowerShell Deep Dive this year:

Title | Speaker(s) | Date
FIM PowerShell Workshop | Craig Martin | Sunday, April 29, 2012
Keynote | Jeffrey Snover | Monday, April 30, 2012, 8:00 AM to 10:00 AM
When old APIs save the day (P/Invoke and native Windows DLLs) | Tome Tanasovski | Monday, April 30, 2012, 10:30 AM to 11:05 AM
Get Your Game On! Leveraging Proxy Functions in Windows PowerShell | Kirk “Poshoholic” Munro | Monday, April 30, 2012, 11:10 AM to 11:45 AM
Using Splunk Reskit with PowerShell to revolutionize your script process | Brandon Shell | Monday, April 30, 2012, 1:00 PM to 2:15 PM
Lightning Round | Determined at event | Monday, April 30, 2012, 2:20 PM to 3:05 PM
Remoting Improvement in Windows PowerShell V3 | Krishna Vutukuri | Monday, April 30, 2012, 3:10 PM to 3:45 PM
New Hyper-V PowerShell Module in Windows Server 8 | Adam Driscoll | Monday, April 30, 2012, 4:15 PM to 5:30 PM
Formatting in Windows PowerShell | Jim Truher | Tuesday, May 1, 2012, 8:00 AM to 8:35 AM
PowerShell and WMI: A Love Story | Kirk “Poshoholic” Munro | Tuesday, May 1, 2012, 8:40 AM to 9:15 AM
PowerShell as a Web Language | James Brundage | Tuesday, May 1, 2012, 9:45 AM to 11:00 AM
PowerShell V3 in Production | Steve Murawski | Tuesday, May 1, 2012, 11:15 AM to 11:50 AM
Lightning Round | Determined at event | Tuesday, May 1, 2012, 11:55 AM to 12:30 PM
How Microsoft Uses PowerShell for Testing Automation and Deployment of FIM | Kinnon McDonell | Tuesday, May 1, 2012, 1:45 PM to 3:00 PM
Job Types in Windows PowerShell 3.0 | Travis Jones | Tuesday, May 1, 2012, 3:15 PM to 3:50 PM
Creating a Corporate PowerShell Module | Tome Tanasovski | Tuesday, May 1, 2012, 3:55 PM to 4:30 PM
Cmdlets over Objects (CDXML) | Richard Siddaway | Wednesday, May 2, 2012, 8:00 AM to 8:35 AM
Build your own remoting endpoint with PowerShell V3 | Aleksandar Nikolic | Wednesday, May 2, 2012, 8:40 AM to 9:15 AM
PowerShell Workflows and the Windows Workflow Foundation for the IT Pro | Steve Murawski | Wednesday, May 2, 2012, 9:45 AM to 11:00 AM
Incorporating Microsoft Office into Windows PowerShell | Jeffery Hicks | Wednesday, May 2, 2012, 11:15 AM to 11:50 AM
TBD | Bruce Payette | Wednesday, May 2, 2012, 11:55 AM to 12:30 PM

Wow, that’s a lot of PowerShell!  With all of these opportunities, whether you’re trying to learn PowerShell without incurring a huge expense, or travelling to conferences to learn more about technologies there, there’s definitely something for everyone in what looks to be an awesome “Learn More About PowerShell” month.

Good luck, wherever your learning adventures take you!

Kirk out.

PowerShell Deep Dive Conference: April 17-19, 2011 in Las Vegas

March 14, 2011

In case you haven’t heard already, there is a huge opportunity coming up to learn a lot more about PowerShell very quickly and interact directly with dozens of PowerShell experts face to face at the same time.  Next month marks the first ever PowerShell-specific conference, the PowerShell Deep Dive.  This conference will be held in the Red Rock Resort in Las Vegas, Nevada from April 17-19, 2011, and it will be an amazing experience for anyone interested in learning more about PowerShell.  The Deep Dive sessions will all be presented on April 18 and 19, following the welcome reception on the night of April 17.

Don’t be too intimidated by the name “Deep Dive” though.  The sessions will be a deep dive into PowerShell, that’s true, but there is a half-day 300-level Windows PowerShell Pre-Deep Dive Crash Course with Don Jones on April 17, 2011 that can help bring you up to speed if you’re close but not quite there yet.

Also, if you act now by emailing TEC2011@quest.com and signing up before the end of March, the Deep Dive conference will cost you only $850 US.  For the depth of knowledge covered and the calibre of the presentations and the attendees, this conference is going to be worth every penny.

Speaking of attendees, you really should check out who’s already confirmed they will be attending this event.  Here’s a list of only a few of the speakers and attendees who have signed up so far:

[The original post included a list of confirmed speakers and attendees here.]

What’s incredible is that this list shows only some of the amazing talent that will be at this event.  I would have recommended it as a must-attend event even with only a small fraction of the superstars listed above attending, but with this line-up, plus many, many more PowerShell superstars, this is going to be one truly memorable experience.

I’ll be attending as well of course (I wouldn’t miss it!), and while there I will be presenting a full session on “Managing Hyper-V with PowerShell” and a Deep Dive talk on “Defining domain specific vocabularies using Windows PowerShell”.

Have I sold you on the idea yet?  If you want to learn more, head on over to the PowerShell Deep Dive page and read more about the event, or if you’ve already decided send an email to TEC2011@quest.com today to make sure you can take advantage of the $850 US pricing before the end of March!

Hope to see you in Vegas!

Kirk out.

PowerShell Deep Dive: Understanding Get-Alias, wildcards, escape characters, quoting rules, literal vs. non-literal paths, and the timing of string evaluation

February 18, 2009

The following question recently came up on a mailing list I follow:

When I am trying to get the definition for the alias "?", I need to escape it because if not it works like a wildcard.  That is ok.

But why do I have to use SINGLE quotes, and why do DOUBLE quotes fail to escape?  I thought it should be the other way around.

For example, this works:

Get-Alias '`?'

This does not work:

Get-Alias "`?"

Why?

This question was prompted by Aleksandar’s blog post about the problem.  There are quite a few things to consider here to understand what is going on.

First, as the author of the question pointed out, in the Get-Alias cmdlet the question mark character is a wildcard character.  That means if you simply call Get-Alias with a question mark (Get-Alias ?), you will get all one-character aliases because the question mark will match any character.  That will happen regardless of whether you use single quotes, double quotes, or no quotes.  So the question mark must be escaped to tell Get-Alias to treat it as an actual question mark, and not a wildcard question mark.
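You can see this for yourself with a quick experiment (the exact aliases returned will depend on your session):

Get-Alias ?     # unquoted
Get-Alias '?'   # single-quoted
Get-Alias "?"   # double-quoted
# All three pass the bare pattern ? to Get-Alias, which matches every one-character alias.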

Second, in PowerShell a single-quoted string is a literal string, so characters are taken as is.  A double-quoted string is a special string where the contents are evaluated to determine what the actual string is (see Get-Help about_quoting_rules for more details).  In a double-quoted string, any variables or subexpressions (identified by their prefix of $) are evaluated and the results are placed in the string.  Also, any escaped characters are evaluated and the results are placed in the string.  The standard way to escape a character is to precede it with a backtick (`).

Third, if you escape any non-escape character by preceding it with a backtick, the result is simply that non-escape character.  The backtick is discarded.

Fourth, there is a difference between escaping escape characters when evaluating a double-quoted string and escaping wildcard characters in cmdlets that accept strings containing wildcards.  Both are escaped the same way, but understanding the timing of their evaluation and knowing which characters they consider to be escape characters is very important.  Double-quoted string evaluation happens outside of a cmdlet, before it is called.  Wildcard string evaluation happens inside of a cmdlet, once it is called.
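Here is a small sketch of those two stages; the comments show what each stage produces:

# Stage 1: the parser evaluates the quoted string before the cmdlet runs.
"`?"   # double quotes process the backtick escape, producing: ?
'`?'   # single quotes are literal, producing: `?

# Stage 2: Get-Alias applies wildcard matching to whatever string it receives.
Get-Alias "`?"   # receives ? (a wildcard), so it returns every one-character alias
Get-Alias '`?'   # receives `? (an escaped wildcard), so it returns only the alias ?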

And lastly, Get-Alias works just like the Path parameter set variant of Get-Item (that is to say, it works just like Get-Item when you call Get-Item with the Path parameter), returning multiple results when unescaped wildcard characters are in the search string and there are multiple matching aliases.  Get-Alias is a little simpler to use on a general basis, but if it wasn’t there you could just use Get-Item instead.
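For example, these two commands return the same result:

Get-Alias gp*               # wildcard lookup using Get-Alias
Get-Item -Path alias:gp*    # the same lookup through the alias PSDrive using Get-Item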

Now that we’re armed with that knowledge, let’s break down the command we’re having a hard time with.

Get-Alias "`?"

Here are the results of that command in PowerShell:

[Screenshot: the output of Get-Alias "`?" shows every one-character alias in the session.]

What’s happening here?  When this command is executed, the first thing PowerShell has to do is evaluate the double-quoted string and get the result string that comes out of that evaluation, before the cmdlet is called.  As I explained above, any non-escape character that is escaped in a double-quoted string is simply evaluated as the non-escape character.  In our string, the question mark is being escaped, but that isn’t an escape character when it comes to string parsing (use the command Get-Help about_escape_character to get the list of escape characters in double-quoted strings), so the result of the evaluation is simply ‘?’.  When that string is then passed to Get-Alias, the Get-Alias cmdlet thinks we’re looking for any single-character aliases, so that’s what it gives us in the output.

What we need to do is make sure that the results of the evaluation of the double-quoted string are such that the question mark is preceded by a backtick so that it is escaped when it is passed into the Get-Alias cmdlet (string evaluation timing is important, remember?).  To do that, we need to know how to generate a single backtick in the result returned from a double-quoted string evaluation.  The backtick is an escape character itself as well as the character used to escape other characters (we learned this when we looked at the about_escape_character help file, mentioned earlier), so we can create a backtick in a double-quoted string by escaping it with itself, like this: "``".
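A quick check shows the escaped backtick doing its job:

"``"    # evaluates to a single backtick: `
"``?"   # evaluates to: `?  (exactly what Get-Alias needs to receive)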

Now that we know how to escape the backtick so that it is passed into the Get-Alias cmdlet, we can change our problematic command we were using by escaping the backtick in the double-quotes and get the results we wanted in the first place:

[Screenshot: Get-Alias "``?" returns only the ? alias.]

It is important to note that a non-quoted string is treated as a double-quoted string in PowerShell when it is evaluating a command that it is about to run, so if you wanted to do this without quotes at all, you would still have to escape the backtick, like this:

[Screenshot: Get-Alias ``? (no quotes) also returns only the ? alias.]

The last thought I want to leave you with is about Get-Item.  I mentioned earlier that Get-Alias works just like the Path parameter set variant of Get-Item.  Both of these need to have wildcard characters escaped if you want to find an alias that contains a wildcard character.  What I didn’t mention is that Get-Item has something that Get-Alias does not: a LiteralPath parameter set variant.  The easiest way to retrieve an alias containing a wildcard without worrying about escaping any characters is to simply use Get-Item with the alias PSDrive, like this:

[Screenshot: Get-Item -LiteralPath alias:? returns the ? alias without any escaping.]

This method requires the use of the LiteralPath parameter name and the alias PSDrive name, but it gives you exactly what you want without having to worry about most of what I tried to show you here, and it’s easy.  Still, I had to show you the details first…a solid foundation in understanding how these things work goes a long way when writing and troubleshooting PowerShell scripts.  If you want it even easier, you could create a simple Get-LiteralAlias function that removes the need to use the alias PSDrive name and calls Get-Item using the LiteralPath parameter set variant behind the scenes.  You could also create a Get-Alias function with a Literal switch parameter so that you could simply indicate whether you wanted to call the built-in Get-Alias or Get-LiteralAlias behind the scenes.  With PowerShell, there is definitely no shortage of options when the functionality you are looking for isn’t there in the form you expect.  I’ll leave the exercise of creating the functions up to you, if you think it’s worth it (after all, there is only one default alias with a wildcard character in its name).
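Here’s a minimal sketch of that Get-LiteralAlias idea; the function name and parameter are my own illustration, not something that ships with PowerShell:

function Get-LiteralAlias {
    param(
        [string]$Name = $(throw 'Name is required')
    )
    # LiteralPath performs no wildcard processing, so $Name is taken exactly as typed.
    Get-Item -LiteralPath alias:$Name
}

# Usage: retrieve the ? alias without escaping anything.
Get-LiteralAlias '?'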

Armed with all of this knowledge, it becomes completely obvious why an unusual alias created with the New-Alias command (which doesn’t support wildcard characters) like this:

$foo = 'bar'
New-Alias "``?$foo" 'bar'

can be retrieved using either of the following Get-Alias commands:

Get-Alias "````?$foo"
Get-Alias "`````?$foo"

Right? :)

Kirk out.



PowerShell Deep Dive: Using $MyInvocation and Invoke-Expression to support dot-sourcing and direct invocation in shared PowerShell scripts

March 18, 2008

When creating PowerShell script (ps1) files to share with the community, there are a few different ways you can configure their intended use.  You can configure a ps1 file so that it contains one or more functions and/or variables that must be loaded into PowerShell before they can be used.  This loading is done via a technique called dot-sourcing.  Alternatively, you can make the body of the ps1 file be the script itself that you want to share with the community, without encapsulating it in a function.  Using this configuration, your script consumers will be required to invoke the script using the absolute or relative path to your ps1 file, prefixing it with the call operator (&) and wrapping it in quotation marks if the path contains a space.  Let’s look at each of these in more detail and some advantages of each approach.

Dot-sourcing a ps1 file is like running the PowerShell script it contains inline in the current scope.  You can pass in parameters when you dot-source a ps1 file, or you can dot-source it by itself.  To dot-source a ps1 file you must use the full absolute or relative path to that file.  Aside from the handling of any parameters, the PowerShell script inside the ps1 file is run as if you had typed it manually into the current scope.  An advantage to this approach is that the variables and functions within the ps1 file that use the default scope will be declared in the current scope, and therefore they will be available afterwards without requiring users to know the location of the script file.  This allows users to dot-source a ps1 file in their profile and have the functions and/or variables it contains available to them in every PowerShell session they open.  If you had a ps1 file with the path 'C:\My Scripts\MyScript.ps1', you would dot-source it like this:

. 'C:\My Scripts\MyScript.ps1'

Before I get to invoking scripts directly, I need to make an important note about dot-sourcing script files.  Users need to be careful when dot-sourcing script files, because while it is possible to dot-source a script that was intended to be invoked and have it appear to function the same as if you had invoked it, passing parameters and having the script within appear to run as expected, this is not a good practice.  Only dot-source ps1 files containing functions and variables you want available in your current session.  If the ps1 file you are using was intended to be invoked and not dot-sourced, steer clear of the dot-source operator.  Otherwise you risk leaving crumbs (variables and functions) of the script files you dot-source behind in your current session, some of which may have been intended to be deleted when they went out of scope (secure strings used to temporarily store passwords, for example).  Since the current scope is the root scope, these won’t go out of scope until you close PowerShell.  I have often seen users in the online community dot-source ps1 files while passing parameters; those users should be using the call operator instead.  Now back to invoking scripts directly…
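To illustrate the risk, suppose a hypothetical Set-TempCredential.ps1 stores a password in a $password variable that it expects to disappear when the script’s scope ends:

# Invoked: $password lives in the script's child scope and is gone afterwards.
& '.\Set-TempCredential.ps1'
$password    # outputs nothing

# Dot-sourced: the script runs in the current (root) scope, so the crumb
# lingers until you close PowerShell.
. '.\Set-TempCredential.ps1'
$password    # still set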

Invoking a script directly is akin to calling a function.  You can pass in parameters when you invoke a ps1 file, or you can invoke the ps1 file by itself.  To invoke a ps1 file you must use the full absolute or relative path to that file.  If that path contains one or more spaces in it, it must be wrapped in quotation marks and the call operator (&) must be used.  Otherwise it will just be treated as a string and output to the console (note: it is a good practice to always use the call operator when invoking a script this way so that it doesn’t matter whether there are spaces in the path; it will just work).  When you invoke a ps1 file, a child scope is created and the contents of that ps1 file are executed within that child scope.  An advantage to this approach is that the script file doesn’t leave anything behind after it is run unless it explicitly declares a function or variable as global.  This keeps the PowerShell environment clean.  If you had a ps1 file with the path 'C:\My Scripts\MyScript.ps1', you would call it like this:

& 'C:\My Scripts\MyScript.ps1'

Between these two approaches, there is no best practice indicating which is the right one to use.  It seems to simply be a matter of preference.  Unfortunately, for the most part it is the script author’s preference, not the script consumer’s.  For script consumers to get ps1 files they find online in the community working the way they want, they may have to modify the file to get it to dot-source correctly, or to run correctly when invoked using the call operator, or they may just copy and paste the script into their own ps1 file or profile to get it running the way they like.  The end result is that each time a ps1 file is updated by its author, the script consumer may have manual steps to take to get that update into their own environment.

What if ps1 files could be created so that they supported both of these configuration approaches?  What if they would always work as expected, whether they were dot-sourced or invoked directly?  And what if you want the functionality that the ps1 file provides to work inside of a pipeline, whether you dot-source it and use a function call or invoke it directly inside your pipeline?  Fortunately, PowerShell is a rich enough scripting language to allow you to do just that.

The first thing you need to do to make this work is to determine how the script file was used.  PowerShell includes a built-in variable called $MyInvocation that allows your script to look at the way it was used.  Among other things, $MyInvocation includes two properties you’ll need to understand when making this work: InvocationName and MyCommand.  InvocationName contains the name of the command that was used to invoke the script.  If you dot-sourced the script, this will contain ‘.’.  If you invoked the script using the call operator, this will contain ‘&’.  If you invoked the script using the path to the script itself, this will contain the exact path you entered, whether it was relative or absolute, UNC or local.  MyCommand contains information that describes the script file itself: the path under which it was found, the name of the script file, and the type of the command (always ExternalScript for ps1 files).  These two pieces of information can be used together to determine how the script was used.  For example, consider a script file called Test-Invocation.ps1 at the root of drive C on a computer named PoShRocks that contains the following script:

if ($MyInvocation.InvocationName -eq '&') {
    'Called using operator'
}
elseif ($MyInvocation.InvocationName -eq '.') {
    'Dot sourced'
}
elseif ((Resolve-Path -Path $MyInvocation.InvocationName).ProviderPath -eq $MyInvocation.MyCommand.Path) {
    "Called using path $($MyInvocation.InvocationName)"
}

Regardless of whether you dot-source Test-Invocation.ps1 or invoke it directly, and regardless of whether you use a relative local path, an absolute local path, or an absolute remote (UNC) path, this script will output how it was used.  Here are a few examples of how you might use this script, with the associated output:

PS C:\> . .\Test-Invocation.ps1
Dot sourced
PS C:\> . C:\Test-Invocation.ps1
Dot sourced
PS C:\> . \\PoShRocks\c$\Test-Invocation.ps1
Dot sourced
PS C:\> & .\Test-Invocation.ps1
Called using operator
PS C:\> & C:\Test-Invocation.ps1
Called using operator
PS C:\> & \\PoShRocks\c$\Test-Invocation.ps1
Called using operator
PS C:\> .\Test-Invocation.ps1
Called using path .\Test-Invocation.ps1
PS C:\> C:\Test-Invocation.ps1
Called using path C:\Test-Invocation.ps1
PS C:\> \\PoShRocks\c$\Test-Invocation.ps1
Called using path \\PoShRocks\c$\Test-Invocation.ps1

As you can see, each time our script knows exactly how it was used, so we can use that to make it behave appropriately in any situation.

Now that we’re armed with that knowledge, let’s add a function to our script that will do something simple, like output the definition of another PowerShell function.  First, we’ll need to write our function:

function Get-Function {
    param(
        [string]$name = $(throw 'Name is required')
    )
    if (-not $name) { throw 'Name cannot be empty' }
    if ($name -match '[^a-z0-9-]') {
        Write-Error 'Unsupported character found.'
    }
    elseif ($function = Get-Item -LiteralPath function:$name) {
        "function $name {
`t$($function.Definition)
}"
    }
}

This function is pretty straightforward.  You call it passing in the name of a function and it outputs the function definition, including the name, to the console.

The next step is to follow that function definition with a slightly modified version of our Test-Invocation.ps1 script.  Basically we just want to know if the file was invoked or dot-sourced.  If it was invoked, we want to automatically call our Get-Function function and pass the parameters used during the invocation directly through to the Get-Function function call.  If it was dot-sourced, we don’t want to do any additional work, because the function will be imported into the current session so that we can use it without the script file, as intended.  This has the added benefit of preventing users from executing script through dot-sourcing that wasn’t intended to be executed.  Here’s the start of the additional script that we’ll need to put after our Get-Function definition:

if ($MyInvocation.InvocationName -ne '.') {
    Get-Function # How do we pass arguments here?
}

This additional piece of script uses a simple if statement to compare $MyInvocation.InvocationName against the dot-source operator.  If they are equal, this portion of the script does nothing, allowing the function to be dot-sourced into the current session without invoking it.  If they are not equal, we know that the script was invoked either directly or using the call operator, so we need to call Get-Function so that the invocation uses the internal function automatically.  But as noted in the comment in the snippet above, how do we pass the arguments that were used during the invocation into the internal function?  There are two possible approaches that I can think of to resolve this.  We could use the param statement at the top of the script to identify the same parameters that are in the Get-Function function.  The problem with this approach is that it duplicates code unnecessarily, and I really don’t like duplicating code.  Another approach is to use Invoke-Expression inside of our if statement to pass the parameters received from the invocation of the script directly into the internal function.  The only special trick required in this approach is to wrap every argument that does not start with ‘-’ in quotation marks while passing arguments that do start with ‘-’ through as is.  This is necessary so that the parameters of the internal function can be used by name, just like they could if you dot-sourced the script first and then invoked the function.  I think that’s a much better approach, so here’s our updated if statement:

if ($MyInvocation.InvocationName -ne '.') {
    Invoke-Expression "Get-Function $($passThruArgs = $args; for ($i = 0; $i -lt $passThruArgs.Count; $i++) {if ($passThruArgs[$i] -match '^-') {$passThruArgs[$i]} else {"`"$($passThruArgs[$i])`""}})"
}

In this implementation, if the script file was invoked, Invoke-Expression is used to invoke the Get-Function function, passing arguments received by the script directly through to Get-Function.  And as just mentioned, I use the -match operator to determine whether a given argument starts with ‘-’; parameter names pass through as is while other arguments are wrapped in quotation marks, so I end up calling Get-Function using named parameters.  This is a trick that I find applies itself nicely to quite a few situations in the PowerShell scripting I do.
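To make that concrete, here’s what the expression builds for one hypothetical invocation:

& '.\Get-Function.ps1' -name prompt
# Inside the script, $args is @('-name', 'prompt').  '-name' matches '^-' and
# passes through as is, while 'prompt' is wrapped in quotation marks, so
# Invoke-Expression ends up running:
#     Get-Function -name "prompt"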

At this point, we have a complete script file that can be invoked to execute the internal function directly or dot-sourced to import the internal function into PowerShell, all with a little help from $MyInvocation and Invoke-Expression.  This script can be seen below.

Get-Function.ps1 listing #1:

function Get-Function {
    param(
        [string]$name = $(throw 'Name is required')
    )
    if (-not $name) { throw 'Name cannot be empty' }
    if ($name -match '[^a-z0-9-]') {
        Write-Error "Unsupported character found in $name."
    }
    elseif ($function = Get-Item -LiteralPath function:$name) {
        "function $name {
`t$($function.Definition)
}"
    }
}
if ($MyInvocation.InvocationName -ne '.') {
    Invoke-Expression "Get-Function $($passThruArgs = $args; for ($i = 0; $i -lt $passThruArgs.Count; $i++) {if ($passThruArgs[$i] -match '^-') {$passThruArgs[$i]} else {"`"$($passThruArgs[$i])`""}})"
}

Now, I’m sure you’re thinking that’s great, flexible, etc., but where’s the pipeline support that you mentioned would work as well?  Well, as mentioned earlier, this is also possible in PowerShell although it adds another layer of complexity to the script.  The nice part though is that it will work whether it is used in a pipeline as an invoked ps1 file or as an invoked function that was previously imported by dot-sourcing the ps1 file.  The trick is to use the Begin, Process and End blocks and the $_ variable both in the ps1 file at the root level and in the internal Get-Function function.

At the root scope of the script file, the Begin block is used to declare any functions and variables used in the script.  The process block actually calls the function that is being exposed through the script (in a pipeline if appropriate), and the End block is used for cleanup (although we don’t have any cleanup to do).  Similarly, inside the Get-Function function, the Begin block is used to check parameters that don’t support pipeline input, the Process block is used to check the state of some parameters and actually do the work (using the objects coming down the pipeline if appropriate), and the End block is used for cleanup (although again, we don’t have any).  The end result of adding these to our script and making a few modifications so that users can invoke the script file or the function with -? and get the syntax can be found in Get-Function.ps1 listing #2.

Get-Function.ps1 listing #2:

BEGIN {
    function Get-Function {
        param(
            [string]$name = $null
        )
        BEGIN {
            if (($name -contains '-?') -or ($args -contains '-?')) {
                'SYNTAX' | Write-Host
                "Get-Function [-name] <string>" | Write-Host
                break
            }
        }
        PROCESS {
            if ($name -and $_) {
                throw 'Ambiguous parameter set'
            }
            elseif ($name) {
                $name | Get-Function
            }
            elseif ($_) {
                if ($_ -match '[^a-z0-9-]') {
                    throw 'Unsupported character found.'
                }
                elseif ($function = Get-Item -LiteralPath function:$_) {
                    "function $_ {
`t$($function.Definition)
}"
                }
            }
            else {
                throw 'Name cannot be null or empty'
            }
        }
        END {
        }
    }
}
PROCESS {
    if ($MyInvocation.InvocationName -ne '.') {
        if ($_) {
            Invoke-Expression "`$_ | Get-Function $($passThruArgs = $args; for ($i = 0; $i -lt $passThruArgs.Count; $i++) {if ($passThruArgs[$i] -match '^-') {$passThruArgs[$i]} else {"`"$($passThruArgs[$i])`""}})"
        }
        else {
            Invoke-Expression "Get-Function $($passThruArgs = $args; for ($i = 0; $i -lt $passThruArgs.Count; $i++) {if ($passThruArgs[$i] -match '^-') {$passThruArgs[$i]} else {"`"$($passThruArgs[$i])`""}})"
        }
    }
}
END {
}

And there you have it.  Now you know how to create versatile ps1 files that you can share with the community that:

  1. Automatically discourage unrecommended usage (executing internal code and processing parameters when dot-sourcing script files not meant to be dot-sourced).
  2. Support importing functions and variables via dot-sourcing.
  3. Support direct invocation via the path and the call operator (if necessary).
  4. Output syntax when called with -?.
  5. Work in the pipeline as both a ps1 file and an imported function.
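For example, assuming listing #2 is saved as Get-Function.ps1 in the current directory, all of the following should work:

& '.\Get-Function.ps1' -name prompt    # direct invocation
'prompt' | & '.\Get-Function.ps1'      # invocation inside a pipeline
& '.\Get-Function.ps1' -?              # display the syntax
. '.\Get-Function.ps1'                 # dot-source to import the function...
'prompt' | Get-Function                # ...then call Get-Function directly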

This all may seem very complicated at first, but once you learn how it works it’s really not that complicated at all.  And hopefully the consumers of your script will thank you for all of your hard work in making it possible.

Thanks for reading!

Kirk out.

