PowerShell Deep Dive: Using $MyInvocation and Invoke-Expression to support dot-sourcing and direct invocation in shared PowerShell scripts

When creating PowerShell script (ps1) files to share with the community, there are a few different ways you can configure their intended use.  You can configure a ps1 file so that it contains one or more functions and/or variables that must be loaded into PowerShell before they can be used.  This loading is done via a technique called dot sourcing.  Alternatively you can make the body of the ps1 file be the script itself that you want to share with the community without encapsulating it in a function.  Using this configuration, your script consumers will be required to invoke the script using the absolute or relative path to your ps1 file, prefixing it with the call operator (&) and wrapping it in quotation marks if the path contains a space.  Let’s look at each of these in more detail and some advantages to each approach.

Dot-sourcing a ps1 file is like running the PowerShell script it contains inline in the current scope.  You can pass in parameters when you dot-source a ps1 file, or you can dot-source it by itself.  To dot-source a ps1 file you must use the full absolute or relative path to that file.  Aside from the handling of any parameters, the PowerShell script inside the ps1 file is run as if you had typed it manually into the current scope.  An advantage to this approach is that the variables and functions within the ps1 file that use the default scope will be declared in the current scope, and therefore they will be available afterwards without requiring users to know the location of the script file.  This allows users to dot-source a ps1 file in their profile and have the functions and/or variables it contains available to them in every PowerShell session they open.  If you had a ps1 file with the path 'C:\My Scripts\MyScript.ps1', you would dot-source it like this:

. 'C:\My Scripts\MyScript.ps1'

Before I get to invoking scripts directly, I need to make an important note about dot-sourcing script files.  Be careful with the dot-source operator: it is possible to dot-source a script that was intended to be invoked, pass parameters to it, and have it appear to run just as if you had invoked it, but this is not a good practice.  Only dot-source ps1 files containing functions and variables you want available in your current session.  If the ps1 file you are using was intended to be invoked and not dot-sourced, steer clear of the dot-source operator.  Otherwise you risk leaving crumbs (variables and functions) of the dot-sourced script files behind in your current session, some of which may have been intended to be deleted when they went out of scope (secure strings used to temporarily store passwords, for example).  Since the current scope is the root scope, these won't go out of scope until you close PowerShell.  I have seen users in the online community dot-source ps1 files while passing parameters many times, when they should have been using the call operator instead.  Now back to invoking scripts directly…
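To illustrate the risk, here is a hypothetical sketch.  Suppose a script called Get-RemoteData.ps1 (the name and contents are invented for this example) stores a password in a variable, expecting that variable to vanish when the script finishes:

```powershell
# Hypothetical contents of Get-RemoteData.ps1 (meant to be invoked, not dot-sourced):
#   $securePassword = Read-Host -AsSecureString 'Enter password'
#   ...connect and retrieve data using $securePassword...

# Invoked, the variable disappears along with the script's child scope:
& '.\Get-RemoteData.ps1'
Get-Variable securePassword -ErrorAction SilentlyContinue    # nothing found

# Dot-sourced, the variable lingers in the current session:
. '.\Get-RemoteData.ps1'
Get-Variable securePassword    # still defined, long after the script finished
```

That leftover secure string is exactly the kind of crumb you don't want sitting in your root scope for the life of your PowerShell session.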

Invoking a script directly is akin to calling a function.  You can pass in parameters when you invoke a ps1 file, or you can invoke the ps1 file by itself.  To invoke a ps1 file you must use the full absolute or relative path to that file.  If that path contains one or more spaces, it must be wrapped in quotation marks and the call operator (&) must be used; otherwise the path will just be treated as a string and output to the console.  (Note: it is a good practice to always use the call operator when invoking a script this way, so that it works whether or not the path contains spaces.)  When you invoke a ps1 file, a child scope is created and the contents of that ps1 file are executed within that child scope.  An advantage to this approach is that the script file doesn't leave anything behind after it is run unless it explicitly declares a function or variable as global.  This keeps the PowerShell environment clean.  If you had a ps1 file with the path 'C:\My Scripts\MyScript.ps1', you would call it like this:

& 'C:\My Scripts\MyScript.ps1'

Between these two approaches, there is no best practice indicating which is the right one to use.  It seems to simply be a matter of preference.  Unfortunately, for the most part it is the script author's preference, not the script consumer's.  For script consumers to get ps1 files they find online in the community working the way they want, they may have to modify the file to get it to dot-source correctly, or to run correctly when invoked using the call operator, or they may just copy and paste the script into their own ps1 file or profile to get it running the way they like.  The end result is that each time a ps1 file is updated by its author, the script consumer may have manual steps to take to get that update into their own environment.

What if ps1 files could be created so that they support both of these configuration approaches?  What if they always worked as expected whether they were dot-sourced or invoked directly?  And what if you want the functionality that the ps1 file provides to work inside of a pipeline, whether you dot-source it and use a function call or invoke it directly inside your pipeline?  Fortunately, PowerShell is a rich enough scripting language to allow you to do just that.

The first thing you need to do to make this work is to determine how the script file was used.  PowerShell includes a built-in variable called $MyInvocation that allows your script to look at the way it was used.  Among other things, $MyInvocation includes two properties you'll need to understand when making this work: InvocationName and MyCommand.  InvocationName contains the name of the command that was used to invoke the script.  If you dot-sourced the script, this will contain '.'.  If you invoked the script using the call operator, this will contain '&'.  If you invoked the script using the path to the script itself, this will contain the exact path you entered, whether it was relative or absolute, UNC or local.  MyCommand contains information that describes the script file itself: the path under which it was found, the name of the script file, and the type of the command (always ExternalScript for ps1 files).  These two pieces of information can be used together to determine how the script was used.  For example, consider a script file called Test-Invocation.ps1 at the root of drive C on a computer named PoShRocks that contains the following script:

if ($MyInvocation.InvocationName -eq '&') {
    "Called using operator"
}
elseif ($MyInvocation.InvocationName -eq '.') {
    "Dot sourced"
}
elseif ((Resolve-Path -Path $MyInvocation.InvocationName).ProviderPath -eq `
    $MyInvocation.MyCommand.Path) {
    "Called using path $($MyInvocation.InvocationName)"
}

Regardless of whether you dot-source Test-Invocation.ps1 or invoke it directly, and regardless of whether you use a relative local path, an absolute local path, or an absolute remote (UNC) path, this script will output how it was used.  Here are a few examples of how you might use this script, with the associated output:

PS C:\> . .\Test-Invocation.ps1
Dot sourced
PS C:\> . C:\Test-Invocation.ps1
Dot sourced
PS C:\> . \\PoShRocks\c$\Test-Invocation.ps1
Dot sourced
PS C:\> & .\Test-Invocation.ps1
Called using operator
PS C:\> & C:\Test-Invocation.ps1
Called using operator
PS C:\> & \\PoShRocks\c$\Test-Invocation.ps1
Called using operator
PS C:\> .\Test-Invocation.ps1
Called using path .\Test-Invocation.ps1
PS C:\> C:\Test-Invocation.ps1
Called using path C:\Test-Invocation.ps1
PS C:\> \\PoShRocks\c$\Test-Invocation.ps1
Called using path \\PoShRocks\c$\Test-Invocation.ps1

As you can see, each time our script knows exactly how it was used, so we can use that to make it behave appropriately in any situation.

Now that we’re armed with that knowledge, let’s add a function to our script that will do something simple, like output the definition of another PowerShell function.  First, we’ll need to write our function:

function Get-Function {
    param(
        [string]$name = $(throw 'Name is required')
    )
    if (-not $name) { throw 'Name cannot be empty' }
    if ($name -match '[^a-z0-9-]') {
        Write-Error 'Unsupported character found.'
    }
    elseif ($function = Get-Item -LiteralPath function:$name) {
        "function $name {
`t$($function.Definition)
}"
    }
}

This function is pretty straightforward.  You call it passing in the name of a function and it outputs the function definition, including the name, to the console.
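For example, if you dot-sourced a ps1 file containing this function, you could then use it like this (the path is just an example, and Clear-Host is simply an arbitrary built-in function to inspect):

```powershell
. 'C:\My Scripts\Get-Function.ps1'    # import Get-Function into the session
Get-Function -name Clear-Host         # output the definition of Clear-Host
```

The output is the complete function declaration, so you could paste it into a profile or another script as a starting point.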

The next step is to follow up that function definition with a slightly modified version of our Test-Invocation.ps1 script.  Basically we just want to know if the file was invoked or dot-sourced.  If it was invoked, we want to automatically call our Get-Function function and pass the parameters used during the invocation directly through to the Get-Function function call.  If it was dot-sourced, we don't want to do any additional work because the function will be imported into the current session so that we can use it without the script file, as intended.  This has the added benefit of preventing users from executing script through dot-sourcing that wasn't intended to be executed.  Here's the start of the additional script that we'll need to put after our Get-Function definition:

if ($MyInvocation.InvocationName -ne '.') {
    Get-Function # How do we pass arguments here?
}

This additional piece of script uses a simple if statement to compare $MyInvocation.InvocationName against the dot-source operator.  If they are equal, this portion of the script does nothing, allowing the function to be dot-sourced into the current session without invoking it.  If they are not equal, we know that the script was invoked either directly or using the call operator, so we need to call Get-Function so that the invocation uses the internal function automatically.  But as noted in the comment in the snippet above, how do we pass the arguments that were used during the invocation into the internal function?  There are two approaches I can think of to resolve this.  We could use the param statement at the top of the script to declare the same parameters that Get-Function accepts.  The problem with this approach is that it duplicates code unnecessarily, and I really don't like duplicating code.  The other approach is to use Invoke-Expression inside of our if statement to pass the arguments received by the script directly into the internal function.  The only special trick required in this approach is to leave any argument that starts with '-' unquoted while wrapping the other arguments in quotation marks.  This is necessary so that the parameters of the internal function can be used by name, just as they could if you dot-sourced the script first and then invoked the function.  I think that's a much better approach, so here's our updated if statement:

if ($MyInvocation.InvocationName -ne '.') {
    Invoke-Expression "Get-Function $($passThruArgs = $args; for ($i = 0; $i -lt $passThruArgs.Count; $i++) {if ($passThruArgs[$i] -match '^-') {$passThruArgs[$i]} else {"`"$($passThruArgs[$i])`""}})"
}

In this implementation, if the script file was invoked, Invoke-Expression is used to invoke the Get-Function function, passing the arguments received by the script directly through to Get-Function.  And as just mentioned, I use the -match operator to determine whether a given argument starts with '-', in which case I pass it through bare so that I end up calling Get-Function with named parameters.  This is a trick that I find applies nicely to quite a few situations in my PowerShell scripting.
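To see what that argument-rebuilding expression produces on its own, here is a small standalone sketch using a sample argument list (the values are just examples):

```powershell
# Sample arguments, as a script might receive them in $args
$passThruArgs = @('-name', 'Clear-Host')

# Rebuild them: leave '-' arguments bare, wrap everything else in quotation marks
$rebuilt = for ($i = 0; $i -lt $passThruArgs.Count; $i++) {
    if ($passThruArgs[$i] -match '^-') { $passThruArgs[$i] }
    else { "`"$($passThruArgs[$i])`"" }
}
$rebuilt -join ' '    # produces: -name "Clear-Host"
```

The joined result is exactly the argument string that gets appended to 'Get-Function ' before Invoke-Expression evaluates the whole command.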

At this point, we have a complete script file that can be invoked to execute the internal function directly or dot-sourced to import the internal function into PowerShell, all with a little help from $MyInvocation and Invoke-Expression.  This script can be seen below.

Get-Function.ps1 listing #1:

function Get-Function {
    param(
        [string]$name = $(throw 'Name is required')
    )
    if (-not $name) { throw 'Name cannot be empty' }
    if ($name -match '[^a-z0-9-]') {
        Write-Error "Unsupported character found in $name."
    }
    elseif ($function = Get-Item -LiteralPath function:$name) {
        "function $name {
`t$($function.Definition)
}"
    }
}
if ($MyInvocation.InvocationName -ne '.') {
    Invoke-Expression "Get-Function $($passThruArgs = $args; for ($i = 0; $i -lt $passThruArgs.Count; $i++) {if ($passThruArgs[$i] -match '^-') {$passThruArgs[$i]} else {"`"$($passThruArgs[$i])`""}})"
}
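With listing #1 saved to disk (the path below is just an example), both usage styles now work as intended:

```powershell
# Invoked: the trailing if statement calls Get-Function for you
& 'C:\My Scripts\Get-Function.ps1' -name Clear-Host

# Dot-sourced: nothing executes, but Get-Function is imported for later use
. 'C:\My Scripts\Get-Function.ps1'
Get-Function -name Clear-Host
```

Either way, the consumer gets the same result without having to modify the file to match their preferred usage.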

Now, I’m sure you’re thinking that’s great, flexible, etc., but where’s the pipeline support that you mentioned would work as well?  Well, as mentioned earlier, this is also possible in PowerShell although it adds another layer of complexity to the script.  The nice part though is that it will work whether it is used in a pipeline as an invoked ps1 file or as an invoked function that was previously imported by dot-sourcing the ps1 file.  The trick is to use the Begin, Process and End blocks and the $_ variable both in the ps1 file at the root level and in the internal Get-Function function.

At the root scope of the script file, the Begin block is used to declare any functions and variables used in the script.  The Process block actually calls the function that is being exposed through the script (in a pipeline if appropriate), and the End block is used for cleanup (although we don't have any cleanup to do).  Similarly, inside the Get-Function function, the Begin block is used to check parameters that don't support pipeline input, the Process block is used to check the state of some parameters and actually do the work (using the objects coming down the pipeline if appropriate), and the End block is used for cleanup (although again, we don't have any).  The end result of adding these to our script, plus a few modifications so that users can invoke the script file or the function with -? and get the syntax, can be found in Get-Function.ps1 listing #2.

Get-Function.ps1 listing #2:

BEGIN {
  function Get-Function {
    param(
      [string]$name = $null
    )
    BEGIN {
      if (($name -contains '-?') -or ($args -contains '-?')) {
        "SYNTAX" | Write-Host
        "Get-Function [-name] <string>" | Write-Host
        break
      }
    }
    PROCESS {
      if ($name -and $_) {
        throw 'Ambiguous parameter set'
      }
      elseif ($name) {
        $name | Get-Function
      }
      elseif ($_) {
        if ($_ -match '[^a-z0-9-]') {
          throw 'Unsupported character found.'
        }
        elseif ($function = Get-Item -LiteralPath function:$_) {
          "function $_ {
`t$($function.Definition)
}"
        }
      }
      else {
        throw 'Name cannot be null or empty'
      }
    }
    END {
    }
  }
}
PROCESS {
  if ($MyInvocation.InvocationName -ne '.') {
    if ($_) {
      Invoke-Expression "`$_ | Get-Function $($passThruArgs = $args; for ($i = 0; $i -lt $passThruArgs.Count; $i++) {if ($passThruArgs[$i] -match '^-') {$passThruArgs[$i]} else {"`"$($passThruArgs[$i])`""}})"
    }
    else {
      Invoke-Expression "Get-Function $($passThruArgs = $args; for ($i = 0; $i -lt $passThruArgs.Count; $i++) {if ($passThruArgs[$i] -match '^-') {$passThruArgs[$i]} else {"`"$($passThruArgs[$i])`""}})"
    }
  }
}
END {
}

And there you have it.  Now you know how to create versatile ps1 files that you can share with the community that:

  1. Automatically discourage unrecommended usage (executing internal code and processing parameters when dot-sourcing script files not meant to be dot-sourced).
  2. Support importing functions and variables via dot-sourcing.
  3. Support direct invocation via the path and the call operator (if necessary).
  4. Output syntax when called with -?.
  5. Work in the pipeline as both a ps1 file and an imported function.

This all may seem very complicated at first, but once you learn how it works it’s really not that complicated at all.  And hopefully the consumers of your script will thank you for all of your hard work in making it possible.

Thanks for reading!

Kirk out.


13 thoughts on “PowerShell Deep Dive: Using $MyInvocation and Invoke-Expression to support dot-sourcing and direct invocation in shared PowerShell scripts”

      1. Actually Mike they are valid single-quotes. They just look like back ticks. Seems to be a font issue. You can copy and paste any of the code in the article though, and the single quotes will be treated as single quotes and the back ticks will be treated as back ticks. Sorry for the confusion.

  1. This has confused me….And I am not a powershell novice.
    What is it supposed to do?
    I created a test-invoke.ps1 file exactly like Get-Function.ps1 listing #1 above.
    When I run it using either .\test-invoke.ps1 or & C:\Users\kentd\test-invoke.ps1
    I get the output below. This output makes sense, as all I am doing is calling the script. There are no $args. If I add any args to the call (e.g. – joe), there is no change to the output.

    Name is required
    At C:\Users\kentd\test-invoke.ps1:4 char:32
    + param ([string]$name = $(throw <<<< 'Name is required') )
    + CategoryInfo : OperationStopped: (Name is required:String) [], RuntimeException
    + FullyQualifiedErrorId : Name is required

    As you can see, I am a bit confused….Can you help?

    thanks
    kent

    1. Hi Kent,

      The difference in execution can be seen when you compare dot-sourcing the script to invoking the script using the call operator.

      For example, if you created a Test-Invoke.ps1 script file, compare this:

      # Note you must include the leading dot to dot-source the file contents
      . .\Test-Invoke.ps1

      to this:

      # In this case the call operator is actually not necessary, I just include it for completeness.
      & .\Test-Invoke.ps1

      In your comment it seems like you were comparing the same thing: using the call operator and invoking the script using the relative path.

      Let me know if you get different results by using dot-sourcing.

      Thanks,

      Kirk out.

  2. Thanks Kirk.
    This cleared up a large misconception that I didn’t know I had. I thought dot source was .\scriptName.ps1, not . .\scriptName.ps1.

    chagrinedly,
    kent

  3. Thanks…very helpful

    What about the modular option? Doesn’t PowerShell support something similar to Python modules? If so, do you know where to find some examples?

    thanks

    1. Hi Larry,

      At the time that this post was written, PowerShell was only available in version 1 which is why I came up with this solution. In version 2, the module feature was added to PowerShell. Modules can be loaded/unloaded on demand using Import-Module and Remove-Module, allowing you to load libraries of functions without worrying about whether or not you dot-source them or invoke them directly. This is the feature you are looking for.
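      For a quick taste, a minimal module might look like this (the file and function names here are just placeholders):

```powershell
# MyTools.psm1 -- a minimal module sketch
function Get-Greeting {
    param([string]$name = 'world')
    "Hello, $name!"
}

# Then, from a PowerShell v2 session:
#   Import-Module 'C:\My Scripts\MyTools\MyTools.psm1'
#   Get-Greeting -name 'Larry'    # Hello, Larry!
#   Remove-Module MyTools
```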

      As for examples, I recommend the following:

      1. Download and install PowerGUI (you can find that here: http://www.powergui.org).
      2. Download and install the Module Management Add-on for PowerGUI (you can find that here: http://powergui.org/entry.jspa?externalID=2983&categoryID=21).
      3. In the PowerGUI Script Editor, select File | New | Module to create a new module. This will give you the framework for your module.
      4. Now add your functions to your module by putting them in your psm1 file or by dot-sourcing files containing functions.
      5. Save your files and load your module.

      If you follow these steps, you’ll have a module that you can load and unload on demand. All functions inside a module are public by default. If you want private functions, have a look at the PowerShell v2 snippets in PowerGUI (Edit | Insert Snippet, then open the PowerShell v2 container to see the snippets). There are snippets allowing you to insert whatever type of function you want into your module.

      Another alternative for module creation would be to open a script file where you have functions defined and then use the File | Convert to Module menu item (this is also part of the Module Management Add-on). That will give you a module that automatically contains the functions defined in your script file.

      Don’t hesitate to come back with more questions if you have any.

      Kirk out.
