Get your PowerShell game on for the Hacktoberfest challenge!

Every year the folks at DigitalOcean partner with GitHub to run a Hacktoberfest challenge during the month of October (henceforth known as #Hacktober). The challenge is always about open source, encouraging people to participate more in open source projects. The Hacktoberfest site lists projects you can participate in, focusing on languages such as JavaScript, Bash, C++, Python, and Ruby, with project owners tagging issues with #Hacktoberfest so that challenge participants can find things to contribute to.

During past Hacktoberfest challenges, participating with PowerShell-based projects was more difficult. You would have had to find someone with open source PowerShell modules or scripts that you could contribute to. When the challenge was just about commits to open source repositories you could even use your own projects, but it has evolved since then: now it's all about submitting pull requests to other repositories, so you need to start playing nice with other people's projects. It's much easier now, though, because PowerShell Core is open source, the PowerShellGet and PackageManagement modules just went open source, much of the documentation is open source, there are other PowerShell modules that are already open source, and more modules are going open source every day. There's even an open source PowerShell module for DigitalOcean automation (ok, shameless plug, I wrote that one). It also doesn't matter whether you're a developer, a devops engineer, a module author, a casual scripter, a PowerShell user who wants to improve documentation, or a PowerShell community member with an idea for an RFC that would help improve the language: every one of these roles has opportunities to submit pull requests that help make PowerShell better for everyone. Do you know how easy it is to submit a pull request for a PowerShell docs change? Brain. Dead. Simple.

Since PowerShell has so much open source goodness now, this year let's make some noise by raising the visibility of PowerShell open source projects, and show the DigitalOcean Hacktoberfest challenge what the PowerShell community can do! According to the Octoverse, Microsoft is already the organization with the most contributors to open source projects. Let's take that a step further and raise the bar by submitting a ton of pull requests against PowerShell projects this month! If you're up for the challenge, aside from a really cool t-shirt and great stickers for your laptop, you'll also get the pride of having contributed to something truly great!

Want to get started?  Head on over to the Hacktoberfest site and click on the Start Hacking link to sign up. There are also a lot of great resources on that site if you haven’t done open source pull requests before — just scroll down to the bottom of the page. Then, throughout the month of #Hacktober, spend some time contributing to one of the many great open source PowerShell projects. Note that this isn’t limited to specific pull requests for issues tagged with #Hacktoberfest — any pull request will do.

Then, if you happen to be attending the IT/Dev Connections 2016 conference, consider coming to my Anatomy of a PowerShell Pull Request session, where I'll cover this topic in much more depth. And at any time, if you're stuck, reach out to me or any of the other members of the PowerShell community who are plugged into this open source movement via Twitter, or on the PowerShell Slack channel (you can sign up for that here), and ask for help! Lastly, if you're a PowerShell open source project owner, consider tagging issues you really want people to look at during #Hacktober with the #Hacktoberfest tag so that they get more attention during this challenge (this isn't a requirement; it is merely a facility to help community members discover issues they could submit a pull request for).

To keep visibility high and encourage others to participate, as you submit pull requests consider sending out a tweet about them with the #PowerShell and #Hacktober hashtags along with a link to this post so that others can discover the challenge as well!

Are you with me? Let’s make #Hacktober a milestone month for PowerShell open source project pull requests!

Kirk out.

PowerShell Script Analyzer

If you write PowerShell scripts or modules, you need to pay attention to this!

As part of the many improvements and updates that Microsoft has been working on for Windows 10 and PowerShell 5, they created a new module that is very important for the PowerShell ecosystem to take note of.  It’s so important, that you need to pay attention to it even if you’re not using PowerShell 5 much yet.

PowerShell Script Analyzer (PSScriptAnalyzer) is a module whose intent is to provide a detailed analysis of PowerShell script and module files.  This includes ps1, psm1, and psd1 files.  In computer programming, a tool like this is referred to as a linting tool: a piece of software that identifies and flags suspicious usage in source code through static analysis.  PowerShell Script Analyzer performs this task by analyzing whatever ps1, psm1, and psd1 files you tell it to, using a set of built-in and user-defined rules that each identify specific undesirable or suspicious usage of the PowerShell language in those files.

This module is very important because any module that is published to the PowerShell Gallery in the future will automatically be analyzed with this module.

When PowerShell Script Analyzer was originally exposed to the community, it was as a built-in module that ships with PowerShell 5.0.  That was useful; however, the PowerShell Script Analyzer team wanted to get more community involvement with this module.  To achieve that goal, they pulled the module from the April preview of PowerShell 5.0 and made it available as open source on GitHub instead (hooray for open source!).  That was a step in the right direction, but it still required PowerShell 5.0 to function, and if you're anything like me, you still spend most of your time in PowerShell 3.0 or 4.0.  Since it's open source, though, that opens up some new possibilities, so recently I forked the project on GitHub and added downlevel support for PowerShell 3.0 or later.  I have submitted a pull request to have this integrated into the main project, and it is currently being reviewed, so hopefully in a little while this PowerShell 3.0 support will be integrated into the main project and available on the PowerShell Gallery.  In the meantime you can find my fork here, and you can download the module built from that fork here: PSScriptAnalyzer for PowerShell 3.0 and later.  Now that this module includes support for PowerShell 3.0 or later, more community members can participate and provide feedback on this important module.

You should immediately start using the PSScriptAnalyzer module to perform static analysis of anything that you share with the community so that what you publish is clean.

Once you have this module on your system, you probably want to know how to use it.  This module currently includes two PowerShell cmdlets: Get-ScriptAnalyzerRule and Invoke-ScriptAnalyzer.  Get-ScriptAnalyzerRule will return a list of the rules that are in use by Script Analyzer to perform its analysis of your script files.  At this time there are 7 informational rules, 26 warning rules, and 4 error rules.  Invoke-ScriptAnalyzer will analyze one or more files that you identify using the rules that are in use by Script Analyzer, notifying you of any infractions to those rules so that you may correct them.
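If you want to see those rules grouped by severity yourself, a quick sketch (the exact rule counts and property names may vary between releases of the module):

```powershell
# Summarize the Script Analyzer rules by severity
Get-ScriptAnalyzerRule |
    Group-Object -Property Severity |
    Select-Object -Property Count, Name
```

Piping the same output to Format-List * will show the full description of each rule as well.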

The final list of rules that are used in the RTM release of PSScriptAnalyzer will be decided upon based on community feedback.  Additionally, how those rules function is dependent on community feedback, and the list of rules that will ultimately block a module from being published to the PowerShell Gallery will also be chosen based on community feedback.

You should provide feedback to the PSScriptAnalyzer team about the rules that it uses, identifying bugs of course, but also identifying rules that shouldn’t be rules, rules that are missing, rules that have an incorrect severity, and any other changes that should be considered regarding the rules that are used.

If you have feedback to share, you should do so on the Issues and Discussions page of the PowerShell Script Analyzer project.  If you haven’t clicked on that link already, do so now.  You’ll see that there are a lot of discussions that are currently ongoing, and issues that are already being raised.  Given the importance of this module for the PowerShell community, if you have feedback to share, share it!  The PowerShell Script Analyzer team needs it.

The PowerShell Script Analyzer team is so eager to engage with the community that they have even started hosting live update meetings that are open to the public.  These are currently planned for every three weeks, but that schedule may change as required.  The next meeting is currently scheduled for Tuesday, May 26th at 2PM EDT (link to calendar invite).  It would also be worthwhile to keep an eye on the Twitter accounts of PowerShell MVPs (you can find me here: @poshoholic) as well as on the PowerShell Team Blog and the PowerShell Slack channel (more on that in another post) for further announcements about these meetings.

With all of those details out of the way, you may be wondering how you can use this module.  That’s a good question, because documentation hasn’t been published for this module yet.  The module is very easy to use, but at first the information it provides can be very overwhelming so you should take small steps when you’re just getting started.  I’m going to use the remainder of this blog post to walk through what it is like using this module.  Don’t be intimidated.  This module is still under development, and the details you uncover when you start using it can be a little daunting.  If you’re having a hard time, reach out to the rest of the community for feedback and someone should be able to help you through it.

Remember, this module will be analyzing files that have not previously been analyzed, ever, identifying places where you might want to improve those scripts, so at first it may seem very noisy.  Also keep in mind that some of the things it identifies are completely benign – this is where community feedback comes in, so that we can help steer the team towards a final set of rules that really makes sense.

As an example, I’m going to use PSScriptAnalyzer to analyze my HistoryPx module.  First, I’ll change the current location to the root of my HistoryPx module (On my system I simply cd into Modules:\HistoryPx).  Then once I’ve done that, I’ll run an analysis on the psm1 file, by invoking the following command:

Invoke-ScriptAnalyzer -Path .\HistoryPx.psm1

Here’s what the results of that command look like:


This shows me that according to PSScriptAnalyzer, 8 rule violations were found, each with a severity of Warning.  Now that I have some output, I need to review what it is telling me to see if there is anything I actually need to change.  Here's a breakdown of what these warnings are trying to tell me:

The first two warnings are based on a rule that is called PSAvoidGlobalVars.  That rule simply indicates that you shouldn’t use global variables.  In HistoryPx, I refer to the $Error variable, and I use $global:Error to do so.  In this case, the warnings are benign, because the rule is too strict.  It should actually check to see if I am using non-builtin global variables, not global variables in general.  That is my opinion at least, and since it doesn’t do that right now, I’m going to open an issue with the PSScriptAnalyzer team on GitHub using the link I provided earlier (copied here).  A quick search revealed no open issues related to PSAvoidGlobalVars, so I opened one (here).

While typing the rest of this blog post, the PSScriptAnalyzer team saw the issue I logged and responded thanking me for reporting it and indicating they would fix it soon.  This type of responsiveness makes supporting them and contributing to this module an absolute pleasure!

The third warning comes from a rule called PSAvoidUsingPositionalParameters.  When you share scripts, using positional parameters is discouraged because the person reading your script may not know which parameter a value is being passed to.  To address this problem, you should always use named parameters in your scripts.  PSScriptAnalyzer did its job by identifying where I had used a positional parameter, so I need to fix that.  The warning indicates the file and line number where it was found, so I simply open up HistoryPx.psm1, make the change by adding the missing "-Message" parameter name, and then move on to the next warning.
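As a generic illustration of the difference (the cmdlet and message text here are made up; this is not the actual line from HistoryPx):

```powershell
# Positional: the reader has to know that the first argument to
# Write-Warning binds to its -Message parameter
Write-Warning 'Something unexpected happened'

# Named: the intent is explicit, and PSAvoidUsingPositionalParameters stays quiet
Write-Warning -Message 'Something unexpected happened'
```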

Warning number 4 indicates that I shouldn’t use the trap statement.  I use the trap statement in script modules I write because it ensures that terminating errors are properly treated as terminating errors if they are raised while the module is loading, which ensures that the module does not load in a semi-functional or broken state.  Therefore, this warning is benign for me and another issue I need to open with the PSScriptAnalyzer project (which I did, here).

Note: if you do come across something where you feel you need to open an issue, first search the Issues page using the Filter box for the rule name to see if the issue has already been posted.  If so, contribute by adding any comments you want to share to that issue rather than opening a new one.  That will help the PSScriptAnalyzer team decide what to do with these issues once they are opened.

Moving along, warnings 5 through 7 all come from the PSAvoidUninitializedVariable rule, which identifies that I haven’t initialized a few variables before using them in my psm1 file.  However, that rule only identifies when variables are initialized in the same file – it does not identify when variables are initialized elsewhere, and in this case, as with all of my modules, I initialize these variables in a snippet that this module uses, so the warning is benign.  For these warnings, I can open another issue about the rule, but I’m a bit of an exception in this case.  Alternatively I can suppress the warnings if they are truly benign and not considered a bug by adding a System.Diagnostics.CodeAnalysis.SuppressMessageAttribute to my script module file.  For example, if I add this to the top of my psm1 file, warnings 5 through 7 go away:

[Diagnostics.CodeAnalysis.SuppressMessageAttribute('PSAvoidUninitializedVariable', '')]

This suppresses that rule in the current file for all variables.  I can suppress it for a single variable by replacing the empty single quotes with a variable name in single quotes, but that only suppresses the rule for one variable.  From what I can tell there is no way for me to suppress the rule for multiple variables (wildcards are not supported, and if I use multiple attributes, the last one I use wins).  There are some obvious bugs/design issues with this right now, but the point is, you should be able to suppress rules that you know you can safely ignore.
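For example, suppressing the rule for a single variable would look something like this ('myVariable' is a hypothetical name, not one from HistoryPx, and I'm assuming the attribute is placed above a param() block at the top of the file so that it has something to attach to):

```powershell
# Suppress PSAvoidUninitializedVariable for one variable only;
# all other variables in the file are still checked by the rule
[Diagnostics.CodeAnalysis.SuppressMessageAttribute('PSAvoidUninitializedVariable', 'myVariable')]
param()
```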

Be careful when you suppress rules.  If you suppress a rule, you want to suppress it in a minimalistic fashion, so that you still get the benefit of that rule in other places.

In this case, I’m not sure what the right action is, but the rule itself has issues, so I decided to open up an issue as a discussion topic on GitHub (here).

If you feel something is wrong but you don't have a bug to log, opening a discussion on the Issues page is a great approach to take.

With that out of the way, I’m on to the last warning that was raised for my HistoryPx module: PSUseDeclaredVarsMoreThanAssignments.  This warning is nothing but confusing for me.  It is being raised because PowerShell Script Analyzer thinks I’m doing something wrong when I assign a script block to a property on a variable that it thinks is not initialized.  That would be wrong if the variable was actually not initialized, but the warning in this case seems simply like a bug, so I’m logging it (here).

At this point, I have passed through all the warnings generated by the analysis, made one change to correct one of them, and started logging some issues to deal with the others.  As development continues on PSScriptAnalyzer, this scale should tip dramatically, to the point where you have more correctable warnings/errors for your scripts and far fewer issues to log.  For now it is a work in progress, but I suspect that over the next little while you'll see a lot of improvements to address the issues that are being identified in the community.

With one file done, I can now check another file, or run analysis recursively over my entire module.  To perform recursive analysis, you would run this command from your module root folder:

Invoke-ScriptAnalyzer -Path . -Recurse

That will recursively analyze all files in your module so that you can get a high level view of everything that needs to be corrected.  The goal here is simple: continually fix the issues over time so that when you run this command recursively, you only see new issues that pop up (or ideally, no issues at all).  That will be the best way for you to maintain a high quality for any modules and scripts that you want to share with the community.
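Once you run the recursive analysis, you can slice the results to prioritize the work.  A sketch, assuming the result objects keep their current RuleName and Severity properties:

```powershell
# Summarize recursive analysis results by rule, most frequent first
Invoke-ScriptAnalyzer -Path . -Recurse |
    Group-Object -Property RuleName |
    Sort-Object -Property Count -Descending |
    Select-Object -Property Count, Name
```

Filtering the same output with Where-Object on the Severity property lets you tackle errors before warnings.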

There are more things you can do with this module (such as create your own rules), but for now I just wanted to give you a taste of what it is like, and how the feedback process works so that you can start giving it a try and participating in the community effort to steer this in the right direction.

I hope to see you at the PowerShell Script Analyzer Community Meeting next Tuesday!

Kirk out.

A better approach to formatting in PowerShell

Even without a traditional user interface, it is important to separate the presentation layer from the data processing layer.

For all of its strengths, Windows PowerShell is not without its fair share of weaknesses.  One of the weaknesses in PowerShell that has been with the scripting language since version 1.0 is how it handles formatting.  When it comes to formatting data, you have two options: use one of the core Format-* cmdlets (Format-Table, Format-List, Format-Wide, or Format-Custom), or create a format ps1xml file that defines one or more formats for a specific type in an XML document.  The former only works for the script consumer, because the core Format-* cmdlets actually convert rich object data into format data, not bearing any resemblance to the original objects with properties and methods that you started with.  The latter works for the script author as well as the script consumer, however it is significantly more complicated to implement, and it has its share of limitations as well (for example, once a default format is defined for a type, a script author will not be able to have the results of their script rendered in another format by default without taking some extra steps that shouldn’t be necessary).  The end result of these limitations and complications is a formatting system that is not fully leveraged in the majority of PowerShell scripts that are shared among the community.
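You can see the problem the script consumer faces with a quick experiment at the console.  Once format data has been produced, the original objects are gone (a sketch; the exact internal type names vary between PowerShell versions):

```powershell
# Rich objects: each result is a System.Diagnostics.Process,
# with the properties and methods that Get-Member reveals
Get-Process | Get-Member

# After Format-Table, the pipeline contains format data, not processes,
# so downstream processing no longer works as expected
Get-Process | Format-Table | Where-Object { $_.Name -eq 'powershell' }   # matches nothing
```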

I recently found myself looking at this problem and asking myself one more time if I could do better (it is a problem that I had looked at in the past).  The variety of formats that are available is encouraging, and you can generate some pretty useful report-like output when you take the time to leverage the formatting system.  But something would have to be done to fix the user experience.  The end goal I was envisioning was quite clear: a distinct separation of the presentation layer from the data processing layer such that formatting information would never get in the way of data processing.

Fortunately, PowerShell is quite well set up for this type of change.  Every command that you run has its results sent through the Out-Default cmdlet.  Out-Default internally knows what to do with whatever you throw at it, whether that be object data or format data (which really is object data as well, but I’m making a distinction here).  When you return an object from a command to the console, if that object is not format data, Out-Default internally will look at the object type, identify the default format for that type, convert the object into format data and then output that format data to the console.  If Out-Default does receive format data instead because the object was already converted into format data, it simply renders that format data in the console.  Given that is the case, it should be possible to modify the core Format-* cmdlets so that they attach format data to the objects that they format instead of converting object data into format data, and it should also be possible to modify the core Out-Default cmdlet so that it detects format data when it is attached to an object and renders that format data directly to the console instead of looking at the object type to decide what format to use.  That, my friends, is exactly what I did.

FormatPx is a module made up of a nested binary module that defines five proxy cmdlets (Format-Table, Format-List, Format-Wide, Format-Custom, and Out-Default) plus one new cmdlet (Format-Default) and a script module that automatically applies the Force parameter whenever Format-Table, Format-List, or Format-Wide are used.  It changes how the core Format-* cmdlets work as described above, and it also makes it easier to get format information from types that include a custom format as their default.  Here’s a short screencast showing FormatPx in action:

(Note, do not adjust your computer, there is no audio in this screencast.)

Better PowerShell formatting with FormatPx

That’s a decent overview of what you get with the FormatPx module – a separation of the formatting layer from the data processing layer, giving you much more control over the presentation of your script results without ps1xml file complexity while still allowing script consumers to view the results in whatever format they like.  What do you think?

Kirk out.

My favorite PowerShell one-liner

Happy Monday everyone!  I thought it might be a fun way to start the week by sharing my favorite PowerShell one-liner.  One-liners are nostalgic for me, because I learned a lot of cool programming tricks from one-liners in issues of Compute magazine a long time ago.  The PowerShell one-liners that grab my interest the most are those that can do many different things in the simplest pipeline possible.

So far, eight years into using PowerShell, this is my favorite PowerShell one-liner:

$di,$fi = gci -r -force | measure -sum PSIsContainer,Length -ea 0
At first glance, can you tell what this does?  Take a minute to think about it, I’ll wait.




Ok, now that you’ve thought about it a bit, you might have realized that it is not that easy to figure out everything that this does, so let me break it down into parts.

gci -r -force

If you're familiar with PowerShell, this part is pretty straightforward.  gci is an alias for Get-ChildItem, and this command tells PowerShell to get a recursive (-r) directory listing of all files and folders, including any hidden files or folders (-force), starting from the current location.

measure -sum PSIsContainer,Length -ea 0

measure is an alias for Measure-Object, a cmdlet that can perform calculations on a collection of objects that it is passed from a pipeline.  In this case, we're going to aggregate (-sum) two values from properties in this collection: PSIsContainer and Length.  We're also going to hide any errors we get from trying to access folders that we don't have access to (-ea 0, where -ea is shorthand for -ErrorAction and 0 maps to SilentlyContinue).  How Measure-Object measures the objects it receives is the most interesting part of this pipeline.

You can probably guess why we would want to calculate the sum of the Length property, but why would you want to calculate the sum of the PSIsContainer property?  PSIsContainer is a boolean value that indicates whether or not an item returned from Get-ChildItem is a container.  A value of $true indicates it is a folder, and a value of $false indicates it is a file.  When you calculate the sum of a bunch of boolean values in PowerShell, PowerShell first converts those boolean values into their integer equivalents: $false implicitly converts into a value of 0, and $true implicitly converts into a value of 1.  Given this is the case, by aggregating the PSIsContainer property, you're effectively calculating the number of containers in the result set.
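You can see the boolean-to-integer conversion for yourself at the prompt; a minimal sketch:

```powershell
[int]$false   # 0
[int]$true    # 1

# Summing booleans counts how many of them are $true
($true, $false, $true | ForEach-Object { [int]$_ } | Measure-Object -Sum).Sum   # 2
```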

$di,$fi =

PowerShell allows you to assign a multi-valued result set to different variables by separating the variables with a comma.  In such an assignment, the first object returned will be assigned to the first variable, the second assigned to the second variable, and so on until the last variable which will contain all remaining objects.  In this case, we’re assigning two variables, and each will contain a single value.  Since we’re aggregating two properties (PSIsContainer and Length), Measure-Object will return two objects containing measurement information: one for PSIsContainer and one for Length.  The measurement information for PSIsContainer will be assigned to $di and the measurement information for Length will be assigned to $fi.

Also, in addition to calculating the aggregate for the two properties, Measure-Object also counts the number of objects that it processes to perform that calculation and includes that count in the result as well.  In cases where an object does not have a property that is being measured, that object will not be included in the count, so folders, which do not have a Length property, will not influence the values returned in the second measurement information object that is assigned to $fi.
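Multiple assignment is easy to experiment with on its own; a minimal sketch:

```powershell
# With a matching number of values, each variable receives one value...
$first, $second = 'a', 'b'
$first    # 'a'
$second   # 'b'

# ...and with more values than variables, the last variable soaks up the remainder
$head, $tail = 1, 2, 3, 4
$head     # 1
$tail     # 2, 3, 4 (an array)
```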

Putting it all together

When I run the aforementioned one-liner on my PowerShell folder, here are the values I get from $di and $fi, respectively:

Count    : 6516
Average  :
Sum      : 2012
Maximum  :
Minimum  :
Property : PSIsContainer

Count    : 4504
Average  :
Sum      : 917710488
Maximum  :
Minimum  :
Property : Length

For the $di object, the Sum property identifies how many directories were found in the search.  This was determined by aggregating the integer value of the PSIsContainer property on the objects that were measured.  The Count property identifies how many items altogether were processed (directories and files, both of which have a PSIsContainer property).

For the $fi object, the Sum property identifies the total size of all files that were found in the search, and the Count property identifies how many files were found in the search.

Put this all together, and you have a PowerShell one-liner that gives you folder stats, identifying the number of files and folders in a folder as well as the total size of all files in that folder.  Neat, huh?  I think the reason why I like this PowerShell one-liner the most is because it demonstrates how a little creativity can be applied to get useful information that is not available in a native cmdlet from several sources in a simple and elegant manner.

What’s your favorite one-liner, and why?

Maybe I should start a meme with this.  Jeffrey Hicks, Shay Levy, Ed Wilson, June Blender, and Hal Rottenberg, tag, you're it!

Kirk out.

Raise your PowerShell game with HistoryPx, DebugPx and TypePx

Recently I’ve been taking some of the most useful tricks and tools that I use in PowerShell to help me get my work done more easily and packaging them up in open-source modules.  I find these modules very useful in the work that I do, and wanted to share them with the community.  Each of these modules works on PowerShell 3.0 or later.  Here are the modules I am referring to:

DebugPx: Includes the ifdebug (conditional debugging when invoking commands with -Debug) and breakpoint (visual breakpoints in scripts; makes debugging from any host and with any editor much easier) cmdlets. link
HistoryPx: Seamlessly upgrades the history capabilities in PowerShell, adding rich extended history information that you don't get by default. link
TypePx: Defines dozens of useful type extensions for common .NET types.  Brings foreach and where method support to PowerShell 3.0.  Includes type acceleration commands. link

The links above will take you to blog posts that provide more context about what these modules do.  If you'd like to raise your PowerShell game by using some cool new functionality in PowerShell 3.0 or later today, please give these modules a try and let me know what you think!

Happy Halloween!

Kirk out.

Making history more fun with PowerShell

History is boring.

While I don’t believe that is true (I actually find history quite fascinating), it is certainly how I would have responded if asked about history while taking it in high school.  I’m not the only one who felt this way back then either.  When I asked my wife for her opinion about it, she told me that it was in her history class where she learned to write upside down with her non-dominant hand.

My problem with history is the presentation of it.  It can be fun and include a lot of contextual detail that is presented with zeal and that connects with the audience; or, like my high school history class, it can be presented as a timeline with boring details attached to it.

PowerShell automatically tracks the commands that you invoke in a history table that you can view at any time with the Get-History cmdlet.  If you invoke Get-History (or its alias, h) after you’ve been using PowerShell for a bit, you should see results similar to the following:

native powershell history

That’s not entirely useless, because it shows commands that we ran earlier, but as far as presentation goes it’s pretty boring, right?  Now, this is only the default format for history in PowerShell, and as with most data in PowerShell this screenshot doesn’t paint the full picture.  Each entry also includes other fields that are not shown in the default format, as can be seen here:

native powershell history expanded

The problem is, that’s all you get: a timeline with boring details attached to it.  The ExecutionStatus property tells me whether a command completed or failed, but the usefulness of that information is very limited because a value of Failed only means the command failed with a terminating error; if a command failed with a non-terminating error, then ExecutionStatus will simply indicate that the command completed, which is not very useful knowledge to have.  Start and end execution times are useful, but that means I need to do math to determine how long various commands took, or re-run the commands inside of Measure-Command.  There’s no zeal behind the presentation of this information.
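The math isn't hard, but you shouldn't have to do it for every command; for instance, to get the duration of the most recent history entry you would have to write something like:

```powershell
# Duration of the last command, computed from the stock history properties
$last = Get-History -Count 1
$last.EndExecutionTime - $last.StartExecutionTime   # a TimeSpan
```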

Fortunately, PowerShell is extensible, so I decided I should be able to do better.  Here’s my view of what PowerShell history should be, as provided by HistoryPx:

powershell extended history from HistoryPx

HistoryPx is an open-source PowerShell module that transparently integrates into the PowerShell environment where it is loaded.  You simply load the module by importing it with Import-Module into an environment running PowerShell 3.0 or later, and once it is loaded it will start tracking extended history information for every command you run.  Extended history information includes the command duration, whether or not the command was successful, any output returned by the command, and any errors that were added to the $error log by the command.  When you invoke Get-History, the new default format for extended history information presents most of these details, or you can pipe the results to Format-List * to get all properties where you can see the core history properties as well as the extended information added by HistoryPx.

In addition to providing extended history information in PowerShell, the HistoryPx module includes one other feature that is quite useful.  It defines a double-underscore ($__) variable in the global scope that will always contain the output of the last command that was executed.  When the last command doesn’t have any output, $__ will be assigned a value of $null.  This comes in handy when you invoke a command that takes time to complete but forget to capture the results in a variable.
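In practice that looks something like this.  The $__ variable is defined by HistoryPx as described above; the command being captured is just an illustration:

```powershell
# Oops -- a slow command ran to completion, but we forgot to capture
# its output in a variable...
Get-ChildItem C:\Windows -Recurse -ErrorAction SilentlyContinue

# ...no need to re-run it: HistoryPx stored the output in $__,
# so we can still assign it to a variable after the fact.
$results = $__
$results.Count
```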

If you want to give this module a try, head on over to the HistoryPx page on GitHub, where you can learn how to install or update it in your environment and read the additional details provided in the readme file.  If you are concerned about memory usage with this module, your questions are addressed in the readme as well.  This is only the first release of this module, and I am already starting to track features that I want to add to another release.  If you do give it a try, please let me know what you think, either in the comments on this blog post or on GitHub, and please log any issues you find on GitHub as well.


Kirk out.

Transform repetitive script blocks into invocable snippets with SnippetPx

The more you write code, the more you notice patterns in the code you write.  This goes for PowerShell, C#, Ruby, and any other programming or scripting language under the sun.  Patterns are opportunities.  Opportunities to generalize them into design patterns.  Opportunities to isolate blocks of code so that they can be reused, letting you follow the DRY (Don’t Repeat Yourself) principle in your work.  This article is about the latter of those two.

In recent months I have been working on trying to reduce duplication in my own work by keeping my modules and functions as DRY as possible.  One challenge with keeping code DRY in PowerShell is in deciding which is the most appropriate method to do so.  There are many opportunities to keep your code DRY in PowerShell.  You can create:

  • cmdlets
  • advanced functions
  • basic functions
  • unnamed functions (aka script blocks)
  • script files
  • type extensions for the Extended Type System (ETS)
  • classes with properties and methods, either in a .NET assembly that is imported into PowerShell, or if you’re using PowerShell 5.0 or later, in PowerShell itself
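As a quick reminder of what a couple of these extension points look like, here is a basic function next to an advanced function.  This is an illustrative sketch, not code from any of the modules discussed here:

```powershell
# A basic function: no attributes, simple positional parameters.
function Get-Square($Number) {
    $Number * $Number
}

# An advanced function: the CmdletBinding attribute gives it
# cmdlet-like behavior, including common parameters, and the
# process block lets it accept pipeline input.
function Get-Cube {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory, ValueFromPipeline)]
        [int]$Number
    )
    process {
        $Number * $Number * $Number
    }
}
```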
Despite each of these extension points being available in PowerShell, they don’t always fit the scenarios you need them to, perhaps because they are not appropriate for the intended purpose, because they have some limitation that you can’t work around, or for some other reason.  For example, I find myself writing all of these, and there are certain pieces of code that I want to share easily across many of these types of extensions without that code being an extension point itself.  When you have logic that you might use anywhere that you could write PowerShell, how do you set it up so that you can consume it in all of those locations easily, regardless of the machine you are running on and without taking a dependency on physical file paths?  That last point is important: standalone ps1 files may be one possible answer, except that invoking them requires knowing where they are, and when you invoke them you must decide whether to dot-source them or call them with the call operator, which in turn means you must understand the implications of that decision.  Plus, when their use spans all of PowerShell (any script, any module, any function), where do you put them without burdening the consumer with extra setup work?  And how can you create more of them while keeping them discoverable, and able to be added to or removed from a system with ease?

Snippets are a great answer to these questions.  Snippet is a programming term that has been around for many years, generally referring to a small region of reusable code.  Snippets are also a lot more than fancied-up copy/paste functionality.  They can have parameters, or fields, that control how they run.  They can surround text you have selected, or simply insert at the current cursor location.  Most importantly, for me at least, snippets can be invocable, and that, my friends, is key: when you’re trying to maintain a DRY code base, you don’t want to inject boilerplate code blocks in many locations…you want to invoke them.
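The dot-source versus call-operator decision mentioned above matters because it determines the scope the script runs in.  Here is a minimal sketch, using a hypothetical helper script path:

```powershell
# Dot-sourcing runs the script in the current scope, so any variables
# or functions it defines remain available after it finishes.
. C:\Scripts\Initialize-Defaults.ps1

# The call operator (&) runs the script in a child scope, so its
# variables and functions are discarded when the script finishes.
& C:\Scripts\Initialize-Defaults.ps1
```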

With snippets being a great solution to this problem, I decided to try to build a snippet-based solution that would allow for discoverable, invocable snippets.  I wanted this solution to be able to find snippets regardless of what computer you were running on, as long as you followed a few simple rules.  I wanted this solution to keep snippet definitions as simple as the creation of a ps1 file.  And I wanted this solution to allow for snippets to be invoked in the current scope by default, and in a child scope as an option.

Enter SnippetPx.  SnippetPx is a module that provides two very simple commands, plus a handful of snippets.  The two commands are Get-Snippet, which retrieves the snippets that are discoverable on the local computer, and Invoke-Snippet, which allows a snippet to be invoked by name, with or without parameters, in the current scope or in a child scope.  With this module, any ps1 file that is stored in a “snippets” folder inside of the current user’s Documents\WindowsPowerShell folder, inside of the system %ProgramFiles%\WindowsPowerShell folder, or under the root of any module that is discoverable via PSModulePath will be discoverable as a snippet.  You can see some examples by looking in the “snippets” folder in the SnippetPx module itself, or by looking in another module that includes some snippets, such as the TypePx module.  Also, any snippet that is discoverable by this module is invocable by the Invoke-Snippet cmdlet.
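Putting that together, creating and invoking your own snippet could look something like this.  The snippet name and its contents are hypothetical, Get-Snippet and Invoke-Snippet are the two commands the module provides, and the exact parameter names for passing values to a snippet are an assumption on my part, so check the cmdlet help for the real syntax:

```powershell
# Create a snippet: any ps1 file in a "snippets" folder under the
# current user's Documents\WindowsPowerShell folder is discoverable.
$snippetFolder = Join-Path ([Environment]::GetFolderPath('MyDocuments')) 'WindowsPowerShell\snippets'
New-Item -Path $snippetFolder -ItemType Directory -Force | Out-Null
Set-Content -Path (Join-Path $snippetFolder 'Greeting.ps1') -Value 'param($Name) "Hello, $Name!"'

# List the snippets that are discoverable on this machine...
Get-Snippet

# ...and invoke one by name, passing parameters to it.
Invoke-Snippet -Name Greeting -Parameters @{Name = 'PowerShell'}
```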

Since creating this module, it has quickly become a core module that my other modules take a dependency on in order to keep my code DRY.  That effort has already paid off for me, because it has allowed me to update a block of code that is defined in a snippet and only make that change once, while every other location where that snippet is invoked simply runs with the new code.  I encourage you to give it a try and see if it helps you remove what might otherwise be repetitive code that is more difficult to maintain than it should be.

If you would like to give SnippetPx a try, you can download SnippetPx from GitHub (yes, it is open source, along with the binary module where Invoke-Snippet and Get-Snippet are implemented) or from the PowerShell Resource Gallery (aka the PowerShellGet public repository).  Feedback is more than welcome through any means by which you want to communicate with me: the Issues page for this project on GitHub, social media channels, comments on this blog, etc.  In the meantime, I will continue to identify opportunities to create more snippets that will be useful to others and push them out either as updates to SnippetPx or in the modules where those snippets are used.

One more thing: if you do decide that you want to create some snippets of your own, there are some useful details in the Notes section of the help documentation for the Get-Snippet and Invoke-Snippet cmdlets.  I strongly recommend giving that a read before you dive into creating invocable snippets, as it provides some additional detail on how snippets are discovered as well as recommendations on the naming of your snippet ps1 files.
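Those notes are available right from the console; Get-Help is the standard way to read them:

```powershell
# Read the full help, including the Notes section, for both cmdlets
# before authoring your own invocable snippets.
Get-Help Get-Snippet -Full
Get-Help Invoke-Snippet -Full
```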

Thanks for listening!

Kirk out.