The Powershell V1 to V2 Conversion

This post is about my experience converting the SQL Server Powershell Extensions (SQLPSX) CodePlex project from Powershell V1 function libraries into Powershell V2 advanced functions within modules.

In order to provide context for people reading this blog post a quick timeline is needed:

  • Powershell V1 was released in November 2006
  • SQLPS, the SQL Server Powershell host that ships with SQL Server 2008, is based on Powershell V1
  • Powershell V2 was released in October 2009
  • Everything you write in Powershell V1 should work in V2
  • SQLPSX is a CodePlex project I started for working with SQL Server and Powershell. The first release was July 2008 and it has been updated frequently since. A Powershell V2 release was published on 12/31/2009.

And with that, hopefully the rest of this post makes sense. Let's take a look at my top-six list of Powershell V2 improvements over V1 for script developers:


Modules

Modules allow a script developer to package functions, scripts, and format files into something very easy to distribute. In Powershell V1 I would create a function library, which is just a script file with related functions. The function library would then need to be dot-sourced to use:
. ./librarySmo.ps1
There were several problems with this approach:
  • Handling related script files and separate function libraries is difficult — usually solved by creating an initialization script and detailed instructions.
  • Loading assemblies
  • Appending format files
Modules make handling the distribution of a set of related files much easier. We simply place the module, which is nothing more than the same old function library with a .psm1 extension, into a directory under Documents\WindowsPowerShell\Modules and optionally add a second special file called a module manifest (more on this later). As an example I have a sqlserver module in the directory Documents\WindowsPowerShell\Modules\sqlserver. I can then import the module instead of sourcing the functions:
import-module sqlserver
The module and manifest file contain the necessary information about processing assemblies, related script files, and nested modules. So, converting function libraries to modules involves little more than renaming .ps1 files to the module file extension .psm1 and placing each file into its own directory under Documents\WindowsPowerShell\Modules. But if that's all you are going to do, there is little value in creating modules. Moving from Powershell V1 scripts to V2 modules should also include taking advantage of many of the Powershell V2 features described in this blog post.
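The rename-and-move step is small enough to script. Here is a self-contained sketch of the idea using a made-up stand-in library in a temp directory (the file and module names are illustrative, not SQLPSX's):

```powershell
# Hypothetical sketch: turn a V1 function library into a V2 module.
# Create a stand-in function library in a temp location.
$work = Join-Path ([IO.Path]::GetTempPath()) 'ModuleDemo'
New-Item -Path $work -ItemType Directory -Force | Out-Null
Set-Content (Join-Path $work 'librarySmo.ps1') 'function Get-Demo { "hello" }'

# Converting it to a module is just a copy to a .psm1 in its own directory.
$modDir = Join-Path $work 'demo'
New-Item -Path $modDir -ItemType Directory -Force | Out-Null
Copy-Item (Join-Path $work 'librarySmo.ps1') (Join-Path $modDir 'demo.psm1')

# Import by path; placing the directory under Documents\WindowsPowerShell\Modules
# would let us write simply: Import-Module demo
Import-Module (Join-Path $modDir 'demo.psm1')
Get-Demo   # -> hello
```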
A word about binary modules: SQLPSX is mostly implemented as Powershell script modules; there are, however, a couple of compiled cmdlets used for parsing and formatting T-SQL scripts: Test-SqlScript and Out-SqlScript. Converting a compiled snapin DLL to a module is just as easy as converting script-based function libraries: you simply copy the snapin DLL and any required assemblies to their own directory under Documents\WindowsPowerShell\Modules. This is exactly what I've done with the SQLParser module. I've also added a module manifest (psd1 file).
This brings us to module manifests, which are basically processing instructions for modules. Module manifest (psd1) files, created by the new-modulemanifest cmdlet, allow us to do several things:
  • Make functions private by pattern matching with the FunctionsToExport property. As an example, in the SQLServer module I specify FunctionsToExport = '*-SQL*'. This tells Powershell to only export functions that match the -SQL prefix. I have several helper functions that I don't want to export, so I simply use a different prefix or none at all to avoid exporting them.
  • Import assemblies automatically by making use of the RequiredAssemblies property
  • Nest modules i.e. import child modules with NestedModules property

The manifest files themselves are really easy to create. After you’ve created a module (.psm1), run new-modulemanifest and enter the information when prompted.
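For reference, a manifest produced this way might contain entries like the following. This is an illustrative fragment, not SQLPSX's actual manifest; the assembly and file names are assumptions:

```powershell
# Hypothetical sqlserver.psd1 fragment; names are illustrative.
@{
    ModuleVersion      = '2.0'
    # Only functions matching *-SQL* are visible to callers
    FunctionsToExport  = '*-SQL*'
    # Loaded automatically before the module itself
    RequiredAssemblies = 'Microsoft.SqlServer.Smo'
    # Child modules imported as part of this one
    NestedModules      = 'SQLServer.psm1'
}
```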

Simplified Error Checking

The try/catch error handling added in Powershell V2 is so much easier to work with and understand than its predecessors in Powershell V1, trap and throw. The construct is especially handy when dealing with SMO errors that sometimes use nested error objects.
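A minimal sketch of the V2 construct. The failing statement here is illustrative; in SQLPSX the try block would wrap an SMO call, and nested SMO errors surface through the exception's InnerException chain:

```powershell
# V2 structured error handling; the division is just a stand-in failure.
$zero = 0
try {
    $result = 1 / $zero
}
catch {
    # $_ holds the ErrorRecord for the terminating error
    $result = "caught: $($_.Exception.Message)"
}
$result
```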
Both ValidateScript and ValidateSet reduce the input validation code I needed to write. I think this is best illustrated by a couple of examples from SQLPSX functions.
The param section below uses ValidateSet to ensure values are either Data or Log:
    [Parameter(Position=0, Mandatory=$true)] $sqlserver,           
    [ValidateSet("Data", "Log")]           
    [Parameter(Position=1, Mandatory=$true)] [string]$dirtype           
This second param section uses ValidateScript to check that the input object namespace is an SMO object.
  [Parameter(Position=0, Mandatory=$true, ValueFromPipeline = $true)]            
  [ValidateScript({$_.GetType().Namespace -like "Microsoft.SqlServer.Management.Smo*"})] $smo,            
  [Parameter(Position=1, Mandatory=$false)] [Microsoft.SqlServer.Management.Smo.ScriptingOptions]$scriptingOptions=$(New-SqlScriptingOptions)            
Between ValidateSet and ValidateScript I’m able to handle most input validation checks that in Powershell V1 would have required more code.
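As a self-contained illustration of how little code this takes, here is a made-up function (not part of SQLPSX): an invalid argument fails at parameter binding, before the function body ever runs.

```powershell
# Illustrative function; ValidateSet does all the input checking.
function Get-DirType {
    param(
        [ValidateSet("Data", "Log")]
        [Parameter(Position=0, Mandatory=$true)] [string]$dirtype
    )
    "Selected: $dirtype"
}

Get-DirType Data                                  # -> Selected: Data
try { Get-DirType Backup } catch { 'rejected' }   # binding fails before the body runs
```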


Testing with PSUnit

OK, so this item really isn't about Powershell V2; rather it's a change in process for me. As part of the conversion I wanted to adopt a testing framework and perform more rigorous testing. I first heard of a Powershell-based xUnit testing framework on the PowerScripting podcast episode 80, in which Jon and Hal interviewed Klaus Graefensteiner about his CodePlex project PSUnit. So, I decided to try PSUnit and I've been very happy with the results. Following the directions on the PSUnit site it is a cinch to set up. PSUnit integrates with the Powershell ISE, adding a menu item to execute unit tests.


It should be noted that although I'm using PSUnit to test Powershell functions, this doesn't mean that's all it's good for. In fact, PSUnit is intended to perform full unit testing of your .NET applications; you can test just about anything (.NET, COM, etc.). For my purposes I'm interested in testing my own Powershell functions. As a script developer the easiest thing you can do with PSUnit is to create a test function for each of your functions and verify that the output object is the type you expected. Here's an example test function for Get-SqlServer:

function Test.Get-SqlServer([switch] $Category_GetSql)
{
    $Actual = Get-SqlServer "$env:computername\sql2K8"
    Write-Debug $Actual
    Assert-That -ActualValue $Actual -Constraint {$ActualValue.GetType().Name -eq 'Server'}
}

Most of the test functions I've created simply verify the object type, though of course you can develop more complex assertions. This approach works very well for SQLPSX functions that return SMO objects like server, database, table, etc. The samples and documentation for PSUnit have additional examples. Once you create test functions you can easily test and repeat in a matter of minutes. The first time I ran through a complete test I had a failure rate of around 10% of all functions. This means that 10% of the functions never really worked. I thought I had tested everything, but without a framework in place things get missed. As part of the release I made sure every function was tested and passed 100%. I really like the HTML reports PSUnit generates; sample output from a test of the SQLServer module is available here. All SQLPSX test scripts are available in the source code area under "Test Scripts".
Big thanks to Klaus for creating PSUnit; I'm looking forward to seeing the soon-to-be-released version 2.

Process from Pipeline

Embracing the pipeline is part of writing Powershell scripts to be, well, more Powershell-like. In Powershell V1 I adopted a style of writing functions created by Keith Hill, as described in his blog post titled "Writing CMDLETs in PowerShell". The post shows how to write functions that accept both command-line arguments and pipeline input. Powershell V2 makes creating a function that accepts both even easier. As an example, let's look at a Powershell V1 function and the equivalent Powershell V2 function:

Powershell V1 function:
function Get-SqlScripter
{
    param($smo, $scriptingOptions=$(Set-SqlScriptingOptions))
    begin
    {
        function Select-SqlScripter ($smo, $scriptingOptions)
        { $smo.Script($scriptingOptions) } #Select-SqlScripter
    }
    process
    {
        if ($_)
        {
            if ($_.GetType().Namespace -like "Microsoft.SqlServer.Management.Smo*")
            { Write-Verbose "Get-SqlScripter $($_.Name)"
              Select-SqlScripter $_ $scriptingOptions }
            else
            { throw 'Get-SqlScripter:Param $smo must be an SMO object.' }
        }
    }
    end
    {
        if ($smo)
        { $smo | Get-SqlScripter -scriptingOptions $scriptingOptions }
    }
}
Powershell V2 function:
function Get-SqlScripter
{
    param(
    [Parameter(Position=0, Mandatory=$true, ValueFromPipeline = $true)]
    [ValidateScript({$_.GetType().Namespace -like "Microsoft.SqlServer.Management.Smo*"})] $smo,
    [Parameter(Position=1, Mandatory=$false)] [Microsoft.SqlServer.Management.Smo.ScriptingOptions]$scriptingOptions=$(New-SqlScriptingOptions)
    )
    process
    { $smo.Script($scriptingOptions) }
}
The functions can be called from the pipeline:
Get-SqlDatabase "Z002\sql2k8" "pubs" | Get-SqlTable -name "authors" | Get-SqlScripter
OR as a command line argument
$table = Get-SqlDatabase "Z002\sql2k8" "pubs" | Get-SqlTable -name "authors"
Get-SqlScripter $table
Both functions do the same thing; however, the Powershell V2 version is much simpler due to the use of "ValueFromPipeline". This tells Powershell to accept input from the pipeline as well as from the command line without a lot of extra coding.
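The mechanics are easy to see with a toy function (the function here is made up for illustration): the same parameter binds from either source, and the process block runs once per pipeline object.

```powershell
# Toy advanced function; $item binds from the pipeline or positionally.
function Get-Upper {
    param(
        [Parameter(Position=0, Mandatory=$true, ValueFromPipeline=$true)]
        [string]$item
    )
    process { $item.ToUpper() }
}

'pubs' | Get-Upper     # -> PUBS
Get-Upper 'pubs'       # -> PUBS
```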


Comment-Based Help

The ability to add comment-based help to a function is a huge benefit in usability. Prior to Powershell V2's release I contemplated creating compiled cmdlets just so help would be available; I'm glad I waited. There are two ways to create help for scripts: you can either use comment-based help or an external MAML file (for compiled cmdlets, MAML files are your only option). I briefly toyed with the idea of using external MAML files for scripts, however there are limitations in needing to specify an absolute path, plus MAML files are a bit unwieldy to create. My advice: if you're going to create help for scripts or functions, use comment-based help. The syntax for comment-based help is very simple. Here's an example of comment-based help from SQLPSX:
<#
.SYNOPSIS
Scripts an SMO object.
.DESCRIPTION
The Get-SqlScripter function calls the script method for an SMO object(s).
.INPUTS
You can pipe SMO objects to Get-SqlScripter.
.OUTPUTS
Get-SqlScripter returns an array of System.String.
.EXAMPLE
Get-SqlDatabase "Z002\sql2k8" "pubs" | Get-SqlTable | Get-SqlScripter
This command scripts out all user tables in the pubs database.
.EXAMPLE
Get-SqlDatabase "Z002\sql2k8" "pubs" | Get-SqlTable -name "authors" | Get-SqlScripter
This command scripts out the authors table.
.EXAMPLE
$scriptingOptions = New-SqlScriptingOptions
$scriptingOptions.Permissions = $true
$scriptingOptions.IncludeIfNotExists = $true
Get-SqlDatabase "Z002\sql2k8" "pubs" | Get-SqlTable | Get-SqlScripter -scriptingOptions $scriptingOptions
This command scripts out all user tables in the pubs database and passes a scriptingOptions object.
#>
function Get-SqlScripter
I can then use get-help Get-SqlScripter -full to show help output with examples. I wish I could use comment-based help instead of MAML for compiled cmdlets!

new-object -property hashtable

One of the great things about Powershell is the discoverability of objects. If you create a new object you can instantly see the object's properties and methods using Get-Member. There's only one problem: the discoverability aspect tends to break down when the creators of the object model you're using make bad design decisions. Case in point: Microsoft.SqlServer.Replication.ScriptOptions. This enumeration uses a FlagsAttribute to allow bitwise combination of attributes. If this sounds confusing, it is. Fortunately Powershell V2 adds a very clean way of creating objects that allows you to specify a hashtable as input. We can leverage this feature to create a more intuitive replication script options object.
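The basic mechanism looks like this; the hashtable here is an illustrative subset, not the full replication options list:

```powershell
# New in V2: build an object directly from a hashtable of properties.
$opts = New-Object PSObject -Property @{
    Creation = $true
    Deletion = $false
}
$opts.Creation                                  # -> True
$opts | Get-Member -MemberType NoteProperty     # both properties are discoverable
```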
First I created a file, replscriptopts.ps1, containing a hashtable of all the replication scripting options; a subset is included below:
@{
Deletion = $false
Creation = $true
DisableReplicationDB = $false
EnableReplicationDB = $false
IncludeAgentProfiles = $false
}
Next I create a function which creates an object from the file:
function New-ReplScriptOptions
{
    new-object PSObject -property (&"$scriptRoot\replscriptopts.ps1") |
    add-member scriptproperty ScriptOptions `
    {
        $scriptOptions = [Microsoft.SqlServer.Replication.ScriptOptions]::None
        $this | get-member -type NoteProperty | where {$this.($_.Name)} |
            foreach {$scriptOptions = $scriptOptions -bor [Microsoft.SqlServer.Replication.ScriptOptions]::($_.Name)}
        $scriptOptions
    } -passthru
}
The function New-ReplScriptOptions creates a new object using the hashtable as input. The add-member portion adds a scriptproperty that calculates the bitwise representation of all properties whose value is set to true. So, rather than the bizarre bitwise enumeration we started out with, we now have a discoverable object.

I can then create a replication script options object, set the properties I want turned on to true, and then use the object to script out my replication.
$scriptOpt = New-ReplScriptOptions
$scriptOpt.Deletion = $true
$scriptOpt.Creation = $true
#Returns bitwise combination of properties
$scriptOpt.ScriptOptions
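What the ScriptOptions scriptproperty computes is an ordinary bitwise OR. Here is a self-contained sketch using a stand-in [Flags] enum, since the real Microsoft.SqlServer.Replication assembly may not be loaded; the enum name and values are made up:

```powershell
# Stand-in flags enum so the sketch runs without the replication assembly.
Add-Type @'
[System.Flags]
public enum DemoScriptOptions { None = 0, Creation = 1, Deletion = 2 }
'@

$flags = [DemoScriptOptions]::None
# OR together every option turned on, as the scriptproperty does
foreach ($name in 'Creation', 'Deletion') {
    $flags = $flags -bor [DemoScriptOptions]::$name
}
[int]$flags    # -> 3
```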



Conversion Issues

A few issues I ran into during the conversion and remembered to write down…
  • "Cannot find type for custom attribute 'Parameter'. Please make sure the assembly containing this type is loaded." I used this post from Richard Siddaway to resolve it.
  • Be careful with strongly typing parameters. For the most part it's a good thing to strongly type variables, but I've found a few cases where it isn't. I have several functions where I add PSNoteProperties to a strongly typed object. If I then pipe the output to another function which is also strongly typed, the NoteProperties are stripped away, leaving just the original object. The solution is to not strongly type the parameter.
  • The position binder is supposed to be optional; however, if I specify a parameter set it seems to be required in order to use positional parameters.
  • I wasn’t able to do anything more than simple pattern matching with FunctionsToExport in the module manifest. This might be OK, but being able to explicitly list functions to export would be nice. What I ended up doing here is being very careful about adopting a standard prefix within a module.
  • By default all functions within a module are exported (this means they are available for use), however aliases are not. I spent a day wrestling with this issue and posted a StackOverflow question. Although I agree aliases can sometimes confuse things, not exporting by default the aliases I explicitly create within a module is counter-intuitive to the approach taken with functions. My thought is that if I didn't want my aliases exported, why would I create them in my module? I'm sure this was a well-intentioned design decision, but it's probably a little overthought.
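The strongly-typed-parameter gotcha above is easy to reproduce with a toy example (the functions and property names here are made up): binding to a declared type converts the object to its base type, and the added NoteProperty is lost.

```powershell
# Illustrative repro: a [string]-typed parameter strips added NoteProperties.
$obj = "server1" | Add-Member -MemberType NoteProperty -Name Extra -Value 42 -PassThru

function Get-Typed   { param([string]$s) $s.Extra }   # strongly typed: Extra is gone
function Get-Untyped { param($s) $s.Extra }           # untyped: Extra survives

Get-Untyped $obj    # -> 42
Get-Typed $obj      # -> nothing; the cast rebuilt the object
```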
