The Powershell V1 to V2 Conversion

This post is about my experience converting the CodePlex project, SQL Server Powershell Extensions (SQLPSX) Powershell V1 function libraries into PowerShell V2 Advanced functions within modules. 

In order to provide context for people reading this blog post a quick timeline is needed:

  • Powershell V1 was released in November 2006
  • SQLPS, the SQL Server Powershell host that ships with SQL Server 2008, is based on Powershell V1
  • Powershell V2 was released in October 2009
  • Everything you write in Powershell V1 should work in V2
  • SQLPSX is a CodePlex project I started for working with SQL Server and Powershell. The first release was in July 2008 and it has been frequently updated since. A Powershell V2 release was published on 12/31/2009
And with that hopefully the rest of this post makes sense. Let’s take a look at my top six list of Powershell V2 improvements over V1 for script developers: 


Modules

Modules allow a script developer to package functions, scripts, and format files into something very easy to distribute. In Powershell V1 I would create a function library, which is just a script file with related functions. The function library would then need to be dot-sourced to use:
. ./librarySmo.ps1
There were several problems with this approach:
  • Handling related script files and separate function libraries is difficult; usually solved by creating an initialization script and detailed instructions.
  • Loading assemblies
  • Appending format files
Modules make handling the distribution of a set of related files much easier. We simply place the module, which is nothing more than the same old function library with a .psm1 extension, into a directory under Documents\WindowsPowerShell\Modules and optionally add a second special file called a module manifest (more on this later). As an example I have a sqlserver module in the directory Documents\WindowsPowerShell\Modules\sqlserver. I can then import a module instead of dot-sourcing the functions:
import-module sqlserver
The module and manifest file contain the necessary information about processing assemblies, related script files, and nested modules. So, converting function libraries to modules involves little more than renaming .ps1 files to the module file extension .psm1 and placing each file into its own directory under Documents\WindowsPowerShell\Modules. But if that's all you are going to do, there is little value in creating modules. Moving from Powershell V1 scripts to V2 modules should also include taking advantage of many of the Powershell V2 features described in this blog post.
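To make this concrete, here's a minimal sketch (the module and function names are hypothetical, not from SQLPSX) using New-Module, which builds in memory the same kind of module a .psm1 file defines on disk:

```powershell
# Hypothetical example: a dynamic module equivalent to a one-function .psm1 file
$m = New-Module -Name MyUtil -ScriptBlock {
    function Get-MyUtilGreeting {
        param([string]$Name = 'World')
        "Hello, $Name!"
    }
}
$m | Import-Module            # same effect as: import-module MyUtil
Get-MyUtilGreeting 'SQLPSX'   # Hello, SQLPSX!
```

Saving the same function in Documents\WindowsPowerShell\Modules\MyUtil\MyUtil.psm1 and running import-module MyUtil gives the identical result.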
A word about binary modules: SQLPSX is mostly implemented as Powershell script modules; there are, however, a couple of compiled cmdlets used for parsing and formatting T-SQL scripts: Test-SqlScript and Out-SqlScript. Converting a compiled snapin dll to a module is just as easy as converting script-based function libraries: you simply copy the snapin dll and any required assemblies to their own directory under Documents\WindowsPowerShell\Modules. This is exactly what I've done with the SQLParser module. I've also added a module manifest (psd1 file).
This brings us to module manifests, which are basically processing instructions for modules. Module manifest (psd1) files, created by the New-ModuleManifest cmdlet, allow us to do several things:
  • Make functions private by pattern matching on the FunctionsToExport property. As an example, in the SQLServer module I specify FunctionsToExport = '*-SQL*'. This tells Powershell to export only functions matching the *-SQL pattern. I have several helper functions that I don't want to export, so I simply use a different prefix or none at all to avoid exporting them.
  • Import assemblies automatically by making use of the RequiredAssemblies property
  • Nest modules i.e. import child modules with NestedModules property

The manifest files themselves are really easy to create. After you've created a module (.psm1), run New-ModuleManifest and enter the information when prompted.
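You can also supply everything up front instead of answering prompts. A hedged sketch follows; the paths, names and values are illustrative, not the actual SQLPSX manifest:

```powershell
# Illustrative only -- not the real SQLPSX manifest contents
New-ModuleManifest -Path .\sqlserver.psd1 `
    -ModuleToProcess 'sqlserver.psm1' `
    -FunctionsToExport '*-SQL*' `
    -Description 'SQL Server functions (example)'
```

The resulting .psd1 is a plain hashtable of settings you can edit by hand later, for example to add RequiredAssemblies or NestedModules entries.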

Simplified Error Checking

The try/catch error handling added to Powershell V2 is much easier to work with and understand than its predecessor in Powershell V1, trap and throw. The construct is especially handy when dealing with SMO errors, which sometimes use nested error objects.
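A generic sketch of the pattern (not SQLPSX code; the failing .NET call stands in for a failing SMO call): catch the terminating error and walk the InnerException chain, which is where SMO usually buries the real message.

```powershell
try {
    [int]::Parse('not a number')    # stand-in for a failing SMO call
}
catch {
    # Walk nested exceptions the way you would for SMO's wrapped errors
    $ex = $_.Exception
    while ($ex) {
        Write-Host $ex.Message
        $ex = $ex.InnerException
    }
}
```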
Both ValidateScript and ValidateSet reduce the input validation code I need to write. I think this is best illustrated by a couple of examples from SQLPSX functions.
The param section below uses ValidateSet to ensure values are either Data or Log:
    [Parameter(Position=0, Mandatory=$true)] $sqlserver,           
    [ValidateSet("Data", "Log")]           
    [Parameter(Position=1, Mandatory=$true)] [string]$dirtype           
This second param section uses ValidateScript to check, via its namespace, that the input object is an SMO object.
  [Parameter(Position=0, Mandatory=$true, ValueFromPipeline = $true)]            
  [ValidateScript({$_.GetType().Namespace -like "Microsoft.SqlServer.Management.Smo*"})] $smo,            
  [Parameter(Position=1, Mandatory=$false)] [Microsoft.SqlServer.Management.Smo.ScriptingOptions]$scriptingOptions=$(New-SqlScriptingOptions)            
Between ValidateSet and ValidateScript I’m able to handle most input validation checks that in Powershell V1 would have required more code.
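To see what the attributes buy you, here's a toy function (hypothetical, not from SQLPSX) showing how an invalid argument is rejected during parameter binding, before the function body ever runs:

```powershell
function Get-DirType {
    param(
        [ValidateSet('Data','Log')]
        [Parameter(Position=0, Mandatory=$true)] [string]$dirtype
    )
    $dirtype
}

Get-DirType 'Data'                                 # passes validation
try { Get-DirType 'Backup' } catch { 'rejected' }  # binding fails; body never runs
```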


Unit Testing

OK, so this item really isn't about Powershell V2; rather it's a change in process for me. As part of the conversion I wanted to adopt a testing framework and perform more rigorous testing. I first heard of a Powershell-based xUnit testing framework on the PowerScripting podcast episode 80, in which Jon and Hal interviewed Klaus Graefensteiner about his CodePlex project PSUnit. So I decided to try PSUnit and I've been very happy with the results. Following the directions on the PSUnit site, it is a cinch to set up. PSUnit integrates with the Powershell ISE; a menu item is added to execute unit tests:


It should be noted that although I'm using PSUnit to test Powershell functions, this doesn't mean that's all it's good for. In fact the purpose of PSUnit is to perform full unit testing of your .NET applications. You can test just about anything (.NET, COM, etc.). For my purposes I'm interested in testing my own Powershell functions. As a script developer the easiest thing you can do with PSUnit is to create a test function for each of your functions and verify the output object is the type you expected. Here's an example test function for Get-SqlServer:

function Test.Get-SqlServer([switch] $Category_GetSql)
{
    $Actual = Get-SqlServer "$env:computername\sql2K8"
    Write-Debug $Actual
    Assert-That -ActualValue $Actual -Constraint {$ActualValue.GetType().Name -eq 'Server'}
}

Most of the test functions I've created simply verify the object type, though of course you can develop more complex assertions. This approach works very well for SQLPSX functions that return SMO objects like server, database, and table. The samples and documentation for PSUnit have additional examples. Once you create test functions you can easily test and repeat in a matter of minutes. The first time I ran through a complete test I had a failure rate of around 10% of all functions. This means that 10% of the functions never really worked. I thought I had tested everything, but without a framework in place things get missed. As part of the release I made sure every function was tested and passed 100%. I really like the HTML reports PSUnit generates. Sample output from a test of the SQLServer module is available here. All SQLPSX test scripts are available in the source code area under "Test Scripts".
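PSUnit's runner does the heavy lifting, but the core xUnit idea can be sketched in a few lines of plain Powershell. This is an illustration of the concept only, not PSUnit's implementation: discover Test.* functions and report pass/fail for each.

```powershell
# Conceptual sketch only -- PSUnit does this (and much more) for you
function Test.Addition { if (1 + 1 -ne 2) { throw 'addition is broken' } }
function Test.Split    { if (('a,b' -split ',').Count -ne 2) { throw 'split is broken' } }

foreach ($t in Get-ChildItem function:Test.*) {
    $passed = $true
    try { & $t } catch { $passed = $false }
    New-Object PSObject |
        Add-Member -pass NoteProperty Test   $t.Name |
        Add-Member -pass NoteProperty Passed $passed
}
```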
Big thanks to Klaus for creating PSUnit. I'm looking forward to seeing the soon-to-be-released version 2.

Process from Pipeline

Embracing the pipeline is part of writing Powershell scripts that are, well, more Powershell-like. In Powershell V1 I adopted a style of writing functions created by Keith Hill, as described in his blog post titled "Writing CMDLETs in PowerShell". The post shows how to write functions that accept both command-line arguments and pipeline input. Powershell V2 makes this even easier. As an example, let's look at a Powershell V1 function and the equivalent Powershell V2 function:

Powershell V1 function:
function Get-SqlScripter
{
    param($smo, $scriptingOptions=$(Set-SqlScriptingOptions))
    begin
    {
        function Select-SqlScripter ($smo, $scriptingOptions=$(Set-SqlScriptingOptions))
        { $smo.Script($scriptingOptions) } #Select-SqlScripter
    }
    process
    {
        if ($_)
        { if ($_.GetType().Namespace -like "Microsoft.SqlServer.Management.Smo*")
          { Write-Verbose "Get-SqlScripter $($_.Name)"
            Select-SqlScripter $_ $scriptingOptions }
          else
          { throw 'Get-SqlScripter:Param `$smo must be an SMO object.' } }
    }
    end
    { if ($smo)
      { $smo | Get-SqlScripter -scriptingOptions $scriptingOptions } }
}
Powershell V2 function:
function Get-SqlScripter
{
    param(
    [Parameter(Position=0, Mandatory=$true, ValueFromPipeline = $true)]
    [ValidateScript({$_.GetType().Namespace -like "Microsoft.SqlServer.Management.Smo*"})] $smo,
    [Parameter(Position=1, Mandatory=$false)] [Microsoft.SqlServer.Management.Smo.ScriptingOptions]$scriptingOptions=$(New-SqlScriptingOptions)
    )
    process
    { $smo.Script($scriptingOptions) }
}
The functions can be called from the pipeline:
Get-SqlDatabase "Z002\sql2k8" "pubs" | Get-SqlTable -name "authors" | Get-SqlScripter
OR as a command line argument
$table = Get-SqlDatabase "Z002\sql2k8" "pubs" | Get-SqlTable -name "authors"
Get-SqlScripter $table
Both functions do the same thing, but the Powershell V2 version is much simpler. The "ValueFromPipeline" attribute tells Powershell to accept input from the pipeline as well as the command line without a lot of extra coding.
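The same dual calling convention can be tried with a self-contained toy function (hypothetical, no SMO required):

```powershell
function Get-NameLength {
    param(
        [Parameter(Position=0, Mandatory=$true, ValueFromPipeline=$true)] [string]$name
    )
    process { $name.Length }
}

'pubs','authors' | Get-NameLength   # pipeline input: 4, 7
Get-NameLength 'pubs'               # command argument: 4
```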


Comment-Based Help

The ability to add comment-based help to a function is a huge benefit in usability. Prior to Powershell V2's release I contemplated creating compiled cmdlets just so help would be available; I'm glad I waited. There are two ways to create help for scripts: you can either use comment-based help or an external MAML file (for compiled cmdlets, MAML files are your only option). I briefly toyed with the idea of using external MAML files for scripts, however there are limitations, such as needing to specify an absolute path, plus MAML files are a bit unwieldy to create. My advice: if you're going to create help for scripts or functions, use comment-based help. The syntax is very simple. Here's an example of comment-based help from SQLPSX:
<#
.SYNOPSIS
Scripts an SMO object.
.DESCRIPTION
The Get-SqlScripter function calls the script method for an SMO object(s).
.INPUTS
You can pipe SMO objects to Get-SqlScripter.
.OUTPUTS
Get-SqlScripter returns an array of System.String.
.EXAMPLE
Get-SqlDatabase "Z002\sql2k8" "pubs" | Get-SqlTable | Get-SqlScripter
This command scripts out all user tables in the pubs database.
.EXAMPLE
Get-SqlDatabase "Z002\sql2k8" "pubs" | Get-SqlTable -name "authors" | Get-SqlScripter
This command scripts out the authors table.
.EXAMPLE
$scriptingOptions = New-SqlScriptingOptions
$scriptingOptions.Permissions = $true
$scriptingOptions.IncludeIfNotExists = $true
Get-SqlDatabase "Z002\sql2k8" "pubs" | Get-SqlTable | Get-SqlScripter -scriptingOptions $scriptingOptions
This command scripts out all user tables in the pubs database and passes a scriptingOptions.
#>
function Get-SqlScripter
I can then use get-help Get-SqlScripter -full to show help output with examples. I wish I could use comment-based help instead of MAML for compiled cmdlets!

new-object -property hashtable

One of the great things about Powershell is the discoverability of objects. If you create a new object you can instantly see the object's properties and methods using Get-Member. There's only one problem: discoverability tends to break down when the creators of the object model you're using make bad design decisions; case in point, Microsoft.SqlServer.Replication.ScriptOptions. This enumeration uses a FlagsAttribute to allow bitwise combination of attributes. If this sounds confusing, it is. Fortunately Powershell V2 adds a very clean way of creating objects that allows you to specify a hashtable as input. We can leverage this feature to create a more intuitive replication script options object.
First I created a file, replscriptopts.ps1, with a hashtable of all the replication scripting options; a subset is included below:
@{
Deletion = $false
Creation = $true
DisableReplicationDB = $false
EnableReplicationDB = $false
IncludeAgentProfiles = $false
}
Next I create a function which creates an object from the file:
function New-ReplScriptOptions
{
    new-object PSObject -property (&"$scriptRoot\replscriptopts.ps1") | add-member scriptproperty ScriptOptions `
    {
        $scriptOptions = [Microsoft.SqlServer.Replication.ScriptOptions]::None
        $this | get-member -type NoteProperty | where {$this.($_.Name)} |
                foreach {$scriptOptions = $scriptOptions -bor [Microsoft.SqlServer.Replication.ScriptOptions]::($_.Name)}
        $scriptOptions
    } -passthru
}
The function New-ReplScriptOptions creates a new object using the hashtable as input. The Add-Member portion adds a script property that calculates the bitwise representation of all properties whose value is set to true. So rather than the bizarre bitwise enumeration we started out with, we now have a discoverable object.

I can then create a replication script options object, set the properties I want turned on to true, and use the object to script out my replication.
$scriptOpt = New-ReplScriptOptions
$scriptOpt.Deletion = $true
$scriptOpt.Creation = $true
$scriptOpt.ScriptOptions   #Returns bitwise combination of properties
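The same hashtable-to-flags trick works against any [Flags] enumeration. Here's a self-contained sketch using the built-in RegexOptions enum in place of the SMO replication enum:

```powershell
# Same idea as New-ReplScriptOptions, but against a built-in [Flags] enum
$opts = New-Object PSObject -Property @{ IgnoreCase = $true; Multiline = $false; Compiled = $true }
$flags = [System.Text.RegularExpressions.RegexOptions]::None
$opts | Get-Member -Type NoteProperty | Where-Object { $opts.($_.Name) } |
    ForEach-Object { $flags = $flags -bor [System.Text.RegularExpressions.RegexOptions]::($_.Name) }
$flags   # IgnoreCase, Compiled
```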



Gotchas

A few issues I ran into during the conversion and remembered to write down:
  • Cannot find type for custom attribute 'Parameter'. Please make sure the assembly containing this type is loaded. I used this post from Richard Siddaway to resolve it.
  • Be careful with strongly typing parameters. For the most part it's a good thing to strongly type variables, but I've found a few cases where it isn't. I have several functions where I add PSNoteProperties to a strongly typed object. If I then pipe the output to another function which is also strongly typed, the note properties are stripped away, leaving just the original object. The solution is to not strongly type the parameter.
  • The position binder is supposed to be optional; however, if I specify a parameter set, it seems to be required in order to use positional parameters.
  • I wasn't able to do anything more than simple pattern matching with FunctionsToExport in the module manifest. This might be OK, but being able to explicitly list functions to export would be nice. What I ended up doing is being very careful about adopting a standard prefix within a module.
  • By default all functions within a module are exported (this means they are available for use); however, aliases are not. I spent a day wrestling with this issue and posted a Stack Overflow question. Although I agree aliases can sometimes confuse things, not exporting by default the aliases I explicitly create within a module is counter-intuitive given the approach taken with functions. My thought is that if I didn't want my aliases exported, why would I create them in my module? I'm sure this was a well-intentioned design decision, but it's probably a little overthought.

Posted in PowerShell | 1 Comment

The Black Art of PowerShell V2 Version Numbers

Last week, while helping someone in the SQLPSX forums who was having an issue importing modules, I suspected they had a CTP version of Powershell, but being the skeptical person I am I needed proof. My first thought was there must be a simple built-in command to return the Powershell version number. In fact there is: $PSVersionTable. This built-in variable was introduced in Powershell V2. If you run $PSVersionTable in Powershell V1, nothing is returned. If you run it on Powershell V2, you'll get a table of version information.
There's one problem: the version number information returned from $PSVersionTable differs per OS platform, and there isn't a single field that returns a consistent version number across platforms. For example, on my x86 Vista machine the Powershell BuildVersion is 6.0.6002.18111, and on my x86 Windows 7 machine it is 6.1.7600.16385, yet both are RTM Powershell V2.
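You can see this for yourself by comparing the fields: PSVersion is consistent at the major/minor level, while BuildVersion carries OS-specific build numbers.

```powershell
# On Powershell V2 and later; in V1 $PSVersionTable does not exist at all
$PSVersionTable.PSVersion      # e.g. 2.0
$PSVersionTable.BuildVersion   # OS-dependent on V2, e.g. 6.0.6002.18111 on Vista
```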
For someone coming from a SQL Server background this is surprising. I've run SQL Server on x86, x64 and IA-64 platforms with various operating systems; however, @@version returns one version number regardless of platform. Of course there are other pieces of data showing the H/W platform and OS if needed, but in most cases I just want to see the base version number.
Armed with this information, I tweeted a question on Friday, Jan 22nd: "How do you tell if someone has a CTP version of Powershell?" Powershell MVP Max Trinidad quickly picked up on the Twitter thread, tested a few things, blogged, and involved other MVPs. The result is a blog post that helped me help someone else. Within a couple of days there were more blog posts on finding CTP versions…
As a result of this exercise I learned several things:

The Powershell community and product team are awesome

OK, I already knew this, but how cool is it that you can tweet a question and have bunch people mobilize to help!

Powershell version numbering is wacked

Let's fix this in Version 3. Please vote for my Connect item to add a version property consistent across all platforms to $PSVersionTable.

Twitter moves faster than blogs. Blogs move faster than support articles

If I have a question that can easily be expressed in under 140 characters and isn't too obscure, I'll use Twitter. Usually I get some really good answers. There used to be a time when the first step to troubleshooting problems with Microsoft products was to search the KB or, going back in the real olden days, the TechNet CDs. Today the idea of looking at a KB article is often an afterthought, done when Google/Bing turn up nothing. It would seem even product teams would rather blog than initiate a knowledge article. Not that this is a bad thing; personally I'd rather have the information delivered faster in a blog post. Support articles are generated by customer calls: when several customers call about the same issue, a KB article is published. So by putting the information out in a blog, this may reduce customer calls, which then means no KB article. One last thought on KB articles: not only are they slow to produce, but because they are purely text based they simply haven't kept pace with how people like to see information. Some of the most helpful blog posts I've found for troubleshooting an issue include screen prints or even a video. More and more often I'll find the answer to a problem or setup question in some helpful person's blog.

For some reason which I don’t fully understand the Powershell team can’t pull down the Powershell CTP download

The problem with not pulling CTP releases is that people mistakenly grab a CTP instead of a released version. This problem is compounded by search engines returning the Powershell Version 2 CTP download when searching for "Powershell Version 2 Download." I don't know whether Powershell bloggers who included links to CTP downloads in posts prior to release contribute to the search engine problem, but to be safe my suggestion is: don't include links to CTP releases in posts about unreleased products. CTP releases are generally pulled from download shortly after a product is released. I don't know what common practice is as far as timing, but I do know I can't find old CTP releases of SQL Server (or maybe I'm not looking hard enough). If you have insight into CTP releases, please comment.

Posted in PowerShell

SQL Saturday #32

I presented a one-hour session at SQL Saturday #32 in Tampa on Powershell ETL: "In this session we will look at performing common data loading tasks with Powershell. A basic understanding of PowerShell is helpful, but not necessary. Specific topics covered include importing structured files, XML, WMI objects and ADO.NET data sources."
SQL Saturdays and Code Camps usually host a Powershell track, and Tampa SQL Saturday was no exception. Aaron Nelson and Ron Damron also presented complementary Powershell sessions on Powershell for Data Professionals and Database Hardening via Powershell, respectively. Between the three of us we had a half day of Powershell on Saturday!
My thanks to everyone in attendance. I hope to see many of you at our first Tampa Powershell User Group meeting on March 11, 2010. Feel free to post questions and comments. The presentation and supporting materials for the Powershell ETL session are available here:
Posted in PowerShell

Hello SMO (F#) World!

Reading The F# Survival Guide has motivated me to write my version of an F# "Hello World!" utility; that is, to write something simple that I've written in other programming languages as a learning exercise. In my world of databases I use SMO (pronounced smoh or S-M-O). One of the easiest things I can do is write some code to script out SQL Server tables.
I'm going to use the F# interactive console, fsi.exe, that ships with F#. The only installation needed is F# and SMO version 10, which is included with SQL Server 2008 Management Studio. On my machine, using the Oct 2009 CTP version, the path to fsi.exe is under C:\Program Files\FSharp- (the folder name includes the CTP version number). To run the interactive console, open a command window, navigate to the bin directory, and run fsi.exe. Once in the interactive console you can either type or paste the F# code to run. Let's take a look at the code and then I'll provide a short explanation:
#I @"C:\Program Files\Microsoft SQL Server\100\SDK\Assemblies";;
#r "Microsoft.SqlServer.Smo.dll";;
#r "Microsoft.SqlServer.ConnectionInfo.dll";;
open Microsoft.SqlServer.Management.Smo
open Microsoft.SqlServer.Management.Common
let svr = Server(@"Z002\SQL2K8")
let db = svr.Databases.["pubs"]
for t in db.Tables do
    for s in t.Script() do
    printfn "%s" s;;


  • The first three lines are not comments; they are used to resolve the assembly path and reference the SMO assemblies. These lines are specific to the interactive console; if you're using Visual Studio you would add references as you normally would.
  • F# is case sensitive
  • Whitespace is important
  • You use a dot before brackets to access an indexed element, which is different from other languages
  • Double semi-colons terminate a command in the interactive console
  • The @ sign is used for verbatim strings (like here-strings), in which special characters such as backslashes do not need escaping
  • The above example isn't very F#-like, as F# favors functions and recursion over imperative looping, but this is just a simple example
  • Although it may not look like it, F# is strongly typed. It uses type inference to determine types, and you can explicitly type items

EDIT Jan 24, 2010: Tony Davis blogged about this post in his article Life at the F# end, providing a revised solution that is more F#-like:

db.Tables
   |> Seq.cast
   |> Seq.collect (fun (t:Table) -> t.Script() |> Seq.cast)
   |> Seq.iter (fun s -> printfn "%s" s);;

You'll need to read the article for an explanation of the F# code. Tony also suggests F# as a common scripting language for both developers and administrators. My thought on the subject is that Powershell is the common scripting language for administrators, but perhaps F# has a niche use case for administrators needing better scale; I would love to see more practical examples of F# administration scripts. Be sure to read the comments section, in which I explain my reasons for exploring F# out of a need to achieve some concurrency missing from Powershell. Oh, and I also apologize for making someone's teeth itch with my use of imperative looping in F#.


Posted in SQL Server

Finding Invalid SQL Logins

As many of you know, the system stored procedure sp_validatelogins is used for finding invalid logins. Although sp_validatelogins is useful, there's one problem: the output isn't always accurate. You see, when you add a Windows account to SQL Server, both the SID and the domain (or computer name) plus account name are stored in the master database. If the account is renamed in Active Directory, or, in the case of local users, on the local system, the account still retains access to SQL Server. How is this possible? Because the SID is unchanged, and that is what SQL Server uses. When you run sp_validatelogins, the account name is validated but not the SID, so a valid but renamed account is returned.
So what we need to do is make sp_validatelogins accurate by resolving the SID against Active Directory or the local system. As an added bonus, we should return the renamed account name. Fortunately this is pretty easy with a little Powershell script. The following is a standalone excerpt from SQL Server PowerShell Extensions, edited to work with Microsoft's sqlps:
function Get-InvalidLogins
{
    param($ServerInstance)
    foreach ($r in Invoke-SqlCmd -ServerInstance $ServerInstance -Database 'master' -Query 'sp_validatelogins')
    {
        $NTLogin = $r.'NT Login'
        $SID = new-object security.principal.securityidentifier($r.SID,0)
        $newAccount = $null
        trap { $null; continue } $newAccount = $SID.translate([security.principal.ntaccount])
        if ($newAccount -eq $null)
        {
            $isOrphaned = $true
            $isRenamed = $false
        }
        else
        {
            $isOrphaned = $false
            $isRenamed = $true
        }
        if ($NTLogin -ne $newAccount)
        {
            new-object psobject |
            add-member -pass NoteProperty NTLogin $NTLogin |
            add-Member -pass NoteProperty TSID $SID |
            add-Member -pass NoteProperty Server $ServerInstance |
            add-Member -pass NoteProperty IsOrphaned $isOrphaned |
            add-Member -pass NoteProperty IsRenamed $isRenamed |
            add-Member -pass NoteProperty NewNTAccount $newAccount
        }
    }
} #Get-InvalidLogins
To use the script, simply copy and paste the function definition into a sqlps session, or alternatively add the function to your Windows Powershell profile.
Next simply call the function specifying a SQL Server instance:
Get-InvalidLogins "Z002\SQL2K8"
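The SID-to-name translation at the heart of the function can be tried on its own. A minimal sketch using the well-known SID S-1-5-18 (the Local System account); the name lookup itself only succeeds on Windows:

```powershell
# Construct a SID and (on Windows) translate it back to an account name
$sid = New-Object System.Security.Principal.SecurityIdentifier('S-1-5-18')
$sid.Value    # S-1-5-18
try   { $sid.Translate([System.Security.Principal.NTAccount]).Value }   # NT AUTHORITY\SYSTEM
catch { 'translation unavailable (non-Windows or unresolvable SID)' }
```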


Credits and History 

The original idea for the code came from a blog post which uses a CLR solution.  In my pre-Powershell days (2006) I created this Perl script.
Posted in PowerShell

PowerShell V2 Remoting

To use remoting you'll need to install Powershell V2; unless you're running Windows 2008 R2 or Windows 7, you'll need to grab the latest version of the Windows Management Framework. The Framework includes PowerShell V2 plus WinRM. What's WinRM? It's the service included in Windows 2008 and Vista that provides the remoting infrastructure used by PowerShell V2; the installation simply updates the binaries to Windows 2008 R2 and Windows 7 levels. The other thing to note is that Windows 2003 and XP are also now supported. Make sure you have V2 on both your server and client. From a security standpoint, remoting is disabled by default, requires an elevated administrator session to enable, and once enabled only allows administrators to connect by default. If you're interested in security I suggest reading about the internals of WinRM and the underlying protocol, WS-MAN. In my environment I use SCCM to push out Microsoft updates to servers, and since PowerShell V2 is available as a Windows Update, SCCM can push it out. Note: unlike PowerShell V1, V2 requires a reboot, so schedule accordingly.
In PowerShell V1 and V2, some cmdlets and .NET classes have always supported the concept of remote connections. For instance, Get-WMIObject takes a ComputerName parameter that does not rely on the new remoting infrastructure; when using the SMO Server class you can specify a remote SQL Server instance; and the same is true with ADO.NET. There are, however, cases where command-line programs don't provide native support for remoting. In these situations the remoting capabilities of PowerShell V2 are going to be very useful.
Before setting up remoting, like you, I searched the internet looking for blogs, articles and documentation. Unfortunately I wasn't that lucky when it came to finding PowerShell remoting topics. There are blog posts related to WS-MAN, WinRM, WinRS, and the WS-MAN provider with different fringe use cases, some of which will lead you down a rabbit hole describing how to set up remoting in some obscure non-PowerShell, CTP or non-default manner. The only thing I want to do is enable plain vanilla remoting between domain-attached computers.
This is unfortunate because setting up remoting is, as we will soon see, very simple. What's surprising is where I found the best documentation on remoting and PowerShell in general: my own PowerShell console, using Get-Help. This was one of those duh moments. We know that PowerShell has a verb-noun naming convention and you can discover commands based on that convention. You can use Get-Command to see what you might be looking for and then use Get-Help to view the documentation. But what if you're not sure of the command to execute and have a question on concepts? That's when you should take a look at the about_* topics. For remoting specifically, look at get-help about_remote_FAQ (to see all about topics run get-help about_*):
Reading through about_remote_FAQ you'll see a heading entitled "HOW TO CONFIGURE YOUR COMPUTER FOR REMOTING", which is exactly what I was looking for.
From the server computer, run (NOTE: You only have to run this command one time to enable remoting. It must be run from an elevated prompt):
PS C:\> enable-psremoting
You should see the following output:

WinRM Quick Configuration
Running command "Set-WSManQuickConfig" to enable this machine for remote
management through WinRM service.
 This includes:
    1. Starting or restarting (if already started) the WinRM service
    2. Setting the WinRM service type to auto start
    3. Creating a listener to accept requests on any IP address
    4. Enabling firewall exception for WS-Management traffic (for http only).

Do you want to continue?
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help
(default is "Y"):Y
WinRM already is set up to receive requests on this machine.
WinRM has been updated for remote management.
Created a WinRM listener on HTTP://* to accept WS-Man requests to any IP on this machine.


Then execute the following command to test remoting (this will connect to the local host):
PS C:\> new-pssession
You should see the following, showing an open local connection:

 Id Name            ComputerName    State    ConfigurationName     Availability
 -- ----            ------------    -----    -----------------     ------------
  1 Session1        localhost       Opened   Microsoft.PowerShell     Available
Having completed the server-side setup, next take a look at get-help about_remote. This document will walk you through the three main remoting scenarios of interactive session, remote command, commands in a session:
From the client machine (i.e. your workstation), start an interactive session with the server; in this case Z002 is the remote server.
PS C:\> enter-pssession Z002
[Z002]: PS C:\> $env:computername
When finished, close the remote connection:
[Z002]: PS C:\> Exit-PSSession

To execute a non-interactive remote command, use the invoke-command cmdlet, specifying a remote server:

PS C:\> invoke-command -computername Z002 {$env:computername}

The final remoting method is to create a session. This is useful when you want to execute a series of commands. Create a session object and specify the object ($s) as a parameter to invoke-command. If you have additional commands, keep specifying $s as a parameter:

PS C:\> $s = new-pssession Z002
PS C:\> invoke-command -Session $s {$env:computername}
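When you're done with a reusable session, it's worth cleaning up, since open sessions hold resources on the remote end. A sketch reusing the example server name Z002 from above:

```powershell
# Create, reuse, and clean up a session (Z002 is the example server from above)
$s = New-PSSession Z002
Invoke-Command -Session $s { $env:computername }
Invoke-Command -Session $s { Get-Service WinRM }   # reuse the same session
Remove-PSSession $s                                # release the remote resources
```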

Remoting between two domain-attached computers is pretty easy. Although the same help topics you'll find in Get-Help are available online, I'm purposely not linking to them. The help topics available within the PowerShell console are pretty good, so put down Google/Bing and start using Get-Help!

Posted in PowerShell

Tampa Code Camp 2009

I presented a one-hour session at Tampa Code Camp 2009, PowerShell for Developers: "In this session we will look at the main use cases for developer usage of PowerShell. An overview of the PowerShell development model will be provided. Specific topics covered include ETL, testing, deployment automation, and management APIs."
My thanks to everyone in attendance. Feel free to post questions and comments. The presentation and supporting materials are available here:
Posted in PowerShell