How to optimize and speed up your PowerShell scripts

My best practices, tips and tricks working with PowerShell

This blog post is not deeply technical. I will not go too deep into each tip and trick, because much of this comes automatically once you have more experience and are ready for it.

I have a separate blog post about best practices, tips, and tricks for the Microsoft Graph API and PowerShell. Two of its sections are quite useful for this post as well, although they are more technical.
I'm sharing both sections here, so you can decide whether they are usable for you right now, or maybe later.

Left filtering, i.e. filter as close to the source as possible, ideally in the first cmdlet on a line.

Use Out-Null or $null when you do not need to do anything with the results

Have you completed your script and do you know what the input and output are? Then I would hide the results of cmdlets whose output I do nothing with.

You can do this with $null or Out-Null. From experience I know that assigning to $null is slightly faster. I suspect this is because Out-Null is a cmdlet, so piping to it adds pipeline overhead, while a $null assignment skips the pipeline entirely.
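For context, these are the common ways to suppress output. In my experience the $null assignment and the [void] cast tend to be fastest because they skip the pipeline entirely; treat this as a sketch, not a benchmark:

```powershell
# Piping to Out-Null invokes a cmdlet, so it carries pipeline overhead
Get-Process | Out-Null

# Assigning to $null discards the output without involving the pipeline
$null = Get-Process

# Casting to [void] also skips the pipeline (the parentheses are required)
[void](Get-Process)

# Redirecting to $null is roughly comparable to Out-Null
Get-Process > $null
```

Any of these keeps the console clean; the differences only start to matter in tight loops.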

For example, Connect-AzAccount outputs a result I do not necessarily need for my script.


Account            SubscriptionName TenantId
-------            ---------------- --------
[email protected] Subscription1    TenantId1

So, when I would call Connect-AzAccount, I’d do this:

$null = Connect-AzAccount

You can also capture the results and only show them when verbose logging is on. This way you have more control over the input and output. Write-Verbose and logging will come up later in this blog!

$ConnectAzAccountResult = Connect-AzAccount
Write-Verbose "SubscriptionName: $($ConnectAzAccountResult.Context.Subscription.Name)"

You shouldn't do this with sensitive data though; I'd say you should use $null in that case. Otherwise the results stay around in a variable in memory.

You shouldn’t use a Where-Object in a foreach condition!

In PowerShell, the Where-Object cmdlet is used to filter objects based on a specified condition. The foreach statement is used to loop through each object in a collection.

While you can technically use a Where-Object within a foreach loop, I don’t recommend it.

When you place a Where-Object inside the foreach condition, the pipeline is still only evaluated once, before the loop starts iterating, so in practice the performance difference compared to filtering up front is small (a reader makes this point in the comments below, and results vary per machine). The bigger cost is clarity: the filter gets mixed into the loop header.

Combining a Where-Object with a foreach loop can make your code less clear and harder to understand.

This is harder to read because the Where-Object filter is buried in the loop header:

$Processes = Get-Process
foreach ($Process in $Processes | Where-Object {$_.ProcessName -eq 'Teams'}) {
    # do something with $Process
}

By filtering the collection before the loop, you keep the loop header simple and the code more readable.

$Processes = Get-Process
$Processes = $Processes | Where-Object {$_.ProcessName -like "Tea*"}
foreach ($Process in $Processes) {
    # do something with $Process
}

And left filtering would optimize this the most of course.

$Processes = Get-Process -Name "Tea*"
foreach ($Process in $Processes) {
    # do something with $Process
}

And you can also use this from a previous post!

Try and catch for error handling is extremely important

Using try and catch helps you control how your script handles errors, making it more robust and preventing it from doing things you do not want it to do!

It's valuable when writing PowerShell scripts that interact with external resources (like the Graph API), user input, or even the device it's running on, as it allows you to manage exceptions and provide feedback to the console, ticketing system, or user using your script.

A good example that includes all of these things comes from one of my latest blog posts.

#region Must have modules before starting runbook
$VerbosePreference = 'SilentlyContinue'
$i = 0
$MaxSeconds = 300
$tryEvery = 5
$ImportSuccessful = $false
while (($i -lt $MaxSeconds) -and ($ImportSuccessful -ne $true)) {
    try {
        $null = Import-Module Az.Accounts -ErrorAction Stop
        $ImportSuccessful = $true
    }
    catch {
        Write-Warning "Importing Az.Accounts failed, retrying in $tryEvery seconds ($i of $MaxSeconds seconds elapsed)"
        $i = $i + $tryEvery
        Start-Sleep -Seconds $tryEvery
    }
}
if ($ImportSuccessful -ne $true) {
    $ImportAttempts = $MaxSeconds / $tryEvery
    throw "Importing Az.Accounts failed after $ImportAttempts attempts and $MaxSeconds seconds, exiting script"
}
$VerbosePreference = 'Continue'
#endregion Must have modules before starting runbook

Without the try {} catch {}, the script would just try to import the module, fail, and continue. With it, the script retries the import until it either succeeds or runs out of time, and when it does fail, the throw stops the script from running any further.

And the try {} catch {} block including the throw gives you more room for verbose and logging.
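A minimal sketch of that idea (the file path is made up for the example): inside the catch block, $_ holds the error record, which you can log with Write-Verbose before deciding to rethrow.

```powershell
try {
    # -ErrorAction Stop turns non-terminating errors into terminating ones,
    # so the catch block actually runs
    Get-Item -Path 'C:\DoesNotExist.txt' -ErrorAction Stop
}
catch {
    # $_ is the error record; log its message before rethrowing
    Write-Verbose "Get-Item failed: $($_.Exception.Message)"
    throw
}
```

The bare throw rethrows the original error, so the caller still sees the real exception while you keep your verbose trail.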

You should use a type of logging in your scripts

Logging makes troubleshooting so much easier. I don't mean logging to e.g. Log Analytics, but direct logging in your script that you can easily call, like Write-Verbose.

But why isn't try {} catch {} enough? An error doesn't always occur in your code; it may simply be that the input or output does not match what you had in mind.

Using e.g. Write-Verbose you can see what the array count is, what the name is, or what the output actually is.

$Services = Get-Service -Name "Microsoft*" 
Write-Verbose "Services Count: $($Services.Count), Name: $($Services.Name), State: $($Services.Status)"

VERBOSE: Services Count: 1, Name: MicrosoftEdgeElevationService, State: Stopped

Now, let's say I had used State as the property name instead of Status; the value comes out empty:

VERBOSE: Services Count: 1, Name: MicrosoftEdgeElevationService, State:

Write-Verbose output can be especially useful during the development and testing phase, allowing you to diagnose and address any unexpected behavior.
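Remember that Write-Verbose output only appears when verbose logging is enabled. In a function with [CmdletBinding()] you can switch it on with -Verbose; in a plain script you can set $VerbosePreference. A minimal sketch (the function name is made up):

```powershell
function Get-MicrosoftService {
    [CmdletBinding()]
    param ()
    $Services = Get-Service -Name 'Microsoft*'
    # Only shown when -Verbose is passed or $VerbosePreference is 'Continue'
    Write-Verbose "Services Count: $($Services.Count)"
    $Services
}

# Without -Verbose, the Write-Verbose line stays silent
Get-MicrosoftService

# With -Verbose, the VERBOSE: line is printed to the console
Get-MicrosoftService -Verbose
```

This way the logging costs nothing in production but is one switch away when troubleshooting.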

Use a well-formatted structure

Well-formatted scripts enhance the readability, making it easier to understand and troubleshoot.
My way of structuring my code is seen in the try {} catch {} section, but what is more readable?

Get-Service | Where-Object {$_.Name -like "Microsoft*"} | Select-Object -Property Name, Status, DisplayName | Sort-Object -Property Status | ForEach-Object { $_.Name }

Get-Service -Name 'Microsoft*' |
    Select-Object -Property Name, Status, DisplayName |
    Sort-Object -Property Status |
    ForEach-Object { $_.Name }

$Services = Get-Service -Name 'Microsoft*'
$Services = $Services | Select-Object -Property Name, Status, DisplayName
$Services = $Services | Sort-Object -Property Status
foreach ($Service in $Services) {
    $Service.Name
}

You would think that the last version takes the longest, but in a test running each variant in a 1000x loop, it still came out fastest. The main reason is the last step: pipeline stages like Select-Object, Sort-Object, and especially ForEach-Object process objects one at a time and carry per-object overhead, while the foreach statement iterates over the collection directly without any pipeline at all.

To put it in other words, I asked ChatGPT:

Each iteration of a loop that involves a pipeline introduces overhead due to the creation of new pipeline instances and the associated memory and processing requirements. This can impact the performance of your script, especially when dealing with large datasets or complex operations. It’s generally more efficient to process data within the loop itself rather than relying on pipelines.
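You can measure this yourself; a sketch using Measure-Command (the exact numbers depend on your machine, so I'm not quoting any):

```powershell
$Items = 1..100000

# Pipeline version: ForEach-Object pushes each object through the pipeline
$Pipeline = Measure-Command {
    $Items | ForEach-Object { $_ * 2 }
}

# Statement version: foreach enumerates the collection directly, no pipeline
$Statement = Measure-Command {
    foreach ($Item in $Items) { $Item * 2 }
}

Write-Host "ForEach-Object: $($Pipeline.TotalMilliseconds) ms"
Write-Host "foreach:        $($Statement.TotalMilliseconds) ms"
```

On most machines the foreach statement comes out well ahead, which lines up with the explanation above.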

And I'm not even going to start about spacing etc. You should use Visual Studio Code with the PowerShell extension and use the formatting provided by the extension!

Do not use aliases in your PowerShell scripts!

While aliases can be quicker for interactive PowerShell sessions, I'd advise avoiding them in scripts and instead using the full cmdlet names for improved readability, portability, collaboration, and troubleshooting.

Using aliases in PowerShell can lead to several issues and should be avoided, in my opinion, for the following reasons:

  • Readability and Maintainability: Aliases are often abbreviations or shorthand versions of longer cmdlet names. While they can save typing effort, they can make the code harder to read and understand, especially for others who might be reviewing or maintaining your code.

For example, Get-Service has the alias gsv, and Where-Object has Where and ?. So it can come down to the following, but which is more readable?

Get-Service | Where-Object {$_.Name -like "Microsoft*"}

Get-Service | Where {$_.Name -like "Microsoft*"}

Get-Service | ? {$_.Name -like "Microsoft*"}

gsv | ? {$_.Name -like "Microsoft*"}

Using full cmdlet names improves code clarity and makes it easier to comprehend and modify in the future. If you automate your script properly, you only have to type it out completely once.

  • Learning: PowerShell aliases are harder to read for beginners or for colleagues less familiar with the cmdlet names. Would you understand the last line in my example? As someone who never uses aliases, I would have to run Get-Alias on gsv to see which cmdlet it is.
Get-Alias gsv

CommandType     Name
-----------     ---- 
Alias           gsv -> Get-Service

  • Troubleshooting: When encountering issues or errors in your code, using aliases can make it more difficult to find the problem.

Documentation refers to cmdlet names, not aliases. Error messages can differ, but sometimes show the full cmdlet name as well. Full cmdlet names will make troubleshooting easier.
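If you do inherit a script full of aliases, Get-Alias can translate them in both directions; a quick sketch:

```powershell
# Look up what a single alias points to
Get-Alias -Name gsv

# Or go the other way: list every alias defined for a cmdlet
Get-Alias -Definition Get-Service
```

That makes decoding someone else's shorthand a one-liner instead of guesswork.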


The following methods represent my personal ways of working, optimizing, and speeding up my PowerShell scripts. While they have proven to be effective for me, I acknowledge that there may be better ideas and alternative approaches available.

I am open to feedback and suggestions, and I would love to hear them in the comments.

Structure your code, use logging, use try {} catch {}, but above all do what works for you.
No code is bad as long as it does what you want and isn't a security risk.

Published by

Bas Wijdenes

My name is Bas Wijdenes and I work as a PowerShell DevOps Engineer. In my spare time I write about interesting stuff that I encounter during my work.

3 thoughts on “How to optimize and speed up your PowerShell scripts”

  1. Some decent tips. However, this portion of your article is false

    This is bad because the Where-Object will loop over the $Processes every time the loop starts with a new object.

    $Processes = Get-Process
    foreach ($Process in $Processes | Where-Object {$_.ProcessName -eq 'Teams'}) {

    By filtering the collection before the loop, you’re reducing the number of iterations and making the code more readable.

    $Processes = Get-Process
    $processes = $Processes | Where-Object {$_.ProcessName -like "Tea*"}
    foreach ($Process in $processes) {

    It’s doing the same amount of filtering/processing in each example. They are about the same speed, with one or the other being slightly faster on repeated tests.

    $processlist = Get-Process

    $count = 10000

    $totaltime = Measure-Command -Expression {1..$count | %{
        foreach ($process in $processlist | Where-Object {$_.ProcessName -eq 'Teams'}) {
        }
    }}

    Write-Host "Inline filtering within foreach avg: $($totaltime.TotalSeconds / $count)" -ForegroundColor Cyan

    $totaltime = Measure-Command -Expression {1..$count | %{
        $processes = $processlist | Where-Object {$_.ProcessName -eq 'Teams'}
        foreach ($process in $processes) {
        }
    }}

    Write-Host "Storing in variable before foreach loop avg: $($totaltime.TotalSeconds / $count)" -ForegroundColor Cyan

    1. For me it's about a 10-second difference: the version without the Where-Object in the loop takes 22 seconds, and the one with it takes 31 seconds.

      But I agree it depends on when you run what, how much memory is free on the device you're running it on, and more.
      There are more dependencies than just code.
