Power Automate - The Code Review Onion

david wyatt - Oct 21 - Dev Community

I wrote a blog about code reviews almost 2 years ago now (Power Automate Code Reviews), so I thought it was time for an update.

And after looking at how my code reviews had changed over time I came up with the code review onion.

Code reviews have many layers like an onion; they also make you cry like an onion

code review onion

Code reviews have always been a hard sell; most Power Automate developers will not see the benefits. And that's because code reviews are actually about risk management, just like insurance. They are there to decrease the risk of something bad happening in the future (you know that the first person who enforced code reviews got burned badly).

What It Covers

Code reviews don't cover what most people expect: bugs. They are not there for bugs, that's what testing is for (not to say they won't be flagged if seen). Code reviews are actually for 3 main reasons:

  1. Reduce unexpected behaviour
  2. Readability
  3. Performance/Platform stability

3 main reasons

1. Reduce Unexpected Behaviour

This is the closest to spotting bugs, and what we are looking for is anything that could be inconsistent, and therefore difficult to spot in test. Any setting or design pattern that works, but not necessarily all the time.

The best example I can give is the 'Get Items' Pagination and Threshold setting. If this is not turned on it's not a bug, but it can generate unexpected behaviour, because the action limits the return to 100 records. This means it will probably pass dev and test (as they may not stress test it fully), and therefore only get caught in production.
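
As a rough illustration of what the reviewer is looking for, this is roughly how the setting appears in the exported flow definition once Pagination is turned on (the action name and threshold value here are just examples):

```
"Get_items": {
  "type": "OpenApiConnection",
  "runtimeConfiguration": {
    "paginationPolicy": {
      "minimumItemCount": 5000
    }
  }
}
```

If the runtimeConfiguration block is missing, the action quietly returns only the first page of results, which is exactly the kind of inconsistency that slips through testing.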

2. Readability / Maintainability

The thing about any project is that it will always eventually outgrow its developer. They will move on and the process will need updates. And this is where things like naming conventions and standard patterns are key: they ensure that anyone in the future can look at the code, understand it, and debug or increment on it.

Just imagine years in the future when the developer is no longer in the team and there is a critical bug. It's paramount to fix it quickly, but when you open the process you can't understand what anything does, let alone why something isn't working.

3. Performance / Platform stability

Ensuring that the process is optimized is critical so that it can't impact other processes. This is probably the most important of the reasons, as at least the other two are limited to the process itself. Each account/license in the Power Platform has daily API limits (each action, even a condition, is an API call), at present 6k, or 40k with a premium license. As the limit is not per flow, flows can impact each other; for example, a flow running every minute with 10 actions would use over 14k calls a day on its own.

A simple example of platform stability is API limits: if your process puts too much pressure on another system, it could bring that system down.

The How

Code reviewing Power Automate flows is not as easy as it should be, and there are 3 main ways you can do it.

1. Code Review Environment

This is probably the simplest and safest way. You create a dedicated environment to complete your code reviews. This gets around the connections and sharing issues, and just requires an export of the solution. Once imported the reviewer can look over all the components and document any issues.

2. Code Review Security Role

In this approach we use the system customizer or a custom security role for the review. The reviewer can then access any solution and complete the code review, with the added benefit of adding comments directly on the flow (only in the classic UI at the moment, but that's the best UI for code reviews anyway). The only negative is the elevated permissions given to the reviewer and the risk of accidentally editing the flow you are reviewing. A custom role with read-only rather than edit access will work, or a JIT (just in time) role can also be a partial solution.

3. Code Review Tool

A code review tool allows you to see the contents of the solution without having to import it anywhere. An example is AutoReview. They may also include some automated testing, allowing issues to be flagged without you having to find them.

review options

Grading is required, as not all failures need fixing straight away. It could be that the 'juice is not worth the squeeze', and trying to fix it now would take so long that it would impact the return on investment. So there are 3 grades in my code reviews:

Red
Urgent/critical; these need to be fixed before moving into production
e.g. Hard coded password

Amber
These are the 'should be fixed when most efficient' issues. That is generally the next update, or when there is a red issue, so they might as well be fixed then.
e.g. A couple of actions not following naming convention

Green
Not worth the fix now but should be considered in future flows. These are more of a training flag, and can be used when there is such minimal impact it's not worth the change.
e.g. Flow overly complex/large and should be broken into smaller flows

grades

The Onion

So let's get into the actual onion:

The key part of the onion for Power Automate is the 4 layers that we should be looking at:

  1. Solutions
  2. Connections
  3. Exceptions
  4. Design

1. Solutions

The first place we need to look is in the solution, ensuring everything needed to support the flow is right. It covers:

  • Only allowed missing dependencies
    This is dependent on your org (I only allow custom connectors and Dataverse tables), but any unapproved missing dependencies should be flagged

  • Environment variables
    Should be no duplicates of the same value
    Follow naming convention showing solution and description
    No Default values (controversial but an example of unexpected behaviours)
    If excessive then object variables should be used to group related values together (see the sketch after this list)

  • Connection References
    Should be maximum one of each connection
    Follow naming convention showing solution and what connection

  • Flows
    Naming should explain what flow does and link to the solution
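
As a rough sketch of the environment variable grouping point above, a single object (JSON) environment variable can hold a set of related settings; the names and values below are made up for illustration, and the individual values can then be read in the flow with a Parse JSON action or a json() expression:

```
{
  "approverEmail": "approvals@contoso.com",
  "maxRetries": 3,
  "reportLibrary": "Shared Documents/Reports"
}
```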

2. Connections

Connections are the next level, as these expand beyond the flow.

  • No hardcoded passwords/secrets

  • Environment variables for all data sources, except local Dataverse tables

  • Object environment variables for groups of inputs

  • Secure inputs/outputs for all sensitive data ingress points

  • OneDrive/Excel by exception – use Dataverse/SharePoint

  • Trigger settings
    No Condition action straight after the flow trigger; use a trigger condition instead
    Only run when needed: no overly excessive schedule or lax trigger condition (see the sketch after this list)

  • Correct type
    Use business versions (Excel/OneDrive/Outlook)

  • Use HTTP versions by exception

  • List/Get items
    Pagination and Threshold turned on
    Query filter where possible (see the sketch after this list)
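
Two rough sketches of those trigger and query settings; the column names and values are made up for illustration:

```
// Trigger condition (trigger Settings > Trigger conditions) - only fire for relevant items
@equals(triggerOutputs()?['body/Status'], 'Approved')

// Filter Query on a SharePoint 'Get items' action - only return the records you need
Status eq 'Active'
```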

3. Exception Handling

Exception handling is key to ensuring failed runs are controlled and issues are easily identified. This helps minimize the support team's time.

  • Main and Exception Scope
    This is required for a catch-all error handling approach (see the runAfter sketch after this list).
    Retry exceptions are handled separately; all others should bubble up

  • The exception scope's RunAfter should not just be set to 'has failed' but 'has timed out' as well

  • Always have a Terminate action set to Failed in the exception handling to flag the failed run in the logs

  • Include a failed run expression & link so the run can easily be found in the logs

  • Use the result() expression with a filter, or an xpath() expression, to identify which action failed (see the sketch after this list)

  • File saves used for conversions should have exception handling that deletes the file

  • Flows called from an app should handle the exception and pass it back so that the app/parent flow can handle it
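
To make the Main/Exception Scope points concrete, here is a rough sketch of what the pattern can look like in the exported flow definition and in expressions; the scope names, Filter Array setup and run URL format are illustrative and may vary:

```
"Scope_Exception": {
  "type": "Scope",
  "runAfter": {
    "Scope_Main": [ "Failed", "TimedOut" ]
  }
}
```

```
// Filter Array 'From' - all action results inside the main scope
result('Scope_Main')

// Filter Array condition - keep only the actions that failed
@equals(item()?['status'], 'Failed')

// Link to the failed run for the logs/notification (URL format can vary by cloud)
concat('https://make.powerautomate.com/environments/', workflow()?['tags']?['environmentName'], '/flows/', workflow()?['name'], '/runs/', workflow()?['run']?['name'])
```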

4. Design

This is the most subjective and where most of your green fails will be. Poorly optimized code can be a marginal issue; if it's only the odd API call here and there, is it worth changing this particular flow?

  • Naming standards for variables
    My preference: sString, bBoolean, iInteger, oObject, aArray

  • Naming standards for actions
    Keep action default name and add description of what it does

  • Variables/Composes - valid use case
    All inputs are global, so variables are not needed just for data storage
    Composes use API calls and separate the detail from where it is used, so they should be avoided

  • Don't set variables with a Condition, use an expression (see the sketch after this list)

  • Minimum nesting
    High nesting impacts the UI, making flows hard to edit/read
    A condition within a condition can often be avoided with AND conditions

  • Environment variables for configurable values
    If a value is configurable it should be in an environment variable, e.g. email addresses and email bodies

  • Limit loops and their size
    Avoid loops within loops as these can exponentially increase API calls
    Use a filter, not a loop with a condition inside, as that wastes API calls on unnecessary items
    Move non-loop actions out of the loop, e.g. getting a template
    Use first() or [0] for single-row returns (see the sketch after this list)

  • Size/Complexity limits
    Overly complex/long flows should be broken into multiple flows to help unit testing and reusability, and to avoid a slow UI
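
A couple of illustrative expressions for the variable-setting and single-row points above; the action, column and threshold names are made up:

```
// Set a variable with an expression instead of a Condition branch
if(greater(triggerOutputs()?['body/Amount'], 1000), 'High', 'Standard')

// Single row returned: take the first record instead of looping
first(outputs('Get_items')?['body/value'])?['Title']

// Equivalent using an index
outputs('Get_items')?['body/value'][0]?['Title']
```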


Code reviews are always an investment in the future: short term pain for long term gain. Having a robust process (ideally embedded in your ALM/pipeline process) and easily available training will help mitigate that short term pain.

Presentation here if you want it (don't use Google Slides as the formatting gets messed up). I'm also working on my first eBook, which goes into the process in a little more detail, including examples; watch this space for when it's released.
