Better - An AI powered Code Reviewer GitHub Action

Murtuzaali Surti - Sep 28 - Dev Community

Code reviews have always been crucial for maintaining standards and enforcing best practices in a project. This is not a post about how developers should review code; it's about delegating a part of that work to AI.

As Michael Lynch mentions in his post "How to Do Code Reviews Like a Human", we should let computers take care of the boring parts of a code review. While Michael emphasizes formatting tools, I would like to take it a step further and let artificial intelligence handle it. I mean, why not take advantage of the AI boom in the industry?

Now, I am not saying that AI should replace formatting tools and linters. Instead, it should be used on top of them, to catch trivial issues that a human might miss.

That's why I decided to create a GitHub Action that reviews a pull request diff and generates suggestions using AI. Let me walk you through it.

Getting the diff

To interact with the GitHub API, I used Octokit, which is an SDK/client library for talking to the GitHub API in an idiomatic way.

To get the diff of the pull request that was raised, you need to pass the Accept header with the value application/vnd.github.diff along with the required parameters.

import * as github from "@actions/github"; // provides the workflow run's context

async function getPullRequestDetails(octokit, { mode }) {
    // default to the raw JSON representation of the pull request
    let AcceptFormat = "application/vnd.github.raw+json";

    if (mode === "diff") AcceptFormat = "application/vnd.github.diff";
    if (mode === "json") AcceptFormat = "application/vnd.github.raw+json";

    // the Accept header decides whether GitHub returns JSON or the unified diff
    return await octokit.rest.pulls.get({
        owner: github.context.repo.owner,
        repo: github.context.repo.repo,
        pull_number: github.context.payload.pull_request.number,
        headers: {
            accept: AcceptFormat,
        },
    });
}
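For context, here's roughly how that helper gets wired up inside the action. This is a minimal sketch assuming the standard @actions/core and @actions/github toolkit packages; the github-token input name is illustrative, not necessarily the action's actual input.

import * as core from "@actions/core";
import * as github from "@actions/github";

// authenticate Octokit with the token passed to the workflow step
const octokit = github.getOctokit(core.getInput("github-token"));

// fetch the pull request as a unified diff; response.data is the raw diff text
const { data: diff } = await getPullRequestDetails(octokit, { mode: "diff" });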

If you are not familiar with GitHub Actions at all, the GitHub Actions 101 series by Victoria Lo is a good place to start.

Once I get the diff, I parse it, remove unwanted changes, and return it in the schema shown below:

/** using zod */
import { z } from "zod";

const schema = z.object({
    path: z.string(),
    position: z.number(),
    line: z.number(),
    change: z.object({
        type: z.string(),
        add: z.boolean(),
        ln: z.number(),
        content: z.string(),
        relativePosition: z.number(),
    }),
    previously: z.string().optional(),
    suggestions: z.string().optional(),
});
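To make the shape concrete, a single parsed change might look like this (the values are made up purely for illustration):

// hypothetical example of one parsed change conforming to the schema above
const exampleChange = {
    path: "src/utils/math.js",
    position: 4,
    line: 12,
    change: {
        type: "add",
        add: true,
        ln: 12,
        content: "+const total = items.reduce((sum, item) => sum + item.price, 0);",
        relativePosition: 4,
    },
    previously: "const total = getTotal(items);",
};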

Ignoring Files

Ignoring files is quite straightforward. The user provides a semicolon-separated string of glob patterns, which is then parsed, concatenated with the default list of ignored files, and de-duplicated.

**/*.md; **/*.env; **/*.lock;
// filesToIgnore is the raw user input; FILES_IGNORED_BY_DEFAULT is the action's built-in list
const filesToIgnoreList = [
    ...new Set(
        filesToIgnore
            .split(";")
            .map(file => file.trim())
            .filter(file => file !== "")
            .concat(FILES_IGNORED_BY_DEFAULT)
    ),
];

The ignored-files list is then used to drop the diff changes that belong to those files. That leaves a raw payload containing only the changes you care about.
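The glob matching itself could be done with a matcher such as minimatch; the sketch below is illustrative and not the action's exact implementation.

import { minimatch } from "minimatch";

// keep only the parsed changes whose file path doesn't match any ignore pattern
function dropIgnoredFiles(parsedChanges, filesToIgnoreList) {
    return parsedChanges.filter(
        change => !filesToIgnoreList.some(pattern => minimatch(change.path, pattern))
    );
}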

Generating suggestions

Once I have the raw payload from parsing the diff, I pass it to the model platform's API. Here's the OpenAI implementation:

async function useOpenAI({ rawComments, openAI, rules, modelName, pullRequestContext }) {
    const result = await openAI.beta.chat.completions.parse({
        model: getModelName(modelName, "openai"),
        messages: [
            {
                role: "system",
                content: COMMON_SYSTEM_PROMPT,
            },
            {
                role: "user",
                content: getUserPrompt(rules, rawComments, pullRequestContext),
            },
        ],
        response_format: zodResponseFormat(diffPayloadSchema, "json_diff_response"),
    });

    const { message } = result.choices[0];

    if (message.refusal) {
        throw new Error(`the model refused to generate suggestions - ${message.refusal}`);
    }

    return message.parsed;
}

You might notice the use of a response format in the API call. This is a feature offered by many LLM platforms that lets you constrain the model's output to a specific schema. It's especially helpful here, since I don't want the model to hallucinate suggestions for files or positions that aren't in the pull request, or to add new properties to the response payload.

The system prompt gives the model more context on how it should perform the code review and what to keep in mind. You can view the system prompt at github.com/murtuzaalisurti/better.

The user prompt contains the actual diff, the rules, and the pull request context. It is what kicks off the code review.
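I won't reproduce the exact prompt builder here, but conceptually getUserPrompt stitches those three pieces into one message, along these lines (a hypothetical sketch, not the action's actual prompt):

// hypothetical sketch of how the user prompt could be assembled
function getUserPrompt(rules, rawComments, pullRequestContext) {
    return [
        `Review rules:\n${rules}`,
        `Pull request context:\n${JSON.stringify(pullRequestContext, null, 2)}`,
        `Diff changes to review:\n${JSON.stringify(rawComments, null, 2)}`,
    ].join("\n\n");
}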

This GitHub Action supports both OpenAI and Anthropic models. Here's the Anthropic implementation:

async function useAnthropic({ rawComments, anthropic, rules, modelName, pullRequestContext }) {
    const { definitions } = zodToJsonSchema(diffPayloadSchema, "diffPayloadSchema");
    const result = await anthropic.messages.create({
        max_tokens: 8192,
        model: getModelName(modelName, "anthropic"),
        system: COMMON_SYSTEM_PROMPT,
        tools: [
            {
                name: "structuredOutput",
                description: "Structured Output",
                input_schema: definitions["diffPayloadSchema"],
            },
        ],
        tool_choice: {
            type: "tool",
            name: "structuredOutput",
        },
        messages: [
            {
                role: "user",
                content: getUserPrompt(rules, rawComments, pullRequestContext),
            },
        ],
    });

    let parsed = null;
    for (const block of result.content) {
        if (block.type === "tool_use") {
            parsed = block.input;
            break;
        }
    }

    return parsed;
}

Adding Comments

Finally, after retrieving the suggestions, I sanitize them and pass them to the GitHub API to add comments as part of a review.

I chose the approach below because creating a new review lets you add all the comments in one go instead of one at a time. Posting comments one by one can also trigger rate limiting, since each comment fires off notifications, and you don't want to spam users with those.

function filterPositionsNotPresentInRawPayload(rawComments, comments) {
    return comments.filter(comment =>
        rawComments.some(rawComment => rawComment.path === comment.path && rawComment.line === comment.line)
    );
}

async function addReviewComments(suggestions, octokit, rawComments, modelName) {
    const { info } = log({ withTimestamp: true }); // eslint-disable-line no-use-before-define
    const comments = filterPositionsNotPresentInRawPayload(rawComments, extractComments().comments(suggestions));

    try {
        await octokit.rest.pulls.createReview({
            owner: github.context.repo.owner,
            repo: github.context.repo.repo,
            pull_number: github.context.payload.pull_request.number,
            body: `Code Review by ${modelName}`,
            event: "COMMENT",
            comments,
        });
    } catch (error) {
        info(`Failed to add review comments: ${JSON.stringify(comments, null, 2)}`);
        throw error;
    }
}
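Putting it all together, the action's flow boils down to fetching the diff, generating structured suggestions, and posting them as one review. Here's a simplified, illustrative entry point; the input names, the parseDiff helper, and the overall wiring are assumptions rather than the action's exact source:

import * as core from "@actions/core";
import * as github from "@actions/github";
import OpenAI from "openai";

async function run() {
    const octokit = github.getOctokit(core.getInput("github-token")); // input name is illustrative
    const openAI = new OpenAI({ apiKey: core.getInput("openai-api-key") });
    const modelName = core.getInput("model");

    // 1. fetch the pull request diff and parse it into the schema shown earlier
    const { data: diff } = await getPullRequestDetails(octokit, { mode: "diff" });
    const rawComments = parseDiff(diff); // hypothetical parser

    // 2. generate structured suggestions with the model
    const suggestions = await useOpenAI({
        rawComments,
        openAI,
        rules: core.getInput("rules"),
        modelName,
        pullRequestContext: github.context.payload.pull_request,
    });

    // 3. post everything back to the pull request as a single review
    await addReviewComments(suggestions, octokit, rawComments, modelName);
}

run();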

Conclusion

I wanted to keep the GitHub Action open-ended and open to integrations, which is why you can use any model of your choice (see the list of supported models), or fine-tune and build your own custom model on top of the supported base models and use it with this action.

If you run into token limits or rate limiting, you may want to upgrade your model limits by referring to the respective platform's documentation.

So, what are you waiting for? If you have a repository on GitHub, try the action now; it's available on the GitHub Marketplace.

better

A code reviewer github action powered by AI, ready to be used in your workflow.

Why use it?

  • Standardize your code review process
  • Get feedback faster
  • Recognize patterns that result in bad code
  • Detect common issues
  • Identify security vulnerabilities
  • Get a second opinion
  • Free humans up to focus on more complex tasks

Usage

1. Create a workflow

Create a workflow file inside the .github/workflows folder of your repository (create the folder if it doesn't exist) with the following content:

name: Code Review
on:
    pull_request:
        types: [opened, reopened, synchronize, ready_for_review]
        branches:
            - main # change this to your target branch
    workflow_dispatch: # Allows you to run the workflow manually from the Actions tab

permissions: # necessary permissions
    pull-requests: write
    contents: read

jobs:
    your-job-name:
        runs-on: ubuntu-latest
        name: your-job-name
        steps:
            - name: step-name
              id: step-id
              uses: murtuzaalisurti/better@v2 # this is