Writing clean, well-tested code is a cornerstone of robust software development. Unit testing plays a crucial role in this process, ensuring individual code units function as intended. However, manually crafting unit tests can be a time-consuming and tedious task, especially when aiming for thorough code coverage. This can lead to bottlenecks in development workflows and potentially leave areas of your code untested, introducing the risk of bugs slipping through the cracks.
This article introduces CodiumAI Cover-Agent, an innovative open-source tool that leverages the power of Artificial Intelligence (AI) to automate unit test generation. This translates to significant efficiency gains in your testing workflow.
In the following sections, we explore how CodiumAI Cover-Agent works, outline its benefits, and guide you through getting started in your development environment.
What is CodiumAI Cover-Agent?
In February, Meta researchers published a paper, "Automated Unit Test Improvement using Large Language Models at Meta", introducing a tool called TestGen-LLM that created waves in the software development world. Meta, however, did not release the TestGen-LLM code, so CodiumAI, an open-source company, decided to implement the approach as part of its open-source Cover-Agent tool.
Cover-Agent is a cutting-edge AI product at the forefront of integrating state-of-the-art test generation technology. It is more than just another AI-powered test generation tool: it is a new approach built on TestGen-LLM, specifically designed for crafting high-quality tests. General-purpose LLMs like ChatGPT and Gemini can generate tests too, but what sets Cover-Agent apart is its ability to overcome the common pitfalls those models face:
- Generating tests that do not compile or run
- Generating tests that add no value (they cover the same functionality already covered by other tests)
- Failing to produce reliable regression unit tests
Cover-Agent goes beyond basic code comprehension and increases code coverage. It integrates seamlessly with Visual Studio Code (VS Code), a popular development environment widely used by programmers. This means you can leverage CodiumAI Cover-Agent's capabilities directly within your existing VS Code workflow, eliminating the need to switch between different tools.
How does it Work?
This tool is part of a broader suite of utilities designed to automate the creation of unit tests for software projects. Utilizing advanced Generative AI models, it aims to simplify and expedite the testing process, ensuring high-quality software development. The system comprises several components:
- Test Runner: Executes the command or scripts to run the test suite and generate code coverage reports.
- Coverage Parser: Validates that code coverage increases as tests are added, ensuring that new tests contribute to the overall test effectiveness.
- Prompt Builder: Gathers necessary data from the codebase and constructs the prompt to be passed to the Large Language Model (LLM).
- AI Caller: Interacts with the LLM to generate tests based on the prompt provided.
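Putting the pieces together, these components form a generate-and-validate loop. The sketch below is a hypothetical illustration of that flow, not Cover-Agent's actual source code; `run_suite` and `generate_tests` are placeholder callables standing in for the Test Runner/Coverage Parser and the AI Caller.

```python
# Hypothetical sketch of the Cover-Agent loop; not the real implementation.

def improve_coverage(source, tests, run_suite, generate_tests,
                     desired=0.8, max_iterations=3):
    """Iteratively ask an LLM for tests, keeping only candidates that
    run and raise measured coverage (mirrors the components above)."""
    coverage = run_suite(tests)  # Test Runner + Coverage Parser
    for _ in range(max_iterations):
        if coverage >= desired:
            break
        # Prompt Builder: combine source and existing tests into a prompt.
        prompt = f"Source:\n{source}\nExisting tests:\n{tests}"
        for candidate in generate_tests(prompt):  # AI Caller
            trial = tests + "\n" + candidate
            new_cov = run_suite(trial)
            if new_cov > coverage:  # keep only value-adding tests
                tests, coverage = trial, new_cov
    return tests, coverage
```

Each candidate test is kept only if it increases measured coverage, which is how non-working or redundant tests get filtered out.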
What are the Benefits of Cover-Agent?
Cover-Agent eliminates the tedious yet critical task of increasing test coverage.
Boosts your tests: State-of-the-art test generation technology, starting with regression tests, ensures your codebase is robust and well tested.
Leverages advanced AI: It is at the forefront of automated test generation with the new TestGen-LLM, which focuses entirely on test generation and on overcoming its common pitfalls, ensuring that tests compile, run properly, and add value. Tests are generated, integrated, and validated without human interaction.
Contribution and collaboration: Since it is an open-source project, developers are welcome to contribute and help enhance Cover-Agent.
How to Setup Cover-Agent
Install Cover-Agent:
pip install git+https://github.com/Codium-ai/cover-agent.git
Ensure OPENAI_API_KEY is set in your environment variables, as it is required for the OpenAI API.
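A quick way to surface a missing key before the agent makes any API calls is a small guard like the one below (a generic sketch, not part of Cover-Agent itself):

```python
import os

def require_api_key() -> str:
    """Return the OpenAI API key, failing fast with a clear message if unset."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; export it before running cover-agent."
        )
    return key
```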
Create the command to start generating tests:
cover-agent \
--source-file-path "path_to_source_file" \
--test-file-path "path_to_test_file" \
--code-coverage-report-path "path_to_coverage_report.xml" \
--test-command "test_command_to_run" \
--test-command-dir "directory_to_run_test_command/" \
--coverage-type "type_of_coverage_report" \
--desired-coverage "desired_coverage_between_0_and_100" \
--max-iterations "max_number_of_llm_iterations" \
--included-files "<optional_list_of_files_to_include>"
Command Arguments Explained
source-file-path: Path of the file containing the functions or block of code we want to generate tests for.
test-file-path: Path of the file where the tests will be written by the agent. It’s best to create a skeleton of this file with at least one test and the necessary import statements.
code-coverage-report-path: Path where the code coverage report is saved.
test-command: Command to run the tests (e.g., pytest).
test-command-dir: Directory where the test command should run. Set this to the root or the location of your main file to avoid issues with relative imports.
coverage-type: Type of coverage to use. Cobertura is a good default.
desired-coverage: Coverage goal. Higher is better, though 100% is often impractical.
max-iterations: Number of times the agent should retry generating test code. More iterations may lead to higher OpenAI token usage.
additional-instructions: Prompts to ensure the code is written in a specific way. For example, later in this article we specify that the code should be formatted to work within a test class.
On running the command, the agent starts writing and iterating on the tests.
How to Use Cover-Agent
This is an introductory article to get you started with Cover-Agent, and for that purpose we will use a simple calculator.py app obtained from here. We will compare manual testing with automated testing using Cover-Agent.
Manual Testing
def add(a, b):
    return a + b

def subtract(a, b):
    return a - b

def multiply(a, b):
    return a * b

def divide(a, b):
    if b == 0:
        raise ValueError("Cannot divide by zero")
    return a / b
This is test_calculator.py, placed in the tests folder.
# tests/test_calculator.py
from calculator import add, subtract, multiply, divide

class TestCalculator:
    def test_add(self):
        assert add(2, 3) == 5
To see the test coverage, we need to install pytest-cov, a pytest extension for coverage reporting.
pip install pytest-cov
Run the coverage analysis with:
pytest --cov=calculator
The output shows:
Name Stmts Miss Cover
-----------------------------------
calculator.py 10 5 50%
-----------------------------------
TOTAL 10 5 50%
The output above shows that 5 of the 10 statements in calculator.py are not executed, resulting in just 50% code coverage. For huge projects this is going to pose a serious problem. Now let's see how Cover-Agent can enhance this process.
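For reference, statement coverage is simply the fraction of statements executed, so the 50% above follows directly from the numbers in the table:

```python
def statement_coverage(stmts: int, missed: int) -> float:
    """Percentage of statements executed, as pytest-cov reports it."""
    return 100.0 * (stmts - missed) / stmts

# calculator.py: 10 statements, 5 missed
print(statement_coverage(10, 5))  # 50.0
```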
Automated Testing with Cover-Agent
To apply Cover-Agent to our calculator project, reuse the setup from earlier (Cover-Agent installed and OPENAI_API_KEY set) and build the command with concrete values:
cover-agent \
--source-file-path "calculator.py" \
--test-file-path "tests/test_calculator.py" \
--code-coverage-report-path "coverage.xml" \
--test-command "pytest --cov=. --cov-report=xml --cov-report=term" \
--test-command-dir "./" \
--coverage-type "cobertura" \
--desired-coverage 80 \
--max-iterations 3 \
--openai-model "gpt-4o" \
--additional-instructions "Since I am using a test class, each line of code (including the first line) needs to be prepended with 4 whitespaces. This is extremely important to ensure that every line returned contains that 4 whitespace indent; otherwise, my code will not run."
It generates the following code:
import pytest
from calculator import add, subtract, multiply, divide

class TestCalculator:
    def test_add(self):
        assert add(2, 3) == 5

    def test_subtract(self):
        """
        Test subtracting two numbers.
        """
        assert subtract(5, 3) == 2
        assert subtract(3, 5) == -2

    def test_multiply(self):
        """
        Test multiplying two numbers.
        """
        assert multiply(2, 3) == 6
        assert multiply(-2, 3) == -6
        assert multiply(2, -3) == -6
        assert multiply(-2, -3) == 6

    def test_divide(self):
        """
        Test dividing two numbers.
        """
        assert divide(6, 3) == 2
        assert divide(-6, 3) == -2
        assert divide(6, -3) == -2
        assert divide(-6, -3) == 2

    def test_divide_by_zero(self):
        """
        Test dividing by zero, should raise ValueError.
        """
        with pytest.raises(ValueError, match="Cannot divide by zero"):
            divide(5, 0)
You can see that the agent also wrote tests covering error handling and edge cases.
Now it is time to check the coverage again:
pytest --cov=calculator
Output:
Name Stmts Miss Cover
-----------------------------------
calculator.py 10 0 100%
-----------------------------------
TOTAL 10 0 100%
In this example we reached 100% test coverage. For larger codebases the procedure is much the same; check here for a walkthrough on a larger codebase.
Conclusion
CodiumAI's Cover-Agent empowers developers like you to streamline the unit testing process and achieve superior code coverage. This innovative tool leverages the power of the new cutting-edge TestGen-LLM technology, saving you valuable time and effort.
This shows CodiumAI's commitment to making developers' work lives easier; check out their Pull Request Agent here.
So try out Cover-Agent, contribute to the open-source and be part of the future.
To read more on the working principles, development process and challenges faced while creating this tech, check out CodiumAI CEO’s blog post.
Connect with me on LinkedIn and Twitter if you found this helpful.