
# Quality, Tests, and Technical Debt Management

Read this document in Spanish

## What’s this all about?

Throughout my professional career, working on different projects has given me a clear understanding of several basic concepts that a project and a team need in order to function well technically.

Below, as a compilation, I will break them down point by point with examples from a development perspective:

From my point of view, code organization is fundamental in any project. It provides order and consistency, and it makes specific functionality easy to find. One strategy that has shown great results is the one followed by hexagonal architecture: separation into layers (ports, adapters, application, domain, or one of its variants). Knowing at which of these levels a code modification is being made is vital to prevent a project from spiraling out of control over time. The worst feeling as a developer is working in chaos.
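As an illustration only, here is a minimal Java sketch of that layer separation; the `Order`, `OrderRepository`, and `PlaceOrderUseCase` names are hypothetical and not taken from this repository:

```java
// domain: pure business model, no framework dependencies
record Order(String id, double total) {}

// application (port): what the use case needs from the outside world
interface OrderRepository {
  void save(Order order);
}

// application (use case): depends only on the port, never on a concrete adapter
class PlaceOrderUseCase {
  private final OrderRepository repository;

  PlaceOrderUseCase(OrderRepository repository) {
    this.repository = repository;
  }

  void place(Order order) {
    repository.save(order);
  }
}

// adapter (infrastructure): one concrete implementation of the port,
// which in a real project could be backed by JPA, JDBC, or an external service
class InMemoryOrderRepository implements OrderRepository {
  private final java.util.Map<String, Order> store = new java.util.HashMap<>();

  @Override
  public void save(Order order) {
    store.put(order.id(), order);
  }
}
```

The key point is that the application layer only knows the port, so the adapter can be swapped without touching the domain or the use case.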

Another premise is how to name classes, variables, methods, packages, etc. Following standards like Oracle's can help.

Although not fundamental, keeping the same format throughout the code (spacing, maximum characters per line, import ordering, etc.) is very helpful. Like the previous point, this uniformity provides clarity when reading the code. There are different ways to achieve it: configuring each IDE individually through its settings and plugins (such as the Google format plugin), although aligning different types of IDE can be challenging; or using a Maven plugin, which works the same for everyone. In this project I used the googleformatter-maven-plugin, based on the Google style, which corrects these aspects once `mvn install` is executed, without having to apply any changes to the production code by hand.

## Tests everywhere

It goes without saying that a fundamental property of any software project is that it is testable, and by this I mean automatically testable.

Throughout my career I have worked on different projects, and in some of them there were no automated tests, or they were very scarce. The result was countless hours wasted manually re-testing functionality developed months earlier every time a change was about to be deployed to production, previously fixed errors reappearing, or even entire projects being shut down for this reason. The rush to release new features and the pressure from some stakeholders who do not value this topic lead to missing, poor, or low-quality tests. However, it is our responsibility as developers (and I am not saying it is easy) to fight to do things right, regardless of whether or not we have QA specialists on hand. Delivering code should not depend on others: developers must ensure that the code we deliver works and will continue to work correctly.

Having a framework that streamlines the generation of unit tests is very useful for maintaining this dynamic. In addition to well-known technologies like JUnit and Mockito, I will mention two systems for generating test data that have greatly benefited me in the past:

### Instancio

This is a library that quickly and randomly generates data for our POJOs/DTOs when the content of the data is not relevant.
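A minimal hedged sketch of how this looks in a JUnit 5 test; the `Customer` record is hypothetical, and I assume a recent Instancio version with record support:

```java
import org.instancio.Instancio;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertNotNull;

class CustomerMappingTest {

  // Hypothetical DTO, used only for illustration.
  record Customer(String name, String email, int age) {}

  @Test
  void worksRegardlessOfFieldValues() {
    // Instancio fills every field with random but type-correct data,
    // so the test does not depend on hand-written fixture values.
    Customer customer = Instancio.create(Customer.class);

    assertNotNull(customer.name());
    assertNotNull(customer.email());
  }
}
```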

### Json Unit Test

This method relies on JSON libraries to generate test objects and check/compare their content. It is very useful when we need to control the data we use: it drastically reduces the code needed to prepare the test and allows a strict comparison against an expected object without checking attribute by attribute.

*Screenshot: Json test*
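The exact library can vary; here is a hedged sketch of the idea, assuming Jackson 2.12+ (for record support) and JSONAssert for the strict comparison. The `Order` record and the method under test are hypothetical:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.Test;
import org.skyscreamer.jsonassert.JSONAssert;

class OrderProcessingTest {

  private final ObjectMapper mapper = new ObjectMapper();

  // Hypothetical DTO, used only for illustration.
  record Order(String id, String status, double total) {}

  @Test
  void returnsExpectedOrder() throws Exception {
    // Build the input object from JSON instead of calling setters one by one.
    Order input = mapper.readValue(
        "{\"id\":\"A-1\",\"status\":\"NEW\",\"total\":10.5}", Order.class);

    Order result = process(input); // hypothetical method under test

    // Strict comparison of the whole object against the expected JSON,
    // with no need to assert attribute by attribute.
    JSONAssert.assertEquals(
        "{\"id\":\"A-1\",\"status\":\"PROCESSED\",\"total\":10.5}",
        mapper.writeValueAsString(result),
        true); // strict mode: extra, missing, or changed fields fail the test
  }

  private Order process(Order order) {
    return new Order(order.id(), "PROCESSED", order.total());
  }
}
```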

### Jacoco Plugin % Coverage

This Maven plugin helps ensure we do not forget to write unit tests. After running the `mvn install` command, it checks the coverage level of the new code and raises an alert if the minimum threshold is not met. If the team's rule is not to push code without its tests (i.e., everything in the same commit), this plugin is very helpful.

*Screenshot: Jacoco before*

### Mutation Testing

One of the most significant weaknesses of a unit test is that the production code can be modified and the test may still pass without issue. Something should alert us that the behavior has changed, right? For this reason, techniques such as mutation testing help us develop more robust tests.
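As a hedged illustration of what a mutation tool like PIT typically flags, compare a weak assertion that lets mutants survive with a stricter one that kills them. The `discounted` method is hypothetical and inlined only to keep the sketch self-contained; in a real project it would live in production code.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

class DiscountCalculatorTest {

  // Hypothetical production method: applies a 10% discount.
  static int discounted(int price) {
    return price - price / 10;
  }

  @Test
  void weakTest_survivesMutants() {
    // A mutant that changes '-' to '+' or removes the discount entirely
    // would still pass this assertion.
    assertTrue(discounted(100) > 0);
  }

  @Test
  void strongTest_killsMutants() {
    // Asserting the exact expected value forces such mutants to fail.
    assertEquals(90, discounted(100));
  }
}
```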

`mvn -DwithHistory test-compile org.pitest:pitest-maven:mutationCoverage`

The report generated by running the Maven plugin indicates how good our tests are, how robust they are, and, where necessary, what we still need to check:

`/target/pit-reports/index.html`

*Screenshot: Jacoco before*

Although it may seem that good unit tests are enough to guarantee our application works, nothing could be further from the truth. Each piece working well individually does not guarantee the same result when they are combined.

*Image: Integration failed*

### Embedded Dependencies

Perhaps one of the first steps is to incorporate embedded dependencies into our integration tests. In-memory databases or embedded message brokers can help verify that our system's behavior is correct.
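A minimal sketch using an H2 in-memory database through plain JDBC, assuming the H2 driver is on the test classpath; in a real project the repository under test would use this datasource rather than the test issuing raw SQL itself:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class InMemoryDatabaseIT {

  @Test
  void persistsAndReadsARow() throws Exception {
    // H2 in-memory database: nothing to install, it exists only for this test.
    try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:testdb");
         Statement st = conn.createStatement()) {
      st.execute("CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(50))");
      st.execute("INSERT INTO customer VALUES (1, 'Alice')");

      try (ResultSet rs = st.executeQuery("SELECT name FROM customer WHERE id = 1")) {
        rs.next();
        assertEquals("Alice", rs.getString("name"));
      }
    }
  }
}
```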

### Test Containers

A more advanced and more realistic scenario is to use the same database and message broker as in production. For this, it is possible to write integration tests where containers are started while the tests run and shut down when they finish: these are the so-called test containers. The only requirement is a properly configured container runtime such as Docker.
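A hedged sketch with the Testcontainers JUnit 5 extension, assuming `org.testcontainers:postgresql` and `org.testcontainers:junit-jupiter` are on the test classpath and Docker is running:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import static org.junit.jupiter.api.Assertions.assertTrue;

@Testcontainers
class PostgresContainerIT {

  // A real PostgreSQL instance started in Docker for the duration of the tests
  // and shut down automatically when they finish.
  @Container
  static final PostgreSQLContainer<?> postgres =
      new PostgreSQLContainer<>(DockerImageName.parse("postgres:16-alpine"));

  @Test
  void connectsToTheRealDatabaseEngine() throws Exception {
    try (Connection conn = DriverManager.getConnection(
        postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword())) {
      assertTrue(conn.isValid(2));
    }
  }
}
```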

Even with a quality/QA/QE team that verifies the developments afterwards, end-to-end tests are always recommended to ensure nothing is overlooked. Tools like Karate or Postman can be very helpful for checking a 'journey' that involves several steps.

*Screenshot: Postman test suite*

*Screenshot: Postman test suite summary*
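Karate and Postman have their own formats; purely for illustration, here is a rough Java analogue of such a journey using REST Assured instead (the endpoints and payloads are hypothetical):

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;
import org.junit.jupiter.api.Test;

class OrderJourneyIT {

  // Hypothetical base URI; the real journey would target your deployed API.
  private static final String BASE_URI = "http://localhost:8080";

  @Test
  void createAndFetchOrderJourney() {
    // Step 1: create an order and keep its id for the next step.
    String id = given()
        .baseUri(BASE_URI)
        .contentType("application/json")
        .body("{\"total\": 10.5}")
      .when()
        .post("/orders")
      .then()
        .statusCode(201)
        .extract().path("id");

    // Step 2: fetch it back and verify its state.
    given()
        .baseUri(BASE_URI)
      .when()
        .get("/orders/" + id)
      .then()
        .statusCode(200)
        .body("total", equalTo(10.5f)); // JSON numbers come back as Float by default
  }
}
```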

Static analyzers are a very useful tool for keeping code clean and of high quality. They serve as a dashboard to visualize metrics on 'smells', vulnerabilities, coverage, duplicated code, etc. If your project's CI/CD pipeline does not have one integrated, it is possible to deploy one with simple default rules in a Docker container. In this case, we will use the well-known SonarQube.

*Screenshot: SonarQube before*

None of the above points makes sense if each team member works independently. Reaching agreements so that everyone works in a uniform way is critical for a project; otherwise we may end up with Frankenstein projects where each file is written differently depending on who wrote it, which also slows down development, error hunting, and maintenance. Therefore, once a team is formed, or when a new member joins, the internal rules to follow should be made clear. If necessary, it is better to discuss a controversial rule up front than to run into surprises or future conflicts among colleagues.

## Bibliography