Fortran is the fastest language on earth, so they say. But can we prove it?
And despite its legendary speed when it comes to crunching numbers, Fortran is no exception: it is still entirely possible to write terribly slow code. This is where benchmarking different implementations of the same function can help develop better and faster algorithms.
This project aims to provide an easy interface for benchmarking functions and subroutines while taking care of warming up the machine, collecting system information, computing statistics, and reporting results.
To build the library you need:
The following compilers are tested on the default branch of benchmark.f:
| Name                 | Version | Platform   | Architecture |
|----------------------|---------|------------|--------------|
| GCC Fortran (MinGW)  | 14      | Windows 10 | x86_64       |
| Intel oneAPI classic | 2021.5  | Windows 10 | x86_64       |
The preprocessor macros work with both fpp and cpp. Unit tests rely on the header file assertion.inc; since the whole framework fits in a single file, it has been added directly to the repo.
Linting, indentation, and styling are done with fprettify using the following settings:
The repo can be built using fpm.
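A standard invocation looks like this (the project-specific flags are bundled in the response file described below):

```sh
fpm build --profile release
```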
For convenience, the repo also contains a response file that can be invoked as follows:
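Assuming the response file is the `build` entry referenced in the PowerShell note below, that is:

```sh
fpm @build
```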
(For Windows users: this command does not work in PowerShell, where '@' is a reserved symbol. Use the stop-parsing token `--%` as follows: `fpm --% @build`. This is linked to the following issue.)
Building with ifort requires specifying the compiler name (gfortran is the default).
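For example, via fpm's standard `--compiler` option:

```sh
fpm build --compiler ifort
```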
Alternatively, the compiler can be set using fpm environment variables.
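For instance, fpm reads the `FPM_FC` variable to select the Fortran compiler:

```sh
# POSIX shell syntax; on Windows cmd, run "set FPM_FC=ifort" first
FPM_FC=ifort fpm build
```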
Besides the build command, several other commands are also available:
The toml files contain two items that are worth commenting on:
The `_FPM` macro is used to differentiate builds made with fpm from those made with Visual Studio. It is mostly there to adapt the hard-coded paths, which differ between the two setups.
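As a rough illustration (the paths and names here are hypothetical, not the ones used in the repo), such a macro is typically used like this:

```fortran
! Hypothetical sketch: adapt a hard-coded path depending on the build system.
program paths_demo
    implicit none
#ifdef _FPM
    character(*), parameter :: data_dir = 'data/'        ! layout used by fpm builds
#else
    character(*), parameter :: data_dir = '../../data/'  ! layout used by the Visual Studio solution
#endif
    print *, 'reading data from ', data_dir
end program paths_demo
```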
The project was originally developed on Windows with Visual Studio 2019. The repo contains the solution file (Benchmark.f.sln) to get you started.
Running the benchmark could not be simpler.
Include the file benchmark.inc into your code. The first step is to create a test function. It can be a function or a subroutine (gfortran only handles subroutines; for more issues related to gfortran, see this article) with any number of arguments between 0 and 7.
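For instance, a simple test subroutine could look like this (the name and body are purely illustrative):

```fortran
! Illustrative test routine: any work you want to time goes in here.
subroutine test_function()
    implicit none
    integer :: i
    real    :: x

    x = 0.0
    do i = 1, 1000000
        x = x + sqrt(real(i))
    end do
end subroutine test_function
```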
Then simply call the `benchmark` macro.
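As a rough sketch only (the module name, runner type, and macro signature below are assumptions, not the confirmed API; check the documentation for the exact usage), the call site looks something like this:

```fortran
#include <benchmark.inc>
program demo
    use benchmark_library          ! assumed module name
    implicit none
    type(runner) :: br             ! assumed benchmark runner type
    external :: test_function      ! the routine defined above

    ! the benchmark macro from benchmark.inc wraps the timed call
    benchmark(br, run(test_function))
end program demo
```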
This will generate this kind of table:
```
| Method Name              | Mean                   | Standard Deviation     |
|__________________________|________________________|________________________|
| test_function()          |          217350.000 us |         +/- 161306.626 |
```
For more examples, please refer to the Documentation. The library takes care of everything else for you.
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated. So, thank you for considering contributing to benchmark.f. Please review and follow these guidelines to make the contribution process simple and effective for all involved. In return, the developers will help address your problem, evaluate changes, and guide you through your pull requests.
By contributing to benchmark.f, you certify that you own or are allowed to share the content of your contribution under the same license.
Please follow the style used in this repository for any Fortran code that you contribute. This allows focusing on substance rather than style.
A bug is a demonstrable problem caused by the code in this repository. Good bug reports are extremely valuable to us—thank you!
Before opening a bug report:
A good bug report should include all information needed to reproduce the bug. Please be as detailed as possible:
This information will help the developers diagnose the issue quickly and with minimal back-and-forth.
If you have a suggestion that would make this project better, please create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
1. Create your feature branch (`git checkout -b feature/AmazingFeature`)
2. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
3. Push to the branch (`git push origin feature/AmazingFeature`)
4. Open a pull request and reference the issue it addresses (`Fixes #<issue-number>`)

If your PR implements a feature that adds or changes the behavior of benchmark.f, it must also include appropriate changes to the documentation and associated unit tests.

In brief,
Distributed under the MIT License.