Adds readme. Updates name in copyrights. Fixes some missing headers.

2023-05-01 18:57:51 -07:00
parent 96e93d6cd6
commit ab4ac26aed
4 changed files with 54 additions and 186 deletions

LICENSE

@@ -1,6 +1,6 @@
 MIT License
-Copyright (c) 2023 headhunter45
+Copyright (c) 2023 Tom Hicks <headhunter3@gmail.com>
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

README.md

@@ -0,0 +1,15 @@
+# TinyTest
+TinyTest is a minimal testing library. The name might change soon, because I realized there were already multiple projects called TinyTest.
+## Test Lifecycle
+1. suite_setup_function() - This is called once when the suite begins to allocate any suite-level resources.
+2. The following steps may be executed in parallel. These functions may be called in parallel, but execution will not proceed past this block until they have all finished.
+3. test_setup_function() - This is called once for every test in tests. You may use it to allocate resources or set up mocks, stubs, and spies.
+4. function_to_test(...) - This is called once for every test row in tests. Only one of these test functions will actually be run for each test in tests. They should return true if the test passed, return false if the test failed or there was an error, and may be nullptr if the test should be skipped. The chosen compare function will be called with expected_output and the result of function_to_test(...). They can be used to test functions with side effects, especially void functions.
+5. test_compare_function() - This is the highest priority compare function. If it is not nullptr then it will be called to evaluate the test results.
+6. suite_compare_function() - This is the second highest priority compare function. If test_compare_function is nullptr and this is not nullptr then it will be called to evaluate the test results.
+7. [](TResult expected, TResult actual) { return expected == actual; } - This is the lowest priority compare function. If all other compare functions are nullptr then this will be called to evaluate the test.
+8. test_teardown_function() - This is called once for every test in tests. You must free/release any resources allocated by test_setup_function.
+9. This ends the parallel section; all tests in this suite will have completed before execution proceeds.
+10. Collect reports - This step is not visible to the user at this point, but data returned by all of the test functions is collected here. This is where you will eventually be able to format/log data for reports.
+11. suite_teardown_function() - This is called after all test calls in this suite have completed, all test_teardown_function calls have completed, and all test reports/logs have been written. You should free any resources allocated in suite_setup_function.

tinytest.cpp

@@ -1,3 +1,11 @@
+/***************************************************************************************
+* @file tinytest.cpp                                                                  *
+*                                                                                     *
+* @brief Defines structs and functions for implementing TinyTest.                     *
+* @copyright Copyright 2023 Tom Hicks <headhunter3@gmail.com>                         *
+* Licensed under the MIT license see the LICENSE file for details.                    *
+***************************************************************************************/
 #define _XOPEN_SOURCE_EXTENDED
 #include "tinytest.h"
@@ -14,37 +22,6 @@ using std::string;
 using std::vector;
 } // End namespace
-// Test lifecycle
-// suiteSetupFn(); - This is called to allocate any suite level resources. This
-// is called once when the suite begins. These functions may be called in
-// parallel but execution will not proceed past this block until they have all
-// finished.
-// testSetupFn(); - This is called once for every test in tests. You may use
-// it to allocate resources or setup mocks, stubs, and spies. testFn(...); -
-// This is called once for every test to execute the test. Only one of these
-// test functions will actually be run for each test in tests. They should
-// return true if the test passed, return false if the test failed or there
-// was an error, and be nullptr if they should be skipped. The executed
-// function will be called with expectedOutput and the result of testFn(...).
-// They can be used to test functions with side effects, especially void
-// functions. maybe_compare_function; - This is the highest priority compare
-// function. If it is not nullptr then it will be called.
-// suite_compare_function; - This is the second highest priority compare
-// function. If maybe_compare_function is nullptr and this is not nullptr then
-// it will be called.
-// [](TResult expected, TResult actual) { return expected, actual; } - This is
-// the lowest priority compare function. If all other compare functions are
-// nullptr then this will be called to evaluate the test. testTeardownFn(); -
-// This is called once for every test in tests. You must free/release any
-// resources allocated by testSetupFn.
-// This ends the parallel test functions section all tests will have completed
-// before execution proceeds. Collect reports - Ths step is not visible to the
-// user at this point, but data returned by all of the test functions is
-// collected here. This is where you will eventually be able to format/log data
-// for reports. suiteTeardownFn(); - This is called after all test calls have
-// completed, all testTeardownFn calls have completed, and all test reports/logs
-// have been written. You should free any resources allocated in suiteSetupFn.
 // TODO: Add TShared(*)(string /*test_name*/, UUID /*testRunId*/)
 // allocateSharedData to the test tuple to make some shared data that can be
 // used in a thread safe way by setup, teardown, and evaluate steps of the test.
@@ -55,94 +32,9 @@ using std::vector;
 // setup. Test setup functions may allocate additional resources. If they do
 // then the allocated resources they should be freed by test teardown
 // function. Suite and/or Test compare functions may consume this shared data,
-// but it will not be shared with the execution of testFn.
+// but it will not be shared with the execution of function_to_test.
-// This function is called to execute a test suite. You provide it with some
-// configuration info, optional utility callback functions, and test data (input
-// parameters for each call to testFn and the expected result). It returns a
-// TestResults that should be treated as an opaque data type. Not all parameters
-// are named in code, but they are named and explained in the comments and will
-// be described by those names below.
-// string suite_label - This is the name of this test suite. It is used for
-// reporting messages. FnToTest testFn - This is the function to test. This
-// may be replaced if necessary by function. It may not currently support
-// class methods, but that is planned. vector<tuple<...>> tests - This is the
-// test run data. Each tuple in the vector is a single test run. It's members
-// are explained below.
-// string test_name - This is the name of this test. It is used for
-// reporting messages. TResult expectedOutput - This is the expected result
-// of executing this test. bool(*)(const TResult expected, const TResult
-// actual) maybe_compare_function - This is optional. If unset or set to
-// nullptr it is skipped. If set to a function it is called to evaluate the
-// test results. It takes the expected and actual results as parameters and
-// should return true if the test passed and false otherwise. This may be
-// changed to return a TestResults at some point. void(*)(TInputParams...)
-// testSetupFn - This is optional. If unset or set to nullptr it is skipped.
-// If set to a function it is called before each test to setup the
-// environment for the test. You may use it to allocate resources and setup
-// mocks, stubs, and spies. void(*)(TInputParams...) testTeardownFn - This
-// is optiona. If unset or set to nullptr it is skipped. If set to a
-// function it is called after each test to cleanup the environment after
-// the test. You should free resources allocated by testSetupFn. bool
-// isEnabled - This is optional. If unset or set to true the test is run. If
-// set to false this test is skipped. If skipped it will be reported as a
-// skipped/disabled test.
-// bool(*)(const TResult expected, const TResult actual)
-// suite_compare_function - This is optional. If unset or set to nullptr it is
-// skipped. If set to a function and maybe_compare_function is not called for
-// a test run then this function is called to evaluate the test results. It
-// takes the expected and actual results as parameters and should return true
-// if the test passed and false otherwise. This may be changed to return a
-// TestResults at some point. void(*)() suiteSetupFn - This is optional. If
-// unset or set to nullptr it is skipped. If set to a function it is called
-// before starting this test suite to setup the environment. You may use it to
-// allocate resources and setup mocks, stubs, and spies. void(*)()
-// suiteTeardownFn - This is optional. If unset or set to nullptr it is
-// skipped. If set to a function it is called after all tests in this suite
-// have finished and all reporting has finished. You should free resources
-// allocated by suiteSetupFn.
-// This method should be called like so. This is the minimal call and omits all
-// of the optional params. This is the most common usage. You should put one
-// tuple of inputs and expected output for each test case.
-// results = collect_and_report_TestResultstest_fn(
-// "Test: functionUnderTest",
-// functionUnderTest,
-// vector({
-// make_tuple(
-// "ShouldReturnAppleForGroupId_1_and_ItemId_2",
-// string("Apple"),
-// make_tuple(1,2),
-// ),
-// }),
-// );
-// The suites can be run from one file as such. From a file called
-// ThingDoer_test.cpp to test the class/methods ThingDoer declared in
-// ThingDoer.cpp. This isn't mandatory but is a best practice. You can use
-// testFn without calling collect_and_report_TestResults() and also could call
-// it from a normal int main(int argc, char* argv[]) or other function.
-// TestResults test_main_ThingDoer(int argc, char* argv[]) {
-// TestResults results;
-// results = collect_and_report_TestResults(results, testFn("doThing1",
-// ...), argc, argv); results = collect_and_report_TestResults(results,
-// testFn("doThing2", ...), argc, argv); return results;
-// }
-// Then some test harness either generated or explicit can call
-// test_main_ThingDoer(...) and optionally reported there. Reporting granularity
-// is controlled by how frequently you call collect_and_report_TestResults(...).
-// You can combine test results with results = results + testFn(..); and then
-// collect_and_report_TestResults on the aggregate TestResults value.
-// _Step_9 - if T2 is a single value then make_tuple<T2>(T2) and call longer
-// version auto testFunction = [](int id){return id==0?"":"";}; auto
-// compareFunction = [](const string a, const string b){return a==b;};
-// template<typename TResult, typename FnToTest, typename... TInputParams>
-// _Step_10 -
-// test_fn(string, _FnToTest, vector<tuple<string, _T1, _CompareFn,
-// <tuple<_T2...>>>)
-// Default to (string, _FnToTest, vector<tuple<"", _T1, [](a,b){return a==b;},
-// make_tuple()) Also allow make_tuple(T2) if the last param is not a tuple.
+// Begin TestResults methods
 TestResults::TestResults() : errors_(0), failed_(0), passed_(0), skipped_(0), total_(0) {}
 TestResults::TestResults(const TestResults& other)
@@ -309,6 +201,8 @@ void PrintResults(std::ostream& os, TestResults results) {
 os << "Errors: " << results.errors() << " 🔥" << endl;
 }
+// End TestResults methods.
 MaybeTestConfigureFunction DefaultTestConfigureFunction() {
 return std::nullopt;
 }

tinytest.h

@@ -1,54 +1,24 @@
 #ifndef TEST_H__
 #define TEST_H__
 /***************************************************************************************
-* @file test.h
-*
-* @brief Defines structs and functions for implementing TinyTest.
-* @copyright
-* Copyright 2023 Tom Hicks
-* Licensed under the MIT license see the LICENSE file for details.
+* @file tinytest.h                                                                    *
+*                                                                                     *
+* @brief Defines structs and functions for implementing TinyTest.                     *
+* @copyright Copyright 2023 Tom Hicks <headhunter3@gmail.com>                         *
+* Licensed under the MIT license see the LICENSE file for details.                    *
 ***************************************************************************************/
 #include <cstdint>
+#include <functional>
 #include <iostream>
+#include <optional>
 #include <sstream>
 #include <string>
 #include <tuple>
 #include <utility>
+#include <vector>
-// Test lifecycle
-// suite_setup_function(); - This is called to allocate any suite level
-// resources. This is called once when the suite begins. These functions may be
-// called in parallel but execution will not proceed past this block until they
-// have all finished.
-// test_setup_function(); - This is called once for every test in tests. You
-// may use it to allocate resources or setup mocks, stubs, and spies.
-// function_to_test(...); - This is called once for every test to execute the
-// test. Only one of these test functions will actually be run for each test
-// in tests. They should return true if the test passed, return false if the
-// test failed or there was an error, and be nullptr if they should be
-// skipped. The executed function will be called with expected_output and the
-// result of function_to_test(...). They can be used to test functions with
-// side effects, especially void functions. test_compare_function; - This is
-// the highest priority compare function. If it is not nullptr then it will be
-// called. suite_compare_function; - This is the second highest priority
-// compare function. If test_compare_function is nullptr and this is not
-// nullptr then it will be called.
-// [](TResult expected, TResult actual) { return expected, actual; } - This is
-// the lowest priority compare function. If all other compare functions are
-// nullptr then this will be called to evaluate the test.
-// test_teardown_function();
-// - This is called once for every test in tests. You must free/release any
-// resources allocated by test_setup_function.
-// This ends the parallel test functions section all tests will have completed
-// before execution proceeds. Collect reports - Ths step is not visible to the
-// user at this point, but data returned by all of the test functions is
-// collected here. This is where you will eventually be able to format/log data
-// for reports. suite_teardown_function(); - This is called after all test calls
-// have completed, all test_teardown_function calls have completed, and all test
-// reports/logs have been written. You should free any resources allocated in
-// suite_setup_function.
+// TODO: Document this.
 // Tuple printer from:
 // https://stackoverflow.com/questions/6245735/pretty-print-stdtuple/31116392#58417285
 template <typename TChar, typename TTraits, typename... TArgs>
@@ -57,6 +27,7 @@ auto& operator<<(std::basic_ostream<TChar, TTraits>& os, std::tuple<TArgs...> co
 return os;
 }
+// TODO: Document this.
 template <typename TChar, typename TTraits, typename TItem>
 auto& operator<<(std::basic_ostream<TChar, TTraits>& os, std::vector<TItem> v) {
 os << "[ ";
@@ -70,6 +41,7 @@ auto& operator<<(std::basic_ostream<TChar, TTraits>& os, std::vector<TItem> v) {
 return os;
 }
+// TODO: Document this.
 template <typename TChar, typename TTraits, typename TItem>
 auto& compare(std::basic_ostream<TChar, TTraits>& error_message,
 std::vector<TItem> expected,
@@ -90,10 +62,6 @@ auto& compare(std::basic_ostream<TChar, TTraits>& error_message,
 }
 namespace TinyTest {
-using std::string;
-using std::tuple;
-using std::vector;
 /// @brief
 class TestResults {
 public:
@@ -159,7 +127,7 @@ class TestResults {
 /// @brief Getter for the list of error messages.
 /// @return
-vector<string> error_messages();
+std::vector<std::string> error_messages();
 /// @brief Getter for the count of errors.
 /// @return
@@ -171,7 +139,7 @@ class TestResults {
 /// @brief Getter for the list of failure messages.
 /// @return The list of failure messages.
-vector<string> failure_messages();
+std::vector<std::string> failure_messages();
 /// @brief Getter for the count of passed tests.
 /// @return The count of passed tests.
@@ -183,7 +151,7 @@ class TestResults {
 /// @brief Getter for the list of skip messages.
 /// @return The list of skip messages.
-vector<string> skip_messages();
+std::vector<std::string> skip_messages();
 /// @brief Getter for the count of total tests.
 /// @return The count of total tests run.
@@ -221,17 +189,21 @@ using TestCompareFunction = std::function<bool(const TResult& expected, const TR
 template <typename TResult>
 using MaybeTestCompareFunction = std::optional<TestCompareFunction<TResult>>;
+// TODO: Document this.
 template <typename TResult>
 MaybeTestCompareFunction<TResult> DefaultTestCompareFunction() {
 return std::nullopt;
 }
+// TODO: Document this.
 using TestConfigureFunction = std::function<void()>;
+// TODO: Document this.
 using MaybeTestConfigureFunction = std::optional<TestConfigureFunction>;
+// TODO: Document this.
 MaybeTestConfigureFunction DefaultTestConfigureFunction();
 // TODO: For some reason all hell breaks loose if test_name or expected output
-// are const&. Figure out why.
+// are const&. Figure out why. Probably need to use decay and make const& where we want it explicitly.
 /// @brief
 /// @tparam TResult
 /// @tparam ...TInputParams
@@ -347,7 +319,6 @@ using TestSuite = std::tuple<std::string,
 // collect_and_report_test_results(...). You can combine test results with
 // results = results + function_to_test(..); and then
 // collect_and_report_test_results on the aggregate TestResults value.
-
 /// @brief
 /// @tparam TResult The result type of the test.
 /// @tparam TInputParams... The types of parameters sent to the test function.
@@ -369,7 +340,7 @@ using TestSuite = std::tuple<std::string,
 template <typename TResult, typename... TInputParams>
 TestResults execute_suite(std::string suite_label,
 std::function<TResult(TInputParams...)> function_to_test,
-vector<TestTuple<TResult, TInputParams...>> tests,
+std::vector<TestTuple<TResult, TInputParams...>> tests,
 MaybeTestCompareFunction<TResult> suite_compare = std::nullopt,
 MaybeTestConfigureFunction before_all = std::nullopt,
 MaybeTestConfigureFunction after_all = std::nullopt,
@@ -428,7 +399,7 @@ TestResults execute_suite(std::string suite_label,
 results.error(qualified_test_name + " " + os.str());
 std::cout << " 🔥ERROR: " << os.str() << std::endl;
 } catch (...) {
-string message =
+std::string message =
 "Caught something that is neither an std::exception "
 "nor an std::string.";
 results.error(qualified_test_name + " " + message);
@@ -511,9 +482,9 @@ TestResults execute_suite(std::string suite_label,
 /// skipped for reporting purposes.
 /// @return A TestTuple suitable for use as a test run when calling test_fn.
 template <typename TResult, typename... TInputParams>
-TestTuple<TResult, TInputParams...> make_test(const string& test_name,
+TestTuple<TResult, TInputParams...> make_test(const std::string& test_name,
 const TResult& expected,
-tuple<TInputParams...> input_params,
+std::tuple<TInputParams...> input_params,
 MaybeTestCompareFunction<TResult> test_compare_fn = std::nullopt,
 MaybeTestConfigureFunction before_each = std::nullopt,
 MaybeTestConfigureFunction after_each = std::nullopt,
@@ -534,9 +505,9 @@ TestTuple<TResult, TInputParams...> make_test(const string& test_name,
 /// @param is_enabled
 /// @return
 template <typename TResult, typename TFunctionToTest, typename... TInputParams>
-TestSuite<TResult, TInputParams...> make_test_suite(const string& suite_name,
+TestSuite<TResult, TInputParams...> make_test_suite(const std::string& suite_name,
 TFunctionToTest function_to_test,
-vector<TestTuple<TResult, TInputParams...>> test_data,
+std::vector<TestTuple<TResult, TInputParams...>> test_data,
 MaybeTestCompareFunction<TResult> compare = std::nullopt,
 MaybeTestConfigureFunction before_each = std::nullopt,
 MaybeTestConfigureFunction after_each = std::nullopt,
@@ -546,7 +517,7 @@ TestSuite<TResult, TInputParams...> make_test_suite(const string& suite_name,
 template <typename TResult, typename TFunctionToTest, typename... TInputParams>
 TestSuite<TResult, TInputParams...> make_test_suite(
-const string& suite_name,
+const std::string& suite_name,
 TFunctionToTest function_to_test,
 std::initializer_list<TestTuple<TResult, TInputParams...>> test_data,
 MaybeTestCompareFunction<TResult> compare = std::nullopt,
@@ -595,16 +566,4 @@ MaybeTestConfigureFunction coalesce(MaybeTestConfigureFunction first, MaybeTestC
 /// @param results The TestResults to write.
 void PrintResults(std::ostream& os, TestResults results);
 } // End namespace TinyTest
-// TODO: Add TShared(*)(string /*test_name*/, UUID /*test_run_id*/)
-// allocate_shared_data to the test tuple to make some shared data that can be
-// used in a thread safe way by setup, teardown, and evaluate steps of the test.
-// TODO: Add TShared to be returned by the setup functions, and consumed by the
-// evaluate and teardown functions.
-// Suite setup/teardown functions should allocate/free.
-// Test setup/teardown functions should consume the data allocated by suite
-// setup. Test setup functions may allocate additional resources. If they do
-// then the allocated resources they should be freed by test teardown
-// function. Suite and/or test compare functions may consume this shared data,
-// but it will not be shared with the execution of function_to_test.
 #endif // End !defined TEST_H__