--- title: "Testing Suite" author: "Anastasis Giannousakis" date: "5/21/2021" output: html_document vignette: > %\VignetteIndexEntry{Testing Suite} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r setup, include=FALSE} knitr::opts_chunk$set(echo = TRUE) ``` ## Introduction This is an introduction to the Testing Suite for big models developed at PIK RD3. For big models no standard continuous integration (CI) techniques can be deployed, while unit tests are not easy to apply and do not provide tests for the complete model workflow. We therefore developed a custom Testing Suite. The Testing Suite tests **several scenarios**, including various **output routines** and interfaces with **input data preparation**. Further, it checks the model results for **consistency**. It is fully automated (runs every weekend), documents its findings in a gitlab project, and sends a weekly email with each new report to the team developing the model. ## How it works: A scheduled and recurring job runs on the cluster every weekend, starting model runs according to a predefined set of scenarios. Once the runs are finished the job creates an overview of the results using [rs2](http://htmlpreview.github.io/?https://github.com/pik-piam/modelstats/blob/master/vignettes/rs2.html), and (optionally) additionally runs [compareScenarios](https://github.com/pik-piam/remind2/blob/master/R/compareScenarios.R) comparing each run with the previous run with the same scenario name (the comparison PDF files are found in each scenario folder, they also contain comparisons with historical data). It also uploads all runs to the [shinyResults::appResults](https://github.com/pik-piam/shinyresults/blob/master/R/appResults.R) app for easy and interactive viewing. Also here a comparison with historical data is found. Finally, using a gitlab integration hook an overview of the results is automatically reported to the developer team via email. ## How to use/contribute: If the report contains runs that failed, or output that looks wrong (or has not been generated at all) someone has to take action. If there are cases for which you feel responsible or in the position to fix, please do so. This early warning system ensures stable workflows for all members of the team. If you want your own scenarios to be included in future tests: For REMIND, add your scenarios to the `scenario_config.csv`, add `AMT` in the `start` column and simply commit it to the model's develop branch (similar for MAgPIE). They will be automatically executed the next time the tests are run. For wishes, changes etc, please contact RSE or open a new issue here: https://github.com/pik-piam/modelstats/issues