| | |
|---|---|
| Title: | Validation Tools for PIK-PIAM |
| Description: | The piamValidation package provides validation tools for the Potsdam Integrated Assessment Modelling environment. |
| Authors: | Pascal Weigmann [aut, cre], Oliver Richters [aut] |
| Maintainer: | Pascal Weigmann <[email protected]> |
| License: | LGPL-3 |
| Version: | 0.3.7 |
| Built: | 2024-10-24 05:16:36 UTC |
| Source: | https://github.com/pik-piam/piamValidation |
Construct tooltips for interactive plots.

appendTooltips(df)

| Argument | Description |
|---|---|
| df | data.frame as returned from 'validateScenarios()' |
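A minimal sketch of how this helper might be used on validation results before plotting; the data file and config name below are placeholders, not files shipped with the package:

```r
library(piamValidation)

# validate scenario data first (file name and config name are assumptions)
valiData <- validateScenarios("scenario_data.mif",
                              config = "validationConfig_example.csv")

# attach tooltip text so interactive heatmaps can show details on hover
valiData <- appendTooltips(valiData)
```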
Test whether the unit in one row of the config and the unit of the data for this variable match.

checkUnits(data, cfgRow)

| Argument | Description |
|---|---|
| data | scenario or reference data for one variable |
| cfgRow | one row of a config file containing the same variable as the data object |
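A hedged sketch of calling this check for a single variable; the 'cfg' and 'scenData' objects, their column layout and the variable name are assumptions made for illustration:

```r
# 'cfg' is assumed to be a validation config read as a data.frame and
# 'scenData' a long-format scenario data.frame with a 'variable' column
cfgRow  <- cfg[cfg$variable == "Emi|CO2", ][1, ]        # hypothetical variable name
varData <- scenData[scenData$variable == "Emi|CO2", ]

# complains if the unit in the config row and the unit in the data disagree
checkUnits(varData, cfgRow)
```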
For one row of the config: filter and merge the relevant scenario data with the config row. The result is one data.frame containing scenario data, reference data and thresholds.

combineData(scenData, cfgRow, histData = NULL)

| Argument | Description |
|---|---|
| scenData | scenario data |
| cfgRow | one row of a config file |
| histData | reference data |
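A sketch of turning one config row into a combined data.frame, assuming 'cfg', 'scenData' and 'histData' exist as described above:

```r
row <- cfg[1, ]   # first row of the config (see assumptions above)

# merge scenario data, reference data and thresholds for this config row
combined <- combineData(scenData, row, histData = histData)
head(combined)
```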
Performs the validation checks from a config on a scenario data set.

validateScenarios(dataPath, config, outputFile = NULL, extraColors = TRUE)

| Argument | Description |
|---|---|
| dataPath | one or multiple path(s) to scenario data in .mif or .csv format; in case of historic comparison, also the path to the reference data |
| config | select a config from inst/config or give a full path to a config file on your computer |
| outputFile | name of the output file in case results should be exported; include the file extension |
| extraColors | if TRUE, use cyan and blue for violations of min thresholds instead of the same colors as for max thresholds (yellow and red) |
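A hedged usage example; the data files, config name and output file name are placeholders:

```r
valiData <- validateScenarios(
  dataPath    = c("scenario_data.mif", "historical.mif"),  # scenario + reference data
  config      = "validationConfig_example.csv",            # or a name from inst/config
  outputFile  = "validation_results.csv",                  # optional export, with extension
  extraColors = TRUE
)
```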
Takes the output of 'validateScenarios()' and plots heatmaps per variable.

validationHeatmap(df, var, met, historical = TRUE, interactive = TRUE, x_plot = "region", y_plot = "period", x_facet = "model", y_facet = "scenario")

| Argument | Description |
|---|---|
| df | data.frame as returned by 'validateScenarios()' and 'appendTooltips()' |
| var | variable to be plotted |
| met | metric, one of "relative", "difference", "absolute" or "growthrate" |
| historical | whether this plot compares scenario data to historical data |
| interactive | if TRUE, return the plot as an interactive plotly plot (default) |
| x_plot | dimension to display on the x-axis of the plot, default: region |
| y_plot | dimension to display on the y-axis of the plot, default: period |
| x_facet | dimension to display on the x-dimension of the facets, default: model |
| y_facet | dimension to display on the y-dimension of the facets, default: scenario |
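A sketch of plotting one variable from the validation results; the variable name is a placeholder and 'valiData' is assumed to come from the calls shown above:

```r
valiData <- appendTooltips(valiData)   # needed for hover tooltips in interactive mode

validationHeatmap(
  valiData,
  var = "Emi|CO2",          # hypothetical variable name
  met = "relative",
  historical  = TRUE,
  interactive = TRUE        # returns an interactive plotly heatmap
)
```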
Returns information on whether scenarios passed critical validation checks.

validationPass(data, yellowFail = FALSE)

| Argument | Description |
|---|---|
| data | data.frame as returned from 'validateScenarios()' |
| yellowFail | if TRUE, a yellow check result on a critical variable leads to the scenario not passing as validated |
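A short sketch, assuming 'valiData' was produced by 'validateScenarios()' as above:

```r
# check whether each scenario passes the critical checks; with yellowFail = TRUE
# a yellow result on a critical variable already counts as a failure
passed <- validationPass(valiData, yellowFail = TRUE)
passed
```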
Performs validateScenarios() and creates an .html report using .Rmd templates.

validationReport(dataPath, config, report = "default", outputDir = "output", extraColors = TRUE)

| Argument | Description |
|---|---|
| dataPath | one or multiple path(s) to scenario data in .mif or .csv format |
| config | name of a config from inst/config ("validationConfig_<name>.csv") or a full path to a separate configuration file |
| report | name of a .Rmd from inst/markdown ("validationReport_<name>.Rmd") to be rendered, or a full path to a separate .Rmd file |
| outputDir | directory to save validation reports to |
| extraColors | if TRUE, use cyan and blue for violations of min thresholds instead of the same colors as for max thresholds (yellow and red) |
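A hedged end-to-end example; the data path and the config name are placeholders and may not exist in the installed package:

```r
# renders the "default" .Rmd template into an .html report in the "output" directory
validationReport(
  dataPath  = "scenario_data.mif",
  config    = "example",      # assumed to resolve to validationConfig_example.csv
  report    = "default",      # validationReport_default.Rmd from inst/markdown
  outputDir = "output"
)
```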