This function creates a plot representing the evaluation of a learning method across different training-set sizes.
Arguments
- data
  A data.frame containing the summary of an object of class Renoir, as returned by summary_table. This function expects specific columns:
  - training_set_size: the considered training-set sizes
  - score: the performance metric for each model
  - mean_score: the mean performance metric for the specific training-set size
  - lower_ci: the lower bound of the confidence interval for the mean score
  - upper_ci: the upper bound of the confidence interval for the mean score
  - best_resample: the index of the automatically selected optimal training-set size
  - best_model: the index of the best model for the optimal training-set size
  - name: a grouping key, e.g. the learning method
  The name column is used to identify the number of considered evaluations.
- ...
  Further arguments passed to plot_single_evaluation or plot_multi_evaluation.
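Assuming the column layout described above, a minimal input could be built as follows. All values here are illustrative; a real table would come from summary_table.

```r
# Minimal illustrative data.frame with the columns this function expects.
# The values are made up for demonstration purposes only.
df <- data.frame(
  training_set_size = c(50, 100, 150),   # considered training-set sizes
  score             = c(0.70, 0.78, 0.81), # performance metric per model
  mean_score        = c(0.68, 0.77, 0.80), # mean metric per training-set size
  lower_ci          = c(0.60, 0.71, 0.75), # lower CI bound of the mean score
  upper_ci          = c(0.76, 0.83, 0.85), # upper CI bound of the mean score
  best_resample     = 3,        # index of the selected optimal training-set size
  best_model        = 3,        # index of the best model for that size
  name              = "lasso"   # grouping key, e.g. the learning method
)
```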
Details
A plot showing the mean performance and the related 95% confidence interval across different training-set sizes is produced.
The evaluated element is identified by the name column in data. If a unique key is found, plot_single_evaluation is dispatched; if multiple keys are found, plot_multi_evaluation is dispatched instead. In the latter case, multiple evaluations are reported in the same plot.
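The dispatch described above can be sketched as follows. The helper names plot_single_evaluation and plot_multi_evaluation come from this page, but the dispatcher body is an assumption for illustration, not the package's actual implementation.

```r
# Hypothetical sketch of the dispatch on the number of unique grouping keys.
# The two helpers are stubbed here so the sketch is self-contained.
plot_single_evaluation <- function(data, ...) "single"  # stub for illustration
plot_multi_evaluation  <- function(data, ...) "multi"   # stub for illustration

plot_evaluation <- function(data, ...) {
  # Count distinct grouping keys in the name column.
  n_keys <- length(unique(data$name))
  if (n_keys == 1L) {
    plot_single_evaluation(data, ...)  # one evaluated learning method
  } else {
    plot_multi_evaluation(data, ...)   # several evaluations in the same plot
  }
}
```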