This function creates an interactive plot representing the evaluation of a learning method across different training-set sizes.

Usage

plotly_evaluation(
  data,
  tooltip = c("imodel", "screened", "nfeatures", "configuration"),
  ...
)

Arguments

data

a data.frame containing the summary of an object of class Renoir, as returned by summary_table. This function expects the following columns:

training_set_size

contains the considered training-set sizes

score

contains the performance metric for each model

mean_score

contains the mean performance metric for the specific training-set size

lower_ci

contains the lower bound of the confidence interval for the mean score

upper_ci

contains the upper bound of the confidence interval for the mean score

best_resample

contains the index of the automatically selected optimal training-set size

best_model

contains the index of the best model for the optimal training-set size

name

contains a grouping key, e.g. the learning method

The name column is used to determine the number of evaluations to plot; a sketch of a conforming data.frame is shown after this list

tooltip

a character vector, the subset of columns used to build the tooltip text. It is ignored if a tooltip column is already present in data

...

further arguments passed to plotly_single_evaluation or plotly_multi_evaluation
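
As a rough illustration of the expected input, the sketch below builds a minimal data.frame with the documented columns. All values and the method name "lasso" are hypothetical, and it assumes plotly_evaluation is available on the search path (e.g. via the Renoir package). The tooltip argument is set to columns actually present in this data, since the defaults refer to columns of a full summary_table output:

scores <- c(0.62, 0.65, 0.60, 0.71, 0.73, 0.70, 0.76, 0.78, 0.75)

df <- data.frame(
  training_set_size = rep(c(50, 100, 150), each = 3),  # three sizes, three resamples each
  score             = scores,                          # per-model performance
  mean_score        = rep(c(0.623, 0.713, 0.763), each = 3),
  lower_ci          = rep(c(0.59, 0.69, 0.74), each = 3),
  upper_ci          = rep(c(0.66, 0.74, 0.79), each = 3),
  best_resample     = 3,                               # index of the selected optimal training-set size
  best_model        = 8,                               # index of the best model at that size
  name              = "lasso"                          # single key: one evaluation
)

p <- plotly_evaluation(data = df, tooltip = c("score", "name"))
p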

Value

An object of class plotly

Details

An interactive plot showing the mean performance and the related 95% confidence interval across different training-set sizes is produced. The evaluated element is identified by the name column of data. If a single unique key is found, plotly_single_evaluation is dispatched; if multiple keys are found, plotly_multi_evaluation is dispatched instead. In the latter case, multiple evaluations are reported in the same plot.
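
Continuing the hypothetical sketch from the Arguments section, stacking the summary of a second method introduces a second unique key in name, so plotly_multi_evaluation is dispatched and both evaluations are drawn in the same plot:

# Hypothetical second method: shift the first sketch's values down
df2 <- df
df2$name <- "ridge"
for (col in c("score", "mean_score", "lower_ci", "upper_ci")) {
  df2[[col]] <- df2[[col]] - 0.05
}

# Two unique keys in 'name' => plotly_multi_evaluation is dispatched
plotly_evaluation(data = rbind(df, df2), tooltip = c("score", "name"))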

Author

Alessandro Barberis