
This function computes the precision (positive predictive value) of predicted values against observed values.

Usage

precision(true, pred, weights = NULL, multi, ...)

Arguments

true

a vector (or a matrix) of observed values. If a matrix is provided, a multi-response is assumed

pred

a vector (or a matrix) of predicted values

weights

observation weights (not implemented yet)

multi

what to do when the response has multiple output values; the available options are listed below, and a usage sketch follows this section

average

errors of multiple outputs are averaged to get a single value for each observation

micro

the score is computed from the global numbers of true positives, false negatives, and false positives across all classes

macro

scores of the different classes are averaged with an unweighted mean to get a single value. Class imbalance is not taken into account

weighted

scores of the different classes are averaged with a weighted mean to get a single value, where the weights are the number of true instances per class. This takes label imbalance into account

raw

returns a vector containing one score for each class

binary

returns the score for the class specified by the positive argument

...

further arguments to multiresponse_classification_metric
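
As an illustration, here is a hedged usage sketch. The data are made up, and the positive argument is an assumption: it does not appear in the signature above and is presumed to be forwarded through ... :

# Illustrative call sketch; true/pred data are invented and the
# positive argument is assumed to be forwarded through ...
true <- c(1, 0, 1, 1, 0, 1)  # observed binary labels
pred <- c(1, 0, 0, 1, 1, 1)  # predicted binary labels

precision(true = true, pred = pred, multi = "binary", positive = 1)
precision(true = true, pred = pred, multi = "raw")  # one score per class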

Value

A single score for the selected class if multi = "binary"; a vector containing one score for each class if multi = "raw"; a summary score computed from the global metrics if multi = "micro"; an unweighted average of the per-class scores if multi = "macro"; or a weighted average of the per-class scores if multi = "weighted".

Details

Precision is the fraction of elements predicted to have a condition that actually have it. It is defined as:

$$\textit{precision} = \textit{positive predictive value (PPV)} = \frac{TP}{PP} = \frac{TP}{TP + FP} = 1 - FDR$$

where TP is the number of true positives, FP the number of false positives, PP the total number of predicted positives, and FDR the false discovery rate.

The optimal value is 1 and the worst value is 0.
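
For example, here is a minimal base-R illustration of this formula, independent of the package implementation and using made-up binary labels:

# Base-R illustration of the formula above (not the package's implementation)
true <- c(1, 0, 1, 1, 0, 1)
pred <- c(1, 0, 0, 1, 1, 1)
TP <- sum(pred == 1 & true == 1)  # true positives: 3
FP <- sum(pred == 1 & true == 0)  # false positives: 1
TP / (TP + FP)                    # precision = 3 / 4 = 0.75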

Author

Alessandro Barberis