Visualization

Module functions


In order to use functions in this module, import visualize as follows:

import ludwig
from ludwig import visualize

learning_curves

ludwig.visualize.learning_curves(
  train_stats_per_model,
  output_feature_name=None,
  model_names=None,
  output_directory=None,
  file_format='pdf',
  callbacks=None
)

Show how model metrics change over training and validation data epochs.

For each model and for each output feature and metric of the model, it produces a line plot showing how that metric changed over the course of the epochs of training on the training and validation sets.

Inputs

  • train_stats_per_model (List[dict]): list of dictionaries of training statistics, one per model.
  • output_feature_name (Union[str, None], default: None): name of the output feature to use for the visualization. If None, use all output features.
  • model_names (Union[str, List[str]], default: None): model name or list of the model names to use as labels.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.
  • callbacks (list, default: None): a list of ludwig.callbacks.Callback objects that provide hooks into the Ludwig pipeline.

Return

  • return (None): (None)
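
Example usage, a minimal sketch assuming config and dataset are defined as in the usual Ludwig training workflow; train() returns the training statistics as the first element of its result tuple:

from ludwig.api import LudwigModel
from ludwig.visualize import learning_curves

model = LudwigModel(config)
# train() returns (training_statistics, preprocessed_data, output_directory)
train_stats, _, _ = model.train(dataset=dataset)
learning_curves([train_stats], model_names=["my_model"])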

DeveloperAPI: This API may change across minor Ludwig releases.


compare_performance

ludwig.visualize.compare_performance(
  test_stats_per_model,
  output_feature_name=None,
  model_names=None,
  output_directory=None,
  file_format='pdf'
)

Produces a model comparison bar plot for each overall metric.

For each model (in the aligned lists of test_statistics and model_names) it produces bars in a bar plot, one for each overall metric available in the test_statistics file for the specified output_feature_name.

Inputs

  • test_stats_per_model (List[dict]): list of dictionaries containing evaluation performance statistics, one per model.
  • output_feature_name (Union[str, None], default: None): name of the output feature to use for the visualization. If None, use all output features.
  • model_names (Union[str, List[str]], default: None): model name or list of the model names to use as labels.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.

Return

  • return (None): (None)

Example usage:

from ludwig.api import LudwigModel
from ludwig.visualize import compare_performance

model_a = LudwigModel(config)
model_a.train(dataset)
a_evaluation_stats, _, _ = model_a.evaluate(eval_set)

model_b = LudwigModel.load("path/to/model/")
b_evaluation_stats, _, _ = model_b.evaluate(eval_set)

compare_performance([a_evaluation_stats, b_evaluation_stats], model_names=["A", "B"])

DeveloperAPI: This API may change across minor Ludwig releases.


compare_classifiers_performance_from_prob

ludwig.visualize.compare_classifiers_performance_from_prob(
  probabilities_per_model,
  ground_truth,
  metadata,
  output_feature_name,
  labels_limit=0,
  top_n_classes=3,
  model_names=None,
  output_directory=None,
  file_format='pdf',
  ground_truth_apply_idx=True
)

Produces a model comparison bar plot from prediction probabilities.

For each model it produces bars in a bar plot, one for each overall metric computed on the fly from the probabilities of predictions for the specified model_names.

Inputs

  • probabilities_per_model (List[np.ndarray]): list of model probabilities.
  • ground_truth (pd.Series): ground truth values
  • metadata (dict): feature metadata dictionary
  • output_feature_name (str): output feature name
  • top_n_classes (List[int]): list containing the number of classes to plot.
  • labels_limit (int): upper limit on the numeric encoded label value. Encoded numeric label values in dataset that are higher than labels_limit are considered to be "rare" labels.
  • model_names (Union[str, List[str]], default: None): model name or list of the model names to use as labels.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.
  • ground_truth_apply_idx (bool, default: True): whether to use metadata['str2idx'] in np.vectorize

Return

  • return (None): (None)
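
Example usage, a hedged sketch: probs_a and probs_b stand in for per-model probability arrays (one row per example), eval_df for the evaluation dataframe, and train_set_metadata for the metadata dictionary produced at preprocessing time; all of these names are placeholders:

from ludwig.visualize import compare_classifiers_performance_from_prob

compare_classifiers_performance_from_prob(
    [probs_a, probs_b],
    ground_truth=eval_df["label"],
    metadata=train_set_metadata,
    output_feature_name="label",
    labels_limit=0,
    top_n_classes=[5],
    model_names=["A", "B"],
)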

DeveloperAPI: This API may change across minor Ludwig releases.


compare_classifiers_performance_from_pred

ludwig.visualize.compare_classifiers_performance_from_pred(
  predictions_per_model,
  ground_truth,
  metadata,
  output_feature_name,
  labels_limit,
  model_names=None,
  output_directory=None,
  file_format='pdf',
  ground_truth_apply_idx=True
)

Produces a model comparison bar plot from predictions.

For each model it produces bars in a bar plot, one for each overall metric computed on the fly from the predictions for the specified model_names.

Inputs

  • predictions_per_model (List[str]): list of model predictions for the specified output_feature_name.
  • ground_truth (pd.Series): ground truth values
  • metadata (dict): feature metadata dictionary.
  • output_feature_name (str): name of the output feature to visualize.
  • labels_limit (int): upper limit on the numeric encoded label value. Encoded numeric label values in dataset that are higher than labels_limit are considered to be "rare" labels.
  • model_names (Union[str, List[str]], default: None): model name or list of the model names to use as labels.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.
  • ground_truth_apply_idx (bool, default: True): whether to use metadata['str2idx'] in np.vectorize

Return

  • return (None): (None)
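
Example usage, a sketch under the same placeholder assumptions as above, with preds_a and preds_b standing in for per-model predictions:

from ludwig.visualize import compare_classifiers_performance_from_pred

compare_classifiers_performance_from_pred(
    [preds_a, preds_b],
    ground_truth=eval_df["label"],
    metadata=train_set_metadata,
    output_feature_name="label",
    labels_limit=0,
    model_names=["A", "B"],
)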

DeveloperAPI: This API may change across minor Ludwig releases.


compare_classifiers_performance_subset

ludwig.visualize.compare_classifiers_performance_subset(
  probabilities_per_model,
  ground_truth,
  metadata,
  output_feature_name,
  top_n_classes,
  labels_limit,
  subset,
  model_names=None,
  output_directory=None,
  file_format='pdf',
  ground_truth_apply_idx=True
)

Produces a model comparison bar plot from a subset of the training set.

For each model it produces bars in a bar plot, one for each overall metric computed on the fly from the prediction probabilities for the specified model_names, considering only a subset of the full training set. The subset is selected using the top_n_classes and subset parameters.

Inputs

  • probabilities_per_model (List[numpy.array]): list of model probabilities.
  • ground_truth (Union[pd.Series, np.ndarray]): ground truth values
  • metadata (dict): feature metadata dictionary
  • output_feature_name (str): output feature name
  • top_n_classes (List[int]): list containing the number of classes to plot.
  • labels_limit (int): upper limit on the numeric encoded label value. Encoded numeric label values in dataset that are higher than labels_limit are considered to be "rare" labels.
  • subset (str): string specifying type of subset filtering. Valid values are ground_truth or predictions.
  • model_names (Union[str, List[str]], default: None): model name or list of the model names to use as labels.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.
  • ground_truth_apply_idx (bool, default: True): whether to use metadata['str2idx'] in np.vectorize

Return

  • return (None): (None)

DeveloperAPI: This API may change across minor Ludwig releases.


compare_classifiers_performance_changing_k

ludwig.visualize.compare_classifiers_performance_changing_k(
  probabilities_per_model,
  ground_truth,
  metadata,
  output_feature_name,
  top_k,
  labels_limit,
  model_names=None,
  output_directory=None,
  file_format='pdf',
  ground_truth_apply_idx=True
)

Produce a line plot showing the Hits@K metric as k goes from 1 to top_k.

For each model it produces a line plot that shows the Hits@K metric (which counts a prediction as correct if the true label is among the model's top k predictions) while changing k from 1 to top_k for the specified output_feature_name.

Inputs

  • probabilities_per_model (List[numpy.array]): list of model probabilities.
  • ground_truth (Union[pd.Series, np.ndarray]): ground truth values
  • metadata (dict): feature metadata dictionary
  • output_feature_name (str): output feature name
  • top_k (int): number of elements in the rank list to consider.
  • labels_limit (int): upper limit on the numeric encoded label value. Encoded numeric label values in dataset that are higher than labels_limit are considered to be "rare" labels.
  • model_names (Union[str, List[str]], default: None): model name or list of the model names to use as labels.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.
  • ground_truth_apply_idx (bool, default: True): whether to use metadata['str2idx'] in np.vectorize

Return

  • return (None): (None)

DeveloperAPI: This API may change across minor Ludwig releases.


compare_classifiers_multiclass_multimetric

ludwig.visualize.compare_classifiers_multiclass_multimetric(
  test_stats_per_model,
  metadata,
  output_feature_name,
  top_n_classes,
  model_names=None,
  output_directory=None,
  file_format='pdf'
)

Show the precision, recall and F1 of the model for the specified output_feature_name.

For each model it produces four plots that show the precision, recall and F1 of the model on several classes for the specified output_feature_name.

Inputs

  • test_stats_per_model (List[dict]): list of dictionaries containing evaluation performance statistics, one per model.
  • metadata (dict): intermediate preprocess structure created during training containing the mappings of the input dataset.
  • output_feature_name (Union[str, None]): name of the output feature to use for the visualization. If None, use all output features.
  • top_n_classes (List[int]): list containing the number of classes to plot.
  • model_names (Union[str, List[str]], default: None): model name or list of the model names to use as labels.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.

Return

  • return (None): (None)

DeveloperAPI: This API may change across minor Ludwig releases.


compare_classifiers_predictions

ludwig.visualize.compare_classifiers_predictions(
  predictions_per_model,
  ground_truth,
  metadata,
  output_feature_name,
  labels_limit,
  model_names=None,
  output_directory=None,
  file_format='pdf',
  ground_truth_apply_idx=True
)

Show a comparison of two models' predictions for the specified output_feature_name.

Inputs

  • predictions_per_model (List[list]): list containing the model predictions for the specified output_feature_name.
  • ground_truth (Union[pd.Series, np.ndarray]): ground truth values
  • metadata (dict): feature metadata dictionary
  • output_feature_name (str): output feature name
  • labels_limit (int): upper limit on the numeric encoded label value. Encoded numeric label values in dataset that are higher than labels_limit are considered to be "rare" labels.
  • model_names (Union[str, List[str]], default: None): model name or list of the model names to use as labels.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.
  • ground_truth_apply_idx (bool, default: True): whether to use metadata['str2idx'] in np.vectorize

Return

  • return (None): (None)

DeveloperAPI: This API may change across minor Ludwig releases.


confidence_thresholding_2thresholds_2d

ludwig.visualize.confidence_thresholding_2thresholds_2d(
  probabilities_per_model,
  ground_truths,
  metadata,
  threshold_output_feature_names,
  labels_limit,
  model_names=None,
  output_directory=None,
  file_format='pdf'
)

Show confidence threshold data vs accuracy for two output feature names.

The first plot shows several semi-transparent lines. They summarize the 3d surfaces displayed by confidence_thresholding_2thresholds_3d, which have thresholds on the confidence of the predictions of the two threshold_output_feature_names as x and y axes, and either the data coverage percentage or the accuracy as the z axis. Each line represents a slice of the data coverage surface projected onto the accuracy surface.

Inputs

  • probabilities_per_model (List[numpy.array]): list of model probabilities.
  • ground_truths (Union[List[np.array], List[pd.Series]]): list containing ground truth data.
  • metadata (dict): feature metadata dictionary
  • threshold_output_feature_names (List[str]): List containing two output feature names for visualization.
  • labels_limit (int): upper limit on the numeric encoded label value. Encoded numeric label values in dataset that are higher than labels_limit are considered to be "rare" labels.
  • model_names (Union[str, List[str]], default: None): model name or list of the model names to use as labels.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.

Return

  • return (None): (None)

DeveloperAPI: This API may change across minor Ludwig releases.


confidence_thresholding_2thresholds_3d

ludwig.visualize.confidence_thresholding_2thresholds_3d(
  probabilities_per_model,
  ground_truths,
  metadata,
  threshold_output_feature_names,
  labels_limit,
  output_directory=None,
  file_format='pdf'
)

Show 3d confidence threshold data vs accuracy for two output feature names.

The plot shows the 3d surfaces with thresholds on the confidence of the predictions of the two threshold_output_feature_names as x and y axes, and either the data coverage percentage or the accuracy as the z axis.

Inputs

  • probabilities_per_model (List[numpy.array]): list of model probabilities.
  • ground_truths (Union[List[np.array], List[pd.Series]]): list containing ground truth data.
  • metadata (dict): feature metadata dictionary
  • threshold_output_feature_names (List[str]): List containing two output feature names for visualization.
  • labels_limit (int): upper limit on the numeric encoded label value. Encoded numeric label values in dataset that are higher than labels_limit are considered to be "rare" labels.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.

Return

  • return (None): (None)

DeveloperAPI: This API may change across minor Ludwig releases.


confidence_thresholding

ludwig.visualize.confidence_thresholding(
  probabilities_per_model,
  ground_truth,
  metadata,
  output_feature_name,
  labels_limit,
  model_names=None,
  output_directory=None,
  file_format='pdf',
  ground_truth_apply_idx=True
)

Show model accuracy and data coverage while increasing the threshold.

For each model it produces a pair of lines indicating the accuracy of the model and the data coverage while increasing a threshold (x axis) on the probabilities of predictions for the specified output_feature_name.

Inputs

  • probabilities_per_model (List[numpy.array]): list of model probabilities.
  • ground_truth (Union[pd.Series, np.ndarray]): ground truth values
  • metadata (dict): feature metadata dictionary
  • output_feature_name (str): output feature name
  • labels_limit (int): upper limit on the numeric encoded label value. Encoded numeric label values in dataset that are higher than labels_limit are considered to be "rare" labels.
  • model_names (Union[str, List[str]], default: None): model name or list of the model names to use as labels.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.
  • ground_truth_apply_idx (bool, default: True): whether to use metadata['str2idx'] in np.vectorize

Return

  • return (None): (None)
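
Example usage, a sketch under the same placeholder assumptions as the earlier examples:

from ludwig.visualize import confidence_thresholding

confidence_thresholding(
    [probs_a, probs_b],
    ground_truth=eval_df["label"],
    metadata=train_set_metadata,
    output_feature_name="label",
    labels_limit=0,
    model_names=["A", "B"],
)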

DeveloperAPI: This API may change across minor Ludwig releases.


confidence_thresholding_data_vs_acc

ludwig.visualize.confidence_thresholding_data_vs_acc(
  probabilities_per_model,
  ground_truth,
  metadata,
  output_feature_name,
  labels_limit,
  model_names=None,
  output_directory=None,
  file_format='pdf',
  ground_truth_apply_idx=True
)

Show a model comparison of confidence threshold data coverage vs accuracy.

For each model it produces a line indicating the accuracy of the model and the data coverage while increasing a threshold on the probabilities of predictions for the specified output_feature_name. The difference from confidence_thresholding is that it uses two axes instead of three: the threshold is not visualized, and data coverage is on the x axis in place of the threshold.

Inputs

  • probabilities_per_model (List[numpy.array]): list of model probabilities.
  • ground_truth (Union[pd.Series, np.ndarray]): ground truth values
  • metadata (dict): feature metadata dictionary
  • output_feature_name (str): output feature name
  • labels_limit (int): upper limit on the numeric encoded label value. Encoded numeric label values in dataset that are higher than labels_limit are considered to be "rare" labels.
  • model_names (Union[str, List[str]], default: None): model name or list of the model names to use as labels.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.
  • ground_truth_apply_idx (bool, default: True): whether to use metadata['str2idx'] in np.vectorize

Return

  • return (None): (None)

DeveloperAPI: This API may change across minor Ludwig releases.


confidence_thresholding_data_vs_acc_subset

ludwig.visualize.confidence_thresholding_data_vs_acc_subset(
  probabilities_per_model,
  ground_truth,
  metadata,
  output_feature_name,
  top_n_classes,
  labels_limit,
  subset,
  model_names=None,
  output_directory=None,
  file_format='pdf',
  ground_truth_apply_idx=True
)

Show a model comparison of confidence threshold data coverage vs accuracy on a subset of the data.

For each model it produces a line indicating the accuracy of the model and the data coverage while increasing a threshold on the probabilities of predictions for the specified output_feature_name, considering only a subset of the full training set. The subset is selected using the top_n_classes and subset parameters. The difference from confidence_thresholding is that it uses two axes instead of three: the threshold is not visualized, and data coverage is on the x axis in place of the threshold.

If the value of subset is ground_truth, then only datapoints whose ground truth class is among the top n most frequent ones are considered as the test set, and the percentage of datapoints kept from the original set is displayed. If the value of subset is predictions, then only datapoints for which the model predicts a class that is among the top n most frequent ones are considered as the test set, and the percentage of datapoints kept from the original set is displayed for each model.

Inputs

  • probabilities_per_model (List[numpy.array]): list of model probabilities.
  • ground_truth (Union[pd.Series, np.ndarray]): ground truth values
  • metadata (dict): feature metadata dictionary
  • output_feature_name (str): output feature name
  • top_n_classes (List[int]): list containing the number of classes to plot.
  • labels_limit (int): upper limit on the numeric encoded label value. Encoded numeric label values in dataset that are higher than labels_limit are considered to be "rare" labels.
  • subset (str): string specifying type of subset filtering. Valid values are ground_truth or predictions.
  • model_names (Union[str, List[str]], default: None): model name or list of the model names to use as labels.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.
  • ground_truth_apply_idx (bool, default: True): whether to use metadata['str2idx'] in np.vectorize

Return

  • return (None): (None)
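
Example usage, a sketch under the same placeholder assumptions, restricting the plot to datapoints whose ground truth class is among the 5 most frequent:

from ludwig.visualize import confidence_thresholding_data_vs_acc_subset

confidence_thresholding_data_vs_acc_subset(
    [probs_a, probs_b],
    ground_truth=eval_df["label"],
    metadata=train_set_metadata,
    output_feature_name="label",
    top_n_classes=[5],
    labels_limit=0,
    subset="ground_truth",
    model_names=["A", "B"],
)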

DeveloperAPI: This API may change across minor Ludwig releases.


binary_threshold_vs_metric

ludwig.visualize.binary_threshold_vs_metric(
  probabilities_per_model,
  ground_truth,
  metadata,
  output_feature_name,
  metrics,
  positive_label=1,
  model_names=None,
  output_directory=None,
  file_format='pdf',
  ground_truth_apply_idx=True
)

Show the confidence of the model against each metric for the specified output_feature_name.

For each metric specified in metrics (options are f1, precision, recall, accuracy), this visualization produces a line chart plotting a threshold on the confidence of the model against the metric for the specified output_feature_name. If output_feature_name is a category feature, positive_label, specified as the numeric encoded value, indicates the class to be considered the positive class; all others are considered negative. To figure out the association between classes and numeric encoded values, check the ground_truth_metadata JSON file.

Inputs

  • probabilities_per_model (List[numpy.array]): list of model probabilities.
  • ground_truth (Union[pd.Series, np.ndarray]): ground truth values
  • metadata (dict): feature metadata dictionary
  • output_feature_name (str): output feature name
  • metrics (List[str]): metrics to display ('f1', 'precision', 'recall', 'accuracy').
  • positive_label (int, default: 1): numeric encoded value for the positive class.
  • model_names (List[str], default: None): list of the names of the models to use as labels.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.
  • ground_truth_apply_idx (bool, default: True): whether to use metadata['str2idx'] in np.vectorize

Return

  • return (None): (None)
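
Example usage, a sketch under the same placeholder assumptions, plotting F1 and accuracy against the confidence threshold:

from ludwig.visualize import binary_threshold_vs_metric

binary_threshold_vs_metric(
    [probs_a, probs_b],
    ground_truth=eval_df["label"],
    metadata=train_set_metadata,
    output_feature_name="label",
    metrics=["f1", "accuracy"],
    positive_label=1,
    model_names=["A", "B"],
)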

DeveloperAPI: This API may change across minor Ludwig releases.


roc_curves

ludwig.visualize.roc_curves(
  probabilities_per_model,
  ground_truth,
  metadata,
  output_feature_name,
  positive_label=1,
  model_names=None,
  output_directory=None,
  file_format='pdf',
  ground_truth_apply_idx=True
)

Show the ROC curves for the specified output feature across the given models.

This visualization produces a line chart plotting the ROC curves for the specified output feature name. If the output feature is a category feature, positive_label indicates the class to be considered the positive class; all others are considered negative. positive_label is the encoded numeric value for category classes; the association between classes and integers is captured in the training metadata JSON file.

Inputs

  • probabilities_per_model (List[numpy.array]): list of model probabilities.
  • ground_truth (Union[pd.Series, np.ndarray]): ground truth values
  • metadata (dict): feature metadata dictionary
  • output_feature_name (str): output feature name
  • positive_label (int, default: 1): numeric encoded value for the positive class.
  • model_names (Union[str, List[str]], default: None): model name or list of the model names to use as labels.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.
  • ground_truth_apply_idx (bool, default: True): whether to use metadata['str2idx'] in np.vectorize

Return

  • return (None): (None)
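
Example usage, a sketch under the same placeholder assumptions, for a binary output feature:

from ludwig.visualize import roc_curves

roc_curves(
    [probs_a, probs_b],
    ground_truth=eval_df["label"],
    metadata=train_set_metadata,
    output_feature_name="label",
    positive_label=1,
    model_names=["A", "B"],
)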

DeveloperAPI: This API may change across minor Ludwig releases.


roc_curves_from_test_statistics

ludwig.visualize.roc_curves_from_test_statistics(
  test_stats_per_model,
  output_feature_name,
  model_names=None,
  output_directory=None,
  file_format='pdf'
)

Show the ROC curves for the specified models' binary output_feature_name.

This visualization uses the output_feature_name, test_stats_per_model, and model_names parameters. output_feature_name must be a binary feature. This visualization produces a line chart plotting the ROC curves for the specified output_feature_name.

Inputs

  • test_stats_per_model (List[dict]): list of dictionaries containing evaluation performance statistics, one per model.
  • output_feature_name (str): name of the output feature to use for the visualization.
  • model_names (Union[str, List[str]], default: None): model name or list of the model names to use as labels.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.

Return

  • return (None): (None)

DeveloperAPI: This API may change across minor Ludwig releases.


calibration_1_vs_all

ludwig.visualize.calibration_1_vs_all(
  probabilities_per_model,
  ground_truth,
  metadata,
  output_feature_name,
  top_n_classes,
  labels_limit,
  model_names=None,
  output_directory=None,
  file_format='pdf',
  ground_truth_apply_idx=True
)

Show the models' probabilities of predictions for the specified output_feature_name.

For each class, or for each of the n most frequent classes if top_n_classes is specified, it produces two plots computed on the fly from the probabilities of predictions for the specified output_feature_name.

The first plot is a calibration curve that shows the calibration of the predictions considering the current class to be the true one and all others to be a false one, drawing one line for each model (in the aligned lists of probabilities and model_names).

The second plot shows the distributions of the predictions considering the current class to be the true one and all others to be a false one, drawing the distribution for each model (in the aligned lists of probabilities and model_names).

Inputs

  • probabilities_per_model (List[numpy.array]): list of model probabilities.
  • ground_truth (Union[pd.Series, np.ndarray]): ground truth values
  • metadata (dict): feature metadata dictionary
  • output_feature_name (str): output feature name
  • top_n_classes (list): List containing the number of classes to plot.
  • labels_limit (int): upper limit on the numeric encoded label value. Encoded numeric label values in dataset that are higher than labels_limit are considered to be "rare" labels.
  • model_names (List[str], default: None): list of the names of the models to use as labels.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.
  • ground_truth_apply_idx (bool, default: True): whether to use metadata['str2idx'] in np.vectorize

Return

  • return (None): (None)
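
Example usage, a sketch under the same placeholder assumptions:

from ludwig.visualize import calibration_1_vs_all

calibration_1_vs_all(
    [probs_a, probs_b],
    ground_truth=eval_df["label"],
    metadata=train_set_metadata,
    output_feature_name="label",
    top_n_classes=[5],
    labels_limit=0,
    model_names=["A", "B"],
)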

DeveloperAPI: This API may change across minor Ludwig releases.


calibration_multiclass

ludwig.visualize.calibration_multiclass(
  probabilities_per_model,
  ground_truth,
  metadata,
  output_feature_name,
  labels_limit,
  model_names=None,
  output_directory=None,
  file_format='pdf',
  ground_truth_apply_idx=True
)

Show the models' probabilities of predictions for each class of the specified output_feature_name.

Inputs

  • probabilities_per_model (List[numpy.array]): list of model probabilities.
  • ground_truth (Union[pd.Series, np.ndarray]): ground truth values
  • metadata (dict): feature metadata dictionary
  • output_feature_name (str): output feature name
  • labels_limit (int): upper limit on the numeric encoded label value. Encoded numeric label values in dataset that are higher than labels_limit are considered to be "rare" labels.
  • model_names (List[str], default: None): list of the names of the models to use as labels.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.
  • ground_truth_apply_idx (bool, default: True): whether to use metadata['str2idx'] in np.vectorize

Return

  • return (None): (None)

DeveloperAPI: This API may change across minor Ludwig releases.


confusion_matrix

ludwig.visualize.confusion_matrix(
  test_stats_per_model,
  metadata,
  output_feature_name,
  top_n_classes,
  normalize,
  model_names=None,
  output_directory=None,
  file_format='pdf'
)

Show the confusion matrix of the models' predictions for each output_feature_name.

For each model (in the aligned lists of test_statistics and model_names) it produces a heatmap of the confusion matrix in the predictions for each output_feature_name that has a confusion matrix in test_statistics. The value of top_n_classes limits the heatmap to the n most frequent classes.

Inputs

  • test_stats_per_model (List[dict]): list of dictionaries containing evaluation performance statistics, one per model.
  • metadata (dict): intermediate preprocess structure created during training containing the mappings of the input dataset.
  • output_feature_name (Union[str, None]): name of the output feature to use for the visualization. If None, use all output features.
  • top_n_classes (List[int]): number of top classes or list containing the number of top classes to plot.
  • normalize (bool): flag to normalize rows in confusion matrix.
  • model_names (Union[str, List[str]], default: None): model name or list of the model names to use as labels.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.

Return

  • return (None): (None)
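
Example usage, a sketch assuming test_stats_a and test_stats_b are the evaluation statistics dictionaries returned by evaluate() for two models (placeholder names):

from ludwig.visualize import confusion_matrix

confusion_matrix(
    [test_stats_a, test_stats_b],
    metadata=train_set_metadata,
    output_feature_name="label",
    top_n_classes=[10],
    normalize=True,
    model_names=["A", "B"],
)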

DeveloperAPI: This API may change across minor Ludwig releases.


frequency_vs_f1

ludwig.visualize.frequency_vs_f1(
  test_stats_per_model,
  metadata,
  output_feature_name,
  top_n_classes,
  model_names=None,
  output_directory=None,
  file_format='pdf'
)

Show prediction statistics for the specified output_feature_name for each model.

For each model (in the aligned lists of test_stats_per_model and model_names), produces two plots of prediction statistics for the specified output_feature_name.

The first plot is a line plot with one x axis representing the different classes and two vertical axes colored orange and blue respectively. The orange axis shows the frequency of each class, with an orange line plotted to show the trend. The blue axis shows the F1 score for each class, with a blue line plotted to show the trend. The classes on the x axis are sorted by F1 score.

The second plot has the same structure as the first one, but the axes are flipped and the classes on the x axis are sorted by frequency.

Inputs

  • test_stats_per_model (List[dict]): list of dictionaries containing evaluation performance statistics, one per model.
  • metadata (dict): intermediate preprocess structure created during training containing the mappings of the input dataset.
  • output_feature_name (Union[str, None]): name of the output feature to use for the visualization. If None, use all output features.
  • top_n_classes (List[int]): number of top classes or list containing the number of top classes to plot.
  • model_names (Union[str, List[str]], default: None): model name or list of the model names to use as labels.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.

Return

  • return (None): (None)

DeveloperAPI: This API may change across minor Ludwig releases.


hyperopt_report

ludwig.visualize.hyperopt_report(
  hyperopt_stats_path,
  output_directory=None,
  file_format='pdf'
)

Produces a report about hyperparameter optimization, creating one graph per hyperparameter to show the distribution of results, plus one additional graph of pairwise hyperparameter interactions.

Inputs

  • hyperopt_stats_path (str): path to the hyperopt results JSON file.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window.
  • file_format (str, default: 'pdf'): file format of output plots - 'pdf' or 'png'.

Return

  • return (None): (None)
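
Example usage, a minimal sketch; the path to the hyperopt statistics JSON file is an assumed placeholder:

from ludwig.visualize import hyperopt_report

hyperopt_report(
    "results/hyperopt_statistics.json",
    output_directory="visualizations",
    file_format="png",
)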

DeveloperAPI: This API may change across minor Ludwig releases.


hyperopt_hiplot

ludwig.visualize.hyperopt_hiplot(
  hyperopt_stats_path,
  output_directory=None
)

Produces a parallel coordinate plot about hyperparameter optimization, creating one HTML file and, optionally, a CSV file to be read by hiplot.

Inputs

  • hyperopt_stats_path (str): path to the hyperopt results JSON file.
  • output_directory (str, default: None): directory where to save plots. If not specified, plots will be displayed in a window.

Return

  • return (None): (None)
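
Example usage, a minimal sketch with the same assumed placeholder path:

from ludwig.visualize import hyperopt_hiplot

hyperopt_hiplot(
    "results/hyperopt_statistics.json",
    output_directory="visualizations",
)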

DeveloperAPI: This API may change across minor Ludwig releases.