summit.multiview_platform.monoview package
Submodules
summit.multiview_platform.monoview.exec_classif_mono_view module
Execution: Script to perform a MonoView classification
- exec_monoview(directory, X, Y, database_name, labels_names, classification_indices, k_folds, nb_cores, databaseType, path, random_state, hyper_param_search='Random', metrics={'accuracy_score*': {}}, n_iter=30, view_name='', hps_kwargs={}, feature_ids=[], **args)
- exec_monoview_multicore(directory, name, labels_names, classification_indices, k_folds, dataset_file_index, database_type, path, random_state, labels, hyper_param_search='randomized_search', metrics=[['accuracy_score', None]], n_iter=30, **args)
- get_hyper_params(classifier_module, search_method, classifier_module_name, classifier_class_name, X_train, y_train, random_state, output_file_name, k_folds, nb_cores, metrics, kwargs, **hps_kwargs)
- init_constants(args, X, classification_indices, labels_names, name, directory, view_name)
- init_train_test(X, Y, classification_indices)
- save_results(string_analysis, output_file_name, full_labels_pred, y_train_pred, y_train, images_analysis, y_test, confusion_matrix)
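Among these helpers, `init_train_test(X, Y, classification_indices)` is the most self-contained. A minimal sketch of what it plausibly does, assuming `classification_indices` is a `(train_indices, test_indices)` pair of index arrays (the body below is an illustration, not the library's actual implementation):

```python
import numpy as np

def init_train_test(X, Y, classification_indices):
    # Assumption: classification_indices is a (train_indices, test_indices) pair.
    train_indices, test_indices = classification_indices
    X_train = X[train_indices]
    X_test = X[test_indices]
    y_train = Y[train_indices]
    y_test = Y[test_indices]
    return X_train, y_train, X_test, y_test

# Example usage on a toy dataset: first 7 samples for training, last 3 for testing
X = np.arange(20).reshape(10, 2)
Y = np.array([0, 1] * 5)
split = (np.arange(7), np.arange(7, 10))
X_train, y_train, X_test, y_test = init_train_test(X, Y, split)
```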
summit.multiview_platform.monoview.monoview_utils module
- class BaseMonoviewClassifier
Bases:
BaseClassifier
- get_feature_importance(directory, base_file_name, feature_ids, nb_considered_feats=50)
Generates a graph and a pickled dictionary representing the classifier's feature importances
- get_interpretation(directory, base_file_name, y_test, feature_ids, multi_class=False)
Base method that returns an empty string if there is no interpretation method in the classifier's module
- get_name_for_fusion()
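The pickle side of `get_feature_importance` can be sketched as follows, assuming the classifier exposes importances as a plain array; the helper name, file naming, and body are illustrative assumptions, and the real method additionally renders a graph:

```python
import os
import pickle
import tempfile
import numpy as np

def save_feature_importance_dict(importances, feature_ids, directory,
                                 base_file_name, nb_considered_feats=50):
    # Rank features by decreasing importance and keep the top ones,
    # mirroring the nb_considered_feats parameter of get_feature_importance.
    order = np.argsort(importances)[::-1][:nb_considered_feats]
    top = {feature_ids[i]: float(importances[i]) for i in order}
    # Persist the dictionary as a pickle, as the method's description mentions.
    path = os.path.join(directory, base_file_name + "-feature_importances.pickle")
    with open(path, "wb") as f:
        pickle.dump(top, f)
    return top

# Example usage with toy importances, written to a temporary directory
with tempfile.TemporaryDirectory() as tmp:
    top = save_feature_importance_dict(
        np.array([0.1, 0.7, 0.2]), ["f0", "f1", "f2"], tmp, "demo",
        nb_considered_feats=2)
```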
- class MonoviewResult(view_index, classifier_name, view_name, metrics_scores, full_labels_pred, classifier_config, classifier, n_features, hps_duration, fit_duration, pred_duration, class_metric_scores)
Bases:
object
- get_classifier_name()
- class MonoviewResultAnalyzer(view_name, classifier_name, shape, classifier, classification_indices, k_folds, hps_method, metrics_dict, n_iter, class_label_names, pred, directory, base_file_name, labels, database_name, nb_cores, duration, feature_ids)
Bases:
ResultAnalyser
- get_base_string()
- get_view_specific_info()
- change_label_to_minus(y)
Change the label 0 to minus one
- Parameters:
y
- Returns:
label y with -1 in place of 0
- change_label_to_zero(y)
Change the label -1 to 0
- Parameters:
y
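These two helpers are inverses of each other, converting between {0, 1} and {-1, 1} label conventions. A minimal NumPy sketch consistent with the descriptions above (the bodies are assumptions, not the library's source):

```python
import numpy as np

def change_label_to_minus(y):
    # Replace every 0 label with -1, leaving other labels untouched.
    return np.where(y == 0, -1, y)

def change_label_to_zero(y):
    # Replace every -1 label with 0: the inverse of change_label_to_minus.
    return np.where(y == -1, 0, y)

# Round-trip example
y = np.array([0, 1, 0, 1])
y_pm = change_label_to_minus(y)
assert np.array_equal(change_label_to_zero(y_pm), y)
```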
- compute_possible_combinations(params_dict)
- gen_test_folds_preds(X_train, y_train, KFolds, estimator)
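A plausible reading of `gen_test_folds_preds` is that it collects per-fold test predictions from a cross-validation splitter. The sketch below assumes `KFolds` is a scikit-learn splitter and `estimator` a scikit-learn classifier; the out-of-fold return layout is an assumption:

```python
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

def gen_test_folds_preds(X_train, y_train, k_folds, estimator):
    # Collect out-of-fold predictions: each sample is predicted by an
    # estimator trained on the folds that do not contain it.
    preds = np.empty_like(y_train)
    for train_idx, test_idx in k_folds.split(X_train, y_train):
        est = clone(estimator)  # fresh, unfitted copy per fold
        est.fit(X_train[train_idx], y_train[train_idx])
        preds[test_idx] = est.predict(X_train[test_idx])
    return preds

# Example usage on a toy binary problem
rng = np.random.RandomState(42)
X = rng.rand(30, 4)
y = np.array([0, 1] * 15)
preds = gen_test_folds_preds(
    X, y,
    KFold(n_splits=3, shuffle=True, random_state=0),
    DecisionTreeClassifier(random_state=0))
```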
- get_accuracy_graph(plotted_data, classifier_name, file_name, name='Accuracies', bounds=None, bound_name=None, boosting_bound=None, set='train', zero_to_one=True)
- percent(x, pos)
Used to format y-axis tick values as percentages of importance
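The `(x, pos)` signature of `percent` is the callback shape expected by `matplotlib.ticker.FuncFormatter`. A minimal sketch (the exact format string is an assumption):

```python
def percent(x, pos):
    # FuncFormatter callback: x is the tick value, pos its position index
    # (pos is required by the interface but unused here).
    # Render the value as a percentage for the y axis.
    return "{:.0f}%".format(x * 100)

# Intended usage (requires matplotlib):
#   from matplotlib.ticker import FuncFormatter
#   ax.yaxis.set_major_formatter(FuncFormatter(percent))
```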