API documentation¶
AutoML class¶
Automated Machine Learning for supervised tasks (binary classification, multiclass classification, regression).
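A minimal usage sketch with default settings (the 60-second budget and synthetic data are illustrative only):
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from supervised import AutoML

# Generate a small synthetic binary classification dataset.
X, y = make_classification(n_samples=200, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = AutoML(total_time_limit=60)  # short budget, enough for a smoke test
automl.fit(X_train, y_train)
print(automl.predict(X_test)[:5])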
Source code in supervised/automl.py
class AutoML(BaseAutoML):
"""
Automated Machine Learning for supervised tasks (binary classification, multiclass classification, regression).
"""
def __init__(
self,
results_path: Optional[str] = None,
total_time_limit: int = 60 * 60,
mode: Literal["Explain", "Perform", "Compete", "Optuna"] = "Explain",
ml_task: Literal[
"auto", "binary_classification", "multiclass_classification", "regression"
] = "auto",
model_time_limit: Optional[int] = None,
algorithms: Union[
Literal["auto"],
List[
Literal[
"Baseline",
"Linear",
"Decicion Tree",
"Random Forest",
"Extra Trees",
"LightGBM",
"Xgboost",
"CatBoost",
"Neural Network",
"Nearest Neighbors",
]
],
] = "auto",
train_ensemble: bool = True,
stack_models: Union[Literal["auto"], bool] = "auto",
eval_metric: str = "auto",
validation_strategy: Union[Literal["auto"], dict] = "auto",
explain_level: Union[Literal["auto"], Literal[0, 1, 2]] = "auto",
golden_features: Union[Literal["auto"], bool, int] = "auto",
features_selection: Union[Literal["auto"], bool] = "auto",
start_random_models: Union[Literal["auto"], int] = "auto",
hill_climbing_steps: Union[Literal["auto"], int] = "auto",
top_models_to_improve: Union[Literal["auto"], int] = "auto",
boost_on_errors: Union[Literal["auto"], bool] = "auto",
kmeans_features: Union[Literal["auto"], bool] = "auto",
mix_encoding: Union[Literal["auto"], bool] = "auto",
max_single_prediction_time: Optional[Union[int, float]] = None,
optuna_time_budget: Optional[int] = None,
optuna_init_params: dict = {},
optuna_verbose: bool = True,
fairness_metric: str = "auto",
fairness_threshold: Union[Literal["auto"], float] = "auto",
privileged_groups: Union[Literal["auto"], list] = "auto",
underprivileged_groups: Union[Literal["auto"], list] = "auto",
n_jobs: int = -1,
verbose: int = 1,
random_state: int = 1234,
):
"""
Initialize `AutoML` object.
Arguments:
results_path (str): The path with results. If None, the directory name will be generated from the template: AutoML_{number},
where the number can be from 1 to 1,000, depending on which directory name is available.
If `results_path` points to a directory with AutoML results (`params.json` must be present),
then all models will be loaded.
total_time_limit (int): The total time limit in seconds for AutoML training.
It is not used when `model_time_limit` is not `None`.
mode (str): Can be {`Explain`, `Perform`, `Compete`, `Optuna`}. This parameter defines the goal of AutoML and how intensive the AutoML search will be.
- `Explain` : To be used when the user wants to explain and understand the data.
- Uses 75%/25% train/test split.
- Uses the following models: `Baseline`, `Linear`, `Decision Tree`, `Random Forest`, `XGBoost`, `Neural Network`, and `Ensemble`.
- Has full explanations in reports: learning curves, importance plots, and SHAP plots.
- `Perform` : To be used when the user wants to train a model that will be used in real-life use cases.
- Uses 5-fold CV (Cross-Validation).
- Uses the following models: `Linear`, `Random Forest`, `LightGBM`, `XGBoost`, `CatBoost`, `Neural Network`, and `Ensemble`.
- Has learning curves and importance plots in reports.
- `Compete` : To be used for machine learning competitions (maximum performance).
- Uses 80/20 train/test split, or 5-fold CV, or 10-fold CV (Cross-Validation) - it depends on `total_time_limit`. If not set directly, AutoML will select validation automatically.
- Uses the following models: `Decision Tree`, `Random Forest`, `Extra Trees`, `LightGBM`, `XGBoost`, `CatBoost`, `Neural Network`,
`Nearest Neighbors`, `Ensemble`, and `Stacking`.
- It has only learning curves in the reports.
- `Optuna` : To be used for creating highly-tuned machine learning models.
- Uses 10-fold CV (Cross-Validation).
- It tunes with Optuna the following algorithms: `Random Forest`, `Extra Trees`, `LightGBM`, `XGBoost`, `CatBoost`, `Neural Network`.
- It applies `Ensemble` and `Stacking` for trained models.
- It has only learning curves in the reports.
ml_task (str): Can be {"auto", "binary_classification", "multiclass_classification", "regression"}.
- If left `auto`, AutoML will try to guess the task based on the target values.
- If there are only 2 values in the target, the task will be set to `"binary_classification"`.
- If the number of values in the target is between 2 and 20 (inclusive), the task will be set to `"multiclass_classification"`.
- In all other cases, the task is set to `"regression"`.
model_time_limit (int): The time limit for training a single model, in seconds.
If `model_time_limit` is set, the `total_time_limit` is not respected.
The single model can contain several learners. The time limit for subsequent learners is computed based on `model_time_limit`.
For example, in the case of 10-fold cross-validation, one model will have 10 learners.
The `model_time_limit` is the time for all 10 learners.
algorithms (list of str): The list of algorithms that will be used in the training.
The algorithms can be:
- `Baseline`,
- `Linear`,
- `Decision Tree`,
- `Random Forest`,
- `Extra Trees`,
- `LightGBM`,
- `Xgboost`,
- `CatBoost`,
- `Neural Network`,
- `Nearest Neighbors`,
train_ensemble (boolean): Whether an ensemble gets created at the end of the training.
stack_models (boolean): Whether a model stack gets created at the end of the training. Stack level is 1.
eval_metric (str): The metric to be used in early stopping and to compare models.
- for binary classification: `logloss`, `auc`, `f1`, `average_precision`, `accuracy` - default is `logloss` (if left "auto")
- for multiclass classification: `logloss`, `f1`, `accuracy` - default is `logloss` (if left "auto")
- for regression: `rmse`, `mse`, `mae`, `r2`, `mape`, `spearman`, `pearson` - default is `rmse` (if left "auto")
validation_strategy (dict): Dictionary with validation type. Right now train/test split and cross-validation are supported.
Example:
Cross-validation example:
{
"validation_type": "kfold",
"k_folds": 5,
"shuffle": True,
"stratify": True,
"random_seed": 123
}
Train/test example:
{
"validation_type": "split",
"train_ratio": 0.75,
"shuffle": True,
"stratify": True
}
explain_level (int): The level of explanations included for each model:
- if `explain_level` is `0` no explanations are produced.
- if `explain_level` is `1` the following explanations are produced: an importance plot (using the permutation method), tree plots for decision trees, and saved coefficients for linear models.
- if `explain_level` is `2` the following explanations are produced: the same as `1` plus SHAP explanations.
If left `auto` AutoML will produce explanations based on the selected `mode`.
golden_features (boolean or int): Whether to use golden features (and how many should be added).
If left `auto` AutoML will use golden features based on the selected `mode`:
- If `mode` is "Explain", `golden_features` = False.
- If `mode` is "Perform", `golden_features` = True.
- If `mode` is "Compete", `golden_features` = True.
If a `boolean` value is set, the number of Golden Features is chosen automatically
as min(100, max(10, 0.1*number_of_input_features)); for example, with 500 input
features this gives min(100, max(10, 50)) = 50 Golden Features.
If an `int` value is set, the number of Golden Features is set to this value.
features_selection (boolean): Whether to perform feature selection.
If left `auto` AutoML will do feature selection based on the selected `mode`:
- If `mode` is "Explain", `features_selection` = False.
- If `mode` is "Perform", `features_selection` = True.
- If `mode` is "Compete", `features_selection` = True.
start_random_models (int): Number of starting random models to try.
If left `auto` AutoML will select it based on the selected `mode`:
- If `mode` is "Explain", `start_random_models` = 1.
- If `mode` is "Perform", `start_random_models` = 5.
- If `mode` is "Compete", `start_random_models` = 10.
hill_climbing_steps (int): Number of steps to perform during hill climbing.
If left `auto` AutoML will select it based on the selected `mode`:
- If `mode` is "Explain", `hill_climbing_steps` = 0.
- If `mode` is "Perform", `hill_climbing_steps` = 2.
- If `mode` is "Compete", `hill_climbing_steps` = 2.
top_models_to_improve (int): Number of best models to improve in `hill_climbing` steps.
If left `auto` AutoML will select it based on the selected `mode`:
- If `mode` is "Explain", `top_models_to_improve` = 0.
- If `mode` is "Perform", `top_models_to_improve` = 2.
- If `mode` is "Compete", `top_models_to_improve` = 3.
boost_on_errors (boolean): Whether a model boosted on the errors of the previous best model should be trained. By default available in the `Compete` mode.
kmeans_features (boolean): Whether a model with k-means generated features should be trained. By default available in the `Compete` mode.
mix_encoding (boolean): Whether a model with mixed encoding should be trained. Mixed encoding is the encoding that uses label encoding
for categoricals with more than 25 categories, and one-hot binary encoding for other categoricals. It is only applied if there are
categorical features with cardinality smaller than 25. By default it is available in the `Compete` mode.
max_single_prediction_time (int or float): The limit on prediction time for a single sample. Use it if you want a model with fast predictions,
which is ideal for ML pipelines served as a REST API. Time is in seconds. By default (`max_single_prediction_time=None`) models are not optimized for fast predictions,
except in the `Perform` mode, where the default is `0.5` seconds.
optuna_time_budget (int): The time in seconds that Optuna should use to tune each algorithm; the budget applies per algorithm.
If you select two algorithms, Xgboost and CatBoost, and set optuna_time_budget=1000, then Xgboost will be tuned for 1000 seconds and CatBoost will be tuned for 1000 seconds.
Moreover, the tuning is performed for each data variant, for example for raw data and for data with inserted Golden Features.
This parameter is only used when `mode="Optuna"`. If you set `mode="Optuna"` and forget to set this parameter, it will be set to 3600 seconds.
optuna_init_params (dict): If you have already tuned parameters from Optuna you can reuse them by setting this parameter.
This parameter is only used when `mode="Optuna"`. The dict should have the structure and params as specified in the MLJAR AutoML documentation.
optuna_verbose (boolean): If True, the Optuna tuning details are displayed. Set to `True` by default.
fairness_metric (string): Name of fairness metric that will be used for assessing fairness criteria.
Available metrics for binary and multiclass classification:
- `demographic_parity_difference`,
- `demographic_parity_ratio` - default metric,
- `equalized_odds_difference`,
- `equalized_odds_ratio`.
Metrics for regression:
- `group_loss_difference`,
- `group_loss_ratio` - default metric.
fairness_threshold (float): The threshold value for the fairness metric.
The optimization direction (below or above the threshold) of the fairness metric is determined automatically.
Default values:
- for `demographic_parity_difference` the metric value should be below 0.1,
- for `demographic_parity_ratio` the metric value should be above 0.8,
- for `equalized_odds_difference` the metric value should be below 0.1,
- for `equalized_odds_ratio` the metric value should be above 0.8,
- for `group_loss_ratio` the metric value should be above 0.8.
For `group_loss_difference` the default threshold value can't be set because it depends on the dataset.
If `group_loss_difference` metric is used and `fairness_threshold` is not specified manually, then an exception will be raised.
privileged_groups (list): The list of privileged groups.
By default, the list of privileged groups is detected automatically based on fairness metrics.
For example, in a binary classification task, a privileged group is the one with the highest selection rate.
Example value: `[{"sex": "Male"}]`
underprivileged_groups (list): The list of underprivileged groups.
By default, the list of underprivileged groups is detected automatically based on fairness metrics.
For example, in a binary classification task, an underprivileged group is the one with the lowest selection rate.
Example value: `[{"sex": "Female"}]`
n_jobs (int): Number of CPU cores to be used. By default it is set to `-1`, which means using all processors.
verbose (int): Controls the verbosity when fitting and predicting.
Note:
Not implemented yet; please leave it set to `1`.
random_state (int): Controls the randomness of the `AutoML`.
Examples:
Binary Classification Example:
>>> import pandas as pd
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.metrics import roc_auc_score
>>> from supervised import AutoML
>>> df = pd.read_csv(
... "https://raw.githubusercontent.com/pplonski/datasets-for-start/master/adult/data.csv",
... skipinitialspace=True
... )
>>> X_train, X_test, y_train, y_test = train_test_split(
... df[df.columns[:-1]], df["income"], test_size=0.25
... )
>>> automl = AutoML()
>>> automl.fit(X_train, y_train)
>>> y_pred_prob = automl.predict_proba(X_test)[:, 1]  # positive-class probabilities
>>> print(f"AUROC: {roc_auc_score(y_test, y_pred_prob):.2f}")
Multi-Class Classification Example:
>>> import pandas as pd
>>> from sklearn.datasets import load_digits
>>> from sklearn.metrics import accuracy_score
>>> from sklearn.model_selection import train_test_split
>>> from supervised import AutoML
>>> digits = load_digits()
>>> X_train, X_test, y_train, y_test = train_test_split(
... digits.data, digits.target, stratify=digits.target, test_size=0.25,
... random_state=123
... )
>>> automl = AutoML(mode="Perform")
>>> automl.fit(X_train, y_train)
>>> y_pred = automl.predict(X_test)
>>> print(f"Accuracy: {accuracy_score(y_test, y_pred):.2f}%")
Regression Example:
>>> import pandas as pd
>>> from sklearn.datasets import fetch_california_housing
>>> from sklearn.model_selection import train_test_split
>>> from supervised import AutoML
>>> housing = fetch_california_housing()
>>> X_train, X_test, y_train, y_test = train_test_split(
... pd.DataFrame(housing.data, columns=housing.feature_names),
... housing.target,
... test_size=0.25,
... random_state=123,
... )
>>> automl = AutoML(mode="Compete")
>>> automl.fit(X_train, y_train)
>>> print("Test R^2:", automl.score(X_test, y_test))
Scikit-learn Pipeline Integration Example:
>>> from imblearn.over_sampling import RandomOverSampler
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from supervised import AutoML
>>> X, y = make_classification()
>>> X_train, X_test, y_train, y_test = train_test_split(X, y)
>>> pipeline = make_pipeline(RandomOverSampler(), AutoML())
>>> print(pipeline.fit(X_train, y_train).score(X_test, y_test))
"""
super(AutoML, self).__init__()
# Set user arguments
self.mode = mode
self.ml_task = ml_task
self.results_path = results_path
self.total_time_limit = total_time_limit
self.model_time_limit = model_time_limit
self.algorithms = algorithms
self.train_ensemble = train_ensemble
self.stack_models = stack_models
self.eval_metric = eval_metric
self.validation_strategy = validation_strategy
self.verbose = verbose
self.explain_level = explain_level
self.golden_features = golden_features
self.features_selection = features_selection
self.start_random_models = start_random_models
self.hill_climbing_steps = hill_climbing_steps
self.top_models_to_improve = top_models_to_improve
self.boost_on_errors = boost_on_errors
self.kmeans_features = kmeans_features
self.mix_encoding = mix_encoding
self.max_single_prediction_time = max_single_prediction_time
self.optuna_time_budget = optuna_time_budget
self.optuna_init_params = optuna_init_params
self.optuna_verbose = optuna_verbose
self.fairness_metric = fairness_metric
self.fairness_threshold = fairness_threshold
self.privileged_groups = privileged_groups
self.underprivileged_groups = underprivileged_groups
self.n_jobs = n_jobs
self.random_state = random_state
def fit(
self,
X: Union[numpy.ndarray, pandas.DataFrame],
y: Union[numpy.ndarray, pandas.Series],
sample_weight: Optional[Union[numpy.ndarray, pandas.Series]] = None,
cv: Optional[Union[Iterable, List]] = None,
sensitive_features: Optional[
Union[numpy.ndarray, pandas.Series, pandas.DataFrame]
] = None,
):
"""Fit the AutoML model.
Arguments:
X (numpy.ndarray or pandas.DataFrame): Training data
y (numpy.ndarray or pandas.Series): Training targets
sample_weight (numpy.ndarray or pandas.Series): Training sample weights
cv (iterable or list): List or iterable with (train, validation) splits representing array of indices.
It is used only with custom validation (`validation_strategy={'validation_type': 'custom'}`).
sensitive_features (numpy.ndarray or pandas.Series or pandas.DataFrame): Sensitive features to learn fair models
Returns:
AutoML object: Returns `self`
"""
return self._fit(X, y, sample_weight, cv, sensitive_features)
def predict(self, X: Union[List, numpy.ndarray, pandas.DataFrame]) -> numpy.ndarray:
"""
Computes predictions from AutoML best model.
Arguments:
X (list or numpy.ndarray or pandas.DataFrame):
Input values to make predictions on.
Returns:
numpy.ndarray:
- One-dimensional array of class labels for classification.
- One-dimensional array of predictions for regression.
Raises:
AutoMLException: Model has not yet been fitted.
"""
return self._predict(X)
def predict_proba(
self, X: Union[List, numpy.ndarray, pandas.DataFrame]
) -> numpy.ndarray:
"""
Computes class probabilities from AutoML best model.
This method can only be used for classification tasks.
Arguments:
X (list or numpy.ndarray or pandas.DataFrame):
Input values to make predictions on.
Returns:
numpy.ndarray of shape (n_samples, n_classes):
Matrix containing class probabilities of the input samples
Raises:
AutoMLException: Model has not yet been fitted.
"""
return self._predict_proba(X)
def predict_all(
self, X: Union[List, numpy.ndarray, pandas.DataFrame]
) -> pandas.DataFrame:
"""
Computes both class probabilities and class labels for classification tasks.
Computes predictions for regression tasks.
Arguments:
X (list or numpy.ndarray or pandas.DataFrame):
Input values to make predictions on.
Returns:
pandas.DataFrame:
DataFrame of shape (n_samples, n_classes + 1) containing both class probabilities and class
labels of the input samples for classification tasks.
DataFrame with predictions for regression tasks.
Raises:
AutoMLException: Model has not yet been fitted.
"""
return self._predict_all(X)
def score(
self,
X: Union[numpy.ndarray, pandas.DataFrame],
y: Optional[Union[numpy.ndarray, pandas.Series]] = None,
sample_weight: Optional[Union[numpy.ndarray, pandas.Series]] = None,
) -> float:
"""Calculates a goodness of `fit` for an AutoML instance.
Arguments:
X (numpy.ndarray or pandas.DataFrame):
Test values to make predictions on.
y (numpy.ndarray or pandas.Series):
True labels for X.
sample_weight (numpy.ndarray or pandas.Series):
Sample weights.
Returns:
float: Returns a goodness of fit measure (higher is better):
- For classification tasks: returns the mean accuracy on the given test data and labels.
- For regression tasks: returns the R^2 (coefficient of determination) on the given test data and labels.
"""
return self._score(X, y, sample_weight)
def report(self, width=900, height=1200):
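# Renders the AutoML report (presumably the interactive leaderboard/model report) at the given width and height.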
return self._report(width, height)
def need_retrain(
self,
X: Union[numpy.ndarray, pandas.DataFrame],
y: Union[numpy.ndarray, pandas.Series],
sample_weight: Optional[Union[numpy.ndarray, pandas.Series]] = None,
decrease: float = 0.1,
) -> bool:
"""Decides about model retraining based on new data.
Arguments:
X (numpy.ndarray or pandas.DataFrame):
New data.
y (numpy.ndarray or pandas.Series):
True labels for X.
sample_weight (numpy.ndarray or pandas.Series):
Sample weights.
decrease (float): The ratio of change in the performance used as a threshold for retraining decision.
By default, it is set to `0.1` which means that if the performance of AutoML will decrease by 10%
on new data then there is a need to retrain. This value should be set depending on your project needs.
Sometimes, 10% is enough, but for some projects, it can be even lower than 1%.
Returns:
boolean: Decides if there is a need to retrain the AutoML.
"""
return self._need_retrain(X, y, sample_weight, decrease)
__init__(self, results_path=None, total_time_limit=3600, mode='Explain', ml_task='auto', model_time_limit=None, algorithms='auto', train_ensemble=True, stack_models='auto', eval_metric='auto', validation_strategy='auto', explain_level='auto', golden_features='auto', features_selection='auto', start_random_models='auto', hill_climbing_steps='auto', top_models_to_improve='auto', boost_on_errors='auto', kmeans_features='auto', mix_encoding='auto', max_single_prediction_time=None, optuna_time_budget=None, optuna_init_params={}, optuna_verbose=True, fairness_metric='auto', fairness_threshold='auto', privileged_groups='auto', underprivileged_groups='auto', n_jobs=-1, verbose=1, random_state=1234)
special ¶
Initialize `AutoML` object.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`results_path` | str | The path with results. If None, a directory name is generated from the template AutoML_{number} (number from 1 to 1,000, depending on which name is available). If it points to a directory with AutoML results (`params.json` must be present), all models are loaded. | None |
`total_time_limit` | int | The total time limit in seconds for AutoML training. Not used when `model_time_limit` is set. | 3600 |
`mode` | str | One of {`Explain`, `Perform`, `Compete`, `Optuna`}. Defines the goal of AutoML and how intensive the search will be. | 'Explain' |
`ml_task` | str | One of {"auto", "binary_classification", "multiclass_classification", "regression"}. If "auto", the task is guessed from the target values. | 'auto' |
`model_time_limit` | int | The time limit in seconds for training a single model (all of its learners combined, e.g. 10 learners for 10-fold CV). If set, `total_time_limit` is not respected. | None |
`algorithms` | list of str | The algorithms used in training: `Baseline`, `Linear`, `Decision Tree`, `Random Forest`, `Extra Trees`, `LightGBM`, `Xgboost`, `CatBoost`, `Neural Network`, `Nearest Neighbors`. | 'auto' |
`train_ensemble` | boolean | Whether an ensemble gets created at the end of the training. | True |
`stack_models` | boolean | Whether a model stack gets created at the end of the training. Stack level is 1. | 'auto' |
`eval_metric` | str | The metric used in early stopping and for comparing models. Defaults when "auto": `logloss` for classification, `rmse` for regression. | 'auto' |
`validation_strategy` | dict | Dictionary with the validation type. Train/test split and cross-validation are currently supported. | 'auto' |
`explain_level` | int | The level of explanations for each model: `0` = none, `1` = importance plots, tree plots, and linear-model coefficients, `2` = the same as `1` plus SHAP explanations. | 'auto' |
`golden_features` | boolean or int | Whether to use Golden Features (and how many should be added). If "auto": False in `Explain`, True in `Perform` and `Compete`. | 'auto' |
`features_selection` | boolean | Whether to perform feature selection. If "auto": False in `Explain`, True in `Perform` and `Compete`. | 'auto' |
`start_random_models` | int | Number of starting random models to try. If "auto": 1 in `Explain`, 5 in `Perform`, 10 in `Compete`. | 'auto' |
`hill_climbing_steps` | int | Number of steps performed during hill climbing. If "auto": 0 in `Explain`, 2 in `Perform` and `Compete`. | 'auto' |
`top_models_to_improve` | int | Number of best models to improve in hill-climbing steps. If "auto": 0 in `Explain`, 2 in `Perform`, 3 in `Compete`. | 'auto' |
`boost_on_errors` | boolean | Whether a model boosted on the errors of the previous best model should be trained. Enabled by default in `Compete` mode. | 'auto' |
`kmeans_features` | boolean | Whether a model with k-means generated features should be trained. Enabled by default in `Compete` mode. | 'auto' |
`mix_encoding` | boolean | Whether a model with mixed encoding should be trained (label encoding for categoricals with more than 25 categories, one-hot binary encoding otherwise). Enabled by default in `Compete` mode. | 'auto' |
`max_single_prediction_time` | int or float | The limit in seconds on prediction time for a single sample; useful for REST API pipelines. By default models are not optimized for fast predictions, except in `Perform` mode where the default is 0.5 seconds. | None |
`optuna_time_budget` | int | Time in seconds Optuna uses to tune each algorithm. Only used when `mode="Optuna"`; defaults to 3600 seconds in that mode if unset. | None |
`optuna_init_params` | dict | Already-tuned Optuna parameters to reuse. Only used when `mode="Optuna"`. | {} |
`optuna_verbose` | boolean | If True, the Optuna tuning details are displayed. | True |
`fairness_metric` | string | The fairness metric used for assessing fairness criteria. Defaults when "auto": `demographic_parity_ratio` for classification, `group_loss_ratio` for regression. | 'auto' |
`fairness_threshold` | float | The threshold value for the fairness metric; the optimization direction (below or above) is determined automatically. | 'auto' |
`privileged_groups` | list | The list of privileged groups, e.g. `[{"sex": "Male"}]`. Detected automatically by default. | 'auto' |
`underprivileged_groups` | list | The list of underprivileged groups, e.g. `[{"sex": "Female"}]`. Detected automatically by default. | 'auto' |
`n_jobs` | int | Number of CPU cores to be used; `-1` means all processors. | -1 |
`verbose` | int | Controls the verbosity when fitting and predicting (not implemented yet; leave it set to `1`). | 1 |
`random_state` | int | Controls the randomness of the `AutoML`. | 1234 |
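A hedged configuration sketch combining parameters from the table above (the values are illustrative, not recommendations; passing an existing `results_path` such as "AutoML_1" would instead reload trained models):
from supervised import AutoML

automl = AutoML(
    mode="Compete",
    total_time_limit=4 * 3600,  # 4-hour budget for the whole search
    algorithms=["LightGBM", "Xgboost", "CatBoost"],
    eval_metric="auc",
    validation_strategy={
        "validation_type": "kfold",
        "k_folds": 10,
        "shuffle": True,
        "stratify": True,
        "random_seed": 123,
    },
    golden_features=25,              # insert at most 25 Golden Features
    max_single_prediction_time=0.1,  # keep single-sample latency under 0.1 s
    random_state=1234,
)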
Examples:
Binary Classification Example:
>>> import pandas as pd
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.metrics import roc_auc_score
>>> from supervised import AutoML
>>> df = pd.read_csv(
... "https://raw.githubusercontent.com/pplonski/datasets-for-start/master/adult/data.csv",
... skipinitialspace=True
... )
>>> X_train, X_test, y_train, y_test = train_test_split(
... df[df.columns[:-1]], df["income"], test_size=0.25
... )
>>> automl = AutoML()
>>> automl.fit(X_train, y_train)
>>> y_pred_prob = automl.predict_proba(X_test)[:, 1]  # positive-class probabilities
>>> print(f"AUROC: {roc_auc_score(y_test, y_pred_prob):.2f}")
Multi-Class Classification Example:
>>> import pandas as pd
>>> from sklearn.datasets import load_digits
>>> from sklearn.metrics import accuracy_score
>>> from sklearn.model_selection import train_test_split
>>> from supervised import AutoML
>>> digits = load_digits()
>>> X_train, X_test, y_train, y_test = train_test_split(
... digits.data, digits.target, stratify=digits.target, test_size=0.25,
... random_state=123
... )
>>> automl = AutoML(mode="Perform")
>>> automl.fit(X_train, y_train)
>>> y_pred = automl.predict(X_test)
>>> print(f"Accuracy: {accuracy_score(y_test, y_pred):.2f}%")
Regression Example:
>>> import pandas as pd
>>> from sklearn.datasets import fetch_california_housing
>>> from sklearn.model_selection import train_test_split
>>> from supervised import AutoML
>>> housing = fetch_california_housing()
>>> X_train, X_test, y_train, y_test = train_test_split(
... pd.DataFrame(housing.data, columns=housing.feature_names),
... housing.target,
... test_size=0.25,
... random_state=123,
... )
>>> automl = AutoML(mode="Compete")
>>> automl.fit(X_train, y_train)
>>> print("Test R^2:", automl.score(X_test, y_test))
Scikit-learn Pipeline Integration Example:
>>> from imblearn.over_sampling import RandomOverSampler
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.datasets import make_classification
>>> from sklearn.model_selection import train_test_split
>>> from supervised import AutoML
>>> X, y = make_classification()
>>> X_train, X_test, y_train, y_test = train_test_split(X, y)
>>> pipeline = make_pipeline(RandomOverSampler(), AutoML())
>>> print(pipeline.fit(X_train, y_train).score(X_test, y_test))
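Optuna Mode Example (a hedged sketch; as noted above, `optuna_time_budget` defaults to 3600 seconds in this mode if unset):
>>> from sklearn.datasets import load_digits
>>> from sklearn.model_selection import train_test_split
>>> from supervised import AutoML
>>> digits = load_digits()
>>> X_train, X_test, y_train, y_test = train_test_split(
...     digits.data, digits.target, random_state=123
... )
>>> automl = AutoML(
...     mode="Optuna",
...     algorithms=["Xgboost", "CatBoost"],
...     optuna_time_budget=600,
... )
>>> automl.fit(X_train, y_train)
>>> print(f"Accuracy: {automl.score(X_test, y_test):.2%}")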
fit(self, X, y, sample_weight=None, cv=None, sensitive_features=None) ¶
Fit the AutoML model.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`X` | numpy.ndarray or pandas.DataFrame | Training data | required |
`y` | numpy.ndarray or pandas.Series | Training targets | required |
`sample_weight` | numpy.ndarray or pandas.Series | Training sample weights | None |
`cv` | iterable or list | List or iterable of (train, validation) splits given as arrays of indices. Used only with custom validation (`validation_strategy={'validation_type': 'custom'}`). | None |
`sensitive_features` | numpy.ndarray or pandas.Series or pandas.DataFrame | Sensitive features to learn fair models | None |

Returns:

Type | Description |
---|---|
AutoML object | Returns `self` |
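A minimal sketch of custom validation splits, assuming the documented `{'validation_type': 'custom'}` strategy (the short time limit is illustrative):
import numpy as np
from sklearn.model_selection import KFold
from supervised import AutoML

X = np.random.rand(100, 5)
y = np.random.randint(0, 2, 100)

# Each element of `splits` is a (train_indices, validation_indices) pair.
splits = list(KFold(n_splits=5, shuffle=True, random_state=42).split(X))
automl = AutoML(validation_strategy={"validation_type": "custom"}, total_time_limit=60)
automl.fit(X, y, cv=splits)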
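A hedged sketch of fairness-aware training: passing `sensitive_features` activates the fairness parameters documented in `__init__` (the synthetic `sex` column is purely illustrative):
import pandas as pd
from sklearn.datasets import make_classification
from supervised import AutoML

X, y = make_classification(n_samples=200, random_state=0)
# Synthetic sensitive attribute; in practice use a real column such as "sex".
sex = pd.Series(["Male" if i % 2 == 0 else "Female" for i in range(200)], name="sex")

automl = AutoML(
    total_time_limit=60,
    fairness_metric="demographic_parity_ratio",  # default metric for classification
    fairness_threshold=0.8,                      # ratio metrics should stay above 0.8
)
automl.fit(X, y, sensitive_features=sex)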
need_retrain(self, X, y, sample_weight=None, decrease=0.1) ¶
Decides about model retraining based on new data.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`X` | numpy.ndarray or pandas.DataFrame | New data. | required |
`y` | numpy.ndarray or pandas.Series | True labels for X. | required |
`sample_weight` | numpy.ndarray or pandas.Series | Sample weights. | None |
`decrease` | float | The ratio of performance change used as the retraining threshold. The default `0.1` means retraining is needed when AutoML performance decreases by 10% on the new data; choose a value that fits your project (sometimes below 1%). | 0.1 |

Returns:

Type | Description |
---|---|
boolean | Decides if there is a need to retrain the AutoML. |
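A monitoring sketch built on `need_retrain`, assuming a previously fitted model and a fresh batch of labeled data (`decrease=0.05` tightens the default 10% threshold to 5%):
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from supervised import AutoML

X, y = make_classification(n_samples=400, random_state=0)
X_old, X_new, y_old, y_new = train_test_split(X, y, test_size=0.5, random_state=0)

automl = AutoML(total_time_limit=60)
automl.fit(X_old, y_old)

# Retrain on all available data if performance drops more than 5% on the new batch.
if automl.need_retrain(X_new, y_new, decrease=0.05):
    automl = AutoML(total_time_limit=60)
    automl.fit(np.vstack([X_old, X_new]), np.concatenate([y_old, y_new]))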
predict(self, X) ¶
Computes predictions from AutoML best model.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`X` | list or numpy.ndarray or pandas.DataFrame | Input values to make predictions on. | required |

Returns:

Type | Description |
---|---|
numpy.ndarray | One-dimensional array of class labels for classification, or of predictions for regression. |

Exceptions:

Type | Description |
---|---|
AutoMLException | Model has not yet been fitted. |
predict_all(self, X) ¶
Computes both class probabilities and class labels for classification tasks. Computes predictions for regression tasks.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`X` | list or numpy.ndarray or pandas.DataFrame | Input values to make predictions on. | required |

Returns:

Type | Description |
---|---|
pandas.DataFrame | DataFrame of shape (n_samples, n_classes + 1) containing both class probabilities and class labels of the input samples for classification tasks; DataFrame with predictions for regression tasks. |

Exceptions:

Type | Description |
---|---|
AutoMLException | Model has not yet been fitted. |
predict_proba(self, X) ¶
Computes class probabilities from AutoML best model. This method can only be used for classification tasks.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`X` | list or numpy.ndarray or pandas.DataFrame | Input values to make predictions on. | required |

Returns:

Type | Description |
---|---|
numpy.ndarray of shape (n_samples, n_classes) | Matrix containing class probabilities of the input samples. |

Exceptions:

Type | Description |
---|---|
AutoMLException | Model has not yet been fitted. |
score(self, X, y=None, sample_weight=None) ¶
Calculates a goodness of `fit` for an AutoML instance.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`X` | numpy.ndarray or pandas.DataFrame | Test values to make predictions on. | required |
`y` | numpy.ndarray or pandas.Series | True labels for X. | None |
`sample_weight` | numpy.ndarray or pandas.Series | Sample weights. | None |

Returns:

Type | Description |
---|---|
float | A goodness-of-fit measure (higher is better): mean accuracy for classification tasks, R^2 (coefficient of determination) for regression tasks. |