SparkAutoML

class sparklightautoml.automl.base.SparkAutoML(reader=None, levels=None, timer=None, blender=None, skip_conn=False, return_all_predictions=False, computation_settings=('no_parallelism', -1))[source]

Bases: TransformerInputOutputRoles

Class that compiles the full pipeline of an AutoML task.

AutoML steps:

  • Read and analyze the data and build the inner LAMLDataset from the input dataset: performed by the reader.

  • Create validation scheme.

  • Compute the ML pipelines passed in levels. Each element of levels is a list of MLPipelines; predictions from the current level are passed to the next level's pipelines as features.

  • Time monitoring: check whether there is enough time left to compute a new pipeline.

  • Blend the last-level models and prune useless pipelines to speed up inference: performed by the blender.

  • Return predictions on the validation data. If a cross-validation scheme is used, out-of-fold predictions are returned. If validation data is passed, predictions on that validation dataset are returned. Under a cv scheme, if some point of the train data was never used for validation (e.g. the timeout was exceeded, or a custom cv iterator such as TimeSeriesIterator was used), NaN is returned for that point.

Example

Common use case: creating custom pipelines or presets.

>>> reader = SparkToSparkReader()
>>> pipe = SparkMLPipeline([SparkMLAlgo()])
>>> levels = [[pipe]]
>>> automl = SparkAutoML(reader, levels)
>>> automl.fit_predict(data, roles={'target': 'TARGET'})
__init__(reader=None, levels=None, timer=None, blender=None, skip_conn=False, return_all_predictions=False, computation_settings=('no_parallelism', -1))[source]
Parameters:
  • reader – Reader instance that analyzes the input data and builds the inner dataset (see the AutoML steps above).

  • levels – List of lists of MLPipeline instances; predictions of each level are passed to the next level as features.

  • timer – Timer that tracks whether there is enough time left to compute new pipelines.

  • blender – Blender used to blend the last-level models and prune useless pipelines.

  • skip_conn – If True, input features of a level are also passed to the next levels.

  • return_all_predictions – If True, predictions of all last-level models are returned instead of the blended prediction.

  • computation_settings – Computations (parallelism) settings; by default ('no_parallelism', -1).

Note

There are several verbosity levels:

  • 0: No messages.

  • 1: Warnings.

  • 2: Info.

  • 3: Debug.
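
For illustration, a hedged sketch of a two-level configuration with skip connections; the pipeline and algorithm instances are placeholders for whatever the user actually builds, as in the example above:

>>> reader = SparkToSparkReader()
>>> level1 = [SparkMLPipeline([SparkMLAlgo()])]
>>> level2 = [SparkMLPipeline([SparkMLAlgo()])]
>>> automl = SparkAutoML(
...     reader,
...     [level1, level2],
...     skip_conn=True,  # also pass first-level input features to the second level
... )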

fit_predict(train_data, roles, train_features=None, cv_iter=None, valid_data=None, valid_features=None, verbose=0, persistence_manager=None)[source]

Fit on input data and make prediction on validation part.

Parameters:
  • train_data (Any) – Dataset to train.

  • roles (dict) – Roles dict.

  • train_features (Optional[Sequence[str]]) – Optional feature names, if they cannot be inferred from train_data.

  • cv_iter (Optional[Iterable]) – Custom cv iterator. For example, TimeSeriesIterator.

  • valid_data (Optional[Any]) – Optional validation dataset.

  • valid_features (Optional[Sequence[str]]) – Optional validation dataset feature names, if they cannot be inferred from valid_data.

  • verbose (int) – Controls verbosity level (see the verbosity levels listed above).

Return type:

SparkDataset

Returns:

Predicted values.
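
A minimal usage sketch, assuming train and valid are Spark DataFrames containing the 'TARGET' column and automl is built as in the example above; the .data attribute assumed here is the standard LAMLDataset accessor for the underlying Spark DataFrame:

>>> valid_preds = automl.fit_predict(
...     train,
...     roles={'target': 'TARGET'},
...     valid_data=valid,
...     verbose=2,
... )
>>> valid_preds.data.show(5)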

predict(data, return_all_predictions=None, add_reader_attrs=False, persistence_manager=None)[source]

Get dataset with predictions.

Almost the same as lightautoml.automl.base.AutoML.predict on a new dataset, with additional features.

Additional features: working with different data formats. Currently supported:

  • Path to .csv, .parquet, .feather files.

  • ndarray, or dict of ndarrays. For example, {'data': X...}. In this case roles are optional, but train_features and valid_features are required.

  • pandas.DataFrame.

Parallel inference: pass n_jobs to speed up prediction (requires more RAM). Batch inference: pass batch_size to decrease RAM usage (may take longer).

Parameters:
  • data (Union[str, DataFrame]) – Dataset to perform inference.

  • features_names – Optional feature names, if they cannot be inferred from train_data.

  • return_all_predictions (Optional[bool]) – If True, return all model predictions from the last level.

  • add_reader_attrs (bool) – If True, the reader's attributes will be added to the resulting SparkDataset.

Return type:

SparkDataset

Returns:

Dataset with predictions.
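
A minimal inference sketch, assuming test is a Spark DataFrame with the same feature columns as the training data and automl has already been fitted:

>>> preds = automl.predict(test, add_reader_attrs=True)
>>> preds.data.show(5)
>>> # get every last-level model's prediction instead of the blended one
>>> all_preds = automl.predict(test, return_all_predictions=True)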

collect_used_feats()[source]

Get the features that the automl uses at inference time.

Return type:

List[str]

Returns:

List of feature names.
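
For example, the result can be used to check which input columns are actually required at inference time (a sketch, assuming automl is already fitted):

>>> used_feats = automl.collect_used_feats()
>>> print(len(used_feats), used_feats[:5])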

collect_model_stats()[source]

Collect information about the models in the automl.

Return type:

Dict[str, int]

Returns:

Dict mapping model names to their runtime numbers.
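
For example (a sketch, assuming automl is already fitted):

>>> for model_name, runtime_num in automl.collect_model_stats().items():
...     print(model_name, runtime_num)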