SparkAutoML
- class sparklightautoml.automl.base.SparkAutoML(reader=None, levels=None, timer=None, blender=None, skip_conn=False, return_all_predictions=False, computation_settings=('no_parallelism', -1))[source]
Bases: TransformerInputOutputRoles
Class that compiles the full pipeline of an AutoML task.
AutoML steps:
- Read and analyze the data and build an inner LAMLDataset from the input dataset: performed by the reader.
- Create a validation scheme.
- Compute the passed ML pipelines from levels. Each element of levels is a list of MLPipelines; predictions from the current level are passed to the next level's pipelines as features.
- Time monitoring: check whether there is enough time left to compute a new pipeline.
- Blend the last level's models and prune useless pipelines to speed up inference: performed by the blender.
- Return predictions on the validation data. If a cross-validation scheme is used, out-of-fold predictions are returned. If validation data is passed, predictions on that validation dataset are returned. Under a cv scheme, if some point of the train data was never used for validation (e.g. the timeout was exceeded, or a custom cv iterator like TimeSeriesIterator was used), NaN is returned for that point.
Example
Common use case: creating custom pipelines or presets.
>>> reader = SparkToSparkReader()
>>> pipe = SparkMLPipeline([SparkMLAlgo()])
>>> levels = [[pipe]]
>>> automl = SparkAutoML(reader, levels)
>>> automl.fit_predict(data, roles={'target': 'TARGET'})
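The data argument in the example above is a Spark DataFrame. A minimal sketch of producing one, assuming a local Spark session and a placeholder file path:
>>> from pyspark.sql import SparkSession
>>> spark = SparkSession.builder.getOrCreate()
>>> data = spark.read.csv("train.csv", header=True, inferSchema=True)  # placeholder path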
- __init__(reader=None, levels=None, timer=None, blender=None, skip_conn=False, return_all_predictions=False, computation_settings=('no_parallelism', -1))[source]
- Parameters:
  - reader (Optional[SparkToSparkReader]) – Instance of a Reader class object that creates a LAMLDataset from the input data.
  - levels (Optional[Sequence[Sequence[SparkMLPipeline]]]) – List of lists of MLPipelines.
  - timer (Optional[PipelineTimer]) – Timer instance of PipelineTimer. Default: unlimited timer.
  - blender (Optional[SparkBlender]) – Instance of a Blender. Default: BestModelSelector.
  - skip_conn (bool) – True if first-level input features should also be passed to the next levels.
Note
There are several verbosity levels:
0: No messages.
1: Warnings.
2: Info.
3: Debug.
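For illustration, a sketch of wiring these constructor arguments together. The import paths and the SparkBoostLGBM algorithm are assumptions based on the sparklightautoml and lightautoml package layouts and may differ in your installed version:
>>> from lightautoml.utils.timer import PipelineTimer  # assumed import path
>>> from sparklightautoml.automl.base import SparkAutoML
>>> from sparklightautoml.reader.base import SparkToSparkReader
>>> from sparklightautoml.pipelines.ml.base import SparkMLPipeline
>>> from sparklightautoml.ml_algo.boost_lgbm import SparkBoostLGBM  # assumed algo
>>> from sparklightautoml.tasks.base import SparkTask
>>> reader = SparkToSparkReader(task=SparkTask("binary"), cv=5)
>>> level1 = [SparkMLPipeline([SparkBoostLGBM()])]
>>> level2 = [SparkMLPipeline([SparkBoostLGBM()])]
>>> automl = SparkAutoML(
...     reader=reader,
...     levels=[level1, level2],  # level-1 predictions feed level 2
...     timer=PipelineTimer(600),  # overall time budget in seconds
...     skip_conn=True,  # also pass raw input features to level 2
... )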
- fit_predict(train_data, roles, train_features=None, cv_iter=None, valid_data=None, valid_features=None, verbose=0, persistence_manager=None)[source]
Fit on the input data and make predictions on the validation part.
- Parameters:
  - train_data (Any) – Dataset to train on.
  - roles (dict) – Roles dict.
  - train_features (Optional[Sequence[str]]) – Optional feature names, if they cannot be inferred from train_data.
  - cv_iter (Optional[Iterable]) – Custom cv iterator. For example, TimeSeriesIterator.
  - valid_data – Optional validation dataset.
  - valid_features (Optional[Sequence[str]]) – Optional validation dataset feature names, if they cannot be inferred from valid_data.
  - verbose (int) – Controls verbosity.
- Return type:
SparkDataset
- Returns:
Predicted values.
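A hedged usage sketch: train_df is assumed to be a Spark DataFrame with a TARGET column, and automl is built as in the class-level example. Out-of-fold predictions come back as a SparkDataset; the underlying Spark DataFrame is assumed to be reachable via its .data attribute:
>>> oof_preds = automl.fit_predict(
...     train_df,
...     roles={'target': 'TARGET'},
...     verbose=2,  # 0 silent, 1 warnings, 2 info, 3 debug
... )
>>> oof_preds.data.show(5)  # inspect the out-of-fold predictions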
- predict(data, return_all_predictions=None, add_reader_attrs=False, persistence_manager=None)[source]
Get dataset with predictions.
Almost the same as lightautoml.automl.base.AutoML.predict on a new dataset, with additional features.
Additional features - working with different data formats. Supported now:
- Parallel inference - you can pass n_jobs to speed up prediction (requires more RAM).
- Batch inference - you can pass batch_size to decrease RAM usage (may take longer).
- Parameters:
  - data (Union[str, DataFrame]) – Dataset to perform inference on.
  - features_names – Optional feature names, if they cannot be inferred from train_data.
  - return_all_predictions (Optional[bool]) – If True, returns all model predictions from the last level.
  - add_reader_attrs (bool) – If True, the reader's attributes will be added to the SparkDataset.
- Return type:
SparkDataset
- Returns:
Dataset with predictions.
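A hedged inference sketch: test_df is assumed to be a Spark DataFrame with the same feature columns as the training data, and automl is a fitted instance from the examples above:
>>> test_preds = automl.predict(test_df, add_reader_attrs=True)
>>> test_preds.data.show(5)  # .data assumed to expose the Spark DataFrame
>>> all_preds = automl.predict(test_df, return_all_predictions=True)  # one column per last-level model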