public final class Bucketizer extends Model<Bucketizer> implements HasHandleInvalid, HasInputCol, HasOutputCol, HasInputCols, HasOutputCols, DefaultParamsWritable
Bucketizer maps a column of continuous features to a column of feature buckets. Since 2.3.0, Bucketizer can map multiple columns at once by setting the inputCols parameter. Note that when both the inputCol and inputCols parameters are set, an Exception will be thrown. The splits parameter is only used for single-column usage, and splitsArray is for multiple columns.
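A minimal single-column usage sketch in Java (the column names, split points, and sample values are illustrative, not part of this API page):

```java
import java.util.Arrays;
import java.util.List;

import org.apache.spark.ml.feature.Bucketizer;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class BucketizerSketch {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder().appName("BucketizerSketch").getOrCreate();

    // Four split points define three buckets: [-inf, 0.0), [0.0, 0.5), [0.5, +inf).
    double[] splits = {Double.NEGATIVE_INFINITY, 0.0, 0.5, Double.POSITIVE_INFINITY};

    List<Row> data = Arrays.asList(
        RowFactory.create(-0.5), RowFactory.create(0.3), RowFactory.create(0.8));
    StructType schema = new StructType(new StructField[]{
        new StructField("features", DataTypes.DoubleType, false, Metadata.empty())});
    Dataset<Row> df = spark.createDataFrame(data, schema);

    Bucketizer bucketizer = new Bucketizer()
        .setInputCol("features")
        .setOutputCol("bucketedFeatures")
        .setSplits(splits);

    // transform() appends a column of bucket indices to the input dataset.
    bucketizer.transform(df).show();

    spark.stop();
  }
}
```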
| Constructor and Description |
| --- |
| Bucketizer() |
| Bucketizer(String uid) |
| Modifier and Type | Method and Description |
| --- | --- |
| Bucketizer | copy(ParamMap extra): Creates a copy of this instance with the same UID and some extra params. |
| double[] | getSplits() |
| double[][] | getSplitsArray() |
| Param<String> | handleInvalid(): Param for how to handle invalid entries containing NaN values. |
| Param<String> | inputCol(): Param for input column name. |
| StringArrayParam | inputCols(): Param for input column names. |
| static Bucketizer | load(String path) |
| Param<String> | outputCol(): Param for output column name. |
| StringArrayParam | outputCols(): Param for output column names. |
| static MLReader<T> | read() |
| Bucketizer | setHandleInvalid(String value) |
| Bucketizer | setInputCol(String value) |
| Bucketizer | setInputCols(String[] value) |
| Bucketizer | setOutputCol(String value) |
| Bucketizer | setOutputCols(String[] value) |
| Bucketizer | setSplits(double[] value) |
| Bucketizer | setSplitsArray(double[][] value) |
| DoubleArrayParam | splits(): Parameter for mapping continuous features into buckets. |
| DoubleArrayArrayParam | splitsArray(): Parameter for specifying multiple splits parameters. |
| String | toString() |
| Dataset<Row> | transform(Dataset<?> dataset): Transforms the input dataset. |
| StructType | transformSchema(StructType schema): Check transform validity and derive the output schema from the input schema. |
| String | uid(): An immutable unique ID for the object and its derivatives. |
Methods inherited from class org.apache.spark.ml.Transformer: transform, transform, transform
Methods inherited from class org.apache.spark.ml.PipelineStage: params
Methods inherited from interface org.apache.spark.ml.param.shared.HasHandleInvalid: getHandleInvalid
Methods inherited from interface org.apache.spark.ml.param.shared.HasInputCol: getInputCol
Methods inherited from interface org.apache.spark.ml.param.shared.HasOutputCol: getOutputCol
Methods inherited from interface org.apache.spark.ml.param.shared.HasInputCols: getInputCols
Methods inherited from interface org.apache.spark.ml.param.shared.HasOutputCols: getOutputCols
Methods inherited from interface org.apache.spark.ml.param.Params: clear, copyValues, defaultCopy, defaultParamMap, explainParam, explainParams, extractParamMap, extractParamMap, get, getDefault, getOrDefault, getParam, hasDefault, hasParam, isDefined, isSet, paramMap, params, set, set, set, setDefault, setDefault, shouldOwn
Methods inherited from interface org.apache.spark.ml.util.DefaultParamsWritable: write
Methods inherited from interface org.apache.spark.ml.util.MLWritable: save
Methods inherited from interface org.apache.spark.internal.Logging: $init$, initializeForcefully, initializeLogIfNecessary, initializeLogIfNecessary, initializeLogIfNecessary$default$2, initLock, isTraceEnabled, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning, org$apache$spark$internal$Logging$$log__$eq, org$apache$spark$internal$Logging$$log_, uninitialize
public static Bucketizer load(String path)
public static MLReader<T> read()
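A hedged persistence sketch continuing the earlier single-column example; the path is hypothetical, and load(path) restores a Bucketizer previously saved through its MLWriter:

```java
// Hypothetical path; assumes `bucketizer` is the configured instance from the sketch above.
String path = "/tmp/bucketizer-example";
bucketizer.write().overwrite().save(path);

// Static load(path) reads the saved params back; read() returns the underlying MLReader.
Bucketizer restored = Bucketizer.load(path);
```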
public final StringArrayParam outputCols()
Param for output column names.
Specified by: outputCols in interface HasOutputCols
public final StringArrayParam inputCols()
Param for input column names.
Specified by: inputCols in interface HasInputCols
public final Param<String> outputCol()
Param for output column name.
Specified by: outputCol in interface HasOutputCol
public final Param<String> inputCol()
Param for input column name.
Specified by: inputCol in interface HasInputCol
public String uid()
An immutable unique ID for the object and its derivatives.
Specified by: uid in interface Identifiable
public DoubleArrayParam splits()
Parameter for mapping continuous features into buckets.
See also handleInvalid, which can optionally create an additional bucket for NaN values.
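A short sketch of configuring splits, continuing the single-column example above (the split points are illustrative). With n+1 strictly increasing split points there are n buckets, and unbounded feature ranges are typically covered with Double.NEGATIVE_INFINITY / Double.POSITIVE_INFINITY:

```java
// n+1 split points define n buckets; split points must be strictly increasing.
// Bucket i covers [splits[i], splits[i+1]); the last bucket also includes its upper bound.
double[] splits = {Double.NEGATIVE_INFINITY, 0.0, 10.0, 100.0, Double.POSITIVE_INFINITY};
bucketizer.setSplits(splits); // e.g. a feature value of 3.7 lands in bucket 1 ([0.0, 10.0))
```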
public double[] getSplits()
public Bucketizer setSplits(double[] value)
public Bucketizer setInputCol(String value)
public Bucketizer setOutputCol(String value)
public Param<String> handleInvalid()
Param for how to handle invalid entries containing NaN values.
Specified by: handleInvalid in interface HasHandleInvalid
public Bucketizer setHandleInvalid(String value)
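A small sketch of setHandleInvalid; the commonly documented options are "error" (throw on invalid values), "skip" (drop the offending rows), and "keep" (route them into a special additional bucket, matching the note on splits() above):

```java
Bucketizer nanAwareBucketizer = new Bucketizer()
    .setInputCol("features")
    .setOutputCol("bucketedFeatures")
    .setSplits(new double[]{Double.NEGATIVE_INFINITY, 0.0, 0.5, Double.POSITIVE_INFINITY})
    // "keep" places NaN values into an extra bucket instead of failing ("error")
    // or silently dropping the rows ("skip").
    .setHandleInvalid("keep");
```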
public DoubleArrayArrayParam splitsArray()
public double[][] getSplitsArray()
public Bucketizer setSplitsArray(double[][] value)
public Bucketizer setInputCols(String[] value)
public Bucketizer setOutputCols(String[] value)
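A multi-column sketch (available since 2.3.0) combining setInputCols, setOutputCols, and setSplitsArray; the column names, split points, and the df2 dataset are hypothetical:

```java
// One splits array per input column, in the same order as inputCols.
double[][] splitsArray = {
    {Double.NEGATIVE_INFINITY, -0.5, 0.0, 0.5, Double.POSITIVE_INFINITY},
    {Double.NEGATIVE_INFINITY, -0.3, 0.0, 0.3, Double.POSITIVE_INFINITY}
};

// Do not set inputCol/splits at the same time as inputCols/splitsArray,
// since setting both single- and multi-column params throws an exception.
Bucketizer multiColBucketizer = new Bucketizer()
    .setInputCols(new String[]{"features1", "features2"})
    .setOutputCols(new String[]{"bucketed1", "bucketed2"})
    .setSplitsArray(splitsArray);

Dataset<Row> bucketed = multiColBucketizer.transform(df2); // df2: hypothetical two-column dataset
```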
public Dataset<Row> transform(Dataset<?> dataset)
Transforms the input dataset.
Specified by: transform in class Transformer
Parameters: dataset - (undocumented)

public StructType transformSchema(StructType schema)
Check transform validity and derive the output schema from the input schema.
We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate().
Typical implementation should first conduct verification on schema change and parameter validity, including complex parameter interaction checks.
Specified by: transformSchema in class PipelineStage
Parameters: schema - (undocumented)

public Bucketizer copy(ParamMap extra)
Creates a copy of this instance with the same UID and some extra params. See defaultCopy().
Specified by: copy in interface Params
Specified by: copy in class Model<Bucketizer>
Parameters: extra - (undocumented)

public String toString()
Specified by: toString in interface Identifiable
Overrides: toString in class Object