class LinearRegression extends Regressor[Vector, LinearRegression, LinearRegressionModel] with LinearRegressionParams with DefaultParamsWritable with Logging
Linear regression.
The learning objective is to minimize the specified loss function, with regularization. This supports two kinds of loss:
- squaredError (a.k.a. squared loss)
- huber (a hybrid of squared error for relatively small errors and absolute error for relatively large ones, and we estimate the scale parameter from training data)
This supports multiple types of regularization:
- none (a.k.a. ordinary least squares)
- L2 (ridge regression)
- L1 (Lasso)
- L2 + L1 (elastic net)
The squared error objective function is:
$$ \begin{align} \min_{w}\frac{1}{2n}{\sum_{i=1}^n(X_{i}w - y_{i})^{2} + \lambda\left[\frac{1-\alpha}{2}{||w||_{2}}^{2} + \alpha{||w||_{1}}\right]} \end{align} $$
The huber objective function is:
$$ \begin{align} \min_{w, \sigma}\frac{1}{2n}{\sum_{i=1}^n\left(\sigma + H_m\left(\frac{X_{i}w - y_{i}}{\sigma}\right)\sigma\right) + \frac{1}{2}\lambda {||w||_2}^2} \end{align} $$
where
$$ \begin{align} H_m(z) = \begin{cases} z^2, & \text {if } |z| < \epsilon, \\ 2\epsilon|z| - \epsilon^2, & \text{otherwise} \end{cases} \end{align} $$
Note: Fitting with huber loss only supports none and L2 regularization.
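Example: a minimal usage sketch (not from the source), assuming an active SparkSession named spark; "label" and "features" are the default column names:

  import org.apache.spark.ml.linalg.Vectors
  import org.apache.spark.ml.regression.LinearRegression

  // Small illustrative training set with the default "label" and "features" columns.
  val training = spark.createDataFrame(Seq(
    (1.0, Vectors.dense(0.0, 1.1, 0.1)),
    (0.0, Vectors.dense(2.0, 1.0, -1.0)),
    (1.5, Vectors.dense(2.0, 1.3, 1.0))
  )).toDF("label", "features")

  val lr = new LinearRegression()
    .setMaxIter(50)
    .setRegParam(0.1)
    .setElasticNetParam(0.5) // elastic net: a mix of L1 and L2

  val model = lr.fit(training)
  println(s"coefficients: ${model.coefficients} intercept: ${model.intercept}")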
- Annotations
- @Since( "1.3.0" )
- Source
- LinearRegression.scala
- By Inheritance
- LinearRegression
- DefaultParamsWritable
- MLWritable
- LinearRegressionParams
- HasLoss
- HasAggregationDepth
- HasSolver
- HasWeightCol
- HasStandardization
- HasFitIntercept
- HasTol
- HasMaxIter
- HasElasticNetParam
- HasRegParam
- Regressor
- Predictor
- PredictorParams
- HasPredictionCol
- HasFeaturesCol
- HasLabelCol
- Estimator
- PipelineStage
- Logging
- Params
- Serializable
- Serializable
- Identifiable
- AnyRef
- Any
Instance Constructors
- new LinearRegression()
- new LinearRegression(uid: String)
Value Members
-
final
def
!=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
final
def
##(): Int
- Definition Classes
- AnyRef → Any
-
final
def
$[T](param: Param[T]): T
An alias for getOrDefault().
- Attributes
- protected
- Definition Classes
- Params
-
final
def
==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
final
val
aggregationDepth: IntParam
Param for suggested depth for treeAggregate (>= 2).
- Definition Classes
- HasAggregationDepth
-
final
def
asInstanceOf[T0]: T0
- Definition Classes
- Any
-
final
def
clear(param: Param[_]): LinearRegression.this.type
Clears the user-supplied value for the input param.
- Definition Classes
- Params
-
def
clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( ... ) @native()
-
def
copy(extra: ParamMap): LinearRegression
Creates a copy of this instance with the same UID and some extra params. Subclasses should implement this method and set the return type properly. See defaultCopy().
- Definition Classes
- LinearRegression → Predictor → Estimator → PipelineStage → Params
- Annotations
- @Since( "1.4.0" )
-
def
copyValues[T <: Params](to: T, extra: ParamMap = ParamMap.empty): T
Copies param values from this instance to another instance for params shared by them.
This handles default Params and explicitly set Params separately. Default Params are copied from and to defaultParamMap, and explicitly set Params are copied from and to paramMap. Warning: This implicitly assumes that this Params instance and the target instance share the same set of default Params.
- to
the target instance, which should work with the same set of default Params as this source instance
- extra
extra params to be copied to the target's
paramMap
- returns
the target instance with param values copied
- Attributes
- protected
- Definition Classes
- Params
-
final
def
defaultCopy[T <: Params](extra: ParamMap): T
Default implementation of copy with extra params. It tries to create a new instance with the same UID. Then it copies the embedded and extra parameters over and returns the new instance.
- Attributes
- protected
- Definition Classes
- Params
-
final
val
elasticNetParam: DoubleParam
Param for the ElasticNet mixing parameter, in range [0, 1]. For alpha = 0, the penalty is an L2 penalty. For alpha = 1, it is an L1 penalty.
- Definition Classes
- HasElasticNetParam
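For illustration (a sketch, not from the source), the mixing parameter selects between ridge, lasso and elastic net; regParam scales the whole penalty term:

  import org.apache.spark.ml.regression.LinearRegression

  val ridge = new LinearRegression().setRegParam(0.3).setElasticNetParam(0.0) // pure L2
  val lasso = new LinearRegression().setRegParam(0.3).setElasticNetParam(1.0) // pure L1
  val enet  = new LinearRegression().setRegParam(0.3).setElasticNetParam(0.5) // L1/L2 mix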
-
final
val
epsilon: DoubleParam
The shape parameter to control the amount of robustness. Must be > 1.0. At larger values of epsilon, the huber criterion becomes more similar to least squares regression; for small values of epsilon, the criterion is more similar to L1 regression. Default is 1.35 to get as much robustness as possible while retaining 95% statistical efficiency for normally distributed data. It matches sklearn HuberRegressor and is "M" from "A robust hybrid of lasso and ridge regression". Only valid when "loss" is "huber".
- Definition Classes
- LinearRegressionParams
- Annotations
- @Since( "2.3.0" )
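A minimal huber configuration sketch (illustrative values; huber supports only none or L2 regularization and the "l-bfgs" solver):

  import org.apache.spark.ml.regression.LinearRegression

  val huberLR = new LinearRegression()
    .setLoss("huber")
    .setEpsilon(1.35)        // default; larger values behave more like squared error
    .setRegParam(0.1)        // acts as an L2 penalty under huber loss
    .setElasticNetParam(0.0) // must stay 0.0: huber does not support L1
    .setSolver("l-bfgs")     // the "normal" solver is not supported with huber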
-
final
def
eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
def
equals(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
def
explainParam(param: Param[_]): String
Explains a param.
- param
input param, must belong to this instance.
- returns
a string that contains the input param name, doc, and optionally its default value and the user-supplied value
- Definition Classes
- Params
-
def
explainParams(): String
Explains all params of this instance. See explainParam().
- Definition Classes
- Params
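A quick way to inspect configured params (sketch; lr is a hypothetical instance):

  val lr = new org.apache.spark.ml.regression.LinearRegression().setRegParam(0.1)
  println(lr.explainParam(lr.regParam)) // one param: name, doc, default and current value
  println(lr.explainParams())           // all params of this instance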
-
def
extractInstances(dataset: Dataset[_], validateInstance: (Instance) ⇒ Unit): RDD[Instance]
Extract labelCol, weightCol(if any) and featuresCol from the given dataset, and put it in an RDD with strong types. Validate the output instances with the given function.
- Attributes
- protected
- Definition Classes
- PredictorParams
-
def
extractInstances(dataset: Dataset[_]): RDD[Instance]
Extract labelCol, weightCol(if any) and featuresCol from the given dataset, and put it in an RDD with strong types.
- Attributes
- protected
- Definition Classes
- PredictorParams
-
def
extractLabeledPoints(dataset: Dataset[_]): RDD[LabeledPoint]
Extract labelCol and featuresCol from the given dataset, and put it in an RDD with strong types.
- Attributes
- protected
- Definition Classes
- Predictor
-
final
def
extractParamMap(): ParamMap
extractParamMap with no extra values.
- Definition Classes
- Params
-
final
def
extractParamMap(extra: ParamMap): ParamMap
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if a conflict exists, i.e., with ordering: default param values < user-supplied values < extra.
- Definition Classes
- Params
-
final
val
featuresCol: Param[String]
Param for features column name.
- Definition Classes
- HasFeaturesCol
-
def
finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( classOf[java.lang.Throwable] )
-
def
fit(dataset: Dataset[_]): LinearRegressionModel
Fits a model to the input data.
-
def
fit(dataset: Dataset[_], paramMaps: Array[ParamMap]): Seq[LinearRegressionModel]
Fits multiple models to the input data with multiple sets of parameters. The default implementation uses a for loop on each parameter map. Subclasses could override this to optimize multi-model training.
- dataset
input dataset
- paramMaps
An array of parameter maps. These values override any specified in this Estimator's embedded ParamMap.
- returns
fitted models, matching the input parameter maps
- Definition Classes
- Estimator
- Annotations
- @Since( "2.0.0" )
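A sketch of multi-model fitting with parameter maps (training is a hypothetical DataFrame with "label" and "features" columns):

  import org.apache.spark.ml.param.ParamMap
  import org.apache.spark.ml.regression.LinearRegression

  val lr = new LinearRegression().setMaxIter(50)

  // Each ParamMap overrides the estimator's embedded params for one fit.
  val paramMaps = Array(
    ParamMap(lr.regParam -> 0.01, lr.elasticNetParam -> 0.0),
    ParamMap(lr.regParam -> 0.10, lr.elasticNetParam -> 1.0)
  )
  val models = lr.fit(training, paramMaps) // one fitted model per ParamMap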
-
def
fit(dataset: Dataset[_], paramMap: ParamMap): LinearRegressionModel
Fits a single model to the input data with provided parameter map.
- dataset
input dataset
- paramMap
Parameter map. These values override any specified in this Estimator's embedded ParamMap.
- returns
fitted model
- Definition Classes
- Estimator
- Annotations
- @Since( "2.0.0" )
-
def
fit(dataset: Dataset[_], firstParamPair: ParamPair[_], otherParamPairs: ParamPair[_]*): LinearRegressionModel
Fits a single model to the input data with optional parameters.
- dataset
input dataset
- firstParamPair
the first param pair, overrides embedded params
- otherParamPairs
other param pairs. These values override any specified in this Estimator's embedded ParamMap.
- returns
fitted model
- Definition Classes
- Estimator
- Annotations
- @Since( "2.0.0" ) @varargs()
-
final
val
fitIntercept: BooleanParam
Param for whether to fit an intercept term.
- Definition Classes
- HasFitIntercept
-
final
def
get[T](param: Param[T]): Option[T]
Optionally returns the user-supplied value of a param.
- Definition Classes
- Params
-
final
def
getAggregationDepth: Int
- Definition Classes
- HasAggregationDepth
-
final
def
getClass(): Class[_]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
-
final
def
getDefault[T](param: Param[T]): Option[T]
Gets the default value of a parameter.
- Definition Classes
- Params
-
final
def
getElasticNetParam: Double
- Definition Classes
- HasElasticNetParam
-
def
getEpsilon: Double
- Definition Classes
- LinearRegressionParams
- Annotations
- @Since( "2.3.0" )
-
final
def
getFeaturesCol: String
- Definition Classes
- HasFeaturesCol
-
final
def
getFitIntercept: Boolean
- Definition Classes
- HasFitIntercept
-
final
def
getLabelCol: String
- Definition Classes
- HasLabelCol
-
final
def
getLoss: String
- Definition Classes
- HasLoss
-
final
def
getMaxIter: Int
- Definition Classes
- HasMaxIter
-
final
def
getOrDefault[T](param: Param[T]): T
Gets the value of a param in the embedded param map or its default value. Throws an exception if neither is set.
- Definition Classes
- Params
-
def
getParam(paramName: String): Param[Any]
Gets a param by its name.
- Definition Classes
- Params
-
final
def
getPredictionCol: String
- Definition Classes
- HasPredictionCol
-
final
def
getRegParam: Double
- Definition Classes
- HasRegParam
-
final
def
getSolver: String
- Definition Classes
- HasSolver
-
final
def
getStandardization: Boolean
- Definition Classes
- HasStandardization
-
final
def
getTol: Double
- Definition Classes
- HasTol
-
final
def
getWeightCol: String
- Definition Classes
- HasWeightCol
-
final
def
hasDefault[T](param: Param[T]): Boolean
Tests whether the input param has a default value set.
- Definition Classes
- Params
-
def
hasParam(paramName: String): Boolean
Tests whether this instance contains a param with a given name.
- Definition Classes
- Params
-
def
hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
-
def
initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean = false): Boolean
- Attributes
- protected
- Definition Classes
- Logging
-
def
initializeLogIfNecessary(isInterpreter: Boolean): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
final
def
isDefined(param: Param[_]): Boolean
Checks whether a param is explicitly set or has a default value.
- Definition Classes
- Params
-
final
def
isInstanceOf[T0]: Boolean
- Definition Classes
- Any
-
final
def
isSet(param: Param[_]): Boolean
Checks whether a param is explicitly set.
- Definition Classes
- Params
-
def
isTraceEnabled(): Boolean
- Attributes
- protected
- Definition Classes
- Logging
-
final
val
labelCol: Param[String]
Param for label column name.
- Definition Classes
- HasLabelCol
-
def
log: Logger
- Attributes
- protected
- Definition Classes
- Logging
-
def
logDebug(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logDebug(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logError(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logError(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logInfo(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logInfo(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logName: String
- Attributes
- protected
- Definition Classes
- Logging
-
def
logTrace(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logTrace(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logWarning(msg: ⇒ String, throwable: Throwable): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
def
logWarning(msg: ⇒ String): Unit
- Attributes
- protected
- Definition Classes
- Logging
-
final
val
loss: Param[String]
The loss function to be optimized. Supported options: "squaredError" and "huber". Default: "squaredError"
- Definition Classes
- LinearRegressionParams → HasLoss
- Annotations
- @Since( "2.3.0" )
-
final
val
maxIter: IntParam
Param for maximum number of iterations (>= 0).
- Definition Classes
- HasMaxIter
-
final
def
ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
final
def
notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
-
final
def
notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
-
lazy val
params: Array[Param[_]]
Returns all params sorted by their names. The default implementation uses Java reflection to list all public methods that have no arguments and return Param.
- Definition Classes
- Params
- Note
Developers should not use this method in constructors because we cannot guarantee that this variable gets initialized before other params.
-
final
val
predictionCol: Param[String]
Param for prediction column name.
- Definition Classes
- HasPredictionCol
-
final
val
regParam: DoubleParam
Param for regularization parameter (>= 0).
- Definition Classes
- HasRegParam
-
def
save(path: String): Unit
Saves this ML instance to the input path, a shortcut of write.save(path).
- Definition Classes
- MLWritable
- Annotations
- @Since( "1.6.0" ) @throws( ... )
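A persistence sketch (the path is hypothetical; LinearRegression.load comes from the companion object via DefaultParamsReadable):

  import org.apache.spark.ml.regression.LinearRegression

  val lr = new LinearRegression().setRegParam(0.1)
  lr.write.overwrite().save("/tmp/lr-estimator")       // lr.save(path) without the overwrite option
  val restored = LinearRegression.load("/tmp/lr-estimator")
  assert(restored.getRegParam == 0.1)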
-
final
def
set(paramPair: ParamPair[_]): LinearRegression.this.type
Sets a parameter in the embedded param map.
- Attributes
- protected
- Definition Classes
- Params
-
final
def
set(param: String, value: Any): LinearRegression.this.type
Sets a parameter (by name) in the embedded param map.
- Attributes
- protected
- Definition Classes
- Params
-
final
def
set[T](param: Param[T], value: T): LinearRegression.this.type
Sets a parameter in the embedded param map.
- Definition Classes
- Params
-
def
setAggregationDepth(value: Int): LinearRegression.this.type
Suggested depth for treeAggregate (greater than or equal to 2). If the dimensions of features or the number of partitions are large, this param could be adjusted to a larger size. Default is 2.
- Annotations
- @Since( "2.1.0" )
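For example (illustrative value), a deeper treeAggregate can help with very wide feature vectors or many partitions:

  val wideLR = new org.apache.spark.ml.regression.LinearRegression()
    .setAggregationDepth(3) // default is 2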
-
final
def
setDefault(paramPairs: ParamPair[_]*): LinearRegression.this.type
Sets default values for a list of params.
Note: Java developers should use the single-parameter setDefault. Annotating this with varargs can cause compilation failures due to a Scala compiler bug. See SPARK-9268.
- paramPairs
a list of param pairs that specify params and their default values to set respectively. Make sure that the params are initialized before this method gets called.
- Attributes
- protected
- Definition Classes
- Params
-
final
def
setDefault[T](param: Param[T], value: T): LinearRegression.this.type
Sets a default value for a param.
- param
param to set the default value. Make sure that this param is initialized before this method gets called.
- value
the default value
- Attributes
- protected
- Definition Classes
- Params
-
def
setElasticNetParam(value: Double): LinearRegression.this.type
Set the ElasticNet mixing parameter. For alpha = 0, the penalty is an L2 penalty. For alpha = 1, it is an L1 penalty. For alpha in (0, 1), the penalty is a combination of L1 and L2. Default is 0.0, which is an L2 penalty.
Note: Fitting with huber loss only supports none and L2 regularization, so an exception is thrown if this param is set to a non-zero value.
- Annotations
- @Since( "1.4.0" )
-
def
setEpsilon(value: Double): LinearRegression.this.type
Sets the value of param epsilon. Default is 1.35.
- Annotations
- @Since( "2.3.0" )
-
def
setFeaturesCol(value: String): LinearRegression
- Definition Classes
- Predictor
-
def
setFitIntercept(value: Boolean): LinearRegression.this.type
Set if we should fit the intercept. Default is true.
- Annotations
- @Since( "1.5.0" )
-
def
setLabelCol(value: String): LinearRegression
- Definition Classes
- Predictor
-
def
setLoss(value: String): LinearRegression.this.type
Sets the value of param loss. Default is "squaredError".
- Annotations
- @Since( "2.3.0" )
-
def
setMaxIter(value: Int): LinearRegression.this.type
Set the maximum number of iterations. Default is 100.
- Annotations
- @Since( "1.3.0" )
-
def
setPredictionCol(value: String): LinearRegression
- Definition Classes
- Predictor
-
def
setRegParam(value: Double): LinearRegression.this.type
Set the regularization parameter. Default is 0.0.
- Annotations
- @Since( "1.3.0" )
-
def
setSolver(value: String): LinearRegression.this.type
Set the solver algorithm used for optimization. For linear regression, this can be "l-bfgs", "normal" or "auto".
- "l-bfgs" denotes Limited-memory BFGS, a limited-memory quasi-Newton optimization method.
- "normal" denotes using the Normal Equation as an analytical solution to the linear regression problem. This solver is limited to LinearRegression.MAX_FEATURES_FOR_NORMAL_SOLVER features.
- "auto" (default) means that the solver algorithm is selected automatically: the Normal Equation solver is used when possible, with automatic fallback to iterative optimization methods when needed.
Note: Fitting with huber loss doesn't support the normal solver, so an exception is thrown if this param is set to "normal".
- Annotations
- @Since( "1.6.0" )
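A sketch of selecting a solver explicitly ("auto" is the default):

  import org.apache.spark.ml.regression.LinearRegression

  // Closed-form Normal Equation solve; only applicable with squaredError loss
  // and up to MAX_FEATURES_FOR_NORMAL_SOLVER features.
  val normalLR = new LinearRegression().setSolver("normal")

  // Iterative quasi-Newton optimization; required when loss = "huber".
  val lbfgsLR = new LinearRegression().setSolver("l-bfgs")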
-
def
setStandardization(value: Boolean): LinearRegression.this.type
Whether to standardize the training features before fitting the model. The coefficients of models will be always returned on the original scale, so it will be transparent for users. Default is true.
- Annotations
- @Since( "1.5.0" )
- Note
With or without standardization, the models should always converge to the same solution when no regularization is applied. In R's GLMNET package, the default behavior is true as well.
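An illustrative sketch; in both cases the returned coefficients are on the original feature scale:

  val lrStd = new org.apache.spark.ml.regression.LinearRegression()
    .setStandardization(true)  // default: standardize features internally before fitting
    .setRegParam(0.3)

  val lrRaw = new org.apache.spark.ml.regression.LinearRegression()
    .setStandardization(false) // penalize coefficients on the raw feature scale
    .setRegParam(0.3)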
-
def
setTol(value: Double): LinearRegression.this.type
Set the convergence tolerance of iterations. A smaller value will lead to higher accuracy at the cost of more iterations. Default is 1E-6.
- Annotations
- @Since( "1.4.0" )
-
def
setWeightCol(value: String): LinearRegression.this.type
Whether to over-/under-sample training instances according to the given weights in weightCol. If not set or empty, all instances are treated equally (weight 1.0). Default is not set, so all instances have weight one.
- Annotations
- @Since( "1.6.0" )
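A weighting sketch (assuming a training DataFrame named training that carries a numeric column named "weight"; both names are hypothetical):

  import org.apache.spark.ml.regression.LinearRegression

  // A row with weight 2.0 counts twice as much in the loss as a row with weight 1.0.
  val weightedLR = new LinearRegression().setWeightCol("weight")
  val weightedModel = weightedLR.fit(training)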
-
final
val
solver: Param[String]
The solver algorithm for optimization. Supported options: "l-bfgs", "normal" and "auto". Default: "auto"
- Definition Classes
- LinearRegressionParams → HasSolver
- Annotations
- @Since( "1.6.0" )
-
final
val
standardization: BooleanParam
Param for whether to standardize the training features before fitting the model.
- Definition Classes
- HasStandardization
-
final
def
synchronized[T0](arg0: ⇒ T0): T0
- Definition Classes
- AnyRef
-
def
toString(): String
- Definition Classes
- Identifiable → AnyRef → Any
-
final
val
tol: DoubleParam
Param for the convergence tolerance for iterative algorithms (>= 0).
- Definition Classes
- HasTol
-
def
train(dataset: Dataset[_]): LinearRegressionModel
Train a model using the given dataset and parameters. Developers can implement this instead of fit() to avoid dealing with schema validation and copying parameters into the model.
- dataset
Training dataset
- returns
Fitted model
- Attributes
- protected
- Definition Classes
- LinearRegression → Predictor
-
def
transformSchema(schema: StructType): StructType
:: DeveloperApi ::
Check transform validity and derive the output schema from the input schema.
We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate().
Typical implementations should first conduct verification on schema changes and parameter validity, including complex parameter interaction checks.
- Definition Classes
- Predictor → PipelineStage
-
def
transformSchema(schema: StructType, logging: Boolean): StructType
:: DeveloperApi ::
Derives the output schema from the input schema and parameters, optionally with logging.
This should be optimistic. If it is unclear whether the schema will be valid, then it should be assumed valid until proven otherwise.
- Attributes
- protected
- Definition Classes
- PipelineStage
- Annotations
- @DeveloperApi()
-
val
uid: String
An immutable unique ID for the object and its derivatives.
- Definition Classes
- LinearRegression → Identifiable
- Annotations
- @Since( "1.3.0" )
-
def
validateAndTransformSchema(schema: StructType, fitting: Boolean, featuresDataType: DataType): StructType
Validates and transforms the input schema with the provided param map.
- schema
input schema
- fitting
whether this is in fitting
- featuresDataType
SQL DataType for FeaturesType. E.g., VectorUDT for vector features.
- returns
output schema
- Attributes
- protected
- Definition Classes
- LinearRegressionParams → PredictorParams
-
final
def
wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final
def
wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final
def
wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... ) @native()
-
final
val
weightCol: Param[String]
Param for weight column name. If this is not set or empty, we treat all instance weights as 1.0.
- Definition Classes
- HasWeightCol
-
def
write: MLWriter
Returns an MLWriter instance for this ML instance.
- Definition Classes
- DefaultParamsWritable → MLWritable