In the context of this exercise, what else does the
LinearRegression function provide?
There are a few other things that the
LinearRegression function provides and lets us do.
When creating a
LinearRegression model, you can choose whether it should calculate an intercept for the model by setting the
fit_intercept parameter to
False. If you choose not to calculate an intercept, the model will expect that the data is already centered.
In addition, you can obtain all the parameters of the model's estimator using the
get_params() method, and you can use the
score() method to obtain the R^2 score (the coefficient of determination), which indicates how well the regression line fits the data.
To see a full list of everything you can do with the
LinearRegression function, with more details on each method and parameter, check out the documentation.
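As a quick sketch of the points above (the data here is made up for illustration): fitting a model with the default fit_intercept=True, inspecting its parameters with get_params(), and scoring it with score(); then fitting a second model with fit_intercept=False on data we center ourselves.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data lying exactly on the line y = 2x + 1
X = np.array([[1], [2], [3], [4]])
y = np.array([3, 5, 7, 9])

model = LinearRegression()  # fit_intercept=True by default
model.fit(X, y)

print(model.get_params())   # all estimator parameters, e.g. fit_intercept
print(model.score(X, y))    # R^2 score; 1.0 here, since the fit is perfect
print(model.coef_, model.intercept_)  # slope ~2.0, intercept ~1.0

# With fit_intercept=False, the model assumes the data is already centered,
# so we subtract the means ourselves before fitting.
centered = LinearRegression(fit_intercept=False)
centered.fit(X - X.mean(), y - y.mean())
print(centered.coef_)       # slope still ~2.0; intercept_ is fixed at 0.0
```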
For anyone who may have been puzzled by the wording regarding fit_intercept, here is the explanation from the documentation:
fit_intercept : boolean, optional, default True
whether to calculate the intercept for this model. If set to False, no intercept will be used in calculations (e.g. data is expected to be already centered).
The .fit() method gives the model two attributes that are useful to us:
line_fitter.coef_ , which contains the slope
line_fitter.intercept_ , which contains the intercept
Is there any point in printing these figures out?
I tried to print them but was unable to find the variables.
great question, I’d also like to know
Just print them:
# variable name = line_fitter
print(line_fitter.coef_)
print(line_fitter.intercept_)
They have hidden the terminal output, and hence you can’t see the values printed to screen.
The .fit() method creates the equation of the line in this example, right?
Then what is the point of the
.predict() method in this example, and in general for linear regression, when we already have the equation of the line (by having m and b)?
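You could indeed compute predictions by hand from coef_ and intercept_; .predict() simply does that evaluation for you (and generalizes to many features, where the "equation" is a dot product rather than a single m*x + b). A small sketch, reusing the line_fitter name from the lesson on made-up data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data on the line y = 2x + 1
X = np.array([[1], [2], [3], [4]])
y = np.array([3, 5, 7, 9])

line_fitter = LinearRegression()
line_fitter.fit(X, y)

m = line_fitter.coef_[0]
b = line_fitter.intercept_

# Manual evaluation of the fitted equation at x = 5...
manual = m * 5 + b
# ...and the same thing via .predict()
via_predict = line_fitter.predict([[5]])[0]

print(manual, via_predict)  # both are approximately 11.0
```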