Notes on Statistical Learning

My notes on the book An Introduction to Statistical Learning with Applications in R.

Questions

  • Why is the $R^2$ statistic exactly equal to $Cor(X,Y)^2$ in simple linear regression?

  • Is a more complicated model more flexible or less? A: flexible = complicated; e.g., linear regression is inflexible since it can only produce straight-line (hyperplane) shapes.

  • Is the $k$-fold CV value used to approximate the test error?

Notes

  • In regression, we use $X$ or $X_i$ in the theoretical model and $x$ or $x_i$ in the prediction/regression process.

2 Statistical Learning

2.1 What is statistical learning

statistical learning predictors/independent variables/explanatory variables (the usual term in economics) $X$ --> responses/dependent variables/explained variables $Y$

prediction/inference prediction is used to estimate $Y$ (accurately predict the response for future predictors), while inference is used to understand the way that $Y$ is affected as $X$ changes (a better understanding of the relationship between response and predictors).

parametric methods a) make an assumption on the functional form/model; b) use the training data to fit or train the model.

  • pros. simplifies the problem to estimating a few parameters.
  • cons. the model we choose may not match the true unknown form of $f$ well.

non-parametric methods do not make explicit assumptions about the functional form of $f$; they seek an estimate of $f$ that gets as close to the data points as possible without being too rough or wiggly.

  • pros. have the potential to accurately fit a wide range of possible shapes of $f$.
  • cons. require many observations (training data); prone to overfitting.

accuracy/interpretability when we are interested in inference, we can use simple and inflexible models such as linear regression, since we want to know how each predictor is associated with the response (e.g., do any irrelevant variables lead to unnecessary complexity in the resulting model?); these methods are more interpretable. If we are interested in prediction, accuracy matters more than interpretability, so we choose flexible models.

supervised/unsupervised learning if we have both predictors and responses in the data, we can use supervised learning; if we only have "unlabeled" data (from Wikipedia), we use unsupervised learning.

regression/classification we tend to refer to problems with a quantitative response (taking on numerical values) as regression problems, and those involving a qualitative response (taking on values in one of $K$ different classes) as classification problems. E.g., linear regression vs. logistic regression.

2.2 Assessing the model accuracy

mean squared error (MSE) $MSE=\frac{1}{n}\sum_{i=1}^n (y_i-\hat{f}(x_i))^2$; when the training MSE is much smaller than the test MSE, the model is overfitting. Cross-validation is a method for estimating the test MSE using the training data.

variance/bias variance refers to the amount by which $\hat{f}$ would change if we estimated it using a different training data set. bias refers to the error introduced by approximating the true $f$ with the chosen model. As we use more flexible models, the variance increases and the bias decreases. Note that all of these are properties of the model itself; as for data, more training data can help reduce the variance. Mathematically, the expected test MSE at a single point $x_0$ is
$$E\big[(y_0-\hat f(x_0))^2\big]=\underbrace{E\big[(\hat f(x_0)-E\hat f(x_0))^2\big]}_{Var(\hat f(x_0))}+\underbrace{\big[E\hat f(x_0)-f(x_0)\big]^2}_{Bias^2(\hat f(x_0))} + Var (\epsilon)$$
which is the sum of the variance, the squared bias, and the irreducible error variance (see here). Note that $\hat f(x)$ is an estimate of $f(x)$ rather than of $y$. The difference is that $f(x)$ is the main relationship we want to describe in our model. For example, in linear regression $f(x)$ denotes the linear part, but the actual relationship $y(x)$ is not exactly linear, which is why we have $\epsilon$.

3 Linear Regression

3.1 Simple linear regression

population regression line $Y=\beta_0 + \beta_1 X +\epsilon$, where $\beta_0$ is the intercept, $\beta_1$ is the slope, and $\epsilon$ is a mean-zero random error term independent of $X$. $\beta_0$ and $\beta_1$ are called coefficients or parameters. We use $\hat{y}=\hat{\beta_0}+\hat{\beta_1}x$ to predict future responses.

Here, $\hat{\beta_0}=\bar{y}-\hat{\beta_1}\bar{x}$ and $\hat{\beta_1}=\frac{\sum (x_i-\bar{x})(y_i-\bar{y})}{\sum (x_i-\bar{x})^2}$. $Var(\hat{\beta_0})=\sigma^2[\frac{1}{n}+\frac{\bar{x}^2}{\sum (x_i-\bar{x})^2}]$ and $Var(\hat{\beta_1})=\frac{\sigma ^2}{\sum (x_i-\bar{x})^2}$, where $\hat{\sigma}^2= \frac{RSS}{n-2}$ is an unbiased estimate of $\sigma^2$ (note that $\hat{\sigma}$ is only an estimate of $\sigma$).
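
A minimal numerical check of these closed-form expressions, not from the book; a Python sketch with simulated data and my own variable names:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(scale=1.5, size=n)    # true beta0=2, beta1=3, sigma=1.5

x_bar, y_bar = x.mean(), y.mean()
beta1_hat = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
beta0_hat = y_bar - beta1_hat * x_bar

rss = np.sum((y - beta0_hat - beta1_hat * x) ** 2)
sigma2_hat = rss / (n - 2)                           # unbiased estimate of sigma^2 (RSE^2)
se_beta1 = np.sqrt(sigma2_hat / np.sum((x - x_bar) ** 2))
se_beta0 = np.sqrt(sigma2_hat * (1 / n + x_bar ** 2 / np.sum((x - x_bar) ** 2)))

print(beta0_hat, beta1_hat, se_beta0, se_beta1)
```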

residual sum of squares (RSS) $RSS=\sum (y_i-\hat{\beta_0}-\hat{\beta_1}x_i)^2=n\times MSE$.

unbiased estimate $\hat{\mu}=\bar{y}=\frac{1}{n}\sum y_i$ is the unbiased estimate of the mean $\mu$ of $y$. In general, if $E[\text{statistic}]=a$, then that statistic is an unbiased estimate of $a$.

hypothesis test on the coefficients null hypothesis $H_0$: no relationship between $X$ and $Y$, i.e., $\beta_1=0$; alternative hypothesis $H_a$: $\beta_1\ne 0$.

  • t-statistic. $t=\frac{\hat{\beta_1}-0}{\sqrt{Var(\hat{\beta_1})}}\thicksim t(n-2)$

  • p-value. the probability of observing a test statistic at least as extreme as $|t|$ under $H_0$. A small p-value (e.g., below 1%) means we reject the null hypothesis and conclude that a relationship exists.

model accuracy to know the extent to which the model fits the data, we can use the residual standard error (RSE), $RSE=\sqrt{RSS/(n-2)}$ (smaller is better), or the $R^2$ statistic: $R^2=1-\frac{RSS}{TSS}$, where $TSS=\sum (y_i-\bar{y})^2$; in simple linear regression $R^2=Cor(X,Y)^2$. Larger $R^2$ is better.

3.2 Multiple linear regression

population regression line $Y=\beta_0+\beta_1X_1+\beta_2X_2+\cdots+\beta_pX_p+\epsilon, p\ll n$.

hypothesis test null hypothesis $H_0: \beta_1=\beta_2=\cdots=\beta_p=0$; we compute the F-statistic: $F=\frac{(TSS-RSS)/p}{RSS/(n-p-1)}\thicksim F(p,n-p-1)$. In multiple linear regression, we should use the F-statistic rather than the individual t-statistics, since when $p$ is large we are almost guaranteed to see some small p-values just by chance, so individual p-values can be misleading.

model accuracy we can use $RSE=\sqrt{RSS/(n-p-1)}$; we can also use $R^2$ statistic: $R^2=Cor(Y,\hat{Y})^2$.

3.3 Other considerations in the regression model

qualitative predictors the final predictions are identical regardless of the coding scheme (dummy variables) used for qualitative predictors.

synergy effect/interaction effect the effect of one predictor on the response may depend on the value of another predictor, so we add an interaction term $\beta_3X_1X_2$ to the model. By the hierarchical principle, we should retain the main effects when we include an interaction term, even if the p-values associated with the main effects are not statistically significant.

polynomial regression sometimes a quadratic term, as in $y=\beta_0+\beta_1x+\beta_2x^2$, may provide a better fit; this is still a linear model. (See Chapter 7)

correlation of error terms if the error terms $\epsilon_1,\epsilon_2,\dots$ are correlated, the estimated standard errors understate the true standard errors, so confidence intervals are narrower than they should be and p-values are too small. For example, if we accidentally doubled the data, the confidence intervals would shrink by a factor of $\sqrt{2}$ even though no new information was added. Correlation usually occurs in time series data; we can plot the residuals $e_i=y_i-\hat{y}_i$ against time to detect it.

non-constant variance of error terms the error terms may have non-constant variance (heteroscedasticity), which can be seen in the residual-vs-fitted-value plot. We can transform $Y$ using a concave function such as $\log Y$ or $\sqrt{Y}$ to reduce the problem.

influential points

  • outliers. outliers are points with an unusual $y_i-\hat{y}_i$, meaning that the relationship between $x$ and $y$ is different for that point than for the others. We can identify them by checking the studentized residuals $\frac{e_i}{\sqrt{Var(e_i)}}$: if the absolute value exceeds 3, we may regard the point as an outlier.

  • high leverage observations. these are points whose $x_i$ is far from the mean $\bar{x}$; observations with a very low or very high value of $x_i$ are in positions of high leverage. In simple linear regression, we use the leverage statistic $h_i=\frac{1}{n}+\frac{(x_i-\bar{x})^2}{\sum_k (x_k-\bar{x})^2}, \frac{1}{n}< h_i<1$ to judge. If the leverage statistic greatly exceeds the average value $(p+1)/n$ ($p=1$ in simple linear regression), we regard it as a high-leverage point.

  • if a point is both an outlier and a high-leverage point, it is an influential point and particularly dangerous.

collinearity it refers to the situation where some predictors are correlated with each other, e.g., gender and height.

  • influence. collinearity makes the RSS contour plot narrow (a long valley of nearly equivalent solutions); it reduces the accuracy of the estimates $\hat\beta_i$ and causes larger standard errors and p-values.

  • detection. a simple way is to look at the correlation matrix of the predictors; a better way to check for multicollinearity is to compute the variance inflation factor (VIF): $VIF(\hat\beta_j)=1/(1-R^2_{X_j|X_{-j}})$, where $R^2_{X_j|X_{-j}}$ is the $R^2$ statistic of the regression $X_j=\sum_{i\ne j}\alpha_iX_i+\epsilon$. The $VIF\ge1$, and it indicates a problem when it exceeds 5 or 10. (A small computational sketch follows this list.)

  • solution. drop one of the problematic predictors, since it carries little additional information; or combine the collinear variables into a single predictor.
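
A minimal sketch of the VIF computation referenced above, not from the book; the simulated predictors and the helper `vif` are my own:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + 0.1 * rng.normal(size=n)   # x2 nearly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

def vif(X, j):
    """VIF of column j: 1 / (1 - R^2 of X_j regressed on the other columns)."""
    others = np.delete(X, j, axis=1)
    r2 = LinearRegression().fit(others, X[:, j]).score(others, X[:, j])
    return 1.0 / (1.0 - r2)

print([round(vif(X, j), 2) for j in range(X.shape[1])])  # x1 and x2 show large VIFs
```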

3.5 Comparison of linear regression and KNN

KNN regression given a $K$ and a prediction point $x_0$, KNN uses the set $N_0$ of the $K$ training observations nearest to $x_0$ to estimate $f(x_0)$: $\hat{f}(x_0)=\frac{1}{K}\sum_{x_i\in N_0}y_i$
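
A minimal sketch of this averaging rule for a one-dimensional predictor, not from the book; data and names are my own:

```python
import numpy as np

def knn_predict(x0, x_train, y_train, K):
    """Average the responses of the K training points closest to x0."""
    idx = np.argsort(np.abs(x_train - x0))[:K]   # N_0: indices of the K nearest neighbours
    return y_train[idx].mean()

rng = np.random.default_rng(0)
x_train = rng.uniform(-3, 3, size=200)
y_train = np.sin(x_train) + rng.normal(scale=0.3, size=200)

print(knn_predict(0.5, x_train, y_train, K=5), np.sin(0.5))  # estimate vs. true f(0.5)
```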

bias-variance trade-off small $K$ provides more flexible fit: low bias and high variance.

KNN and linear regression in linear situations: linear regression is more accurate than KNN; in non-linear situations: KNN might be better; in high dimensions ($p$ is large), the $K$ nearest observations may be far away from $x_0$, which is called curse of dimensionality. In general, we prefer linear regression for its simplicity and available p-values.

4 Classification

4.2 Why not linear regression

dummy variable for a binary qualitative response, we use a dummy variable coded $\{0,1\}$. This cannot easily be extended to more than two levels.

the logistic regression for a binary response, linear regression on the dummy variable gives the same classification as the LDA procedure, but its estimates may fall outside $[0,1]$, so we use logistic regression instead.

4.3 Logistic regression

logistic function the function is $p(X)=p(Y=1|X)=\frac{e^{\beta_0+\beta_1X}}{1+e^{\beta_0+\beta_1X}}$. The log-odds/logit is linear in $X$: $\log (\frac{p(X)}{1-p(X)})=\beta_0+\beta_1X$.

maximum likelihood choose $\hat{\beta_0}$ and $\hat{\beta_1}$ that maximize the likelihood function $\ell(\beta_0,\beta_1)=\prod_{i:y_i=1} p(x_i)\prod_{j:y_j=0}(1-p(x_j))$. We use the z-statistic here for the hypothesis test on the coefficients.

make predictions we use $\hat{p}(X)=\frac{e^{\hat{\beta}_0+\hat{\beta}_1X}}{1+e^{\hat{\beta}_0+\hat{\beta}_1X}}$ to make predictions.
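
A hedged sketch of fitting and predicting with scikit-learn's `LogisticRegression` (the maximum likelihood problem is solved numerically; the very large `C` below is my way of making the default regularization negligible, and the data are invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=(n, 1))
p = 1 / (1 + np.exp(-(-1.0 + 2.0 * x[:, 0])))   # true beta0=-1, beta1=2
y = rng.binomial(1, p)

clf = LogisticRegression(C=1e6)                 # very weak penalty ~ plain maximum likelihood
clf.fit(x, y)
print(clf.intercept_, clf.coef_)                # estimates of beta0, beta1
print(clf.predict_proba([[0.5]])[:, 1])         # hat p(X = 0.5)
```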

multiple logistic regression the coefficient of a predictor in multiple logistic regression may differ from its coefficient in simple logistic regression (confounding). The reason: a student is riskier than a non-student if we know nothing else about them, while a student is less risky than a non-student with the same credit card balance.

4.4 Linear discriminant analysis

motivation the parameter estimates of logistic regression are surprisingly unstable when the classes are well separated; LDA is more stable when $n$ is small and the predictors are approximately Gaussian in each class; LDA also handles more than two classes naturally.

notations

  • prior probability. $\{\pi_k\}$ is the prior probability over the $K$ classes.

  • density function of X. $f_k(x)=\Pr(X=x|Y=k)$ is the density of $X$ within the $k$th class; it is a continuous function of $x\in \mathbb{R}$.

  • posterior probability. from Bayes' theorem, $p_k(x)=\Pr(Y=k|X=x)=\pi_k f_k(x)/\sum_{l=1}^{K}\pi_l f_l(x)$; the posterior is a discrete distribution over the $K$ classes.

LDA model for p=1 suppose $f_k(x)\sim N(\mu_k,\sigma^2)$ (same variance across classes); choosing the class with the largest posterior is then equivalent to choosing the class with the largest discriminant score $\delta_k(x)=x\frac{\mu_k}{\sigma^2}-\frac{\mu_k^2}{2\sigma^2}+\log (\pi_k)$.

discriminant function $ \hat{\delta}_k(x)=x\frac{\hat\mu_k}{\hat\sigma^2}-\frac{\hat\mu_k^2}{2\hat\sigma^2}+\log (\hat\pi_k) $, where $ \hat\mu_k =\frac{1}{n_k}\sum_{i:y_i=k} x_i $, $ \hat\sigma^2= \frac{1}{n-K}\sum_{k=1}^K \sum_{i:y_i=k} (x_i-\hat\mu_k)^2$ and $\hat\pi_k=\frac{n_k}{n}$.
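
A minimal sketch of the $p=1$ discriminant computation, directly translating the estimators above into Python; the two-class simulated data are my own:

```python
import numpy as np

rng = np.random.default_rng(0)
# two classes with a shared variance but different means
x = np.concatenate([rng.normal(-1, 1, 60), rng.normal(2, 1, 40)])
y = np.array([0] * 60 + [1] * 40)
n, K = len(x), 2

pi_hat = np.array([(y == k).mean() for k in range(K)])       # hat pi_k = n_k / n
mu_hat = np.array([x[y == k].mean() for k in range(K)])      # hat mu_k
sigma2_hat = sum(((x[y == k] - mu_hat[k]) ** 2).sum() for k in range(K)) / (n - K)

def delta(x0):
    """Discriminant scores delta_k(x0); predict the class with the largest score."""
    return x0 * mu_hat / sigma2_hat - mu_hat ** 2 / (2 * sigma2_hat) + np.log(pi_hat)

print(delta(0.0), delta(0.0).argmax())
```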

LDA model for p>1 assume the observations in the $k$th class are drawn from $N(\mu_k, \Sigma)$; the rest is the same as the $p=1$ case.

low sensitivity? LDA can suffer from low sensitivity (e.g., correctly labeling true defaulters as defaulters, the true positives) since it tries to approximate the Bayes classifier, which minimizes the total error rate (both types of error combined). We can deal with the problem by lowering the threshold on the posterior probability for the class of interest.

ROC curve/AUC a popular graphic for simultaneously displaying the true positive rate/sensitivity against the false positive rate/fall-out as the classification threshold changes. The larger the area under the ROC curve (AUC), the better the classifier.
confusion matrix (predicted label vs. true label)

"Positive" refers to a positive test result, especially in medical settings (e.g., positive/negative for a disease). It may be a true positive or a false positive. See Zhihu for more information.

two types of error in hypothesis testing, a type I error is the incorrect rejection of a true null hypothesis (false positive), and a type II error is the failure to reject a false null hypothesis (false negative). In medical statistics, a negative result means nothing was found (e.g., no cancer), corresponding to not rejecting the null hypothesis; a positive result corresponds to rejecting it (e.g., concluding $\beta_1\neq 0$). The false positive/negative terminology in classification here is the same:

(here this refers to the true class vs. the hypothesized class, in the statistical sense)

quadratic discriminant analysis unlike LDA, QDA assumes each class has its own covariance matrix. QDA has relatively low bias but high variance; it is suitable when the training set is large, since we need to estimate roughly $Kp(p+1)/2$ parameters.

comparison when the true decision boundary is linear: LDA (Gaussian classes, well-separated data) and logistic regression; moderately non-linear: QDA; more complicated: KNN.

5 Resampling Methods

5.1 Cross-Validation

motivation in the absence of a designated test set, we want techniques to estimate the test error, which is usually larger than the training error.

validation set approach split the observation set randomly into 2 parts: a training set and a validation set; the split can be repeated many times. The drawbacks of this approach are a highly variable test error estimate and fewer available training observations (so it may overestimate the test error rate). (This approach is called validation rather than cross-validation!)

k-fold cross validation randomly divide the observation set into $k$ groups; use one as the validation set and the other $k-1$ as the training set, repeat the procedure $k$ times, and obtain $MSE_1,MSE_2,\dots,MSE_k$. The $k$-fold CV value, used to approximate the test error, is $CV_k=\frac{1}{k}\sum_{i=1}^k MSE_i$. Leave-one-out cross validation (LOOCV) is the special case $k=n$. (A small sketch follows the bullets below.)

  • LOOCV. it seems to need a lot of computation since we must fit the model $n$ times, but for least squares we can use the shortcut $CV_{(n)}=\frac{1}{n}\sum_{i=1}^n \left(\frac{y_i-\hat y_i}{1-h_i}\right)^2$, where $\hat y_i$ is the $i$th fitted value from the original least squares fit and $h_i$ is the leverage. The division by $1-h_i$ inflates the residuals of high-leverage points.

  • minimum point. when performing cross-validation over different parameters (e.g., polynomial degrees in logistic regression, p.185), our goal is to find the best level of flexibility, so we focus on the location of the minimum point in the estimated test MSE curve to choose the proper parameter, rather than on the value of the estimate itself.

  • bias-variance trade-off. large $k$ (LOOCV has $k=n$) gives low bias since each training set is nearly the full data set, but high variance since the $n$ fitted models are almost identical and hence highly correlated; small $k$ gives higher bias but lower variance. Empirically we choose $k=5$ or $10$.
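
A minimal $k$-fold CV sketch as referenced above, not from the book; the linear model and simulated data are placeholders:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=1.0, size=200)

k = 10
mse_folds = []
for train_idx, val_idx in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    mse_folds.append(mean_squared_error(y[val_idx], model.predict(X[val_idx])))

cv_k = np.mean(mse_folds)          # CV_k, the estimate of the test MSE
print(round(cv_k, 3))
```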

5.2 The bootstrap

bootstrap it is used to generate additional data sets by sampling from the current observations with replacement; the resampled data sets are used to quantify the uncertainty (e.g., the standard error) of an estimator.
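
A minimal bootstrap sketch, not from the book, estimating the standard error of a statistic by resampling with replacement; the choice of the sample median as the statistic is just an example:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=100)   # the original observations

B = 1000
boot_stats = np.empty(B)
for b in range(B):
    sample = rng.choice(data, size=len(data), replace=True)   # one bootstrap data set
    boot_stats[b] = np.median(sample)

print(np.median(data), boot_stats.std(ddof=1))   # point estimate and its bootstrap SE
```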

6 Linear Model Selection and Regularization

6.1 Subset selection

motivation select a subset of predictors which we believe to be relevant to the response.

best subset selection step 1. for each $1\le k\le p$, fit all $\binom{p}{k}$ models containing exactly $k$ predictors and choose the one with the smallest RSS or largest $R^2$ as the best subset $M_k$; step 2. select a single best model among $M_0,\dots,M_p$ using $C_p$ (AIC), $BIC$, adjusted $R^2$, or CV.

forward stepwise selection step 1. for each $1\le k < p$, add the predictor that gives the augmented model $M_{k+1}$ the smallest RSS or largest $R^2$; step 2. select the best model among the $M_k$. backward stepwise selection is similar; note that forward stepwise is the one that can still be applied when $p>n$, since backward selection needs the full model and hence $n>p$.

choosing the optimal model in order to select the best model with smallest test error, we can indirectly estimate the test error by making adjustment to training error or directly estimate the test error by validation approach.

  • $C_p=\frac{1}{n}(RSS+2d\hat\sigma^2)$, where $d$ is the number of predictors in the subset; the $C_p$ statistic adds a penalty to the training RSS. If $\hat\sigma^2$ is an unbiased estimate of $\sigma^2$, then $C_p$ is an unbiased estimate of the test MSE;

  • $AIC=\frac{1}{n\hat\sigma^2}(RSS+2d\hat\sigma^2)$, it is proportional to $C_p$;

  • $BIC=\frac{1}{n}(RSS+\log (n)d\hat\sigma^2)$ is derived from Bayesian point of view, it places a heavier penalty and hence results in selecting a smaller model than $C_p$;

  • the adjusted $R^2=1-\frac{RSS/(n-d-1)}{TSS/(n-1)}$. Adding a noise variable decreases RSS only slightly while $d$ increases, so $RSS/(n-d-1)$ increases and the adjusted $R^2$ decreases. The adjusted $R^2$ statistic thus pays a price for the inclusion of unnecessary variables in the model.

  • cross validation. one-standard-error rule: first calculate the standard error of the estimated test MSE for each model size, then select the smallest (simplest) model whose estimated test error is within one standard error of the lowest point on the curve.

6.2 Shrinkage methods (Regularization)

two shrinkage methods (a combined sketch follows the two bullets below)

  • ridge regression. just add an $\ell_2$ shrinkage penalty to the least squares criterion: $\hat\beta_\lambda^R=\arg \min_\beta \{RSS+\lambda\sum_{j=1}^p\beta_j^2\}$ (no $\beta_0$ in the penalty), where $\lambda\ge0$ is the tuning parameter.

  • the Lasso. just add an $\ell_1$ penalty: $\hat\beta_\lambda^L=\arg \min_\beta \{RSS+\lambda\sum_{j=1}^p|\beta_j|\}$.
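
A hedged scikit-learn sketch of both penalties, not from the book; sklearn's `alpha` plays the role of $\lambda$ (up to its own scaling of the objective), the predictors are standardized first, and the data are simulated:

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
beta = np.array([3.0, -2.0] + [0.0] * 8)       # only the first two predictors matter
y = X @ beta + rng.normal(size=200)

Xs = StandardScaler().fit_transform(X)          # standardize so the penalty treats predictors equally
ridge = Ridge(alpha=10.0).fit(Xs, y)            # l2 penalty: shrinks but keeps all coefficients
lasso = Lasso(alpha=0.5).fit(Xs, y)             # l1 penalty: sets some coefficients exactly to 0

print(np.round(ridge.coef_, 2))
print(np.round(lasso.coef_, 2))                 # expect many exact zeros here
```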

properties

  • standardized coefficients. in least squares, multiplying $x_j$ by a constant $c$ simply scales the coefficient estimate by a factor of $1/c$. However, in ridge regression the coefficients can change substantially (not just by a factor of $1/c$) due to the penalty term. Thus we need to standardize the predictors: $\tilde x_{ij}=x_{ij}/\sqrt{\frac{1}{n}\sum_{i=1}^n (x_{ij}-\bar x_j)^2}$.

  • tuning parameter. as $\lambda$ increases, $\hat\beta$ shrinks toward 0; the flexibility decreases, which leads to higher bias and lower variance. In situations where the least squares estimates have low bias but high variance (e.g., many nearly irrelevant predictors), the penalty shrinks the corresponding coefficients and thereby reduces the variance. We can choose the tuning parameter from a $CV$-$\lambda$ plot.

comparison compared to subset selection, the two penalized regressions are computationally feasible even for large $p$.

  • ridge regression ($\ell_2$). it always includes all $p$ predictors in the model, since it only reduces the magnitudes of the coefficients without setting any exactly to zero. It is suitable when most predictors are related to the response.

  • Lasso ($\ell_1$). it performs variable selection by shrinking some coefficient estimates exactly to 0, and hence reduces the model variance and makes the model easier to interpret.

  • contour plot. the contour plot of the RSS and the constraint regions on p.222 illustrates the difference between ridge regression and the Lasso.

6.3 Dimension reduction methods

principal component the first principal component direction is the direction along which the observations vary the most. The first component $Z_1=\{z_{i1}\}$ is chosen so that the projected points are as close as possible to the original points. In the $p=2$ case, the quantities $z_{i1}=\phi_{11}(x_{i1}-\bar X_1)+\phi_{21}(x_{i2}-\bar X_2)$ are called the principal component scores; each score is the (signed) distance along the principal direction line between the projection of the point and the center of the data. The second principal component $Z_2$ is also a linear combination of the variables; it is uncorrelated with $Z_1$ and has the largest variance subject to that constraint.

principal component regression (PCR) use the first $M$ principal components in a linear regression, so that a small number of components explains most of the variability of the data. It is not a feature selection method. Also, we should standardize the predictors first.
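
A minimal PCR sketch, not from the book: standardize, project onto the first $M$ principal components, then run least squares. The pipeline below uses scikit-learn, and $M$ is picked arbitrarily here rather than by CV:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 8))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(size=150)

M = 3                                             # number of principal components to keep
pcr = make_pipeline(StandardScaler(), PCA(n_components=M), LinearRegression())
pcr.fit(X, y)
print(round(pcr.score(X, y), 3))                  # training R^2 of the PCR fit
```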

6.4 Other considerations in high dimensions

The $C_p$, $AIC$, $BIC$ are not appropriate in high-dimension setting because estimating $\hat\sigma^2$ is problematic.

We should never use training-set MSE, p-values, $R^2$, or other traditional measures of fit as evidence of a good model in the high-dimensional setting. For example, we can easily get $R^2=1$ when $p>n$.

7 Moving Beyond Linearity

7.3 Basis functions

polynomial regression $Y=\beta_0+\beta_1X+\beta_2X^2+\beta_3X^3+\cdots+\epsilon$.

step functions choose $K$ cut points in the range of $X$ and define the indicator functions $C_0(X)=I(X< c_1), C_1(X)=I(c_1\le X< c_2),\dots$; note that $\sum_{i=0}^K C_i(X)=1$. Then we fit the model $Y=\beta_0+\beta_1C_1(X)+\cdots+\beta_KC_K(X)+\epsilon$

7.4 Regression splines

piecewise polynomials use some knots to divide the range of $X$ into intervals and fit a polynomial regression on each interval. More knots give a more flexible model.

degrees of freedom denoted by $df$, it is the number of free coefficients used in the fit. There is also an effective degrees of freedom (used for smoothing splines below).

i. spline basis approach a cubic spline with $K$ knots is modeled as $Y=\beta_0+\beta_1b_1(X)+\beta_2b_2(X)+\cdots+\beta_{K+3}b_{K+3}(X)+\epsilon$; the $K+3$ predictors are $X,X^2,X^3,h(X,\xi_1),\dots,h(X,\xi_K)$, where $h(x,\xi)=(x-\xi)_+^3$ is the truncated power basis function at each knot. The degrees of freedom is $K+4$ here. Cubic splines are popular because the human eye cannot detect the discontinuity (in the third derivative) at the knots. (A basis-construction sketch follows the bullets below.)

  • drawbacks. cubic splines can be wiggly in the left/right boundary regions; a natural spline adds boundary constraints (it is required to be linear beyond the boundary knots), which gives more stable estimates there.

  • degree-d spline. a piecewise degree-$d$ polynomial whose derivatives up to order $d-1$ are continuous at each knot.

  • choosing knots. where: in practice the software places them automatically, e.g., at quantiles of $X$; how many: use cross-validation (the $CV$-$K$ plot).
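
A minimal sketch of the truncated power basis approach referenced above, not from the book; the knots are hand-picked and the spline coefficients are obtained by plain least squares on the basis columns:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + rng.normal(scale=0.3, size=200)

knots = [2.5, 5.0, 7.5]                                  # K = 3 knots
h = lambda x, xi: np.clip(x - xi, 0, None) ** 3          # h(x, xi) = (x - xi)_+^3

# design matrix: intercept, x, x^2, x^3, plus one truncated cubic per knot (K + 4 columns)
B = np.column_stack([np.ones_like(x), x, x**2, x**3] + [h(x, xi) for xi in knots])
beta, *_ = np.linalg.lstsq(B, y, rcond=None)             # least squares fit of the spline coefficients
print(beta.round(3))
```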

ii. smoothing splines find a smooth function/spline by minimizing $\sum (y_i-g(x_i))^2+\lambda \int g''(t)^2 dt$. As $\lambda$ increases, the spline becomes smoother. The minimizer turns out to be a (shrunken) natural cubic spline with knots at every unique value of $x_i$.

  • df. the effective degrees of freedom of the smoother is $df_\lambda=\operatorname{tr} (\mathbf{S}_\lambda)$, where $\mathbf{\hat g}_\lambda=\mathbf{S}_\lambda\mathbf{y}$ ($\mathbf{\hat g}_\lambda$ is the vector of fitted values and $\mathbf{y}$ is the response vector).

  • choosing knots and lambda. every observation can be a knot; we use LOOCV to obtain the $CV$-$\lambda$ plot: $CV(\lambda)=\sum_i (y_i-\hat g_\lambda^{(-i)}(x_i))^2=\sum_i \left(\frac{y_i-\hat g_\lambda(x_i)}{1-\{\mathbf{S}_\lambda\}_{ii}}\right)^2$, where $\hat g_\lambda^{(-i)}(x_i)$ is the fitted value obtained without including $(x_i,y_i)$ in the training set.

7.6 Local regression

local regression choose the fraction $s$ of training points nearest to the target point as a neighborhood, assign each a weight that decreases with distance, and perform weighted least squares regression. The span $s$ can be chosen by CV.

7.7 Generalized additive models

generalized additive models we extend the linear regression model to $Y=\beta_0+f_1(X_1)+f_2(X_2)+\cdots+f_p(X_p)+\epsilon$, where the $f_j(X_j)$ can be (smooth) non-linear functions.

8 Tree-Based Methods

8.1 The basics of decision trees

regression trees

tree building the goal of building the tree is to minimize the RSS $\sum_{j=1}^J \sum_{i\in R_j}(y_i-\bar y_{R_j})^2$, where $\bar y_{R_j}$ is the mean response in the $j$th region. Since it is infeasible to consider every possible partition, we use greedy recursive binary splitting: a) select a predictor $X_j$ and a cut point $s$ that lead to the greatest reduction in RSS; b) select one of the previously created regions and split it again; c) continue the process until a stopping criterion is reached, e.g., no region contains more than 5 observations.

tree pruning to avoid a tree that is too complex and therefore overfits, we prune the large tree by minimizing $\sum_{j} \sum_{i\in R_j} (y_i-\bar y_{R_j})^2+\alpha |T|$, where $|T|$ is the number of terminal nodes; $\alpha$ controls the trade-off between the subtree's complexity and its fit to the training data (cost-complexity pruning). We select $\alpha$ using CV.
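
A hedged sketch with scikit-learn's `DecisionTreeRegressor`; its `ccp_alpha` argument is the cost-complexity penalty playing the role of $\alpha$ above. The data are simulated and $\alpha$ is picked arbitrarily rather than by CV:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 2))
y = np.where(X[:, 0] > 0, 2.0, -1.0) + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=300)

full_tree = DecisionTreeRegressor(min_samples_leaf=5, random_state=0).fit(X, y)
pruned = DecisionTreeRegressor(min_samples_leaf=5, ccp_alpha=0.05, random_state=0).fit(X, y)

print(full_tree.get_n_leaves(), pruned.get_n_leaves())   # pruning gives far fewer terminal nodes
```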

classification trees for the classification error rate, we use the fraction of training observations in a region that do not belong to the most commonly occurring class in that region (in practice the Gini index or cross-entropy is preferred when growing the tree). For prediction, we similarly predict the most commonly occurring class in that region.

tree vs linear models pros. trees can easily handle non-linear and complex relationships as well as qualitative predictors, and they are easy to interpret and visualize. cons. they have relatively low predictive accuracy and can be very non-robust.

8.2 Bagging, random forests and boosting

bagging generate $B$ training sets by bootstrap, build a separate model on each training set and make a prediction, then average the predictions (regression) or take a majority vote (classification) to determine $\hat y$. Bagging improves prediction accuracy at the expense of interpretability.

out-of-bag error estimation in the bootstrap, about one-third (roughly $1/e$) of the observations are not sampled when fitting a given bagged tree; we call them the out-of-bag (OOB) observations. We can use the OOB MSE to estimate the test error of the bagged model. It can be shown that the OOB error is virtually equivalent to the LOOCV error when $B$ is large.

random forests at each split of the tree-building process (after generating the bootstrap training set), we randomly choose $m$ of the $p$ predictors as split candidates and split on one of them (typically $m\approx\sqrt p$). This prevents the bagged trees from all being highly correlated and unlocks the variance-reduction potential of bagging.
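
A hedged random-forest sketch, not from the book; `max_features` is the $m$ predictors sampled at each split and `oob_score=True` gives the out-of-bag estimate described above (simulated data):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))
y = X[:, 0] - 2 * X[:, 1] + X[:, 2] ** 2 + rng.normal(scale=0.5, size=400)

forest = RandomForestRegressor(
    n_estimators=500,        # B bootstrapped trees
    max_features="sqrt",     # m ~ sqrt(p) predictors considered at each split
    oob_score=True,          # estimate test performance from the out-of-bag observations
    random_state=0,
).fit(X, y)

print(round(forest.oob_score_, 3))   # OOB R^2 (sklearn reports R^2 rather than MSE)
```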

boosting still unclear to me; see the reference Boosting原理.

9 Support Vector Machine

9.1 Maximal margin classifier

maximal margin classifier it is suitable for the separable case. Given a separating hyperplane, define the closest (equidistant) observations to it as the support vectors and the margin as their perpendicular distance to the hyperplane. Our goal is to choose the hyperplane that maximizes the margin. "Support" means that if these points were moved, the maximal margin hyperplane would move as well.

math form $\max_{\beta_0,\dots,\beta_p} M$, subject to $\sum_{j=1}^p \beta_j^2=1$ and $y_i(\beta_0+\beta_1x_{i1}+\cdots+\beta_px_{ip})\ge M$ for all $i$, where $M$ is the margin.

9.2 Support vector classifiers

support vector classifier since the maximal margin classifier is too sensitive to individual training observations, we use a soft margin instead. The support vector classifier has greater robustness at the expense of not perfectly separating the two classes.

math form $\max_{\beta,\epsilon_i} M$, subject to $\sum_{j=1}^p \beta_j^2=1$, $y_i(\beta_0+\beta_1x_{i1}+\cdots+\beta_px_{ip})\ge M(1-\epsilon_i)$, $\epsilon_i\ge0$ and $\sum_i\epsilon_i \le C$. $C$ is a tuning parameter chosen via CV and controls the bias-variance trade-off; a larger $C$ allows more margin violations and gives a wider margin.

support vector in this case, observations that lie directly on the margin or violate the margin are called support vectors.

9.3 Support vector machines

non-linear decision boundary we consider enlarging the feature space using functions of the predictors, e.g., using $2p$ features: $X_1,X_1^2,\dots,X_p,X_p^2$.

In the enlarged feature space, the decision boundary is linear (a linear method). But in original feature space ($X_i$ space), it is non-linear.

support vector machine from the form of the Lagrangian (see here), it can be shown that the support vector classifier can be represented as $f(x)=\beta_0+\sum_{i\in S} \alpha_i\langle x,x_i\rangle$, where $\langle x,x_i\rangle$ is the inner product and $S$ is the set of support vectors. We can also replace the inner product with a kernel function $K(x,x_i)$.

kernel linear kernel: $K(x_i,x_{i'})=\sum_j x_{ij}x_{i'j}$. polynomial kernel of degree $d$: $K(x_i,x_{i'})=(1+\sum_j x_{ij}x_{i'j})^d$. radial kernel, and so on. The advantage of using a kernel rather than explicitly enlarging the feature space is computational: we only need to compute the kernel for the $\binom{n}{2}$ pairs of observations.
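
A hedged sketch of a kernel SVM with scikit-learn's `SVC`, not from the book; note that sklearn's `C` is the inverse of the budget $C$ used above (a large sklearn `C` means few allowed violations and a narrow margin). The data and the kernel choice are illustrative only:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.5).astype(int)     # non-linear (circular) class boundary

svm = SVC(kernel="rbf", gamma=1.0, C=1.0)               # radial kernel K(x, x') = exp(-gamma * ||x - x'||^2)
svm.fit(X, y)

print(svm.n_support_)                                   # number of support vectors per class
print(svm.predict([[0.0, 0.0], [2.0, 2.0]]))            # inside vs. outside the circle
```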

9.4 SVMs with more than two classes

one-versus-one construct $\binom{K}{2}$ pairwise classifiers. The final class is the one predicted most frequently among the $\binom{K}{2}$ results.

one-versus-all construct $K$ classifiers, each comparing one class to the remaining $K-1$. For a test observation, we choose the class for which $\beta_0+\beta_1x_1+\cdots+\beta_px_p$ is largest, since that amounts to the highest level of confidence.

9.5 Relationship to logistic regression

When classes are well separated, we use SVM. In more overlapping regimes, we prefer logistic regression.

10 Unsupervised Learning

10.2 Principal component analysis (PCA)

motivation PCA looks for a low-dimensional representation of the observations that explains a good fraction of the variance; it can be used for data visualization or for preprocessing before applying supervised techniques.

first principal component the first principal component is a combination of the $p$ features with scores $Z_1=\sum_{j=1}^p \phi_{j1}X_j$ (each score is the signed distance along the principal direction line between the projection of the point and the center of the data). The loading vector $\phi_1$ satisfies $\sum_j\phi_{j1}^2=1$ (otherwise the variance could be made arbitrarily large by rescaling). Along the first PC loading vector, the observations vary the most in the feature space; equivalently, the projected points are as close as possible to the original points.

math form suppose the $n\times p$ data matrix $\mathbf{X}$ has been centered to have column means zero; then our goal is to maximize the sample variance $\frac{1}{n}\sum_i z_{i1}^2$ (note that $\frac{1}{n}\sum_i z_{i1}=0$).

second principal component $Z_2$ is also a linear combination of variables which has the largest variance, but it is under constraint of being uncorrelated (orthogonal/perpendicular) with $Z_1$.

more on PCA

  • scaling the variables. before PCA is performed, the variables should be centered to have mean zero; whether to also scale them to unit standard deviation matters when they are measured in different units.

  • PVE. to know how much information each principal component retains, we use the proportion of variance explained (PVE): the variance explained by the $m$th PC is $\frac{1}{n}\sum_{i=1}^n z_{im}^2$, so the PVE of the $m$th PC is $\sum_{i=1}^n z_{im}^2/\sum_{j=1}^p\sum_{i=1}^nx_{ij}^2$. We can also compute the cumulative PVE. (See the sketch below.)
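
A minimal PVE sketch, not from the book: center the data, compute the formula above by hand, and compare with scikit-learn's `explained_variance_ratio_` (the simulated correlated features are my own):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4)) @ rng.normal(size=(4, 4))   # correlated features
Xc = X - X.mean(axis=0)                                    # center each column

pca = PCA().fit(Xc)
Z = pca.transform(Xc)                                      # principal component scores z_im

pve_manual = (Z ** 2).sum(axis=0) / (Xc ** 2).sum()        # sum_i z_im^2 / sum_j sum_i x_ij^2
print(np.round(pve_manual, 3))
print(np.round(pca.explained_variance_ratio_, 3))          # should match the manual PVE
print(np.round(np.cumsum(pve_manual), 3))                  # cumulative PVE
```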

10.3 Clustering Methods

motivation clustering looks to find homogeneous subgroups among the observations.

K-means clustering let $\{C_k\}$ denote a partition of the data set into $K$ non-overlapping clusters; the within-cluster variation for cluster $C_k$ is $W(C_k)=2\sum_{j=1}^p\sum_{i\in C_k}(x_{ij}-\bar x_{kj})^2$, where $\bar x_{kj}$ is the mean of feature $j$ in cluster $C_k$. The optimization goal is $\min \sum_{k=1}^K W(C_k)$.
algorithm. a) for each observation, randomly assign a number from $1$ to $K$. b) for each cluster, compute the centroid, then assign each observation to the cluster with the closest centroid. c) repeat b) until the cluster assignments stop changing. Note that the result depends on the initial random assignment, so we run the algorithm multiple times and keep the solution with the smallest objective (a sketch follows).
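
A hedged K-means sketch with scikit-learn, not from the book; `n_init` runs the algorithm from several random initializations and keeps the best solution, as recommended above (simulated three-cluster data):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 2)) for c in ([0, 0], [3, 3], [0, 4])])

km = KMeans(n_clusters=3, n_init=20, random_state=0).fit(X)
print(km.inertia_)                  # the total within-cluster sum of squares being minimized
print(np.bincount(km.labels_))      # cluster sizes
```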

hierarchical clustering its advantage over $K$-means clustering is that it does not require choosing $K$ in advance and yields an attractive tree-based representation of the observations, called a dendrogram. We can cut the dendrogram at a chosen height to obtain clusters. (A sketch follows the bullets below.)

  • algorithm. a) begin with the $n$ observations, a dissimilarity measure (e.g., Euclidean distance), and a linkage (complete, average, single, or centroid) that defines the dissimilarity between clusters; treat each observation as its own cluster. b) examine all pairwise inter-cluster dissimilarities among the clusters and fuse the two most similar clusters. c) compute the new pairwise dissimilarities and repeat b) and c) until only one cluster remains.

  • consideration. a) the choice of dissimilarity measure is important. b) we should consider whether to scale the variables before computing the dissimilarity.
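
A minimal hierarchical-clustering sketch with SciPy, not from the book; `linkage` builds the dendrogram merge structure and `fcluster` cuts it at a chosen number of clusters. The data are simulated, and complete linkage with Euclidean distance is just one possible choice:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.4, size=(30, 2)) for c in ([0, 0], [3, 3])])

Xs = (X - X.mean(axis=0)) / X.std(axis=0)       # scale the variables first
D = pdist(Xs, metric="euclidean")               # pairwise dissimilarities
Z = linkage(D, method="complete")               # complete linkage: fuse the least dissimilar clusters
labels = fcluster(Z, t=2, criterion="maxclust") # cut the dendrogram to obtain 2 clusters
print(np.bincount(labels))                      # cluster sizes (labels are 1-based)
```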

Last updated on Nov 07, 2018.