
Commit 0b4f2cc

fixing sections in ipynb
1 parent a3f13fa commit 0b4f2cc

12 files changed (+2161, -1848 lines)

Ch02-statlearn-lab.ipynb

Lines changed: 1295 additions & 1291 deletions
Large diffs are not rendered by default.

Ch03-linreg-lab.ipynb

Lines changed: 2 additions & 2 deletions
@@ -1533,7 +1533,7 @@
 "metadata": {},
 "source": [
 "Next we examine some diagnostic plots, several of which were discussed\n",
-"in Section~\\ref{Ch3:problems.sec}.\n",
+"in Section 3.3.3.\n",
 "We can find the fitted values and residuals\n",
 "of the fit as attributes of the `results` object.\n",
 "Various influence measures describing the regression model\n",
@@ -2142,7 +2142,7 @@
 "and\n",
 "`np.sqrt(results.scale)` gives us the RSE.\n",
 "\n",
-"Variance inflation factors (section~\\ref{Ch3:problems.sec}) are sometimes useful\n",
+"Variance inflation factors (section 3.3.3) are sometimes useful\n",
 "to assess the effect of collinearity in the model matrix of a regression model.\n",
 "We will compute the VIFs in our multiple regression fit, and use the opportunity to introduce the idea of *list comprehension*.\n",
 "\n",

Ch04-classification-lab.ipynb

Lines changed: 11 additions & 11 deletions
@@ -2007,7 +2007,7 @@
 "metadata": {},
 "source": [
 "Here we have used the list comprehensions introduced\n",
-"in Section~\\ref{Ch3-linreg-lab:multivariate-goodness-of-fit}. Looking at our first line above, we see that the right-hand side is a list\n",
+"in Section 3.6.4. Looking at our first line above, we see that the right-hand side is a list\n",
 "of length two. This is because the code `for M in [X_train, X_test]` iterates over a list\n",
 "of length two. While here we loop over a list,\n",
 "the list comprehension method works when looping over any iterable object.\n",
@@ -2173,7 +2173,7 @@
 "id": "f0a4abaf",
 "metadata": {},
 "source": [
-"These values provide the linear combination of `Lag1` and `Lag2` that are used to form the LDA decision rule. In other words, these are the multipliers of the elements of $X=x$ in (\\ref{Ch4:bayes.multi}).\n",
+"These values provide the linear combination of `Lag1` and `Lag2` that are used to form the LDA decision rule. In other words, these are the multipliers of the elements of $X=x$ in (4.24).\n",
 " If $-0.64\\times `Lag1` - 0.51 \\times `Lag2` $ is large, then the LDA classifier will predict a market increase, and if it is small, then the LDA classifier will predict a market decline."
 ]
 },
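
The multipliers this hunk refers to (about -0.64 and -0.51 in the lab) can be read off a fitted `sklearn` estimator's `scalings_` attribute. A hedged sketch on simulated stand-ins for `Lag1` and `Lag2` (illustrative only; the exact values depend on the `Smarket` data, which this page does not reproduce):

```python
# Illustrative sketch: inspecting the linear-combination weights of a
# fitted LDA classifier. Data are simulated stand-ins for Lag1 and Lag2.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                      # stand-ins for Lag1, Lag2
y = (X @ np.array([-0.64, -0.51]) > 0).astype(int)

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.scalings_)   # weights of the LDA decision rule, up to scaling
```
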
@@ -2200,7 +2200,7 @@
 "metadata": {},
 "source": [
 "As we observed in our comparison of classification methods\n",
-" (Section~\\ref{Ch4:comparison.sec}), the LDA and logistic\n",
+" (Section 4.5), the LDA and logistic\n",
 "regression predictions are almost identical."
 ]
 },
@@ -2421,7 +2421,7 @@
 "`sklearn` library. We will use several other objects\n",
 "from this library. The objects\n",
 "follow a common structure that simplifies tasks such as cross-validation,\n",
-"which we will see in Chapter~\\ref{Ch5:resample}. Specifically,\n",
+"which we will see in Chapter 5. Specifically,\n",
 "the methods first create a generic classifier without\n",
 "referring to any data. This classifier is then fit\n",
 "to data with the `fit()` method and predictions are\n",
@@ -4570,7 +4570,7 @@
 "The number of neighbors in KNN is referred to as a *tuning parameter*, also referred to as a *hyperparameter*.\n",
 "We do not know *a priori* what value to use. It is therefore of interest\n",
 "to see how the classifier performs on test data as we vary these\n",
-"parameters. This can be achieved with a `for` loop, described in Section~\\ref{Ch2-statlearn-lab:for-loops}.\n",
+"parameters. This can be achieved with a `for` loop, described in Section 2.3.8.\n",
 "Here we use a for loop to look at the accuracy of our classifier in the group predicted to purchase\n",
 "insurance as we vary the number of neighbors from 1 to 5:"
 ]
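
A hedged sketch of the loop this hunk describes, varying `n_neighbors` from 1 to 5 and scoring accuracy among predicted purchasers. The `Caravan`-style data are simulated, so only the loop structure, not the numbers, mirrors the lab:

```python
# Illustrative sketch (not the commit's code): vary the number of neighbors
# and check accuracy among those predicted to purchase insurance.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 5))
X_test = rng.normal(size=(200, 5))
y_train = rng.choice(['No', 'Yes'], 500, p=[0.9, 0.1])
y_test = rng.choice(['No', 'Yes'], 200, p=[0.9, 0.1])

for K in range(1, 6):
    knn = KNeighborsClassifier(n_neighbors=K).fit(X_train, y_train)
    pred = knn.predict(X_test)
    did_buy = pred == 'Yes'
    rate = (y_test[did_buy] == 'Yes').mean() if did_buy.any() else float('nan')
    print(f'K={K}: accuracy among predicted purchasers = {rate:.3f}')
```
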
@@ -4629,7 +4629,7 @@
 "data. This can also be done\n",
 "with `sklearn`, though by default it fits\n",
 "something like the *ridge regression* version\n",
-"of logistic regression, which we introduce in Chapter~\\ref{Ch6:varselect}. This can\n",
+"of logistic regression, which we introduce in Chapter 6. This can\n",
 "be modified by appropriately setting the argument `C` below. Its default\n",
 "value is 1 but by setting it to a very large number, the algorithm converges to the same solution as the usual (unregularized)\n",
 "logistic regression estimator discussed above.\n",
@@ -4849,7 +4849,7 @@
 "metadata": {},
 "source": [
 "## Linear and Poisson Regression on the Bikeshare Data\n",
-"Here we fit linear and Poisson regression models to the `Bikeshare` data, as described in Section~\\ref{Ch4:sec:pois}.\n",
+"Here we fit linear and Poisson regression models to the `Bikeshare` data, as described in Section 4.6.\n",
 "The response `bikers` measures the number of bike rentals per hour\n",
 "in Washington, DC in the period 2010--2012."
 ]
@@ -5322,7 +5322,7 @@
 "February than in January. Similarly there are about 16.5 more riders\n",
 "in March than in January.\n",
 "\n",
-"The results seen in Section~\\ref{sec:bikeshare.linear}\n",
+"The results seen in Section 4.6.1\n",
 "used a slightly different coding of the variables `hr` and `mnth`, as follows:"
 ]
 },
@@ -5834,7 +5834,7 @@
 "id": "41fb2787",
 "metadata": {},
 "source": [
-"To reproduce the left-hand side of Figure~\\ref{Ch4:bikeshare}\n",
+"To reproduce the left-hand side of Figure 4.13\n",
 "we must first obtain the coefficient estimates associated with\n",
 "`mnth`. The coefficients for January through November can be obtained\n",
 "directly from the `M2_lm` object. The coefficient for December\n",
@@ -5988,7 +5988,7 @@
 "id": "6c68761a",
 "metadata": {},
 "source": [
-"Reproducing the right-hand plot in Figure~\\ref{Ch4:bikeshare} follows a similar process."
+"Reproducing the right-hand plot in Figure 4.13 follows a similar process."
 ]
 },
 {
@@ -6088,7 +6088,7 @@
 "id": "8552fb8b",
 "metadata": {},
 "source": [
-"We can plot the coefficients associated with `mnth` and `hr`, in order to reproduce Figure~\\ref{Ch4:bikeshare.pois}. We first complete these coefficients as before."
+"We can plot the coefficients associated with `mnth` and `hr`, in order to reproduce Figure 4.15. We first complete these coefficients as before."
 ]
 },
 {

Ch05-resample-lab.ipynb

Lines changed: 12 additions & 12 deletions
@@ -486,7 +486,7 @@
 "id": "a3a920ae",
 "metadata": {},
 "source": [
-"As in Figure~\\ref{Ch5:cvplot}, we see a sharp drop in the estimated test MSE between the linear and\n",
+"As in Figure 5.4, we see a sharp drop in the estimated test MSE between the linear and\n",
 "quadratic fits, but then no clear improvement from using higher-degree polynomials.\n",
 "\n",
 "Above we introduced the `outer()` method of the `np.power()`\n",
@@ -589,7 +589,7 @@
 "Notice that the computation time is much shorter than that of LOOCV.\n",
 "(In principle, the computation time for LOOCV for a least squares\n",
 "linear model should be faster than for $k$-fold CV, due to the\n",
-"availability of the formula~(\\ref{Ch5:eq:LOOCVform}) for LOOCV;\n",
+"availability of the formula~(5.2) for LOOCV;\n",
 "however, the generic `cross_validate()` function does not make\n",
 "use of this formula.) We still see little evidence that using cubic\n",
 "or higher-degree polynomial terms leads to a lower test error than simply\n",
@@ -699,7 +699,7 @@
 "\n",
 "## The Bootstrap\n",
 "We illustrate the use of the bootstrap in the simple example\n",
-" {of Section~\\ref{Ch5:sec:bootstrap},} as well as on an example involving\n",
+" {of Section 5.2,} as well as on an example involving\n",
 "estimating the accuracy of the linear regression model on the `Auto`\n",
 "data set.\n",
 "### Estimating the Accuracy of a Statistic of Interest\n",
@@ -714,8 +714,8 @@
 "To illustrate the bootstrap, we\n",
 "start with a simple example.\n",
 "The `Portfolio` data set in the `ISLP` package is described\n",
-"in Section~\\ref{Ch5:sec:bootstrap}. The goal is to estimate the\n",
-"sampling variance of the parameter $\\alpha$ given in formula~(\\ref{Ch5:min.var}). We will\n",
+"in Section 5.2. The goal is to estimate the\n",
+"sampling variance of the parameter $\\alpha$ given in formula~(5.7). We will\n",
 "create a function\n",
 "`alpha_func()`, which takes as input a dataframe `D` assumed\n",
 "to have columns `X` and `Y`, as well as a\n",
@@ -754,7 +754,7 @@
 "source": [
 "This function returns an estimate for $\\alpha$\n",
 "based on applying the minimum\n",
-" variance formula (\\ref{Ch5:min.var}) to the observations indexed by\n",
+" variance formula (5.7) to the observations indexed by\n",
 "the argument `idx`. For instance, the following command\n",
 "estimates $\\alpha$ using all 100 observations."
 ]
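
A sketch of an `alpha_func()` consistent with the description in these hunks, applying the minimum-variance formula (5.7), $\alpha = (\sigma_Y^2 - \sigma_{XY}) / (\sigma_X^2 + \sigma_Y^2 - 2\sigma_{XY})$. The `Portfolio`-like data frame below is simulated for illustration:

```python
# Illustrative sketch: estimate alpha from (5.7) on rows of D indexed by idx.
import numpy as np
import pandas as pd

def alpha_func(D, idx):
    # Columns of D[['X', 'Y']] are the two variables; rowvar=False makes
    # np.cov treat columns as variables.
    cov_ = np.cov(D[['X', 'Y']].loc[idx], rowvar=False)
    return (cov_[1, 1] - cov_[0, 1]) / (cov_[0, 0] + cov_[1, 1] - 2 * cov_[0, 1])

rng = np.random.default_rng(0)
D = pd.DataFrame(rng.multivariate_normal([0, 0], [[1, .5], [.5, 1.25]], 100),
                 columns=['X', 'Y'])
print(alpha_func(D, range(100)))   # alpha estimated from all 100 observations
```
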
@@ -934,7 +934,7 @@
 "`horsepower` to predict `mpg` in the `Auto` data set. We\n",
 "will compare the estimates obtained using the bootstrap to those\n",
 "obtained using the formulas for ${\\rm SE}(\\hat{\\beta}_0)$ and\n",
-"${\\rm SE}(\\hat{\\beta}_1)$ described in Section~\\ref{Ch3:secoefsec}.\n",
+"${\\rm SE}(\\hat{\\beta}_1)$ described in Section 3.1.2.\n",
 "\n",
 "To use our `boot_SE()` function, we must write a function (its\n",
 "first argument)\n",
@@ -1115,7 +1115,7 @@
 "0.85, and that the bootstrap\n",
 "estimate for ${\\rm SE}(\\hat{\\beta}_1)$ is\n",
 "0.0074. As discussed in\n",
-"Section~\\ref{Ch3:secoefsec}, standard formulas can be used to compute\n",
+"Section 3.1.2, standard formulas can be used to compute\n",
 "the standard errors for the regression coefficients in a linear\n",
 "model. These can be obtained using the `summarize()` function\n",
 "from `ISLP.sm`."
@@ -1160,21 +1160,21 @@
 "metadata": {},
 "source": [
 "The standard error estimates for $\\hat{\\beta}_0$ and $\\hat{\\beta}_1$\n",
-"obtained using the formulas from Section~\\ref{Ch3:secoefsec} are\n",
+"obtained using the formulas from Section 3.1.2 are\n",
 "0.717 for the\n",
 "intercept and\n",
 "0.006 for the\n",
 "slope. Interestingly, these are somewhat different from the estimates\n",
 "obtained using the bootstrap. Does this indicate a problem with the\n",
 "bootstrap? In fact, it suggests the opposite. Recall that the\n",
 "standard formulas given in\n",
-" {Equation~\\ref{Ch3:se.eqn} on page~\\pageref{Ch3:se.eqn}}\n",
+" {Equation 3.8 on page~\\pageref{Ch3:se.eqn}}\n",
 "rely on certain assumptions. For example,\n",
 "they depend on the unknown parameter $\\sigma^2$, the noise\n",
 "variance. We then estimate $\\sigma^2$ using the RSS. Now although the\n",
 "formulas for the standard errors do not rely on the linear model being\n",
 "correct, the estimate for $\\sigma^2$ does. We see\n",
-" {in Figure~\\ref{Ch3:polyplot} on page~\\pageref{Ch3:polyplot}} that there is\n",
+" {in Figure 3.8 on page~\\pageref{Ch3:polyplot}} that there is\n",
 "a non-linear relationship in the data, and so the residuals from a\n",
 "linear fit will be inflated, and so will $\\hat{\\sigma}^2$. Secondly,\n",
 "the standard formulas assume (somewhat unrealistically) that the $x_i$\n",
@@ -1187,7 +1187,7 @@
 "Below we compute the bootstrap standard error estimates and the\n",
 "standard linear regression estimates that result from fitting the\n",
 "quadratic model to the data. Since this model provides a good fit to\n",
-"the data (Figure~\\ref{Ch3:polyplot}), there is now a better\n",
+"the data (Figure 3.8), there is now a better\n",
 "correspondence between the bootstrap estimates and the standard\n",
 "estimates of ${\\rm SE}(\\hat{\\beta}_0)$, ${\\rm SE}(\\hat{\\beta}_1)$ and\n",
 "${\\rm SE}(\\hat{\\beta}_2)$."

Ch06-varselect-lab.ipynb

Lines changed: 7 additions & 7 deletions
@@ -182,7 +182,7 @@
 "id": "5199f18e",
 "metadata": {},
 "source": [
-"We first choose the best model using forward selection based on $C_p$ (\\ref{Ch6:eq:cp}). This score\n",
+"We first choose the best model using forward selection based on $C_p$ (6.2). This score\n",
 "is not built in as a metric to `sklearn`. We therefore define a function to compute it ourselves, and use\n",
 "it as a scorer. By default, `sklearn` tries to maximize a score, hence\n",
 " our scoring function computes the negative $C_p$ statistic."
@@ -245,7 +245,7 @@
 "id": "bc283589",
 "metadata": {},
 "source": [
-"The function `sklearn_selected()` expects a scorer with just three arguments --- the last three in the definition of `nCp()` above. We use the function `partial()` first seen in Section~\\ref{Ch5-resample-lab:the-bootstrap} to freeze the first argument with our estimate of $\\sigma^2$."
+"The function `sklearn_selected()` expects a scorer with just three arguments --- the last three in the definition of `nCp()` above. We use the function `partial()` first seen in Section 5.3.3 to freeze the first argument with our estimate of $\\sigma^2$."
 ]
 },
 {
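
A hedged sketch of the negative-$C_p$ scorer and the `partial()` freeze described in the two hunks above, using $C_p = (\mathrm{RSS} + 2d\hat{\sigma}^2)/n$ from (6.2). The `sigma2` value below is a placeholder estimate of the noise variance, not a number from the lab:

```python
# Illustrative sketch: a negative-Cp scorer plus a partial() freeze of its
# first argument, leaving the three-argument scorer described above.
import numpy as np
from functools import partial

def nCp(sigma2, estimator, X, Y):
    n, d = X.shape
    Yhat = estimator.predict(X)
    RSS = np.sum((Y - Yhat) ** 2)
    return -(RSS + 2 * d * sigma2) / n   # negated: sklearn maximizes scores

sigma2 = 0.5                        # placeholder noise-variance estimate
neg_Cp = partial(nCp, sigma2)       # neg_Cp(estimator, X, Y) has 3 arguments
```
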
@@ -997,7 +997,7 @@
 "standardize first, in order to find coefficient\n",
 "estimates on the original scale, we must *unstandardize*\n",
 "the coefficient estimates. The parameter\n",
-"$\\lambda$ in (\\ref{Ch6:ridge}) and (\\ref{Ch6:LASSO}) is called `alphas` in `sklearn`. In order to\n",
+"$\\lambda$ in (6.5) and (6.7) is called `alphas` in `sklearn`. In order to\n",
 "be consistent with the rest of this chapter, we use `lambdas`\n",
 "rather than `alphas` in what follows. {At the time of publication, ridge fits like the one in code chunk [22] issue unwarranted convergence warning messages; we expect these to disappear as this package matures.}"
 ]
@@ -9710,7 +9710,7 @@
 "### Evaluating Test Error of Cross-Validated Ridge\n",
 "Choosing $\\lambda$ using cross-validation provides a single regression\n",
 "estimator, similar to fitting a linear regression model as we saw in\n",
-"Chapter~\\ref{Ch3:linreg}. It is therefore reasonable to estimate what its test error\n",
+"Chapter 3. It is therefore reasonable to estimate what its test error\n",
 "is. We run into a problem here in that cross-validation will have\n",
 "*touched* all of its data in choosing $\\lambda$, hence we have no\n",
 "further data to estimate test error. A compromise is to do an initial\n",
@@ -12101,11 +12101,11 @@
 "`PCA()` from the `sklearn.decomposition`\n",
 "module. We now apply PCR to the `Hitters` data, in order to\n",
 "predict `Salary`. Again, ensure that the missing values have\n",
-"been removed from the data, as described in Section~\\ref{Ch6-varselect-lab:lab-1-subset-selection-methods}.\n",
+"been removed from the data, as described in Section 6.5.1.\n",
 "\n",
 "We use `LinearRegression()` to fit the regression model\n",
 "here. Note that it fits an intercept by default, unlike\n",
-"the `OLS()` function seen earlier in Section~\\ref{Ch6-varselect-lab:lab-1-subset-selection-methods}."
+"the `OLS()` function seen earlier in Section 6.5.1."
 ]
 },
 {
@@ -12757,7 +12757,7 @@
 "The `explained_variance_ratio_`\n",
 "attribute of our `PCA` object provides the *percentage of variance explained* in the predictors and in the response using\n",
 "different numbers of components. This concept is discussed in greater\n",
-"detail in Section~\\ref{Ch10:sec:pca}."
+"detail in Section 12.2."
 ]
 },
 {
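
For reference, `explained_variance_ratio_` is a standard attribute of a fitted `sklearn` `PCA` object; a sketch on simulated stand-ins for the `Hitters` predictors:

```python
# Illustrative sketch: per-component and cumulative proportion of
# variance explained from a fitted PCA object.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(263, 19))   # simulated stand-in for Hitters predictors

pca = PCA().fit(X)
print(pca.explained_variance_ratio_[:5])            # fraction per component
print(pca.explained_variance_ratio_.cumsum()[:5])   # cumulative PVE
```
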
