000 | 25701nam a2204993 i 4500 | ||
---|---|---|---|
001 | 15512507 | ||
003 | KOHA | ||
005 | 20220317092112.0 | ||
008 | 220303s2009 nyua b 001 0 eng d | ||
020 |
_a9780387848570 _q(hardback) |
||
020 |
_a9780387848587 _q(e-Book) |
||
022 |
_a0172-7397 _q(hardback) |
||
040 |
_aDLC _beng _cDLC _dDLC _dTR-IsMEF _erda |
||
041 | 0 | _aeng | |
050 | 0 | 0 |
_aQ325.5 _b.H39 2009 |
100 | 1 |
_aHastie, Trevor, _eauthor. |
|
245 | 1 | 4 |
_aThe elements of statistical learning : _bdata mining, inference, and prediction / _cTrevor Hastie, Robert Tibshirani, Jerome Friedman. |
250 | _aSecond edition. | ||
264 | 1 |
_aNew York, NY : _bSpringer, _c2009. |
|
300 |
_axxii, 745 pages : _billustrations ; _c25 cm. |
||
336 |
_atext _2rdacontent |
||
337 |
_aunmediated _2rdamedia |
||
338 |
_avolume _2rdacarrier |
||
490 | 1 | _aSpringer Series in Statistics. | |
504 | _aIncludes bibliographical references (pages 699-727) and index (pages 729-737). | ||
505 | 0 | _a1. Introduction. | |
505 | 0 | _a2. Overview of supervised learning. | |
505 | 0 | _a3. Linear methods for regression. | |
505 | 0 | _a4. Linear methods for classification. | |
505 | 0 | _a5. Basis expansions and regularization. | |
505 | 0 | _a6. Kernel smoothing methods. | |
505 | 0 | _a7. Model assessment and selection. | |
505 | 0 | _a8. Model inference and averaging. | |
505 | 0 | _a9. Additive models, trees and related methods. | |
505 | 0 | _a10. Boosting and additive trees. | |
505 | 0 | _a11. Neural networks. | |
505 | 0 | _a12. Support vector machines and flexible discriminants. | |
505 | 0 | _a13. Prototype methods and nearest-neighbors. | |
505 | 0 | _a14. Unsupervised learning. | |
505 | 0 | _a15. Random forests. | |
505 | 0 | _a16. Ensemble learning. | |
505 | 0 | _a17. Undirected graphical models. | |
505 | 0 | _a18. High-dimensional problems. | |
520 | 0 |
_aDuring the past decade there has been an explosion in computation and information technology. With it have come vast amounts of data in a variety of fields such as medicine, biology, finance, and marketing. The challenge of understanding these data has led to the development of new tools in the field of statistics, and spawned new areas such as data mining, machine learning, and bioinformatics. Many of these tools have common underpinnings but are often expressed with different terminology. This book describes the important ideas in these areas in a common conceptual framework. While the approach is statistical, the emphasis is on concepts rather than mathematics. Many examples are given, with a liberal use of color graphics. It is a valuable resource for statisticians and anyone interested in data mining in science or industry. The book's coverage is broad, from supervised learning (prediction) to unsupervised learning. The many topics include neural networks, support vector machines, classification trees and boosting, the first comprehensive treatment of this topic in any book.
This major new edition features many topics not covered in the original, including graphical models, random forests, ensemble methods, least angle regression and path algorithms for the lasso, non-negative matrix factorization, and spectral clustering. There is also a chapter on methods for "wide" data (p bigger than n), including multiple testing and false discovery rates.
Trevor Hastie, Robert Tibshirani, and Jerome Friedman are professors of statistics at Stanford University. They are prominent researchers in this area: Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie co-developed much of the statistical modeling software and environment in R/S-PLUS and invented principal curves and surfaces. Tibshirani proposed the lasso and is co-author of the very successful An Introduction to the Bootstrap. Friedman is the co-inventor of many data-mining tools including CART, MARS, projection pursuit and gradient boosting. _uhttps://link.springer.com/book/10.1007/978-0-387-21606-5#about |
|
650 | 7 | _aMachine learning | |
650 | 0 |
_aStatistics _xMethodology |
|
650 | 7 | _aData mining | |
650 | 0 | _aBioinformatics | |
650 | 0 | _aInference | |
650 | 0 | _aForecasting | |
650 | 0 | _aComputational intelligence | |
700 | 1 |
_aTibshirani, Robert, _eauthor. |
|
700 | 1 |
_aFriedman, Jerome, _eauthor. |
|
830 | 0 | _aSpringer Series in Statistics. | |
900 | _aMEF Üniversitesi Kütüphane katalog kayıtları RDA standartlarına uygun olarak üretilmektedir / MEF University Library catalogue records are produced in accordance with RDA standards | ||
910 | _aPandora | ||
942 |
_2lcc _cBKS |
||
970 | 0 | 1 | _aContents |
970 | 1 | 1 |
_aPreface to the First Edition, _pxi. |
970 | 1 | 1 |
_tPreface to the Second Edition, _pvii. |
970 | 1 | 2 |
_l1, _tIntroduction, _p1. |
970 | 1 | 2 |
_l2, _tOverview of Supervised Learning, _p9. |
970 | 1 | 1 |
_l2.1, _tIntroduction, _p9. |
970 | 1 | 1 |
_l2.2, _tVariable Types and Terminology, _p9. |
970 | 1 | 1 |
_l2.3, _tTwo Simple Approaches to Prediction : Least Squares and Nearest Neighbors, _p11. |
970 | 1 | 1 |
_l2.4, _tStatistical Decision Theory, _p18. |
970 | 1 | 1 |
_l2.5, _tLocal Methods in High Dimensions, _p22. |
970 | 1 | 1 |
_l2.6, _tStatistical Models, Supervised Learning and Function Approximation, _p28. |
970 | 1 | 1 |
_l2.6.1, _tA Statistical Model for the Joint Distribution Pr(X, Y), _p28. |
970 | 1 | 1 |
_l2.6.2, _tSupervised Learning, _p29. |
970 | 1 | 1 |
_l2.6.3, _tFunction Approximation, _p29. |
970 | 1 | 1 |
_l2.7, _tStructured Regression Models, _p32. |
970 | 1 | 1 |
_l2.7.1, _tDifficulty of the Problem, _p32. |
970 | 1 | 1 |
_l2.8, _tClasses of Restricted Estimators, _p33. |
970 | 1 | 1 |
_l2.8.1, _tRoughness Penalty and Bayesian Methods, _p34. |
970 | 1 | 1 |
_l2.8.2, _tKernel Methods and Local Regression, _p34. |
970 | 1 | 1 |
_l2.8.3, _tBasis Functions and Dictionary Methods, _p35. |
970 | 1 | 1 |
_l2.9, _tModel Selection and the Bias-Variance Tradeoff, _p37. |
970 | 0 | 1 |
_aBibliographic Notes, _p39. |
970 | 0 | 1 |
_aExercises, _p39. |
970 | 1 | 2 |
_l3, _tLinear Methods for Regression, _p43. |
970 | 1 | 1 |
_l3.1, _tIntroduction, _p43. |
970 | 1 | 1 |
_l3.2, _tLinear Regression Models and Least Squares, _p44. |
970 | 1 | 1 |
_l3.2.1, _tExample : Prostate Cancer, _p49. |
970 | 1 | 1 |
_l3.2.2, _tThe Gauss-Markov Theorem, _p51. |
970 | 1 | 1 |
_l3.2.3, _tMultiple Regression from Simple Univariate Regression, _p52. |
970 | 1 | 1 |
_l3.2.4, _tProstate Cancer Data Example (Continued), _p61. |
970 | 1 | 1 |
_l3.4, _tShrinkage Methods, _p61. |
970 | 1 | 1 |
_l3.4.1, _tRidge Regression, _p61. |
970 | 1 | 1 |
_l3.4.2, _tThe Lasso, _p68. |
970 | 1 | 1 |
_l3.4.3, _tDiscussion : Subset Selection, Ridge Regression and the Lasso, _p69. |
970 | 1 | 1 |
_l3.4.4, _tLeast Angle Regression, _p73. |
970 | 1 | 1 |
_l3.5, _tMethods Using Derived Input Directions, _p79. |
970 | 1 | 1 |
_l3.5.1, _tPrincipal Components Regression, _p79. |
970 | 1 | 1 |
_l3.5.2, _tPartial Least Squares, _p80. |
970 | 1 | 1 |
_l3.6, _tDiscussion : A Comparison of the Selection and Shrinkage Methods, _p82. |
970 | 1 | 1 |
_l3.7, _tMultiple Outcome Shrinkage and Selection, _p84. |
970 | 1 | 1 |
_l3.8, _tMore on the Lasso and Related Path Algorithms, _p86. |
970 | 1 | 1 |
_l3.8.1, _tIncremental Forward Stagewise Regression, |
970 | 1 | 1 |
_l3.8.2, _tPiecewise-Linear Path Algorithms, _p89. |
970 | 1 | 1 |
_l3.8.3, _tThe Dantzig Selector, _p89. |
970 | 1 | 1 |
_l3.8.4, _tThe Grouped Lasso, _p90. |
970 | 1 | 1 |
_l3.8.5, _tFurther Properties of the Lasso, _p91. |
970 | 1 | 1 |
_l3.8.6, _tPathwise Coordinate Optimization, _p92. |
970 | 1 | 1 |
_l3.9, _tComputational Considerations, _p93. |
970 | 0 | 1 |
_aBibliographic Notes, _p94. |
970 | 0 | 1 |
_aExercises, _p94. |
970 | 1 | 2 |
_l4, _tLinear Methods for Classification, _p101. |
970 | 1 | 1 |
_l4.1, _tIntroduction, _p101. |
970 | 1 | 1 |
_l4.2, _tLinear Regression of an Indicator Matrix, _p103. |
970 | 1 | 1 |
_l4.3, _tLinear Discriminant Analysis, _p106. |
970 | 1 | 1 |
_l4.3.1, _tRegularized Discriminant Analysis, _p112. |
970 | 1 | 1 |
_l4.3.2, _tComputations for LDA, _p113. |
970 | 1 | 1 |
_l4.3.3, _tReduced-Rank Linear Discriminant Analysis, _p113. |
970 | 1 | 1 |
_l4.4, _tLogistic Regression, _p119. |
970 | 1 | 1 |
_l4.4.1, _tFitting Logistic Regression Models, _p120. |
970 | 1 | 1 |
_l4.4.2, _tExample : South African Heart Disease, _p122. |
970 | 1 | 1 |
_l4.4.3, _tQuadratic Approximations and Inference, _p124. |
970 | 1 | 1 |
_l4.4.4, _tL1 Regularized Logistic Regression, _p125. |
970 | 1 | 1 |
_l4.4.5, _tLogistic Regression or LDA ?, _p127. |
970 | 1 | 1 |
_l4.5, _tSeparating Hyperplanes, _p129. |
970 | 1 | 1 |
_l4.5.1, _tRosenblatt's Perceptron Learning Algorithm, _p130. |
970 | 1 | 1 |
_l4.5.2, _tOptimal Separating Hyperplanes, _p132. |
970 | 0 | 1 |
_aBibliographic Notes, _p135. |
970 | 1 | 1 |
_aExercises, _p135. |
970 | 1 | 2 |
_l5, _tBasis Expansions and Regularization, _p139. |
970 | 0 | 1 |
_l5.1, _aIntroduction, _p139. |
970 | 1 | 1 |
_l5.2, _tPiecewise Polynomials and Splines, _p141. |
970 | 1 | 1 |
_l5.2.1, _tNatural Cubic Splines, _p144. |
970 | 1 | 1 |
_l5.2.2, _tExample : South African Heart Disease (Continued), _p146. |
970 | 1 | 1 |
_l5.2.3, _tExample : Phoneme Recognition, _p148. |
970 | 1 | 1 |
_l5.4, _tSmoothing Splines, _p151. |
970 | 1 | 1 |
_l5.4.1, _tDegrees of Freedom and Smoother Matrices, _p153. |
970 | 1 | 1 |
_l5.5, _tAutomatic Selection of the Smoothing Parameters, _p156. |
970 | 1 | 1 |
_l5.5.1, _tFixing the Degrees of Freedom, _p158. |
970 | 1 | 1 |
_l5.5.2, _tThe Bias-Variance Tradeoff, _p158. |
970 | 1 | 1 |
_l5.6, _tNonparametric Logistic Regression, _p161. |
970 | 1 | 1 |
_l5.7, _tMultidimensional Splines, _p162. |
970 | 1 | 1 |
_l5.8, _tRegularization and Reproducing Kernel Hilbert Spaces, _p167. |
970 | 1 | 1 |
_l5.8.1, _tSpaces of Functions Generated by Kernels, _p168. |
970 | 1 | 1 |
_l5.8.2, _tExamples of RKHS, _p170. |
970 | 1 | 1 |
_l5.9, _tWavelet Smoothing, _p174. |
970 | 1 | 1 |
_l5.9.1, _tWavelet Bases and the Wavelet Transform, _p176. |
970 | 1 | 1 |
_l5.9.2, _tAdaptive Wavelet Filtering, _p179. |
970 | 0 | 1 |
_aBibliographic Notes, _p181. |
970 | 1 | 1 |
_tExercises, _p181. |
970 | 0 | 1 |
_aAppendix : Computational Considerations for Splines, _p186. |
970 | 0 | 1 |
_aAppendix : B-splines, _p186. |
970 | 0 | 1 |
_aAppendix : Computations for Smoothing Splines, _p189. |
970 | 1 | 2 |
_l6, _tKernel Smoothing Methods, _p191. |
970 | 1 | 1 |
_l6.1, _tOne-Dimensional Kernel Smoothers, _p192. |
970 | 1 | 1 |
_l6.1.1, _tLocal Linear Regression, _p194. |
970 | 1 | 1 |
_l6.2, _tSelecting the Width of the Kernel, _p198. |
970 | 1 | 1 |
_l6.3, _tLocal Regression in ℝp, _p201. |
970 | 1 | 1 |
_l6.4, _tStructured Local Regression Models in ℝp, _p201. |
970 | 1 | 1 |
_l6.4.1, _tStructured Kernels, _p203. |
970 | 1 | 1 |
_l6.4.2, _tStructured Regression Functions, _p203. |
970 | 1 | 1 |
_l6.5, _tLocal Likelihood and Other Models, _p205. |
970 | 1 | 1 |
_l6.6, _tKernel Density Estimation and Classification, _p208. |
970 | 1 | 1 |
_l6.6.1, _tKernel Density Estimation, _p208. |
970 | 1 | 1 |
_l6.6.2, _tKernel Density Classification, _p210. |
970 | 1 | 1 |
_l6.6.3, _tThe Naive Bayes Classifier, _p210. |
970 | 1 | 1 |
_l6.7, _tRadial Basis Functions and Kernels, _p212. |
970 | 1 | 1 |
_l6.8, _tMixture Models for Density Estimation and Classification, _p214. |
970 | 1 | 1 |
_l6.9, _tComputational Considerations, _p216. |
970 | 0 | 1 |
_aBibliographic Notes, _p216. |
970 | 1 | 1 |
_tExercises, _p216. |
970 | 1 | 2 |
_l7, _tModel Assessment and Selection, _p219. |
970 | 0 | 1 |
_l7.1, _aIntroduction, _p219. |
970 | 1 | 1 |
_l7.2, _tBias, Variance and Model Complexity, _p219. |
970 | 1 | 1 |
_l7.3, _tThe Bias-Variance Decomposition, _p223. |
970 | 1 | 1 |
_l7.3.1, _tExample : Bias-Variance Tradeoff, _p226. |
970 | 1 | 1 |
_l7.4, _tOptimism of the Training Error Rate, _p228. |
970 | 1 | 1 |
_l7.5, _tEstimates of In-Sample Prediction Error, _p230. |
970 | 1 | 1 |
_l7.6, _tThe Effective Number of Parameters, _p232. |
970 | 1 | 1 |
_l7.7, _tThe Bayesian Approach and BIC, _p233. |
970 | 1 | 1 |
_l7.8, _tMinimum Description Length, _p235. |
970 | 1 | 1 |
_l7.9, _tVapnik- Chervonenkis Dimension, _p237. |
970 | 1 | 1 |
_l7.9.1, _tExample (Continued), _p239. |
970 | 1 | 1 |
_l7.10, _tCross-Validation, _p241. |
970 | 1 | 1 |
_l7.10.1, _tK-Fold Cross-Validation, _p241. |
970 | 1 | 1 |
_l7.10.2, _tThe Wrong and Right Way to Do Cross-Validation, _p245. |
970 | 1 | 1 |
_l7.10.3, _tDoes Cross-Validation Really Work ?, _p247. |
970 | 1 | 1 |
_l7.11, _tBootstrap Methods, _p249. |
970 | 1 | 1 |
_l7.11.1, _tExample (Continued), _p252. |
970 | 1 | 1 |
_l7.12, _tConditional or Expected Test Error ?, _p254. |
970 | 0 | 1 |
_aBibliographic Notes, _p257. |
970 | 1 | 1 |
_tExercises, _p257. |
970 | 1 | 2 |
_l8, _tModel Inference and Averaging, _p261. |
970 | 0 | 1 |
_l8.1, _aIntroduction, _p261. |
970 | 1 | 1 |
_l8.2, _tThe Bootstrap and Maximum Likelihood Methods, _p261. |
970 | 1 | 1 |
_l8.2.1, _tA Smoothing Example, _p261. |
970 | 1 | 1 |
_l8.2.2, _tMaximum Likelihood Inference, _p265. |
970 | 1 | 1 |
_l8.2.3, _tBootstrap versus Maximum Likelihood, _p267. |
970 | 1 | 1 |
_l8.3, _tBayesian Methods, _p267. |
970 | 1 | 1 |
_l8.4, _tRelationship Between the Bootstrap and Bayesian Inference, _p271. |
970 | 1 | 1 |
_l8.5, _tThe EM Algorithm, _p272. |
970 | 1 | 1 |
_l8.5.1, _tTwo-Component Mixture Model, _p272. |
970 | 1 | 1 |
_l8.5.2, _tThe EM Algorithm in General, _p276. |
970 | 1 | 1 |
_l8.5.3, _tEM as a Maximization-Maximization Procedure, _p277. |
970 | 1 | 1 |
_l8.6, _tMCMC for Sampling from the Posterior, _p279. |
970 | 1 | 1 |
_l8.7, _tBagging, |
970 | 1 | 1 |
_l8.7.1, _tExample : Trees with Simulated Data, _p283. |
970 | 1 | 1 |
_l8.8, _tModel Averaging and Stacking, _p288. |
970 | 1 | 1 |
_l8.9, _tStochastic Search : Bumping, _p290. |
970 | 0 | 1 |
_aBibliographic Notes, _p292. |
970 | 1 | 1 |
_tExercises, _p293. |
970 | 1 | 2 |
_l9, _tAdditive Models, Trees, and Related Methods, _p295. |
970 | 1 | 1 |
_l9.1, _tGeneralized Additive Models, _p295. |
970 | 1 | 1 |
_l9.1.1, _tFitting Additive Models, _p297. |
970 | 1 | 1 |
_l9.1.2, _tExample : Additive Logistic Regression, _p299. |
970 | 1 | 1 |
_l9.1.3, _tSummary, _p304. |
970 | 1 | 1 |
_l9.2, _tTree-Based Methods, _p305. |
970 | 1 | 1 |
_l9.2.1, _tBackground, _p305. |
970 | 1 | 1 |
_l9.2.2, _tRegression Trees, _p307. |
970 | 1 | 1 |
_l9.2.3, _tClassification Trees, _p308. |
970 | 1 | 1 |
_l9.2.4, _tOther Issues, _p310. |
970 | 1 | 1 |
_l9.2.5, _tSpam Example (Continued), _p313. |
970 | 1 | 1 |
_l9.3, _tPRIM : Bump Hunting, _p317. |
970 | 1 | 1 |
_l9.3.1, _tSpam Example (Continued), _p320. |
970 | 1 | 1 |
_l9.4, _tMARS : Multivariate Adaptive Regression Splines, _p321. |
970 | 1 | 1 |
_l9.4.1, _tSpam Example (Continued), _p326. |
970 | 1 | 1 |
_l9.4.2, _tExample (Simulated Data), _p327. |
970 | 1 | 1 |
_l9.4.3, _tOther Issues, _p328. |
970 | 1 | 1 |
_l9.5, _tHierarchical Mixtures of Experts, _p329. |
970 | 1 | 1 |
_l9.6, _tMissing Data, _p332. |
970 | 1 | 1 |
_l9.7, _tComputational Considerations, _p334. |
970 | 0 | 1 |
_aBibliographic Notes, _p334. |
970 | 1 | 1 |
_tExercises, _p335. |
970 | 1 | 2 |
_l10, _tBoosting and Additive Trees, _p337. |
970 | 1 | 1 |
_l10.1, _tBoosting Methods, _p337. |
970 | 1 | 1 |
_l10.1.1, _tOutline of This Chapter, _p340. |
970 | 1 | 1 |
_l10.2, _tBoosting Fits an Additive Model, _p341. |
970 | 1 | 1 |
_l10.3, _tForward Stagewise Additive Modeling, _p342. |
970 | 1 | 1 |
_l10.4, _tExponential Loss and AdaBoost, |
970 | 1 | 1 |
_l10.5, _tWhy Exponential Loss ?, _p345. |
970 | 1 | 1 |
_l10.6, _tLoss Functions and Robustness, _p346. |
970 | 1 | 1 |
_l10.7, _t"Off-the-Shelf" Procedures for Data Mining, _p350. |
970 | 1 | 1 |
_l10.8, _tExample : Spam Data, _p352. |
970 | 1 | 1 |
_l10.9, _tBoosting Trees, _p353. |
970 | 1 | 1 |
_l10.10, _tNumerical Optimization via Gradient Boosting, _p358. |
970 | 1 | 1 |
_l10.10.1, _tSteepest Descent, _p358. |
970 | 1 | 1 |
_l10.10.2, _tGradient Boosting, _p359. |
970 | 1 | 1 |
_l10.10.3, _tImplementations of Gradient Boosting, _p360. |
970 | 1 | 1 |
_l10.11, _tRight-Sized Trees for Boosting, _p361. |
970 | 1 | 1 |
_l10.12, _tRegularization, _p364. |
970 | 1 | 1 |
_l10.12.1, _tShrinkage, _p364. |
970 | 1 | 1 |
_l10.12.2, _tSubsampling, _p365. |
970 | 1 | 1 |
_l10.13, _tInterpretation, _p367. |
970 | 1 | 1 |
_l10.13.1, _tRelative Importance of Predictor Variables, _p367. |
970 | 1 | 1 |
_l10.13.2, _tPartial Dependence Plots, _p369. |
970 | 1 | 1 |
_l10.14, _tIllustrations, _p371. |
970 | 1 | 1 |
_l10.14.1, _tCalifornia Housing, _p371. |
970 | 1 | 1 |
_l10.14.2, _tNew Zealand Fish, _p375. |
970 | 1 | 1 |
_l10.14.3, _tDemographics Data, _p379. |
970 | 0 | 1 |
_aBibliographic Notes, _p380. |
970 | 1 | 1 |
_tExercises, _p384. |
970 | 1 | 2 |
_l11, _tNeural Networks, _p389. |
970 | 0 | 1 |
_l11.1, _aIntroduction, _p389. |
970 | 1 | 1 |
_l11.2, _tProjection Pursuit Regression, _p389. |
970 | 1 | 1 |
_l11.3, _tNeural Networks, _p392. |
970 | 1 | 1 |
_l11.4, _tFitting Neural Networks, _p395. |
970 | 1 | 1 |
_l11.5, _tSome Issues in Training Neural Networks, _p397. |
970 | 1 | 1 |
_l11.5.1, _tStarting Values, _p397. |
970 | 1 | 1 |
_l11.5.2, _tOverfitting, _p398. |
970 | 1 | 1 |
_l11.5.3, _tScaling of the Inputs, _p398. |
970 | 1 | 1 |
_l11.5.4, _tNumber of Hidden Units and Layers, _p400. |
970 | 1 | 1 |
_l11.5.5, _tMultiple Minima, _p400. |
970 | 1 | 1 |
_l11.6, _tExample : Simulated Data, _p401. |
970 | 1 | 1 |
_l11.7, _tExample : ZIP Code Data, _p404. |
970 | 1 | 1 |
_l11.8, _tDiscussion, _p408. |
970 | 1 | 1 |
_l11.9, _tBayesian Neural Nets and the NIPS 2003 Challenge, _p409. |
970 | 1 | 1 |
_l11.9.1, _tBayes, Boosting and Bagging, _p410. |
970 | 1 | 1 |
_l11.9.2, _tPerformance Comparisons, _p412. |
970 | 1 | 1 |
_l11.10, _tComputational Considerations, _p414. |
970 | 0 | 1 |
_aBibliographic Notes, _p415. |
970 | 1 | 1 |
_tExercises, _p415. |
970 | 1 | 2 |
_l12, _tSupport Vector Machines and Flexible Discriminants, _p417. |
970 | 0 | 1 |
_l12.1, _aIntroduction, _p417. |
970 | 1 | 1 |
_l12.2, _tThe Support Vector Classifier, _p417. |
970 | 1 | 1 |
_l12.2.1, _tComputing the Support Vector Classifier, _p420. |
970 | 1 | 1 |
_l12.2.2, _tMixture Example (Continued), _p421. |
970 | 1 | 1 |
_l12.3, _tSupport Vector Machines and Kernels, _p423. |
970 | 1 | 1 |
_l12.3.1, _tComputing the SVM for Classification, _p423. |
970 | 1 | 1 |
_l12.3.2, _tThe SVM as a Penalization Method, _p426. |
970 | 1 | 1 |
_l12.3.3, _tFunction Estimation and Reproducing Kernels, _p428. |
970 | 1 | 1 |
_l12.3.4, _tSVMs and the Curse of Dimensionality, _p431. |
970 | 1 | 1 |
_l12.3.5, _tA Path Algorithm for the SVM Classifier, _p432. |
970 | 1 | 1 |
_l12.3.6, _tSupport Vector Machines for Regression, _p434. |
970 | 1 | 1 |
_l12.3.7, _tRegression and Kernels, _p436. |
970 | 1 | 1 |
_l12.3.8, _tDiscussion, _p438. |
970 | 1 | 1 |
_l12.4, _tGeneralizing Linear Discriminant Analysis, _p438. |
970 | 1 | 1 |
_l12.5, _tFlexible Discriminant Analysis, _p440. |
970 | 1 | 1 |
_l12.5.1, _tComputing the FDA Estimates, _p444. |
970 | 1 | 1 |
_l12.6, _tPenalized Discriminant Analysis, _p446. |
970 | 1 | 1 |
_l12.7, _tMixture Discriminant Analysis, _p449. |
970 | 1 | 1 |
_l12.7.1, _tExample : Waveform Data, _p451. |
970 | 0 | 1 |
_aBibliographic Notes, _p455. |
970 | 1 | 1 |
_tExercises, _p455. |
970 | 1 | 2 |
_l13, _tPrototype Methods and Nearest-Neighbors, _p459. |
970 | 0 | 1 |
_l13.1, _aIntroduction, _p459. |
970 | 1 | 1 |
_l13.2, _tPrototype Methods, _p459. |
970 | 1 | 1 |
_l13.2.1, _tK-means Clustering, _p460. |
970 | 1 | 1 |
_l13.2.2, _tLearning Vector Quantization, _p462. |
970 | 1 | 1 |
_l13.2.3, _tGaussian Mixtures, _p463. |
970 | 1 | 1 |
_l13.3, _tk-Nearest-Neighbor Classifiers, _p463. |
970 | 1 | 1 |
_l13.4, _tAdaptive Nearest-Neighbor Methods, _p475. |
970 | 1 | 1 |
_l13.4.1, _tExample, _p478. |
970 | 1 | 1 |
_l13.4.2, _tGlobal Dimension Reduction for Nearest-Neighbors, _p479. |
970 | 1 | 1 |
_l13.5, _tComputational Considerations, _p480. |
970 | 1 | 1 |
_aBibliographic Notes, _p481. |
970 | 1 | 1 |
_tExercises, _p481. |
970 | 1 | 2 |
_l14, _tUnsupervised Learning, _p485. |
970 | 0 | 1 |
_l14.1, _aIntroduction, _p485. |
970 | 1 | 1 |
_l14.2, _tAssociation Rules, _p487. |
970 | 1 | 1 |
_l14.2.1, _tMarket Basket Analysis, _p488. |
970 | 1 | 1 |
_l14.2.2, _tThe Apriori Algorithm, _p489. |
970 | 1 | 1 |
_l14.2.3, _tExample : Market Basket Analysis, _p492. |
970 | 1 | 1 |
_l14.2.4, _tUnsupervised as Supervised Learning, _p495. |
970 | 1 | 1 |
_l14.2.5, _tGeneralized Association Rules, _p497. |
970 | 1 | 1 |
_l14.2.6, _tChoice of Supervised Learning Method, _p499. |
970 | 1 | 1 |
_l14.2.7, _tExample : Market Basket Analysis (Continued), _p499. |
970 | 1 | 1 |
_l14.3, _tCluster Analysis, _p501. |
970 | 1 | 1 |
_l14.3.1, _tProximity Matrices, _p503. |
970 | 1 | 1 |
_l14.3.2, _tDissimilarities Based on Attributes, _p503. |
970 | 1 | 1 |
_l14.3.3, _tObject Dissimilarity, _p505. |
970 | 1 | 1 |
_l14.3.4, _tClustering Algorithms, _p507. |
970 | 1 | 1 |
_l14.3.5, _tCombinatorial Algorithms, _p507. |
970 | 1 | 1 |
_l14.3.6, _tK-means, _p509. |
970 | 1 | 1 |
_l14.3.7, _tGaussian Mixtures as Soft K-means Clustering, _p510. |
970 | 1 | 1 |
_l14.3.8, _tExample : Human Tumor Microarray Data, _p512. |
970 | 1 | 1 |
_l14.3.9, _tVector Quantization, _p514. |
970 | 1 | 1 |
_l14.3.10, _tK-medoids, _p515. |
970 | 1 | 1 |
_l14.3.11, _tPractical Issues, _p518. |
970 | 1 | 1 |
_l14.3.12, _tHierarchical Clustering, _p520. |
970 | 1 | 1 |
_l14.4, _tSelf-Organizing Maps, _p528. |
970 | 1 | 1 |
_l14.5, _tPrincipal Components, Curves and Surfaces, _p534. |
970 | 1 | 1 |
_l14.5.1, _tPrincipal Components, _p534. |
970 | 1 | 1 |
_l14.5.2, _tPrincipal Curves and Surfaces, _p541. |
970 | 1 | 1 |
_l14.5.3, _tSpectral Clustering, _p544. |
970 | 1 | 1 |
_l14.5.4, _tKernel Principal Components, _p547. |
970 | 1 | 1 |
_l14.5.5, _tSparse Principal Components, _p550. |
970 | 1 | 1 |
_l14.6, _tNon-negative Matrix Factorization, _p553. |
970 | 1 | 1 |
_l14.7, _tIndependent Component Analysis and Exploratory Projection Pursuit, _p557. |
970 | 1 | 1 |
_l14.7.1, _tLatent Variables and Factor Analysis, _p558. |
970 | 1 | 1 |
_l14.7.2, _tIndependent Component Analysis, _p560. |
970 | 1 | 1 |
_l14.7.3, _tExploratory Projection Pursuit, _p565. |
970 | 1 | 1 |
_l14.7.4, _tA Direct Approach to ICA, _p565. |
970 | 1 | 1 |
_l14.8, _tMultidimensional Scaling, _p572. |
970 | 1 | 1 |
_l14.10, _tThe Google PageRank Algorithm, _p576. |
970 | 0 | 1 |
_aBibliographic Notes, _p578. |
970 | 1 | 1 |
_tExercises, _p579. |
970 | 1 | 2 |
_l15, _tRandom Forests, _p587. |
970 | 0 | 1 |
_l15.1, _aIntroduction, _p587. |
970 | 1 | 1 |
_l15.2, _tDetails of Random Forests, _p587. |
970 | 1 | 1 |
_l15.3.1, _tOut of Bag Samples, _p592. |
970 | 1 | 1 |
_l15.3.2, _tVariable Importance, _p593. |
970 | 1 | 1 |
_l15.3.3, _tProximity Plots, _p595. |
970 | 1 | 1 |
_l15.3.4, _tRandom Forests and Overfitting, _p596. |
970 | 1 | 1 |
_l15.4, _tAnalysis of Random Forests, _p597. |
970 | 1 | 1 |
_l15.4.1, _tVariance and the De-Correlation Effect, _p597. |
970 | 1 | 1 |
_l15.4.2, _tBias, _p600. |
970 | 1 | 1 |
_l15.4.3, _tAdaptive Nearest Neighbors, _p601. |
970 | 0 | 1 |
_aBibliographic Notes, _p602. |
970 | 1 | 1 |
_tExercises, _p603. |
970 | 1 | 2 |
_l16, _tEnsemble Learning, _p605. |
970 | 1 | 1 |
_l16.1, _aIntroduction, _p605. |
970 | 1 | 1 |
_l16.2, _tBoosting and Regularization Paths, _p607. |
970 | 1 | 1 |
_l16.2.1, _tPenalized Regression, _p607. |
970 | 1 | 1 |
_l16.2.2, _tThe "Bet on Sparsity" Principle, _p610. |
970 | 1 | 1 |
_l16.2.3, _tRegularization Paths, Over-fitting and Margins, _p613. |
970 | 1 | 1 |
_l16.3, _tLearning Ensembles, _p616. |
970 | 1 | 1 |
_l16.3.1, _tLearning a Good Ensemble, _p617. |
970 | 1 | 1 |
_l16.3.2, _tRule Ensembles, _p622. |
970 | 0 | 1 |
_aBibliographic Notes, _p623. |
970 | 1 | 1 |
_tExercises, _p624. |
970 | 1 | 2 |
_l17, _tUndirected Graphical Models, _p625. |
970 | 0 | 1 |
_l17.1, _aIntroduction, _p625. |
970 | 1 | 1 |
_l17.2, _tMarkov Graphs and Their Properties, _p627. |
970 | 1 | 1 |
_l17.3, _tUndirected Graphical Models for Continuous Variables, _p630. |
970 | 1 | 1 |
_l17.3.1, _tEstimation of the Parameters when the Graph Structure is Known, _p631. |
970 | 1 | 1 |
_l17.3.2, _tEstimation of the Graph Structure, _p635. |
970 | 1 | 1 |
_l17.4, _tUndirected Graphical Models for Discrete Variables, _p638. |
970 | 1 | 1 |
_l17.4.1, _tEstimation of the Parameters when the Graph Structure is Known, _p639. |
970 | 1 | 1 |
_l17.4.2, _tHidden Nodes, _p641. |
970 | 1 | 1 |
_l17.4.3, _tEstimation of the Graph Structure, _p642. |
970 | 1 | 1 |
_l17.4.4, _tRestricted Boltzmann Machines, _p643. |
970 | 1 | 1 |
_tExercises, _p645. |
970 | 1 | 2 |
_l18, _tHigh-Dimensional Problems : p>N, _p649. |
970 | 1 | 1 |
_l18.1, _tWhen p is Much Bigger than N, _p649. |
970 | 1 | 1 |
_l18.2, _tDiagonal Linear Discriminant Analysis and Nearest Shrunken Centroids, _p651. |
970 | 1 | 1 |
_l18.3, _tLinear Classifiers with Quadratic Regularization, _p654. |
970 | 1 | 1 |
_l18.3.1, _tRegularized Discriminant Analysis, _p656. |
970 | 1 | 1 |
_l18.3.2, _tLogistic Regression with Quadratic Regularization, _p657. |
970 | 1 | 1 |
_l18.3.3, _tThe Support Vector Classifier, _p657. |
970 | 1 | 1 |
_l18.3.4, _tFeature Selection, _p658. |
970 | 1 | 1 |
_l18.3.5, _tComputational Shortcuts When p>>N, _p659. |
970 | 1 | 1 |
_l18.4, _tLinear Classifiers with L1 Regularization, _p661. |
970 | 1 | 1 |
_l18.4.1, _tApplication of Lasso to Protein Mass Spectroscopy, _p664. |
970 | 1 | 1 |
_l18.4.2, _tThe Fused Lasso for Functional Data, _p666. |
970 | 1 | 1 |
_l18.5, _tClassification When Features are Unavailable, _p668. |
970 | 1 | 1 |
_l18.5.1, _tExample : String Kernels and Protein Classification, _p668. |
970 | 1 | 1 |
_l18.5.2, _tClassification and Other Models Using Inner-Product Kernels and Pairwise Distances, _p670. |
970 | 1 | 1 |
_l18.5.3, _tExample : Abstracts Classification, _p672. |
970 | 1 | 1 |
_l18.6, _tHigh-Dimensional Regression : Supervised Principal Components, _p674. |
970 | 1 | 1 |
_l18.6.1, _tConnection to Latent-Variable Modeling, _p678. |
970 | 1 | 1 |
_l18.6.2, _tRelationship with Partial Least Squares, _p680. |
970 | 1 | 1 |
_l18.6.3, _tPre-Conditioning for Feature Selection, _p681. |
970 | 1 | 1 |
_l18.7, _tFeature Assessment and the Multiple-Testing Problem, _p683. |
970 | 1 | 1 |
_l18.7.1, _tThe False Discovery Rate, _p687. |
970 | 1 | 1 |
_l18.7.2, _tAsymmetric Cutpoints and the SAM Procedure, _p690. |
970 | 1 | 1 |
_l18.7.3, _tA Bayesian Interpretation of the FDR, _p692. |
970 | 0 | 1 |
_aBibliographic Notes, _p693. |
970 | 1 | 1 |
_tExercises, _p694. |
970 | 0 | 1 |
_aReferences, _p699. |
970 | 0 | 1 |
_aAuthor Index, _p729. |
970 | 0 | 1 |
_aIndex, _p737. |
999 |
_c28084 _d28084 |