统计学习基础(第2版)(英文)(The Elements of Statistical Learning, 2nd Edition, English reprint)

Title: 统计学习基础(第2版)(英文)
Authors: Trevor Hastie / Robert Tibshirani / Jerome Friedman
Translator: (none)
ISBN: 9787510084508
Publisher: 世界图书出版公司
Publication date: 2015-01-01
Formats: epub / mobi / azw3 / pdf
Pages: 745
Douban rating: 9.5

Book description:

This book is our attempt to bring together many of the important new ideas in learning, and explain them in a statistical framework. While some mathematical details are needed, we emphasize the methods and their conceptual underpinnings rather than their theoretical properties. As a result, we hope that this book will appeal not just to statisticians but also to researchers and practitioners in a wide variety of fields.

About the authors:

Trevor Hastie, Robert Tibshirani, and Jerome Friedman are professors of statistics at Stanford University. They are prominent researchers in this area: Hastie and Tibshirani developed generalized additive models and wrote a popular book of that title. Hastie co-developed much of the statistical modeling software and environment in R/S-PLUS and invented principal curves and surfaces. Tibshirani proposed the lasso and is co-author of the very successful An Introduction to the Bootstrap. Friedman is the co-inventor of many data-mining tools including CART, MARS, projection pursuit and gradient boosting.

Reader comments:

@ 流萤天下 I think 《统计学习精要》 would have been a better translation of the title. In my view this is the foremost reference in statistical learning today.
@ 氥氲 The Chinese title is a mistranslation; 《统计学习精要》 would be more accurate. Readers with a weaker math background can study it alongside 《统计学习导论》 (An Introduction to Statistical Learning), and those doing research in machine-learning theory should also read 《统计学习理论》 (Statistical Learning Theory). In general, if time allows, it pays to aim high and read the deeper books; only a rigorous, solid foundation lets you keep up with developments in machine learning.
@ Autoz The Chinese title says "foundations", but this is really "the elements" of statistical learning. Readers with weak math will struggle even with the notation, despite the abundant verbal explanation; 《统计学习导论》 is the gentler companion. A good book like this deserves to be read slowly and savored.
@ 琴弦 A few commenters below are far too harsh about the word "foundations". This is a graduate-level book and should be judged by the standard used for GTM volumes. Its real prerequisite is a solid grasp of the undergraduate material; for serious study, is that not foundational enough?
@ 智障儿童欢乐多 ESL: a machine-learning monograph. Worth browsing.
@ Roy Brilliant. Even for algorithms I already knew, I had never imagined such deep theory behind them; every reading brings a "wow, this is so elegant" moment.
@ lllylll Our seminar has been working through this book. It is quite hard for me, but once it clicks there is a real sense of enlightenment. One of the must-reads for machine learning.
@ 蘧公孙 The intended reader is someone fluent in probability and statistics, roughly at the level of a statistics PhD; otherwise many results are stated outright without derivation. The book emphasizes probabilistic and statistical intuition, and I find it awkwardly positioned: strong readers see only an outline, while weaker readers find it too jumpy. You might as well read the original papers on each algorithm; the three authors really do not know how to write a book.
@ 金融民工 Famous but scattered; better to read the papers directly.

Table of contents

Preface to the Second Edition
Preface to the First Edition
1 Introduction
2 Overview of Supervised Learning
3 Linear Methods for Regression
4 Linear Methods for Classification
5 Basis Expansions and Regularization
6 Kernel Smoothing Methods
7 Model Assessment and Selection
8 Model Inference and Averaging
9 Additive Models, Trees, and Related Methods
10 Boosting and Additive Trees
11 Neural Networks
12 Support Vector Machines and Flexible Discriminants
13 Prototype Methods and Nearest-Neighbors
14 Unsupervised Learning
15 Random Forests
16 Ensemble Learning
17 Undirected Graphical Models
18 High-Dimensional Problems: p ≫ N
References
Author Index
Index

  • LAR uses least squares directions in the active set of variables. Lasso uses least squares directions; if a variable crosses zero, it is removed from the active set. Boosting uses non-negative least squares directions in the active set.
    —— Quoted from Chapter 3, Least Angle Regression (an illustrative path comparison is sketched below, after these excerpts)
  • the bias of the 1-nearest-neighbor estimate is often low, but the variance is high.
    —— Quoted from page 417 (a small bias-variance simulation is sketched below, after these excerpts)
  •   Springer Series in Statistics reprint series (11 volumes in total); other titles in the series include 《基于回归视野的统计学习》, 《线性模型》, 《统计学习基础》, 《统计学中的渐近性》, and 《高维数据统计学》.
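
The Chapter 3 excerpt contrasts how LAR, the lasso, and boosting move within the active set. Below is a minimal sketch, not from the book, that compares the LAR and lasso coefficient paths with scikit-learn's lars_path on synthetic data; the correlated design, coefficient vector, and noise level are arbitrary illustration choices.

```python
# A minimal sketch (not from the book) contrasting the LAR and lasso coefficient
# paths described in the quote, using scikit-learn's lars_path on synthetic data.
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)
n, p = 100, 8
# A correlated design makes it more likely that some coefficient path crosses zero,
# which is exactly where the LAR and lasso paths diverge.
Z = rng.normal(size=(n, p))
X = Z + 0.9 * rng.normal(size=(n, 1))          # shared component induces correlation
beta = np.array([3.0, -2.0, 1.5, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta + rng.normal(scale=1.0, size=n)

# method="lar": least squares directions in the current active set.
alphas_lar, active_lar, coefs_lar = lars_path(X, y, method="lar")

# method="lasso": same directions, but a variable whose coefficient crosses zero
# is removed from the active set, as in the quoted description.
alphas_lasso, active_lasso, coefs_lasso = lars_path(X, y, method="lasso")

print("LAR path steps:  ", coefs_lar.shape[1] - 1)
print("Lasso path steps:", coefs_lasso.shape[1] - 1)
print("Paths identical: ", coefs_lar.shape == coefs_lasso.shape
      and np.allclose(coefs_lar, coefs_lasso))
```

If some coefficient would change sign along the way, the lasso path gains extra steps where that variable is dropped (and possibly re-enters); otherwise the two paths coincide, which is exactly the distinction the quote draws.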
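
The page-417 remark can likewise be checked with a small simulation. The sketch below is not from the book; the sine regression function, noise level, sample size, and the k = 25 baseline are arbitrary choices used only to make the bias-variance contrast visible.

```python
# A minimal simulation sketch (not from the book): repeatedly refit k-NN regression
# on fresh training samples and measure the bias and variance of the prediction at
# one fixed test point, for k = 1 versus a smoother k = 25 baseline.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
f = np.sin                      # "true" regression function
x0 = np.array([[1.0]])          # test point where bias and variance are measured
n_train, n_reps, sigma = 50, 500, 0.5

preds = {1: [], 25: []}
for _ in range(n_reps):
    # Fresh training sample each repetition: y = f(x) + Gaussian noise.
    X = rng.uniform(0.0, 3.0, size=(n_train, 1))
    y = f(X.ravel()) + rng.normal(0.0, sigma, size=n_train)
    for k in preds:
        preds[k].append(KNeighborsRegressor(n_neighbors=k).fit(X, y).predict(x0)[0])

for k, p in preds.items():
    p = np.asarray(p)
    print(f"k={k:2d}  bias={p.mean() - f(x0).item():+.3f}  variance={p.var():.3f}")
```

With settings like these, the 1-NN predictions typically come out nearly unbiased but with variance on the order of the noise variance, while averaging over 25 neighbors shrinks the variance at the cost of some bias, in line with the quoted statement.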
