Pattern Recognition and Neural Networks. Cambridge University Press, 2007. 403 pages. This 1996 book explains the statistical framework for pattern recognition and machine learning, now in paperback.
Contents
Introduction and Examples | 1 |
1.1 How do neural methods differ? | 4 |
1.2 The pattern recognition task | 5 |
1.3 Overview of the remaining chapters | 9 |
1.4 Examples | 10 |
1.5 Literature | 15 |
Statistical Decision Theory | 17 |
2.1 Bayes rules for known distributions | 18 |
2.2 Parametric models | 26 |
2.3 Logistic discrimination | 43 |
2.4 Predictive classification | 45 |
2.5 Alternative estimation procedures | 55 |
2.6 How complex a model do we need? | 59 |
2.7 Performance assessment | 66 |
2.8 Computational learning approaches | 77 |
Linear Discriminant Analysis | 91 |
3.1 Classical linear discrimination | 92 |
3.2 Linear discriminants via regression | 101 |
3.3 Robustness | 105 |
3.4 Shrinkage methods | 106 |
3.5 Logistic discrimination | 109 |
3.6 Linear separation and perceptrons | 116 |
Flexible Discriminants | 121 |
4.1 Fitting smooth parametric functions | 122 |
4.2 Radial basis functions | 131 |
4.3 Regularization | 136 |
Feed-forward Neural Networks | 143 |
5.1 Biological motivation | 145 |
5.2 Theory | 147 |
5.3 Learning algorithms | 148 |
5.4 Examples | 160 |
5.5 Bayesian perspectives | 163 |
5.6 Network complexity | 168 |
5.7 Approximation results | 173 |
Non-parametric Methods | 181 |
6.2 Nearest neighbour methods | 191 |
6.3 Learning vector quantization | 201 |
6.4 Mixture representations | 207 |
Tree-structured Classifiers | 213 |
7.1 Splitting rules | 216 |
7.2 Pruning rules | 221 |
7.3 Missing values | 231 |
7.4 Earlier approaches | 235 |
7.5 Refinements | 237 |
7.6 Relationships to neural networks | 240 |
7.7 Bayesian trees | 241 |
Belief Networks | 243 |
8.1 Graphical models and networks | 246 |
8.2 Causal networks | 262 |
8.3 Learning the network structure | 275 |
8.4 Boltzmann machines | 279 |
8.5 Hierarchical mixtures of experts | 283 |
Unsupervised Methods | 287 |
9.1 Projection methods | 288 |
9.2 Multidimensional scaling | 305 |
9.3 Clustering algorithms | 311 |
9.4 Self-organizing maps | 322 |
Finding Good Pattern Features | 327 |
10.1 Bounds for the Bayes error | 328 |
10.2 Normal class distributions | 329 |
10.3 Branch-and-bound techniques | 330 |
10.4 Feature extraction | 331 |
Statistical Sidelines | 333 |
A.2 The EM algorithm | 334 |
A.3 Markov chain Monte Carlo | 337 |
A.4 Axioms for conditional independence | 339 |
A.5 Optimization | 342 |
Glossary | 347 |
References | 355 |
Common terms and phrases
algorithm applied approach approximation asymptotic average Bayes risk Bayesian binary bound Breiman canonical variate choose class densities classifier clique clusters codebook conditional independence consider convergence covariance matrix cross-validation Cushing's syndrome d-separated density estimation deviance dimension distance distribution error rate example Figure Gibbs sampler gives graph hidden layer hidden units IEEE IEEE Transactions inputs Journal kernel Kohonen linear discriminant log-likelihood logistic Machine Learning Mahalanobis distance Markov maximize maximum likelihood measure methods minimize mixture moral graph multivariate Neural Computation Neural Networks node normal optimal outliers parameters pattern recognition perceptron posterior probabilities Pr{X predictive Pregnanetriol principal components prior problem procedure projection pursuit Proposition pruning quadratic random variables regression Royal Statistical Society sample Section shows smoothing splines split Statistical Society series subset test set Tetrahydrocortisone Theory training set update values variance VC dimension vector vertex vertices WinF zero