					| Title: | 
					Neural network learning : theoretical foundations | 
				 
					| Document type: | 
					printed text | 
				 
					| Authors: | 
					Anthony, Martin (1941-....), Author ; Bartlett, Peter L., Author | 
				 
					| Publisher: | 
					Cambridge : Cambridge University Press | 
				 
					| Publication year: | 
					1999 | 
				 
					| Extent: | 
					XIV, 389 p. | 
				 
					| Physical details: | 
					ill. | 
				 
					| Format: | 
					23 cm | 
				 
					| ISBN/ISSN/EAN: | 
					978-0-521-11862-0 | 
				 
					| Languages: | 
					English (eng) | 
				 
					| Keywords: | 
					Neural networks (Computer science) 
Neural computers 
Algorithms | 
				 
					| Decimal classification: | 
					681.3.022 Peripherals. Connected (on-line). Terminals. | 
				 
					| Abstract: | 
					This book describes theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems and addresses the key statistical and computational questions. Research on pattern classification with binary-output networks is surveyed, including a discussion of the relevance of the Vapnik-Chervonenkis dimension and estimates of that dimension for several neural network models. A model of classification by real-output networks is developed, and the usefulness of classification with a 'large margin' is demonstrated. The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large margin classification and in real-valued prediction. They also discuss the computational complexity of neural network learning, describing a variety of hardness results and outlining two efficient constructive learning algorithms. The book is self-contained and is intended to be accessible to researchers and graduate students in computer science, engineering, and mathematics. | 
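An illustrative aside, not part of the catalogue record: the "key statistical question" the abstract mentions is how many labelled examples suffice for learning, and a standard answer of the kind developed in Part I is that, in the realisable setting, a class of binary-valued functions with VC-dimension d can be learned to error at most \epsilon, with probability at least 1 - \delta, from

    m(\epsilon, \delta) = O\!\left( \frac{1}{\epsilon} \left( d \ln\frac{1}{\epsilon} + \ln\frac{1}{\delta} \right) \right)

examples, with a matching lower bound of \Omega\!\left( \frac{d + \ln(1/\delta)}{\epsilon} \right). The book's exact statements and constants may differ; Chapters 4 and 5 of the contents below treat upper and lower bounds of this form in generality.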
				 
					| Contents note: | 
					Contents: 
1. Introduction 
Part I. Pattern Recognition with Binary-output Neural Networks 
2. The pattern recognition problem 
3. The growth function and VC-dimension 
4. General upper bounds on sample complexity 
5. General lower bounds 
6. The VC-dimension of linear threshold networks 
7. Bounding the VC-dimension using geometric techniques 
8. VC-dimension bounds for neural networks 
Part II. Pattern Recognition with Real-output Neural Networks 
9. Classification with real values 
10. Covering numbers and uniform convergence 
11. The pseudo-dimension and fat-shattering dimension 
12. Bounding covering numbers with dimensions 
13. The sample complexity of classification learning 
14. The dimensions of neural networks 
15. Model selection 
Part III. Learning Real-Valued Functions 
16. Learning classes of real functions 
17. Uniform convergence results for real function classes 
18. Bounding covering numbers 
19. The sample complexity of learning function classes 
20. Convex classes 
21. Other learning problems 
Part IV. Algorithmics 
22. Efficient learning 
23. Learning as optimisation 
24. The Boolean perceptron 
25. Hardness results for feed-forward networks 
26. Constructive learning algorithms for two-layered networks | 
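Not part of the catalogue record: the sketch below is an illustrative aside on the central concept of Chapters 3 and 6 above (the VC-dimension, and its value for linear threshold units). It brute-force checks whether a point set is shattered by linear threshold functions, deciding each dichotomy by LP feasibility; the function names and the use of scipy are this note's own choices, assumed for illustration only.

import itertools
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, y):
    """Test whether some w, b satisfy y_i * (w . x_i + b) >= 1 for all i.

    For separable data a solution with margin 1 exists after rescaling w,
    so this is equivalent to strict linear separability.
    """
    n, d = X.shape
    # Variables z = (w_1, ..., w_d, b); rewrite the constraints as
    # -y_i * (x_i . w + b) <= -1 and solve a feasibility LP (zero objective).
    A_ub = -y[:, None] * np.hstack([X, np.ones((n, 1))])
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=-np.ones(n),
                  bounds=[(None, None)] * (d + 1), method="highs")
    return res.status == 0  # status 0: feasible point found; 2: infeasible

def shattered(X):
    """True iff linear threshold functions realise all 2^n dichotomies of X."""
    n = X.shape[0]
    return all(linearly_separable(X, np.array(labels))
               for labels in itertools.product([-1, 1], repeat=n))

# Three points in general position in the plane are shattered, but no four
# points are: the VC-dimension of linear threshold units on R^2 is 3.
triangle = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
square = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(shattered(triangle))  # True
print(shattered(square))    # False: the XOR dichotomy is not separable

This prints True for the triangle and False for the square, matching the general fact treated in Chapter 6 that the VC-dimension of a linear threshold unit with d real inputs is d + 1.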
				  
 