Library registration is open online via the site: https://biblio.enp.edu.dz
Re-registration takes place at:
• the Bibliothèque Annexe for 2nd-year CPST students
• the Bibliothèque Centrale for students in the specialisation years
Collection detail
Sub-collection: Robotics: vision, manipulation and sensors
- Publisher: Kluwer Academic Publishers
- Series: The Kluwer International Series in Engineering and Computer Science
- ISSN: no ISSN
Documents available in the sub-collection
Neural network perception for mobile robot guidance / Pomerleau, Dean A.
Title: Neural network perception for mobile robot guidance
Document type: printed text
Authors: Pomerleau, Dean A., Author
Publisher: Boston : Kluwer Academic Publishers
Publication year: 1993
Series: The Kluwer International Series in Engineering and Computer Science
Sub-series: Robotics: vision, manipulation and sensors, no. 239
Extent: XIV-191 p.
Physical details: ill.
Format: 24 cm
ISBN/ISSN/EAN: 978-0-7923-9373-3
General note: Bibliogr. p. [179]-186. Index.
Languages: English (eng)
Keywords: Mobile robots
Neural networks (Computer science)
Robots -- Control systems
Commande automatique
Intelligence artificielle
Robotique
Robots mobiles
Réseaux neuronaux (informatique)
Decimal classification: 62-52 Machines and processes operated or controlled automatically
Abstract: Vision-based mobile robot guidance has proven difficult for classical machine vision methods because of the diversity and real-time constraints inherent in the task. This book describes a connectionist system called ALVINN (Autonomous Land Vehicle In a Neural Network) that overcomes these difficulties. ALVINN learns to guide mobile robots using the back-propagation training algorithm. Because of its ability to learn from example, ALVINN can adapt to new situations and therefore cope with the diversity of the autonomous navigation task.
But real world problems like vision based mobile robot guidance present a different set of challenges for the connectionist paradigm. Among them are: how to develop a general representation from a limited amount of real training data, how to understand the internal representations developed by artificial neural networks, how to estimate the reliability of individual networks, how to combine multiple networks trained for different situations into a single system, and how to combine connectionist perception with symbolic reasoning.
Neural Network Perception for Mobile Robot Guidance presents novel solutions to each of these problems. Using these techniques, the ALVINN system can learn to control an autonomous van in under 5 minutes by watching a person drive. Once trained, individual ALVINN networks can drive in a variety of circumstances, including single-lane paved and unpaved roads, and multi-lane lined and unlined roads, at speeds of up to 55 miles per hour. The techniques are also shown to generalize to the task of controlling the precise foot placement of a walking robot.
Contents:
* Network Architecture.
* Training Networks "On-The-Fly".
* Training Networks With Structured Noise.
* Driving Results and Performance.
* Analysis of Network Representations.
* Rule-Based Multi-network Arbitration.
* Output Appearance Reliability Estimation.
* Input Reconstruction Reliability Estimation.
* Other Applications: The SM².
* Other Vision-based Robot Guidance Methods.
Holdings
Barcode   Call number   Medium   Location                Section        Availability   Copy condition
041556    62-52 POM     Paper    Bibliothèque Centrale   Automatique    Available
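The abstract above summarises how ALVINN works: a feed-forward network trained by back-propagation to map a coarse road image to a steering command. The following is a minimal illustrative sketch of that idea only; the layer sizes, the synthetic "road" images and the steering bins are assumptions chosen for brevity, not the book's implementation.

```python
# Illustrative only: an ALVINN-style setup in which a small feed-forward
# network is trained with plain back-propagation to map a coarse "camera"
# image to a steering bin. All sizes and the synthetic road data are assumed.
import numpy as np

rng = np.random.default_rng(0)
H, W, N_STEER = 15, 16, 9               # retina height/width, steering bins

def synthetic_road(col):
    """A noisy image whose bright vertical band is centred at column `col`."""
    img = 0.1 * rng.random((H, W))
    img[:, max(col - 1, 0):col + 2] += 1.0
    return img.ravel()

def one_hot(k, n):
    v = np.zeros(n)
    v[k] = 1.0
    return v

# training set: road position -> desired steering bin (hard left ... hard right)
cols = rng.integers(1, W - 1, size=500)
X = np.stack([synthetic_road(c) for c in cols])
Y = np.stack([one_hot(int(c * N_STEER / W), N_STEER) for c in cols])

# one hidden layer of sigmoid units, squared-error loss, batch back-propagation
n_in, n_hid = H * W, 5
W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, N_STEER)); b2 = np.zeros(N_STEER)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(1000):
    h = sig(X @ W1 + b1)                 # forward pass
    out = sig(h @ W2 + b2)
    d_out = (out - Y) * out * (1 - out)  # back-propagate the output error
    d_hid = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X);  b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_hid / len(X);  b1 -= lr * d_hid.mean(0)

# "drive": query the network with a road lying towards the right of the image
test = synthetic_road(12)
print("chosen steering bin:", int(np.argmax(sig(sig(test @ W1 + b1) @ W2 + b2))))
```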
Qualitative motion understanding / Wilhelm Burger
Title: Qualitative motion understanding
Document type: printed text
Authors: Wilhelm Burger, Author ; Bir Bhanu, Author
Publisher: Boston : Kluwer Academic Publishers
Publication year: 1992
Series: The Kluwer International Series in Engineering and Computer Science
Sub-series: Robotics: vision, manipulation and sensors, no. 184
Extent: XIII-210 p.
Physical details: ill.
Format: 24 cm
ISBN/ISSN/EAN: 978-0-7923-9251-4
General note: Bibliogr. p. [199]-206. Index
Languages: English (eng)
Keywords: Robots -- Motion
Computer vision
Artificial intelligence
Decimal classification: 62-52 Machines and processes operated or controlled automatically
Abstract: Qualitative Motion Understanding describes a qualitative approach to dynamic scene and motion analysis, called DRIVE (Dynamic Reasoning from Integrated Visual Evidence). The DRIVE system addresses the problems of estimating the robot's own motion, reconstructing the observed 3-D scene structure, and evaluating the motion of individual objects from a sequence of monocular images. The approach is based on the FOE (focus of expansion) concept, but it takes a somewhat unconventional route. The DRIVE system uses a qualitative scene model and a fuzzy focus of expansion to estimate robot motion from visual cues, to detect and track moving objects, and to construct and maintain a global dynamic reference model.
Contents:
* Framework for qualitative motion understanding.
* Effects of camera motion.
* Decomposing image motion.
* The fuzzy FOE.
* Reasoning about structure and motion.
* The qualitative scene model.
* Examples.
Holdings
Barcode   Call number   Medium   Location                Section        Availability   Copy condition
041502    62-52 BUR     Paper    Bibliothèque Centrale   Automatique    Available
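The DRIVE abstract above is built around the focus of expansion (FOE): under pure camera translation, image flow vectors radiate from a single image point. The sketch below shows one standard way to estimate such a point from noisy flow vectors by least squares; it only illustrates the FOE concept, not the book's fuzzy-FOE algorithm, and the synthetic flow field is an assumption.

```python
# Illustrative only: least-squares estimation of a focus of expansion from a
# synthetic, noisy flow field. Under pure translation each flow vector lies on
# a line through the FOE, so the FOE is the point minimising its squared
# distance to all of those lines.
import numpy as np

rng = np.random.default_rng(1)
true_foe = np.array([40.0, 25.0])

# synthetic flow: vectors radiate away from the FOE, plus a little noise
pts = rng.uniform(0.0, 100.0, size=(200, 2))
flow = 0.05 * (pts - true_foe) + rng.normal(0.0, 0.05, size=(200, 2))

def estimate_foe(points, flows):
    """Least-squares intersection point of the lines (point, flow direction)."""
    d = flows / np.linalg.norm(flows, axis=1, keepdims=True)
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)       # unit normals to the flow
    A = np.einsum('ki,kj->ij', n, n)                # sum over k of n n^T
    b = np.einsum('ki,kj,kj->i', n, n, points)      # sum over k of n (n . p)
    return np.linalg.solve(A, b)

print("estimated FOE:", estimate_foe(pts, flow))    # should be near (40, 25)
```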
Efficient dynamic simulation of robotic mechanisms / Kathryn W. Lilly
Title: Efficient dynamic simulation of robotic mechanisms
Document type: printed text
Authors: Kathryn W. Lilly, Author
Publisher: Boston : Kluwer Academic Publishers
Publication year: 1993
Series: The Kluwer International Series in Engineering and Computer Science
Sub-series: Robotics: vision, manipulation and sensors, no. 203
Extent: 136 p.
Physical details: ill.
Format: 24 cm
ISBN/ISSN/EAN: 978-0-7923-9286-6
General note: Bibliogr. p. 129-132. Index
Languages: English (eng)
Keywords: Robots -- Dynamics -- Mathematical models
Robots -- Computer simulation
Commande automatique
Robots industriels
Decimal classification: 62-52 Machines and processes operated or controlled automatically
Abstract: Presents computationally efficient algorithms for the dynamic simulation of closed-chain robotic systems. In particular, this work investigates the simulation of single closed chains and simple closed-chain mechanisms. In addition to computational efficiency, it retains as much physical insight as possible during algorithm derivation.
Contents:
* System Modeling and Notation.
* Alternate Formulations for the Joint Inertia Matrix.
* Alternate Formulations for the Operational Space Inertia Matrix.
* Efficient Dynamic Simulation of a Single Closed Chain.
* Efficient Dynamic Simulation of Simple Closed-Chain Mechanisms.
Holdings
Barcode   Call number   Medium   Location                Section        Availability   Copy condition
041525    62-52 LIL     Paper    Bibliothèque Centrale   Automatique    Available
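Dynamic simulation of the kind discussed above amounts to forming the joint-space inertia matrix M(q), solving M(q) qdd = tau - c(q, qd) for the joint accelerations, and integrating. The sketch below does this the straightforward (not the efficient) way for a planar two-link arm with made-up parameters; it illustrates the problem being solved, not the closed-chain algorithms derived in the book.

```python
# Illustrative only: textbook forward dynamics of a planar two-link arm with
# point masses at the link tips (all parameters are assumptions).
import numpy as np

m1 = m2 = 1.0
l1 = l2 = 1.0
g = 9.81

def forward_dynamics(q, qd, tau):
    """Joint accelerations from M(q) qdd = tau - bias(q, qd)."""
    c2, s2 = np.cos(q[1]), np.sin(q[1])
    M = np.array([                                   # joint-space inertia matrix
        [m1*l1**2 + m2*(l1**2 + 2*l1*l2*c2 + l2**2), m2*(l1*l2*c2 + l2**2)],
        [m2*(l1*l2*c2 + l2**2),                      m2*l2**2],
    ])
    bias = np.array([                                # Coriolis/centrifugal + gravity
        -m2*l1*l2*s2*(2*qd[0]*qd[1] + qd[1]**2)
            + (m1 + m2)*g*l1*np.cos(q[0]) + m2*g*l2*np.cos(q[0] + q[1]),
        m2*l1*l2*s2*qd[0]**2 + m2*g*l2*np.cos(q[0] + q[1]),
    ])
    return np.linalg.solve(M, tau - bias)

# simulate one second of passive motion with a simple explicit-Euler step
q, qd, dt = np.array([0.3, 0.6]), np.zeros(2), 1e-3
for _ in range(1000):
    qdd = forward_dynamics(q, qd, np.zeros(2))
    qd = qd + dt * qdd
    q = q + dt * qd
print("joint angles after 1 s:", q)
```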
Bayesian modeling of uncertainty in low-level vision / Richard Szeliski
Title: Bayesian modeling of uncertainty in low-level vision
Document type: printed text
Authors: Richard Szeliski, Author ; Takeo Kanade, Preface author
Publisher: Boston : Kluwer Academic Publishers
Publication year: 1989
Series: The Kluwer International Series in Engineering and Computer Science
Sub-series: Robotics: vision, manipulation and sensors, no. 79
Extent: XVII-198 p.
Physical details: ill.
Format: 24 cm
ISBN/ISSN/EAN: 978-0-7923-9039-8
General note: Index
Languages: English (eng)
Keywords: Théorème de Bayes
Perception visuelle
Algorithmes
Computer vision -- Mathematical models
Vision par ordinateur -- Modèles mathématiques
Decimal classification: 621.391 General concepts of electrical communications engineering. Cybernetics. Information theory. Signal theory
Abstract: Bayesian Modeling of Uncertainty in Low-Level Vision develops a probabilistic model for low-level vision problems such as surface interpolation and depth from motion. The model allows us to describe the uncertainty in the output of low-level vision algorithms. This uncertainty is inherent in all vision applications because of the noisy nature of real sensors. Modeling this uncertainty allows us to create algorithms that are more robust with respect to this noise.
Contents:
* Introduction.
* Representations for low-level vision.
* Bayesian models and Markov Random Fields.
* Prior models.
* Sensor models.
* Posterior estimates.
...
Holdings
Barcode   Call number   Medium   Location                Section        Availability   Copy condition
041567    621.391 SZE   Paper    Bibliothèque Centrale   Electronique   Available      In good condition
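The contents above move from prior models and sensor models to posterior estimates. A small sketch of that general idea follows, under simplifying assumptions (1-D signal, a Gaussian first-difference smoothness prior, independent Gaussian measurement noise); with those choices the MAP estimate reduces to solving one linear system. This is an illustration of the prior-plus-sensor-model pattern, not the book's formulation.

```python
# Illustrative only: MAP surface interpolation from sparse, noisy measurements
# under a Gaussian smoothness prior and a Gaussian sensor model (all assumed).
import numpy as np

rng = np.random.default_rng(2)
n = 100
truth = np.sin(np.linspace(0.0, 3 * np.pi, n))       # the "true" surface

# sensor model: sparse, noisy depth samples  z = u[idx] + Gaussian noise
idx = rng.choice(n, size=15, replace=False)
sigma = 0.1
z = truth[idx] + rng.normal(0.0, sigma, size=idx.size)

# prior model: penalise first differences (a membrane-style smoothness prior)
D = np.diff(np.eye(n), axis=0)                       # (n-1) x n difference operator
lam = 50.0                                           # prior strength (assumed)

# posterior (MAP) estimate: minimise  lam*|D u|^2 + |H u - z|^2 / sigma^2
H = np.zeros((idx.size, n))
H[np.arange(idx.size), idx] = 1.0
A = lam * D.T @ D + H.T @ H / sigma**2
b = H.T @ z / sigma**2
u_map = np.linalg.solve(A, b)

print("RMS error of the interpolated surface:", np.sqrt(np.mean((u_map - truth)**2)))
```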
Measurement of image velocity / David J. Fleet
Title: Measurement of image velocity
Document type: printed text
Authors: David J. Fleet, Author ; Allan D. Jepson, Preface author
Publisher: Boston : Kluwer Academic Publishers
Publication year: 1992
Series: The Kluwer International Series in Engineering and Computer Science
Sub-series: Robotics: vision, manipulation and sensors, no. 169
Extent: XIII-203 p.
Physical details: ill.
Format: 24 cm
ISBN/ISSN/EAN: 978-0-7923-9198-2
General note: Bibliogr. p. 111-200. Index
Languages: English (eng)
Keywords: Computer vision
Image processing
Motion
Measurement
Decimal classification: 621.397 Video technology. Television engineering. Video recording, transmission and reproduction. Video equipment and networks
Abstract: Presents a computational framework for computing motion information from sequences of images. Its specific goal is the measurement of image velocity (or optical flow), the projection of 3-D object motion onto the 2-D image plane.
Contents:
* Background.
* Phase-Based Velocity Measurement.
* On Phase Properties of Band-Pass Signals.
* Conclusions.
Holdings
Barcode   Call number   Medium   Location                Section        Availability   Copy condition
041603    621.397 FLE   Paper    Bibliothèque Centrale   Electronique   Available
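Phase-based velocity measurement, as named in the contents above, recovers image velocity from the phase of band-pass filter outputs: the temporal phase difference divided by the spatial phase gradient gives the displacement between frames. The 1-D sketch below is an assumed construction of that idea with a made-up Gabor filter and synthetic texture, not the implementation described in the book.

```python
# Illustrative only: 1-D phase-based velocity measurement. Both frames are
# filtered with a complex Gabor; velocity = -(temporal phase difference) /
# (spatial phase gradient). Filter and texture parameters are assumptions.
import numpy as np

rng = np.random.default_rng(3)
n, v_true = 256, 1.7                                  # pixels, shift per frame
x = np.arange(n, dtype=float)
freqs = np.arange(5, 40)                              # texture frequencies (cycles)
phases = rng.random(freqs.size) * 2 * np.pi

def frame(shift):
    """A smooth 1-D random texture sampled at x - shift."""
    return sum(np.cos(2 * np.pi * f * (x - shift) / n + p)
               for f, p in zip(freqs, phases))

f0, f1 = frame(0.0), frame(v_true)

# complex Gabor filter tuned to a wavelength of ~10 pixels
sigma, wavelength = 6.0, 10.0
k = np.arange(-20, 21, dtype=float)
gabor = np.exp(-k**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * k / wavelength)
r0 = np.convolve(f0, gabor, mode='same')
r1 = np.convolve(f1, gabor, mode='same')

# wrapped spatial phase gradient and temporal phase difference
phi_x = np.angle(r0[1:] * np.conj(r0[:-1]))
dphi_t = np.angle(r1[:-1] * np.conj(r0[:-1]))

# keep locations where the filter response (and hence the phase) is reliable
valid = np.abs(r0[:-1]) > 0.5 * np.abs(r0).max()
v_est = np.median(-dphi_t[valid] / phi_x[valid])
print("estimated velocity (pixels/frame):", v_est)    # should be close to 1.7
```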
Further documents in the sub-collection (permalinks only):
Task-directed sensor fusion and planning / Gregory D. Hager
Computer analysis of visual textures / Tomita, Fumiaki
Control of machines with friction / Armstrong-Hélouvry, Brian
Robotic object recognition using vision and touch / Peter K. Allen
Intelligent robotic systems / Kimon P. Valavanis
Vision and navigation
Parallel architectures and parallel algorithms for integrated vision systems / Alok N. Choudhary
Space robotics
Intelligent robotic systems for space exploration
A pyramid framework for early vision / Jean-Michel Jolion