Author | Bayerl, Pierre | dc.contributor.author |
Date of accession | 2016-03-14T13:38:46Z | dc.date.accessioned |
Available in OPARU since | 2016-03-14T13:38:46Z | dc.date.available |
Year of creation | 2005 | dc.date.created |
Abstract | The neural mechanisms underlying the segregation and integration of detected motion remain largely unclear. The motion of an extended boundary can be measured locally by neurons only in the direction orthogonal to its orientation (the aperture problem), whereas this ambiguity is resolved for localized image features. In this thesis, a novel neural model of visual motion processing is developed that involves early stages of the cortical dorsal and ventral pathways of the primate brain to integrate and segregate visual motion and, in particular, to solve the motion aperture problem. Our model makes predictions concerning the time course of cell responses in areas MT and V1 and serves as a means to link physiological mechanisms with perceptual behavior. We further demonstrate that our model successfully processes natural image sequences. Moreover, we present several extensions of the neural model to investigate the influence of form information, the effects of attention, and the perception of transparent motion. The major computational bottleneck of the presented neural model is the amount of memory required to represent neural activity. To derive a computational mechanism for large-scale simulations, we propose a sparse coding framework for neural motion activity patterns and suggest a means by which initial activities can be detected efficiently. The presented work combines concepts and findings from computational neuroscience, neurophysiology, psychophysics, and computer science. The outcome of our investigations is a biologically plausible model of motion segmentation, together with a fast algorithmic implementation, that explains and predicts perceptual and neural effects of motion perception and allows optic flow to be extracted from given image sequences. | dc.description.abstract |
Language | en | dc.language.iso |
Publisher | Universität Ulm | dc.publisher |
License | Standard (version of 03.05.2003) | dc.rights |
Link to license text | https://oparu.uni-ulm.de/xmlui/license_v1 | dc.rights.uri |
Keyword | Computational models of vision | dc.subject |
Keyword | Feature attention | dc.subject |
Keyword | Motion aperture problem | dc.subject |
Keyword | Motion estimation | dc.subject |
Keyword | Motion transparency | dc.subject |
Keyword | Optic flow | dc.subject |
Keyword | Recurrent information processing | dc.subject |
Keyword | Visual attention | dc.subject |
Dewey Decimal Group | DDC 004 / Data processing & computer science | dc.subject.ddc |
LCSH | Algorithms | dc.subject.lcsh |
LCSH | Computer vision | dc.subject.lcsh |
LCSH | Motion perception (Vision) | dc.subject.lcsh |
LCSH | Neural networks (Computer science) | dc.subject.lcsh |
LCSH | Visual perception | dc.subject.lcsh |
Title | A model of visual motion perception | dc.title |
Resource type | Dissertation | dc.type |
DOI | http://dx.doi.org/10.18725/OPARU-361 | dc.identifier.doi |
URN | http://nbn-resolving.de/urn:nbn:de:bsz:289-vts-56293 | dc.identifier.urn |
GND | Bewegungssehen | dc.subject.gnd |
GND | Bewegungswahrnehmung | dc.subject.gnd |
Faculty | Fakultät für Informatik | uulm.affiliationGeneral |
Date of activation | 2006-07-04T11:35:28Z | uulm.freischaltungVTS |
Peer review | no | uulm.peerReview |
Shelfmark print version | Z: J-H 11.167 ; W. W-H 9.285 | uulm.shelfmark |
DCMI Type | Text | uulm.typeDCMI |
VTS ID | 5629 | uulm.vtsID |
Category | Publications | uulm.category |
Bibliography | uulm | uulm.bibliographie |
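
The aperture problem described in the abstract above can be illustrated with a short, self-contained sketch. The code below is not part of the dissertation or its neural model; it is a minimal gradient-based (Lucas-Kanade style) flow estimator applied to synthetic stimuli, and all function names, window sizes, and stimulus parameters are illustrative assumptions. It shows that a pattern with a single orientation (an extended edge-like grating) constrains only the motion component orthogonal to that orientation, while a pattern containing two orientations (a localized, plaid-like feature) determines the full velocity.

```python
# A minimal sketch (not from the thesis): a gradient-based, Lucas-Kanade-style
# flow estimator on synthetic stimuli, illustrating the motion aperture problem.
# All names, window sizes, and stimulus parameters are illustrative assumptions.
import numpy as np

def local_flow(frame0, frame1, y, x, r=4):
    """Least-squares flow (vx, vy) from spatio-temporal gradients in a local window."""
    Iy, Ix = np.gradient(frame0)          # spatial derivatives (axis 0 = y, axis 1 = x)
    It = frame1 - frame0                  # temporal derivative
    win = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)
    b = -It[win].ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    # The smallest eigenvalue of A^T A reveals the aperture problem: with a single
    # orientation the motion component along the edge is unconstrained (eigenvalue ~ 0).
    min_eig = np.linalg.eigvalsh(A.T @ A)[0]
    return flow, min_eig

yy, xx = np.mgrid[0:64, 0:64].astype(float)
true_v = (1.0, 0.5)                        # true pattern displacement (dx, dy) in pixels

# Single-orientation pattern (extended edge-like grating) vs. two orientations (plaid).
grating = lambda dx, dy: np.sin(0.3 * (xx - dx))
plaid   = lambda dx, dy: np.sin(0.3 * (xx - dx)) + np.sin(0.3 * (yy - dy))

for name, stimulus in [("grating", grating), ("plaid", plaid)]:
    f0, f1 = stimulus(0.0, 0.0), stimulus(*true_v)
    flow, min_eig = local_flow(f0, f1, y=32, x=32)
    print(f"{name:7s} estimated (vx, vy) = {np.round(flow, 2)}, min eigenvalue = {min_eig:.3f}")
```

For the grating, the estimator should recover only the component orthogonal to the pattern orientation (vx near 1, vy left at 0) with a vanishing smallest eigenvalue, whereas the plaid should yield the full displacement near (1.0, 0.5). The model described in the abstract addresses this ambiguity through recurrent information processing across early cortical stages rather than through such a local least-squares fit.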