
Author: Weidenbacher, Ulrich (dc.contributor.author)
Date of accession: 2016-03-15T06:22:56Z (dc.date.accessioned)
Available in OPARU since: 2016-03-15T06:22:56Z (dc.date.available)
Year of creation: 2010 (dc.date.created)
Abstract (dc.description.abstract): The human visual system segments 3D scenes into surfaces and objects that can appear at different depths with respect to the observer. The projection from 3D to 2D leads to partial occlusions of objects depending on their position in depth. There is experimental evidence that surface-based features (e.g. occluding contours or junctions) are used as cues for the robust segmentation of surfaces. These features are characterized by their robustness against variations of illumination and small changes in viewpoint. We demonstrate that this feature representation can be used to extract a sketch-like representation of salient features that captures and emphasizes perceptually relevant regions on objects and surfaces. Furthermore, this representation is also suitable for learning more complex form patterns such as faces and bodies in different postures. In this thesis, we present a biologically inspired model which extracts and interprets surface-based features from a 2D grayscale intensity image. The neural model architecture is characterized by feedforward and feedback processing between functional areas in the dorsal and ventral stream of the primate visual system. In the ventral pathway, prototypical views of head and body poses (snapshots) as well as their temporal appearances are learned in an unsupervised manner in a two-layer network. In the dorsal pathway, velocity patterns are generated and learned from local motion detectors. Activity from both pathways is finally integrated to extract a combined signal from motion and form features. Based on these initial feature representations, we demonstrate a multi-layered learning scheme that is capable of learning form and motion features utilized for the detection of specific, behaviorally relevant motion patterns. We show that the combined representation of form and motion features is superior to single-pathway model approaches. (An illustrative code sketch of this form/motion combination is given after the record below.)
Language: en (dc.language.iso)
Publisher: Universität Ulm (dc.publisher)
License: Standard (version of 2008-10-01) (dc.rights)
Link to license text: https://oparu.uni-ulm.de/xmlui/license_v2 (dc.rights.uri)
Keyword: Form and motion combination (dc.subject)
Keyword: Form feature extraction (dc.subject)
Keyword: Learning mechanisms (dc.subject)
Keyword: Mirrored objects (dc.subject)
Keyword: Neural models (dc.subject)
Keyword: Sketch-like representation (dc.subject)
Dewey Decimal Group: DDC 004 / Data processing & computer science (dc.subject.ddc)
LCSH: Visual pathways (dc.subject.lcsh)
Title: Neural mechanisms of feature extraction for the analysis of shape and behavioral patterns (dc.title)
Resource type: Dissertation (dc.type)
DOI: http://dx.doi.org/10.18725/OPARU-1745 (dc.identifier.doi)
URN: http://nbn-resolving.de/urn:nbn:de:bsz:289-vts-75297 (dc.identifier.urn)
GND: Verhaltensmuster (dc.subject.gnd)
Faculty: Fakultät für Ingenieurwissenschaften und Informatik (uulm.affiliationGeneral)
Date of activation: 2011-02-03T09:30:23Z (uulm.freischaltungVTS)
Peer review: no (uulm.peerReview)
Shelfmark print version: Z: J-H 13.935; W: W-H 12.401 (uulm.shelfmark)
DCMI Type: Text (uulm.typeDCMI)
VTS ID: 7529 (uulm.vtsID)
Category: Publications (uulm.category)
Bibliography: uulm (uulm.bibliographie)
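The abstract describes a two-pathway architecture in which form responses (matches of the current image to learned prototype views, or "snapshots") and motion responses (from local motion detectors) are integrated into a combined detection signal. The following Python sketch is purely illustrative and is not the author's implementation: the function names, array shapes, the correlation-based snapshot matching, the frame-difference motion stand-in, and the weighting parameter w are all assumptions made for this example.

# Illustrative sketch only, not the model from the dissertation.
# Form pathway: correlation with hypothetical snapshot prototypes.
# Motion pathway: toy directional channels driven by frame differencing.
import numpy as np

rng = np.random.default_rng(0)

def form_responses(frame, snapshots):
    # Normalize the frame and each prototype, then correlate (one response per snapshot).
    flat = frame.ravel()
    flat = (flat - flat.mean()) / (flat.std() + 1e-8)
    protos = snapshots - snapshots.mean(axis=1, keepdims=True)
    protos = protos / (protos.std(axis=1, keepdims=True) + 1e-8)
    return protos @ flat / flat.size

def motion_responses(prev_frame, frame, n_directions=8):
    # Crude stand-in for local motion detectors: temporal-difference energy
    # spread over a fixed number of direction-tuned channels.
    energy = np.abs(frame - prev_frame).mean()
    phases = np.linspace(0, np.pi, n_directions, endpoint=False)
    return energy * np.cos(phases) ** 2

def combined_signal(form, motion, w=0.5):
    # Integrate both pathways; w trades off form against motion evidence.
    return w * form.max() + (1 - w) * motion.max()

# Toy usage: two random frames and three random "snapshot" prototypes.
frames = rng.normal(size=(2, 32, 32))
snapshots = rng.normal(size=(3, 32 * 32))
f = form_responses(frames[1], snapshots)
m = motion_responses(frames[0], frames[1])
print("combined form/motion signal:", combined_signal(f, m))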

