
Author: Dietrich, Christian R. (dc.contributor.author)
Date of accession: 2016-03-14T13:38:36Z (dc.date.accessioned)
Available in OPARU since: 2016-03-14T13:38:36Z (dc.date.available)
Year of creation: 2003 (dc.date.created)
Abstract: Classifying species by their sounds is a fundamental challenge in the study of animal vocalisations. Most existing studies are based on manual inspection and labelling of acoustic features, e.g. amplitude signals and sound spectra, and rely on agreement between human experts. Over the last ten years, however, systems for the automated classification of animal vocalisations have been developed. This thesis describes a system for the classification of Orthoptera species by their sounds and demonstrates its behaviour on a large data set containing sounds of 53 different species. The system consists of multiple classifiers, since previous studies have shown that for many applications the classification performance of a single-classifier system can be improved by combining the decisions of multiple classifiers. In particular, this thesis deals with classifier ensemble methods for time series classification applied to bioacoustic data. To this end, a set of local features is extracted inside a sliding time window that moves over the whole sound signal. Both the temporal combination of local features and the combination over the feature space are studied. Static combining paradigms, where the classifier outputs are simply combined through a fixed fusion mapping, and adaptive combining paradigms, where an additional fusion layer is trained through a second supervised learning procedure, are proposed and discussed. The decision template (DT) fusion scheme is an intuitive example of such a trainable fusion scheme and is typically applied to recognize static objects. During the second supervised learning step, the DT algorithm uses confusion matrix data to adapt the fusion layer. Many linear trainable decision fusion mappings, e.g. combination with the linear associative memory, the pseudoinverse matrix and the naive Bayes fusion scheme, are based on the same idea, and links between these methods are given. However, for the classification of temporal sequences these methods do not take the temporal variation of the classifier outputs into account. To deal with variations of classifier decisions within a time series, we propose computing multiple decision templates (MDTs) per class. Two new methods, temporal decision templates (TDTs) and clustered decision templates (CDTs), are introduced, and their behaviour is discussed on real data from the field of bioacoustics and on artificially generated data. (dc.description.abstract)
Language: en (dc.language.iso)
Publisher: Universität Ulm (dc.publisher)
License: Standard (Fassung vom 03.05.2003) (dc.rights)
Link to license text: https://oparu.uni-ulm.de/xmlui/license_v1 (dc.rights.uri)
Keyword: Associative memory (dc.subject)
Keyword: Classifier fusion (dc.subject)
Keyword: Decision template (dc.subject)
Keyword: Multiple classifier system (dc.subject)
Keyword: Multiple decision template (dc.subject)
Keyword: Naive Bayes fusion (dc.subject)
LCSH: Animal sounds (dc.subject.lcsh)
LCSH: Bioacoustics (dc.subject.lcsh)
LCSH: Orthoptera (dc.subject.lcsh)
Title: Temporal sensorfusion for the classification of bioacoustic time series (dc.title)
Resource type: Dissertation (dc.type)
DOI: http://dx.doi.org/10.18725/OPARU-312 (dc.identifier.doi)
URN: http://nbn-resolving.de/urn:nbn:de:bsz:289-vts-43227 (dc.identifier.urn)
GND: Bioakustik (dc.subject.gnd)
GND: Pseudoinverse Matrix (dc.subject.gnd)
Faculty: Fakultät für Informatik (uulm.affiliationGeneral)
Date of activation: 2004-07-05T15:29:19Z (uulm.freischaltungVTS)
Peer review: no (uulm.peerReview)
Shelfmark print version: Z: J-H 8.193 ; W: W-H 7.663 ; ZAV: J-H 9.182 (uulm.shelfmark)
DCMI Type: Text (uulm.typeDCMI)
VTS ID: 4322 (uulm.vtsID)
Category: Publikationen (uulm.category)
Bibliography: uulm (uulm.bibliographie)
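
Note: The abstract above outlines the decision template (DT) fusion scheme and a clustered variant (CDT). The Python sketch below is not the thesis implementation; it is a minimal illustration under the assumption that each base classifier produces soft class scores arranged in a decision profile of shape (number of classifiers x number of classes), and it uses k-means (via scikit-learn) as one plausible way to form multiple templates per class. All function and variable names are illustrative, not taken from the thesis.

    import numpy as np
    from sklearn.cluster import KMeans

    def train_decision_templates(profiles, labels, n_classes):
        # profiles: (n_samples, n_classifiers, n_classes) soft classifier outputs.
        # The decision template of a class is the mean decision profile of
        # the training samples belonging to that class.
        return np.stack([profiles[labels == c].mean(axis=0) for c in range(n_classes)])

    def train_clustered_templates(profiles, labels, n_classes, k=3):
        # One plausible reading of clustered decision templates (CDTs):
        # k-means on each class's decision profiles yields up to k templates per class.
        n_clf, n_cls = profiles.shape[1:]
        templates, template_class = [], []
        for c in range(n_classes):
            class_profiles = profiles[labels == c]
            flat = class_profiles.reshape(len(class_profiles), -1)
            km = KMeans(n_clusters=min(k, len(flat)), n_init=10).fit(flat)
            templates.append(km.cluster_centers_.reshape(-1, n_clf, n_cls))
            template_class += [c] * len(km.cluster_centers_)
        return np.concatenate(templates), np.array(template_class)

    def classify(profile, templates, template_class=None):
        # Assign the class of the nearest template, measured by the squared
        # Euclidean distance between the sample's decision profile and each template.
        d = ((templates - profile) ** 2).sum(axis=(1, 2))
        best = int(np.argmin(d))
        return best if template_class is None else int(template_class[best])

With a single template per class, the template index coincides with the class label; with clustered templates, the predicted class is that of the nearest cluster centre.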

