Show simple item record

Author: Bäuerle, Nicole (dc.contributor.author)
Date of accession: 2016-03-14T11:53:52Z (dc.date.accessioned)
Available in OPARU since: 2016-03-14T11:53:52Z (dc.date.available)
Year of creation: 1999 (dc.date.created)
Abstract (dc.description.abstract):

In manufacturing and telecommunication systems we often encounter the situation that events occur on different timescales. To obtain tractable models, we replace the quantities that vary faster by their averages, while keeping the slower process stochastic. Formulations of this type are common and important in stochastic modeling. In this paper we present a unified approach to the optimal control of such systems, which we call Stochastic Fluid Programs. We investigate the discounted cost criterion as well as the average cost criterion. Stochastic Fluid Programs are a special class of piecewise deterministic Markov processes with one exception: our model allows constraints on the actions, and the process can move along the boundary of the state space. First we prove that an optimal stationary policy exists for the discounted problem; this optimal stationary policy is the solution of a deterministic control problem. The average cost problem is solved via the vanishing discount approach. Moreover, we show that the value functions in both cases are constrained viscosity solutions of a Hamilton-Jacobi-Bellman equation, and we derive verification theorems. We apply our results to several examples, e.g. the stochastic single-server scheduling problem and the problem of routing to parallel queues.

In a second part we approximate control problems in stochastic queueing networks (which are known to be very hard) by fluid problems, which are special (purely deterministic) Stochastic Fluid Programs and are comparatively easy to solve. We show that the fluid value function provides an asymptotic lower bound on the value function of the stochastic network under fluid scaling. Moreover, we construct a so-called tracking policy for the stochastic queueing network which achieves this lower bound as the fluid scaling parameter tends to infinity; in this case the tracking policy is called asymptotically optimal. This statement holds for multiclass queueing networks and for admission and routing problems. Under some convexity assumptions the convergence is monotone. The tracking-policy approach also shows that a given fluid model solution can be attained as a fluid limit of the original discrete model.
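The fluid-scaling statement in the abstract can be sketched as follows. The notation here is illustrative and not taken from the record: $Q^n$ denotes the queue-length process of the stochastic network with scaling parameter $n$ and initial state $nx$, $V^n$ its value function, $V^F$ the value function of the fluid problem, and $\pi^n$ the tracking policy; the $n^2$ normalization is an assumption that fits linear holding costs over a time horizon stretched by $n$.

```latex
% Fluid scaling of the queue-length process (illustrative notation):
\bar Q^{\,n}(t) = \frac{1}{n}\, Q^{n}(nt), \qquad \bar Q^{\,n}(0) = x .

% Asymptotic lower bound via the fluid value function,
% and asymptotic optimality of the tracking policy:
\liminf_{n\to\infty} \frac{V^{n}(nx)}{n^{2}} \;\ge\; V^{F}(x),
\qquad
\lim_{n\to\infty} \frac{V^{\pi^{n},\,n}(nx)}{n^{2}} \;=\; V^{F}(x).
```

Read this way, the fluid problem gives a computable benchmark, and the tracking policy is asymptotically optimal precisely because its normalized cost attains that benchmark in the limit.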
Language: en (dc.language.iso)
Publisher: Universität Ulm (dc.publisher)
License: Standard (version of 03.05.2003) (dc.rights)
Link to license text: https://oparu.uni-ulm.de/xmlui/license_v1 (dc.rights.uri)
Keyword: Asymptotic optimality (dc.subject)
Keyword: Fluid models (dc.subject)
Keyword: Hamilton-Jacobi-Bellman equation (dc.subject)
Keyword: Manufacturing models (dc.subject)
Keyword: Markov decision processes (dc.subject)
Keyword: Piecewise deterministic Markov processes (dc.subject)
Keyword: Queueing networks (dc.subject)
Keyword: Scheduling and routing (dc.subject)
Keyword: Stochastic fluid programs (dc.subject)
Keyword: Weak convergence (dc.subject)
LCSH: Viscosity solutions (dc.subject.lcsh)
Title: Optimal control of Stochastic Fluid Programs (dc.title)
Resource type: Habilitationsschrift (habilitation thesis) (dc.type)
DOI: http://dx.doi.org/10.18725/OPARU-39 (dc.identifier.doi)
URN: http://nbn-resolving.de/urn:nbn:de:bsz:289-vts-4358 (dc.identifier.urn)
Faculty: Fakultät für Mathematik und Wirtschaftswissenschaften (uulm.affiliationGeneral)
Date of activation: 2000-02-11T14:21:40Z (uulm.freischaltungVTS)
Peer review: no (uulm.peerReview)
Shelfmark print version: N: J-H 5.065 ; W: W-H 6.310 (uulm.shelfmark)
DCMI Type: Text (uulm.typeDCMI)
VTS-ID: 435 (uulm.vtsID)
Category: Publikationen (publications) (uulm.category)

