
Author (dc.contributor.author): Bäuerle, Nicole
Date accessioned (dc.date.accessioned): 2016-03-14T11:53:52Z
Available in OPARU since (dc.date.available): 2016-03-14T11:53:52Z
Year created (dc.date.created): 1999
Abstract (dc.description.abstract): In manufacturing and telecommunication systems, events often occur on different timescales. To obtain appropriate models, we therefore replace the faster-varying quantities by their averages while keeping the slower process stochastic. Formulations of this type are common and important in stochastic modeling. In this work we present a unified approach to the optimal control of such systems, which we call Stochastic Fluid Programs. We investigate both the discounted cost criterion and the average cost criterion. Stochastic Fluid Programs form a special class of piecewise deterministic Markov processes, with one exception: our model allows constraints on the actions, and the process may move along the boundary of the state space. First, we prove that an optimal stationary policy exists for the discounted problem; this optimal stationary policy is the solution of a deterministic control problem. The average cost problem is solved via the vanishing discount approach. Moreover, we show that the value functions in both cases are constrained viscosity solutions of a Hamilton-Jacobi-Bellman equation and derive verification theorems. We apply our results to several examples, e.g. the stochastic single-server scheduling problem and the problem of routing to parallel queues.

In a second part, we approximate control problems in stochastic queueing networks (which are known to be very hard) by fluid problems, which are special (purely deterministic) Stochastic Fluid Programs and comparatively easy to solve. We show that the fluid value function provides an asymptotic lower bound on the value function of the stochastic network under fluid scaling. Moreover, we construct a so-called Tracking-Policy for the stochastic queueing network which achieves this lower bound as the fluid scaling parameter tends to infinity; in this case the Tracking-Policy is called asymptotically optimal. This result holds for multiclass queueing networks as well as for admission and routing problems. Under some convexity assumptions the convergence is monotone. The Tracking-Policy approach also shows that a given fluid model solution can be attained as a fluid limit of the original discrete model.
Language (dc.language.iso): en
Publisher (dc.publisher): Universität Ulm
License (dc.rights): Standard (version of 03.05.2003)
License text (dc.rights.uri): https://oparu.uni-ulm.de/xmlui/license_v1
Keyword (dc.subject): Asymptotic optimality
Keyword (dc.subject): Fluid models
Keyword (dc.subject): Hamilton-Jacobi-Bellman equation
Keyword (dc.subject): Manufacturing models
Keyword (dc.subject): Markov decision processes
Keyword (dc.subject): Piecewise deterministic Markov processes
Keyword (dc.subject): Queueing networks
Keyword (dc.subject): Scheduling and routing
Keyword (dc.subject): Stochastic fluid programs
Keyword (dc.subject): Weak convergence
LCSH (dc.subject.lcsh): Viscosity solutions
Title (dc.title): Optimal control of Stochastic Fluid Programs
Resource type (dc.type): Habilitation thesis (Habilitationsschrift)
DOI (dc.identifier.doi): http://dx.doi.org/10.18725/OPARU-39
URN (dc.identifier.urn): http://nbn-resolving.de/urn:nbn:de:bsz:289-vts-4358
Faculty (uulm.affiliationGeneral): Fakultät für Mathematik und Wirtschaftswissenschaften
Date of release (uulm.freischaltungVTS): 2000-02-11T14:21:40Z
Peer review (uulm.peerReview): no
Shelfmark of print copy (uulm.shelfmark): N: J-H 5.065 ; W: W-H 6.310
DCMI media type (uulm.typeDCMI): Text
VTS ID (uulm.vtsID): 435
Category (uulm.category): Publications

