Optimal control of Stochastic Fluid Programs

vts_435.pdf (1.604 MB)
118 pages
Date published
2000-02-11
Authors
Bäuerle, Nicole
Habilitation thesis (Habilitationsschrift)


Faculties
Fakultät für Mathematik und Wirtschaftswissenschaften
Abstract
In manufacturing and telecommunication systems we often encounter the situation that events occur on different timescales. To obtain tractable models we therefore replace the quantities that vary faster by their averages, while keeping the slower process stochastic. Formulations of this type are commonly used and important in stochastic modeling. In this work we give a unified approach to the optimal control of such systems, which we call Stochastic Fluid Programs. We investigate the discounted cost criterion as well as the average cost criterion. Stochastic Fluid Programs are a special class of piecewise deterministic Markov processes, with one exception: in our model we allow for constraints on the actions, and the process can move along the boundary of the state space. First we prove that an optimal stationary policy exists for the discounted problem; this policy is the solution of a deterministic control problem. The average cost problem is solved via the vanishing discount approach. Moreover, we show that the value functions in both cases are constrained viscosity solutions of a Hamilton-Jacobi-Bellman equation and derive verification theorems. We apply our results to several examples, e.g. the stochastic single-server scheduling problem and the problem of routing to parallel queues.

In a second part we approximate control problems in stochastic queueing networks, which are known to be very hard, by fluid problems, which are special (purely deterministic) Stochastic Fluid Programs and are rather easy to solve. We show that the fluid value function provides an asymptotic lower bound on the value function of the stochastic network under fluid scaling. Moreover, we construct a so-called Tracking-Policy for the stochastic queueing network which achieves this lower bound as the fluid scaling parameter tends to infinity; in this case the Tracking-Policy is called asymptotically optimal. This result holds for multiclass queueing networks as well as for admission and routing problems. The convergence is monotone under some convexity assumptions. The Tracking-Policy approach also shows that a given fluid model solution can be attained as a fluid limit of the original discrete model.
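For orientation, a minimal sketch in standard notation of the objects named in the abstract. The symbols below (running cost c, drift b, jump rate \lambda, post-jump kernel Q, discount rate \beta, fluid scaling parameter n) are generic placeholders, not the notation used in the thesis, and the equations show only the typical shape of these criteria, not the constrained formulation developed there.

% Discounted cost criterion: minimize expected discounted running cost over admissible policies \pi
V_\beta(x) \;=\; \inf_{\pi}\, \mathbb{E}_x^{\pi}\!\left[ \int_0^{\infty} e^{-\beta t}\, c(X_t, a_t)\, dt \right]

% A common form of the Hamilton-Jacobi-Bellman equation for a piecewise deterministic model;
% in the thesis the actions are constrained and the process may run along the boundary,
% so the equation is understood in the constrained viscosity sense:
\beta\, v(x) \;=\; \min_{a \in A(x)} \Bigl\{ c(x,a) \;+\; \nabla v(x) \cdot b(x,a)
  \;+\; \lambda(x,a) \int \bigl( v(y) - v(x) \bigr)\, Q(dy \mid x, a) \Bigr\}

% Asymptotic lower bound under fluid scaling, stated loosely: the scaled value functions
% of the stochastic network dominate the fluid value function in the limit
\liminf_{n \to \infty}\; \frac{1}{n}\, V^{(n)}(n x) \;\ge\; V^{F}(x)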
Date created
1999
Subject headings
[LCSH]: Viscosity solutions
[Free subject headings]: Asymptotic optimality | Fluid models | Hamilton-Jacobi-Bellman equation | Manufacturing models | Markov decision processes | Piecewise deterministic Markov processes | Queueing networks | Scheduling and routing | Stochastic fluid programs | Weak convergence
License
Standard (version of 03.05.2003)
https://oparu.uni-ulm.de/xmlui/license_v1


DOI & citation

Please use this identifier to cite or link to this item: http://dx.doi.org/10.18725/OPARU-39

Bäuerle, Nicole (2000): Optimal control of Stochastic Fluid Programs. Open Access Repositorium der Universität Ulm und Technischen Hochschule Ulm. http://dx.doi.org/10.18725/OPARU-39