Value function approximation in complex queueing systems

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic › Peer-review

Abstract

The application of Markov decision theory to the control of queueing
systems often leads to models with enormous state spaces. Hence, direct computation of optimal policies with standard techniques and algorithms is almost impossible for most practical models. A convenient technique to overcome this issue is to use one-step policy improvement. For this technique to work, one needs to have a good understanding of the queueing system under study, and its (approximate) value function under policies that decompose the system into less complicated systems. This warrants the research on the relative value functions of simple queueing models, that can be used in the control of more complex queueing systems. In this chapter we provide a survey of value functions of basic queueing models and show how they can be applied to the control of more complex queueing systems.
Original language: English
Title of host publication: Markov Decision Processes in Practice
Editors: Nico van Dijk, Richard Boucherie
Publisher: Springer-Verlag
Pages: 33-62
ISBN (Electronic): 978-3-319-47766-4
ISBN (Print): 978-3-319-47764-0
Publication status: Published - 2017
