## Abstract

The application of Markov decision theory to the control of queueing systems often leads to models with enormous state spaces. Hence, direct computation of optimal policies with standard techniques and algorithms is almost impossible for most practical models. A convenient technique to overcome this issue is to use one-step policy improvement. For this technique to work, one needs to have a good understanding of the queueing system under study, and of its (approximate) value function under policies that decompose the system into less complicated systems. This warrants research on the relative value functions of simple queueing models that can be used in the control of more complex queueing systems. In this chapter we provide a survey of value functions of basic queueing models and show how they can be applied to the control of more complex queueing systems.
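To illustrate the idea of one-step policy improvement described above, consider routing arrivals to one of two parallel M/M/1 queues. Under a static policy that sends a fixed stream to each queue, each queue behaves as an M/M/1 queue, whose relative value function under the average-cost criterion with holding cost equal to the number of customers has the well-known closed form V(n) = n(n+1) / (2(μ − λ)). The sketch below (illustrative only; the rates and the two-queue routing setup are hypothetical examples, not taken from the chapter) performs the improvement step by routing an arrival to the queue with the smaller marginal increase V(n+1) − V(n):

```python
def mm1_relative_value(n, lam, mu):
    """Relative value function of a stable M/M/1 queue under the
    average-cost criterion with holding cost rate n (number in system)."""
    assert lam < mu, "queue must be stable (lam < mu)"
    return n * (n + 1) / (2 * (mu - lam))

def improved_routing(n1, n2, lam1, lam2, mu1, mu2):
    """One-step policy improvement for routing to two parallel M/M/1 queues:
    send the arriving customer to the queue with the smaller marginal
    increase in relative value, V(n+1) - V(n).  Returns 1 or 2."""
    d1 = mm1_relative_value(n1 + 1, lam1, mu1) - mm1_relative_value(n1, lam1, mu1)
    d2 = mm1_relative_value(n2 + 1, lam2, mu2) - mm1_relative_value(n2, lam2, mu2)
    return 1 if d1 <= d2 else 2

# Example: two identical queues; queue 1 holds 3 customers, queue 2 holds 1,
# so the improved policy routes the arrival to the shorter queue 2.
print(improved_routing(3, 1, lam1=0.4, lam2=0.4, mu1=1.0, mu2=1.0))
```

For identical queues the improved policy reduces to join-the-shortest-queue; with asymmetric rates the value-function differences weigh queue lengths against service speeds, which is exactly the kind of decomposition-based improvement the chapter surveys.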

Original language | English
---|---
Title of host publication | Markov Decision Processes in Practice
Editors | Nico van Dijk, Richard Boucherie
Publisher | Springer-Verlag
Pages | 33-62
ISBN (Electronic) | 978-3-319-47766-4
ISBN (Print) | 978-3-319-47764-0
Publication status | Published - 2017