
Unit information: Stochastic Optimisation in 2015/16


Unit name Stochastic Optimisation
Unit code MATHM6005
Credit points 10
Level of study M/7
Teaching block(s) Teaching Block 2C (weeks 13 - 18)
Unit director Dr. Tadic
Open unit status Not open
Pre-requisites

MATH11300 Probability 1 and MATH11400 Statistics 1. MATH21400 Applied Probability 2 is desirable but not essential.

Co-requisites

None

School/department School of Mathematics
Faculty Faculty of Science

Description

Unit aims

The underlying aim is to use a combination of models, techniques and theory from stochastic control and equilibrium selection to determine behaviour that is optimal with regard to some given reward structure.

General Description of the Unit

Stochastic optimisation covers a broad framework of problems at the interface of applied probability and optimisation. The main focus of this unit is on Markov decision processes and game theory. Markov decision processes describe a class of single-decision-maker optimisation problems that arise when applied probability models (e.g. Markov chains) are extended to allow for action-dependent transition distributions and associated rewards. Game theory problems are more complex in that they involve two or more decision makers (players), so the optimal action for each player will depend on the actions of other players. Here, we focus on Nash equilibria - strategies that are conditionally optimal in the sense that a player cannot do better by changing their own strategy while the other players stay with their current strategies.
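As an informal illustration of the Nash equilibrium condition described above (not part of the unit materials), the following sketch checks every pure-strategy profile of a two-player bimatrix game for profitable unilateral deviations. The Prisoner's Dilemma payoffs used here are standard illustrative values, not taken from the unit:

```python
import numpy as np

# Prisoner's Dilemma (illustrative payoffs): action 0 = cooperate, 1 = defect.
A = np.array([[3, 0],   # row player's payoffs
              [5, 1]])
B = np.array([[3, 5],   # column player's payoffs
              [0, 1]])

def pure_nash_equilibria(A, B):
    """Return all pure-strategy profiles (i, j) at which neither player
    can improve their payoff by unilaterally deviating."""
    eqs = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            row_best = A[i, j] >= A[:, j].max()   # row player cannot do better
            col_best = B[i, j] >= B[i, :].max()   # column player cannot do better
            if row_best and col_best:
                eqs.append((i, j))
    return eqs

print(pure_nash_equilibria(A, B))  # → [(1, 1)]: mutual defection
```

Note that (defect, defect) is the unique equilibrium even though both players would prefer (cooperate, cooperate) - the equilibrium condition is about unilateral deviations only.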

Each module covers an area of statistics and applied probability relevant to the research and other interests of members of academic staff. Details are given in the Syllabus section below.

Relation to Other Units

This unit is a first course on stochastic optimisation.

Further information is available on the School of Mathematics website: http://www.maths.bris.ac.uk/study/undergrad/

Intended learning outcomes

Learning Objectives

Students who successfully complete this unit should be able to:

  • recognise and construct appropriate formal Markov decision process (MDP) models and game theoretic models from informal problem descriptions;
  • construct appropriate optimality equations for optimisation problems;
  • understand and use appropriate computational techniques (including dynamic programming and policy and value iteration) to solve finite-horizon MDPs, and infinite-horizon discounted and average-cost MDPs;
  • understand the concept of a Nash equilibrium and an evolutionarily stable strategy;
  • compute equilibrium policies for standard and simple non-standard games.
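To make the value-iteration outcome above concrete, here is a minimal sketch of value iteration for an infinite-horizon discounted MDP. The two-state, two-action model (transition matrices, rewards, and discount factor) is entirely hypothetical and chosen only for illustration:

```python
import numpy as np

# A hypothetical MDP: 2 states, 2 actions.
# P[a][s, s'] = transition probability; R[a][s] = expected immediate reward.
P = [np.array([[0.9, 0.1],      # transitions under action 0
               [0.2, 0.8]]),
     np.array([[0.5, 0.5],      # transitions under action 1
               [0.7, 0.3]])]
R = [np.array([1.0, 0.0]),      # rewards under action 0
     np.array([0.5, 2.0])]      # rewards under action 1
gamma = 0.9                     # discount factor

def value_iteration(P, R, gamma, tol=1e-8):
    """Repeatedly apply the Bellman optimality operator
        V(s) <- max_a [ R[a](s) + gamma * sum_{s'} P[a](s, s') V(s') ]
    until successive value functions differ by less than tol.
    Returns the (approximately) optimal values and a greedy policy."""
    n = P[0].shape[0]
    V = np.zeros(n)
    while True:
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(len(P))])
        V_new = Q.max(axis=0)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

V, policy = value_iteration(P, R, gamma)
print(V, policy)
```

Because the Bellman operator is a contraction with modulus gamma < 1, the iteration converges to the unique fixed point, and the greedy policy it returns is optimal for the discounted problem.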

Transferable Skills

In addition to the general skills associated with other mathematical units, students will also have the opportunity to gain practice in the following: report writing, oral presentations, use of information resources, use of initiative in learning material other than that provided by the lectures themselves, time management, general IT skills and word-processing.

Teaching details

Lectures, supported by problem and solution sheets.

Assessment Details

100% Examination.

Raw scores on the examinations will be determined according to the marking scheme written on the examination paper. The marking scheme, indicating the maximum score per question, is a guide to the relative weighting of the questions. Raw scores are moderated as described in the Undergraduate Handbook.

Reading and References

  1. M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming, Wiley, 2005.
  2. D. P. Bertsekas, Dynamic Programming and Optimal Control, vol. 1 and 2, 2nd edition, Athena Scientific, 2005.
  3. P. Whittle, Optimal Control: Basics and Beyond, Wiley, 1996.
  4. R. Gibbons, A Primer in Game Theory, Prentice-Hall, 1992.
  5. A. I. Houston and J. M. McNamara, Models of Adaptive Behaviour, Cambridge University Press, 1999.
  6. J. Maynard Smith, Evolution and the Theory of Games, Cambridge University Press, 1982.

Feedback