Dynamical Systems and Control - 76929
Syllabus (tentative)
  • Introduction: Dynamical systems and the motor control problem
  • Part 0 (will be integrated into parts I-III):
    1. Muscles and sensors
    2. Multi-joint kinematics and inverse kinematics
    3. Jacobians, velocities, forces and dynamics
    4. Motor cortex, motor control models
  • Part I: Linear Control Theory
    1. Linear dynamical systems, basics
    2. State space solutions and realizations
    3. Stability, Lyapunov theory
    4. Controllability and Observability
    5. State Feedback and State Estimation
  • Part II: Elements of Optimal Control
    1. Optimization problems for dynamic systems
    2. Kalman Filter
    3. Optimization problems with path constraints
    4. Optimal feedback control
    5. Linear systems with quadratic criteria
    6. Optimal feedback control in the presence of uncertainty
    7. Bellman's equation and dynamic programming
      1. Calculus of variations
      2. Computational aspects
  • Part III: Applied Nonlinear Control
    1. Nonlinear System Analysis
      1. Phase Plane Analysis
      2. Lyapunov Theory
      3. Advanced Stability Theory
    2. Nonlinear Control Systems Design
      1. Feedback Linearization
      2. Sliding Control
      3. Adaptive Control
      4. Control of Multi-Input Physical Systems
      5. Stochastic and Adaptive Control

Course Description

Dynamical systems and linear control theory are among the fundamentals of theoretical engineering and a jewel of applied mathematics, yet the subject is too often missed by scientists. The recent interest in biological control processes, both in computational neuroscience and in bioinformatics, makes control theory an essential background for advanced students and researchers in these fields.

This course is intended for Computational Neuroscience (ICNC) students, advanced undergraduate physics or engineering students, advanced computational biology students, and other graduate science students with the proper background. The course assumes some familiarity with linear dynamical systems and the basics of optimization methods; either a linear systems course or the ICNC's Neural Networks (I) course will suffice. In addition to the theory, the course includes MATLAB exercises and a final project. Emphasis will be given to control problems in biological systems.
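
As a rough illustration of the assumed background (and of the objects treated in Part I), here is a minimal sketch in Python/NumPy rather than MATLAB; the matrices and the feedback gain are arbitrary example values, not material from the course. It simulates a discrete-time linear state-space system x[k+1] = A x[k] + B u[k] under static state feedback u[k] = -K x[k]:

    import numpy as np

    # Double-integrator-like dynamics, discretized with a 0.1 s step.
    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])
    B = np.array([[0.005],
                  [0.1]])

    # Hand-picked feedback gain (in the course, such gains come from pole
    # placement or LQR design rather than guessing).
    K = np.array([[5.0, 3.0]])

    x = np.array([[1.0],   # initial position
                  [0.0]])  # initial velocity

    for k in range(50):
        u = -K @ x            # state feedback u[k] = -K x[k]
        x = A @ x + B @ u     # closed-loop step x[k+1] = A x[k] + B u[k]

    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
    print("state after 50 steps:", x.ravel())

For this particular gain the closed-loop eigenvalues lie inside the unit circle, so the printed state has decayed toward the origin; the course develops systematic ways (pole placement, LQR) to obtain such gains.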


Textbooks
  • Chi-Tsong Chen, Linear System Theory and Design, Oxford University Press, 1999
  • Robert F. Stengel, Optimal Control and Estimation, Dover Publications, 1994
  • A. E. Bryson and Yu-Chi Ho, Applied Optimal Control, Hemisphere Publishing, New York, 1975
  • Richard S. Sutton and Andrew G. Barto, Reinforcement Learning: An Introduction, MIT Press, Cambridge, MA, 1998
  • J.-J. E. Slotine and W. Li, Applied Nonlinear Control, Prentice Hall, Englewood Cliffs, New Jersey, 1991

Other References
  • T. Kailath, A.H. Sayed, B. Hassibi, Linear Estimation, Prentice Hall, New Jersey, 2000
  • H. K. Khalil, Nonlinear Systems, Prentice Hall, 2001
  • R. Bellman, Adaptive Control Processes: A Guided Tour, Princeton University Press, 1961