In this talk, we shall examine a class of discrete-time stochastic control problems in which the observations available to the controller are not fixed: at each time step there are several observation options to choose from, and each choice carries a cost. These observation costs are added to the running cost of the optimization criterion, and the resulting optimal control problem is investigated. The problem is motivated by the wide deployment of networked control systems and by data fusion. Since only part of the observation information is available at each time step, the controller must balance system performance against the penalty for the requested information (query). We formulate the problem in the partially observed Markov decision process framework and, in particular, specialize to the stochastic LQG problem, where we show that the separation principle partially holds. We focus primarily on the infinite-horizon ergodic control problem.
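As a rough illustration of the criterion described above (the notation here is our own sketch, not taken from the talk), an ergodic cost that charges both a running cost and a per-step query cost might take the form

\[
J(u, q) \;=\; \limsup_{T \to \infty} \frac{1}{T}\, \mathbb{E}\!\left[ \sum_{t=0}^{T-1} \Big( c(x_t, u_t) + r(q_t) \Big) \right],
\]

where \(x_t\) is the state, \(u_t\) the control, \(q_t\) the observation option (query) selected at time \(t\), \(c\) the running cost, and \(r(q_t)\) the cost of the chosen query. In the LQG specialization one would take the familiar quadratic running cost \(c(x,u) = x^\top Q x + u^\top R u\), with the query cost \(r\) entering additively as stated in the abstract.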
Ari Arapostathis received the B.S. degree from the Massachusetts Institute of Technology, Cambridge, and the Ph.D. degree from UC Berkeley in 1982. He is currently a Professor in the Department of Electrical and Computer Engineering at The University of Texas at Austin. His research interests include analysis and estimation techniques for stochastic systems, the application of differential-geometric methods to the design and analysis of control systems, stability properties of large-scale interconnected power systems, and stochastic and adaptive control theory. He is an IEEE Fellow and has served as an Associate Editor of the IEEE Transactions on Automatic Control and of the Journal of Mathematical Systems and Control.