Abstract

Static team theory is a mathematical formalism for decision problems with multiple decision makers (DMs) who have access to different information and aim to optimize a common pay-off or reward functional. It is often used to formulate decentralized decision problems, in which decision-making authority is distributed over a collection of agents or players and the information available to the DMs for implementing their actions is non-classical. Static team theory and decentralized decision making originated in the fields of management, organizational behavior, and government with the work of Marschak and Radner. They have far-reaching implications for all human activity, including science and engineering systems that comprise multiple components, in which the information available to the decision-making components is either partially communicated or not communicated at all. Team theory and decentralized decision making can be applied to large-scale distributed systems such as transportation systems, smart-grid energy systems, social networks, surveillance systems, communication networks, and financial markets. As such, these concepts are bound to play key roles in emerging cyber-physical systems and align well with ARC'14 themes on Computing and Information Technology and Energy and Environment.

Since the late 1960s, several attempts have been made to generalize static team theory to dynamic team theory, in order to account for decentralized decisions taken sequentially over time. To date, however, no mathematical framework has been introduced that handles non-classical information structures of stochastic dynamical decision systems in the way that stochastic optimal control problems, which presuppose centralized information structures, have been treated successfully over the last several decades.

In this presentation, we put forward and analyze two methods that generalize static team theory to dynamic team theory in the context of discrete-time stochastic nonlinear dynamical problems with team strategies based on non-classical information structures. Both approaches transform the original discrete-time stochastic dynamical decentralized decision problem into an equivalent one in which the observations and/or the unobserved state processes are independent processes, so that the information structures available for decisions are not affected by any of the team decisions. The first method derives team optimality conditions by direct application of static team theory to the equivalent transformed team problem. The second method is based on a discrete-time stochastic Pontryagin maximum principle; here the team optimality conditions are captured by a "Hamiltonian system" consisting of forward and backward discrete-time stochastic dynamical equations, together with a conditional variational Hamiltonian with respect to the information structure of each team member, while all other team members hold their optimal values.
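To fix ideas, the sketch below states the classical Marschak-Radner static team problem, the person-by-person optimality condition that static team theory delivers, and a generic discrete-time Hamiltonian system of the kind referred to above. The notation (ω, y_i, γ_i, ℓ, f_t, ψ_t, H_t, I_t^i) is an illustrative assumption chosen for this summary, not the authors' exact formulation.

% Static team with N decision makers (DMs) and a common expected cost.
% DM i observes y_i = \eta_i(\omega) and applies the strategy u_i = \gamma_i(y_i).
\begin{align*}
  J(\gamma_1,\dots,\gamma_N)
    = \mathbb{E}\big[\, \ell\big(\omega,\gamma_1(y_1),\dots,\gamma_N(y_N)\big) \big],
    \qquad \gamma_i \in \Gamma_i .
\end{align*}
% Person-by-person optimality: no DM can improve the common cost by a unilateral
% deviation; under standard measurability and integrability assumptions this is
% characterized pointwise by a conditional minimization,
\begin{align*}
  \gamma_i^{o}(y_i) \in \arg\min_{u_i}\;
    \mathbb{E}\big[\, \ell\big(\omega, u_i, \gamma_{-i}^{o}(y_{-i})\big) \,\big|\, y_i \big]
    \quad \text{a.s.}, \qquad i = 1,\dots,N .
\end{align*}
% Generic discrete-time Hamiltonian system of the type produced by a stochastic
% maximum principle: a forward state equation, a backward adjoint equation, and
% a conditional variational Hamiltonian per team member.
\begin{align*}
  x_{t+1} &= f_t(x_t, u_t, w_t), \qquad x_0 \ \text{given (forward)}, \\
  \psi_t  &= \mathbb{E}\big[\, \nabla_x H_t(x_t, \psi_{t+1}, u_t) \,\big|\, \mathcal{F}_t \big],
             \qquad \psi_T = \nabla_x \varphi(x_T) \ \text{(backward)}, \\
  H_t(x,\psi,u) &\triangleq \psi^{\top} f_t(x,u,w_t) + \ell_t(x,u), \\
  u_t^{i,o} &\in \arg\min_{u_i}\;
    \mathbb{E}\big[\, H_t\big(x_t, \psi_{t+1}, u_i, u_t^{-i,o}\big) \,\big|\, I_t^{i} \big],
    \qquad i = 1,\dots,N,
\end{align*}
% where I_t^i denotes the (non-classical) information available to team member i
% at time t, and all other team members hold their optimal strategies.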

DOI: 10.5339/qfarc.2014.ITPP0911 (published online 18 November 2014)