Minh Q. Phan

Associate Professor of Engineering

Research Interests

System identification; iterative learning control; model predictive control; control of robotic swarms; intelligent control; adaptive control


Education

  • BS, Mechanical Engineering, University of California, Berkeley, 1985
  • MS, Mechanical Engineering, Columbia University, 1986
  • MPhil, Mechanical Engineering, Columbia University, 1988
  • PhD, Mechanical Engineering, Columbia University, 1989

Professional Activities

  • Associate Editor, Journal of Guidance, Control, and Dynamics, American Institute of Aeronautics and Astronautics (2000-2002)

Research Projects

  • Soft computing

    Recent years have seen major advances in actuator and sensor technology and in computing power, along with the emergence of soft computing: a collection of new tools that solve problems in unconventional yet effective ways:

    • Artificial neural networks are constructed from identical data-processing elements arranged in a regular pattern. These networks exhibit a surprising ability to capture nonlinear relationships among variables, perform pattern classification and feature extraction, and encode associative memory, among other tasks.
    • Fuzzy logic can emulate human-like rule-based operations using linguistic terms such as "if it starts to become hot, turn the temperature down a little bit."
    • Genetic algorithms give us a new way to perform optimization without actually solving equations in the traditional sense.

    Other soft computing techniques, such as DNA computing and simulated annealing, are also very intriguing. Our research finds ways to apply these tools to problems such as the control of magneto-hydrodynamic power generators for hypersonic aircraft and the evolution of a robot's rule base for obstacle avoidance and target acquisition.
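
A toy sketch of the genetic-algorithm idea mentioned above, evolving bit-strings toward higher fitness (the function names, parameters, and the "one-max" test problem are illustrative choices, not taken from the research itself):

```python
import random

def genetic_optimize(fitness, n_genes=8, pop_size=30, generations=60,
                     mutation_rate=0.05, seed=0):
    """Minimal genetic algorithm: evolve bit-strings toward higher fitness."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Crossover: splice two random parents at a random cut point.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_genes)
            child = a[:cut] + b[cut:]
            # Mutation: flip each bit with a small probability.
            children.append([g ^ 1 if rng.random() < mutation_rate else g
                             for g in child])
        pop = parents + children
    return max(pop, key=fitness)

# "One-max" toy problem: fitness is simply the number of 1-bits.
best = genetic_optimize(fitness=sum)
```

Note that no equation is ever solved: the population improves purely through selection, crossover, and mutation, which is what makes the approach attractive for problems without tractable analytical structure.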

  • Cooperative control of multi-robot systems

    Cooperative control of multi-robot systems focuses on the modeling and control of groups of high-speed mobile robots while accommodating communication latencies and nonlinear vehicle dynamics. In distributed cooperative control, robots communicate information about their state to one another; communication latencies and errors depend on the amount of information communicated and on the number of robots. We are developing distributed control system modeling and design tools that seek to maximize control bandwidth for a given information set. These tools will also help assess the value of the transmitted information in maintaining the stability and performance of the group dynamics. Both potential-function methods for path planning and control and predictive control methods are being developed.
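
A minimal sketch of the potential-function idea for a single robot: the goal exerts an attractive force, each obstacle a repulsive one, and the robot descends the combined field. The gains, radii, and geometry below are invented for illustration and are not drawn from the research itself:

```python
import math

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5,
                   influence=2.0, step=0.1):
    """One gradient-descent step on an attractive + repulsive potential field."""
    # Attractive force pulls the robot toward the goal.
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    # Repulsive force pushes away from obstacles within the influence radius.
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < influence:
            mag = k_rep * (1.0 / d - 1.0 / influence) / d**2
            fx += mag * dx / d
            fy += mag * dy / d
    return (pos[0] + step * fx, pos[1] + step * fy)

pos, goal = (0.0, 0.0), (5.0, 5.0)
obstacles = [(2.5, 2.0)]          # one obstacle near the direct path
for _ in range(200):
    pos = potential_step(pos, goal, obstacles)
```

In the multi-robot setting each robot would additionally treat its neighbors as moving obstacles, so the value of communicated state information shows up directly in how accurately these repulsive terms can be computed.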

  • Model predictive control

    Model predictive control computes the control action from a prediction of the system output a number of time steps into the future. Originally developed in chemical process engineering, model predictive control has found its way into virtually all areas of control engineering. Our research focuses on the development of a general formulation of predictive control that subsumes both the input-output and state-space perspectives. We seek comprehensive answers to questions such as: What is the simplest way to justify the existence and structures of various input-output predictive models? How does one arrive at an input-output controller if the starting point of the derivation is a state-space model? Can explicit state-space model identification be avoided? What is an efficient strategy to synthesize a predictive controller directly from input-output data, without having to resort to model identification? What is the role of predictive control in the disturbance rejection problem? How can we design model predictive controllers for a swarm of robots?
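
A minimal sketch of the receding-horizon idea for a scalar linear plant (this toy formulation, and the name `mpc_step`, are illustrative; they are not the group's own formulation): the predicted states over the horizon are stacked as a linear function of the input sequence, an unconstrained quadratic cost is minimized in closed form, and only the first input is applied before the whole computation repeats.

```python
import numpy as np

def mpc_step(a, b, x0, horizon=10, r=0.01):
    """One receding-horizon step for the scalar system x[k+1] = a*x[k] + b*u[k].

    Stacks predictions as x = phi*x0 + gamma*u, minimizes
    sum(x**2) + r*sum(u**2) in closed form, and returns only the first
    input of the optimal sequence (the receding-horizon principle).
    """
    # Prediction matrices: x[k] = a**k * x0 + sum_j a**(k-1-j) * b * u[j].
    phi = np.array([a ** k for k in range(1, horizon + 1)])
    gamma = np.zeros((horizon, horizon))
    for k in range(horizon):
        for j in range(k + 1):
            gamma[k, j] = a ** (k - j) * b
    # Unconstrained quadratic cost: (gamma'gamma + r*I) u = -gamma'phi x0.
    u = np.linalg.solve(gamma.T @ gamma + r * np.eye(horizon),
                        -gamma.T @ phi * x0)
    return u[0]

# Stabilize an unstable plant (a = 1.2) from x = 1.
a, b, x = 1.2, 1.0, 1.0
for _ in range(30):
    x = a * x + b * mpc_step(a, b, x)
```

The same stacked-prediction structure is what connects the state-space and input-output views: if `phi` and `gamma` are identified directly from data rather than built from (a, b), the controller can in principle be synthesized without an explicit model.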

  • Iterative learning control

    Iterative learning control refers to the mechanism by which the necessary control can be synthesized through repeated trials, based on the fundamental recognition that repeated practice is a common mode of human learning. Learning control is most suitable for operations in which the same task is performed over and over again, e.g., by robots on a manufacturing line. Available learning techniques range from those requiring no knowledge of the system dynamics to more sophisticated methods that use system identification to make the learning process efficient and successful on difficult problems. Our research finds ways to design optimal iterative learning controllers that are robust to model uncertainty and capable of producing monotonic convergence.
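
The trial-to-trial idea can be sketched with the simplest ("P-type") learning law, which needs no model of the plant at all: after each trial the input is corrected by the tracking error from that trial. The plant, gain, and task below are invented for the example:

```python
def run_trial(u, a=0.7, b=1.0):
    """Simulate x[k+1] = a*x[k] + b*u[k] from rest; return the output trajectory."""
    x, ys = 0.0, []
    for uk in u:
        x = a * x + b * uk
        ys.append(x)
    return ys

def ilc(reference, trials=100, gain=0.5):
    """P-type iterative learning control: u_next[k] = u[k] + gain * e[k]."""
    u = [0.0] * len(reference)
    for _ in range(trials):
        y = run_trial(u)
        e = [r - yk for r, yk in zip(reference, y)]
        u = [uk + gain * ek for uk, ek in zip(u, e)]  # learning update
    y = run_trial(u)
    return u, max(abs(r - yk) for r, yk in zip(reference, y))

ref = [1.0] * 10        # task: track a unit step over a 10-step trial
u, err = ilc(ref)       # err shrinks from trial to trial
```

Such simple update laws can converge non-monotonically or even diverge on harder plants, which is precisely why the optimal, robust, monotonically convergent designs mentioned above are of interest.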

  • System identification

    System identification refers to the general process of extracting information about a system from measured input-output data. A typical outcome is an input-output model, which may be static or dynamic, deterministic or stochastic, linear or nonlinear. Such a model can be used for simulation, controller design, or analysis. System identification can extract the physical properties of a system, such as its mass, stiffness, and damping distribution. System identification methods can also be applied to obtain information other than a model of the system: for example, to identify an observer or Kalman filter gain, an existing feedback controller gain, or the disturbance environment, or to detect actuator and sensor failures. The same theory can even be used to synthesize feedback or feedforward controller gains directly from input-output data, without first obtaining an intermediate model of the system. System identification has widespread applications in virtually all areas of engineering, including chemical, electrical, mechanical, biomedical, and aerospace engineering, as well as in fields such as economics.
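
The core idea in the linear, noise-free case reduces to least squares. As an illustration (the function name and the first-order example system are assumptions for this sketch, not from the research), the coefficients of a simple difference-equation model can be recovered exactly from input-output data:

```python
import numpy as np

def identify_arx(u, y):
    """Least-squares fit of y[k] = a*y[k-1] + b*u[k-1] from input-output data."""
    # Regressor matrix: each row is [y[k-1], u[k-1]].
    phi = np.column_stack([y[:-1], u[:-1]])
    theta, *_ = np.linalg.lstsq(phi, y[1:], rcond=None)
    return theta  # estimated [a, b]

# Simulate a known first-order system, then recover its coefficients.
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
u = rng.standard_normal(200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1]
a_hat, b_hat = identify_arx(u, y)
```

With measurement noise or higher-order dynamics the same regression structure still applies, but the estimator and the choice of excitation input become the interesting questions.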

Selected Publications

  • Identification and Control of Mechanical Systems, Cambridge University Press, 2006