Optimal control theory is a rich mathematical field with a surprisingly interesting history. It dates back to the brachistochrone problem of Johann Bernoulli in 1696, but it genuinely boomed during the Cold War through independent developments in the Soviet Union (Steklov Institute) and the United States (RAND Corporation). But at some point, I came across Yu-Chi Ho’s blog post from 2010, where he reports the bold pronouncement of an NSF program director: “Control is dead!”
Professor Ho explains that perhaps “mature” is a better word, but even this might be seen as a strong claim. As a graduate student, I am in no place to judge the “life” or “death” of such a broad field, but my bias towards my home department compels me to promote and celebrate control theory, which played an important role in the growth of the Division of Applied Mathematics and the development of the associated Lefschetz Center for Dynamical Systems. That is, I would like to believe that this significant part of Brown’s history still plays a serious role in the mathematical community today.
The goal of this post is to point out some modern appearances of control theory, particularly in seemingly unexpected areas (at least, unexpected to this amateur author). Of course, control theory has long played a leading role in applied probability: e.g., in financial engineering, queueing networks, and filtering. But these are somewhat well-recognized as the usual stomping grounds of control theory, and I would like to highlight some other (possibly more surprising) connections.
In particular, the first three applications described below invoke the following variational formula from Boué, Dupuis (AoP’98). For $W$ a standard $d$-dimensional Brownian motion on $[0,1]$, and $f: C([0,1];\mathbb{R}^d) \rightarrow \mathbb{R}$ measurable and bounded from above, we have

$$ -\log \mathbb{E}\left[e^{f(W)}\right] \;=\; \inf_{u}\, \mathbb{E}\left[\frac{1}{2}\int_0^1 |u_s|^2\, ds \;-\; f\!\left(W + \int_0^{\cdot} u_s\, ds\right)\right], \qquad (\star) $$
where the infimum is over the space of controls $u$ which are progressively measurable with respect to the augmented Brownian filtration. One should view the first term in the infimum as a “running cost” for the effort exerted by the control $u$, and the second term as a “state occupation cost”. In particular, if $f$ only depends on the time-1 state of the controlled input process, then it can be interpreted as the usual “terminal cost”. Under the preceding interpretations, $\log\mathbb{E}\, e^{f(W)}$ is, up to a sign, the value function of the associated stochastic control problem. In fact, formulas like $(\star)$ arose even earlier in the control literature; e.g., in Fleming (AMO’77).
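To make the control interpretation concrete, here is a quick Monte Carlo sanity check of $(\star)$ in the simplest possible setting (my own toy illustration, not taken from the Boué–Dupuis paper): take $d = 1$ and the linear terminal cost $f(\omega) = \lambda\, \omega(1)$. Then the constant control $u \equiv \lambda$ is optimal, and both $-\log \mathbb{E}\, e^{\lambda W_1}$ and the control cost evaluate to $-\lambda^2/2$.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 1.5
n_paths = 200_000

# Left side of (*): -log E[exp(f(W))] with f(w) = lam * w(1).
# Since W_1 ~ N(0,1), the exact value is -lam**2 / 2.
W1 = rng.standard_normal(n_paths)
lhs = -np.log(np.mean(np.exp(lam * W1)))

# Right side of (*) at the constant control u = lam (which is optimal here):
# E[ 0.5 * int_0^1 u^2 ds - f(W + int_0^. u ds) ]
#   = 0.5 * lam**2 - lam * E[W_1 + lam]  =  -lam**2 / 2.
rhs = np.mean(0.5 * lam**2 - lam * (W1 + lam))

print(lhs, rhs, -lam**2 / 2)
```

With a fixed seed and 200,000 samples, both estimates land within Monte Carlo error of the exact value $-\lambda^2/2 = -1.125$.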
As promised, here are a few “modern” links to control theory:

Functional inequalities: Lehec (AIHP’13) derives what is essentially the dual formulation of $(\star)$: for $\gamma$ the Wiener measure on $C([0,1];\mathbb{R}^d)$ and $\mu$ any probability measure on this path space with finite relative entropy with respect to $\gamma$, we have

$$ H(\mu\,\|\,\gamma) \;=\; \min_{u}\, \mathbb{E}\left[\frac{1}{2}\int_0^1 |u_s|^2\, ds\right], $$

where the minimum is over all controls $u$ such that the process $W + \int_0^\cdot u_s\, ds$ has law $\mu$. That is, the control $u$ is related to the optimal change of measure from $\gamma$ to $\mu$. Related analysis of (an) optimizing $u$ combined with basic martingale arguments yields straightforward proofs of Talagrand’s transportation cost inequality, the logarithmic Sobolev inequality, and the Brascamp-Lieb inequality (for the Wiener measure). Similar control-like principles (for the standard Gaussian measure on $\mathbb{R}^d$ instead of the Wiener measure on path space) are employed in Eldan, Lee (preprint’14) to establish uniform decay of the level sets of the Gaussian measure under the Ornstein-Uhlenbeck semigroup.
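For the simplest instance of this entropy formula (my own sanity check, not an argument from Lehec's paper), take $\mu$ to be the law of Brownian motion with constant drift $m$. Girsanov's theorem gives $d\mu/d\gamma = \exp(m W_1 - m^2/2)$, so $H(\mu\,\|\,\gamma) = m^2/2$, which is exactly the energy $\frac{1}{2}\int_0^1 m^2\, ds$ of the constant (Föllmer) drift $u \equiv m$:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 0.8
n = 100_000

# mu = law of Brownian motion with constant drift m. By Girsanov,
# d(mu)/d(gamma) = exp(m*W_1 - m^2/2), a function of the terminal value only.
# Monte Carlo estimate of H(mu || gamma) = E_mu[ log d(mu)/d(gamma) ]:
X1 = rng.standard_normal(n) + m          # terminal value sampled under mu
entropy_mc = np.mean(m * X1 - m**2 / 2)

# Energy of the constant control u = m: 0.5 * int_0^1 m^2 ds = m^2 / 2.
control_cost = 0.5 * m**2

print(entropy_mc, control_cost)
```

The two numbers agree up to Monte Carlo error, illustrating that the constant drift attains the minimum in this (trivial) case.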

Spin glasses: In seminal work by Talagrand (AoM’06) and Panchenko (AoP’14), it was established that the thermodynamic limit of the free energy of the Sherrington-Kirkpatrick model (and associated mixed $p$-spin model) is given by a minimization problem involving the “Parisi functional”, the solution to a particular nonlinear PDE. Inspired by the variational formula $(\star)$, it is shown in Auffinger, Chen (CMP’14) that the Parisi functional is strictly convex, and thus a unique “Parisi measure” characterizes the limiting free energy of the SK model. The proof of strict convexity is simplified in Jagannath, Tobasco (PAMS’15), by explicitly appealing to the dynamic programming principle from stochastic control theory. This theme of a control-theoretic approach to analysis of the Parisi functional is continued in Chen (preprint’15).

KPZ and rough paths: In Section 7 of Gubinelli, Perkowski (preprint’15), the authors use a generalized version of $(\star)$ to frame the KPZ equation as the value function of a stochastic control problem. This representation is in turn used to prove certain a priori estimates which yield global existence of solutions to the KPZ equation, complementing Hairer’s approach via regularity structures.

First-passage percolation: Krishnan (preprint’14) views the first-passage time on $\mathbb{Z}^d$ as a discrete control problem, where the canonical basis vectors $\{\pm e_1, \dots, \pm e_d\}$ act as the “controls” of a minimizing path between two points. This in turn yields a characterization of the associated time constant as the solution to a discrete Hamilton-Jacobi equation.
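As a toy illustration of this discrete control viewpoint (my own sketch on a finite box, not Krishnan's construction on all of $\mathbb{Z}^d$), one can compute passage times to the origin by value iteration on the discrete Bellman (Hamilton-Jacobi-type) equation $T(x) = \min_{e} \left[\tau(x, x+e) + T(x+e)\right]$, with the steps $\pm e_1, \pm e_2$ playing the role of controls:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 10                                        # finite box {0,...,N-1}^2 standing in for Z^2
# i.i.d. Uniform(0.5, 1.5) step costs; tau[x, y, k] = cost of taking step k from (x, y).
# (Directed weights for simplicity; in first-passage percolation they are symmetric.)
tau = rng.uniform(0.5, 1.5, size=(N, N, 4))
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]    # the "controls" +-e_1, +-e_2

# Value iteration for T(x) = min_e [ tau(x, x+e) + T(x+e) ], with T(origin) = 0.
T = np.full((N, N), np.inf)
T[0, 0] = 0.0
for _ in range(4 * N * N):                    # plenty of sweeps for a finite box
    updated = False
    for x in range(N):
        for y in range(N):
            for k, (dx, dy) in enumerate(steps):
                nx, ny = x + dx, y + dy
                if 0 <= nx < N and 0 <= ny < N:
                    cand = tau[x, y, k] + T[nx, ny]
                    if cand < T[x, y] - 1e-12:
                        T[x, y] = cand
                        updated = True
    if not updated:                           # Bellman equation is satisfied everywhere
        break

print(T[N - 1, N - 1])                        # passage time from the far corner to the origin
```

Since every step costs between 0.5 and 1.5 and at least $2(N-1)$ steps are needed from the far corner, the computed passage time must land in $[N-1, 3(N-1)]$, which makes for a quick correctness check.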
I suppose the overarching theme is that many mathematical problems are just (highly sophisticated) optimization problems which, with some work, can be massaged into control problems. In particular, the preceding examples show that adopting a control-theoretic perspective can lead to insightful, meaningful, and productive reformulations of existing problems!