
Linear Ordinary Differential Equations

This article focuses on a specific subset of differential equations, namely those which are

  • ordinary: The unknown function $f$ has only one independent variable, e.g. $f = f(t)$.
  • and linear: The differential equation can be written in the form \begin{align} \mathcal{L} f(t) = g(t) \label{eq:linear-diff-eq} \end{align} where $\mathcal{L}$ is a linear operator and $g(t)$ is a function that is independent of $f(t)$. This condition implies that if $f_1$ and $f_2$ are two solutions of eq. \eqref{eq:linear-diff-eq} with $g(t) = 0$ (the homogeneous case), then any linear combination of them is a solution as well (superposition principle; see the short derivation below).[1]
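The superposition principle follows directly from the linearity of $\mathcal{L}$: if $\mathcal{L} f_1 = 0$ and $\mathcal{L} f_2 = 0$, then \begin{align} \mathcal{L} \left( \alpha_1 f_1 + \alpha_2 f_2 \right) = \alpha_1 \mathcal{L} f_1 + \alpha_2 \mathcal{L} f_2 = 0 . \end{align}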

If there is no additional term $g(t)$, then the corresponding linear ordinary differential equation (ODE) is referred to as being homogeneous. Otherwise it is called inhomogeneous.[2][3]
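For example, $\dot{f}(t) + f(t) = 0$ is homogeneous, whereas $\dot{f}(t) + f(t) = \sin(t)$ is inhomogeneous.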

Special Features of Linear Differential Equations

Linear equations can often be studied analytically with relative ease because methods from the well-developed theory of linear algebra can be applied. Non-linear equations, in contrast, are usually much harder to deal with because these methods are not available there.

A very important theorem regarding ordinary differential equations is the existence and uniqueness theorem, which will be introduced below. It can be used to determine whether a solution to a given differential equation actually exists, and it states the conditions under which that solution is unique. Furthermore, it helps to construct a general solution from individual particular solutions.

General $n$-th Order Case

Let $f^{(i)}(t):= \left(\frac{d}{dt}\right)^i f(t)$ denote the $i$-th derivative of the unknown function $f$. A general (explicit) linear ODE of order $n$ can then be written in the following way: \begin{align} \mathcal{L} f(t) &= f^{(n)}(t) + a_{n-1}(t) f^{(n-1)}(t) + \dots + a_0(t) f(t) \nonumber \\ &= f^{(n)}(t) + \sum_{i=0}^{n-1} a_i(t) f^{(i)}(t) = g(t) \end{align} This representation of a single ODE of order $n$ is equivalent to a system of $n$ coupled differential equations of order one \begin{align} \frac{d}{dt} f^{(0)}(t) &= f^{(1)}(t) \\ \frac{d}{dt} f^{(1)}(t) &= f^{(2)}(t) \\ &\vdots \nonumber \\ \frac{d}{dt} f^{(n-1)}(t) &= - \sum_{i=0}^{n-1} a_i(t) f^{(i)}(t) + g(t) \end{align} which can be written even more compactly in vector-matrix notation \begin{align} \frac{d}{dt} \vec{f} (t) = A(t) \vec{f} (t) + \vec{g} (t) \end{align} where \begin{align} A(t) :=& \begin{pmatrix} 0 & 1 & 0 & \dots & 0 \\ 0 & 0 & 1 & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -a_0(t) & -a_1(t) & -a_2(t) & \dots & -a_{n-1}(t) \end{pmatrix} , \nonumber \\[2ex] &\vec{f}(t) := \begin{pmatrix} f^{(0)} \\ f^{(1)} \\ \vdots \\ f^{(n-1)} \end{pmatrix} \quad \text{and} \quad \vec{g}(t) := \begin{pmatrix} 0 \\ 0 \\ \vdots \\ g(t) \end{pmatrix} . \end{align} [4][5][6][7]
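To make this reduction concrete, here is a minimal Python sketch (not part of the original article; the function name `companion_rhs` is chosen for illustration) that assembles $A(t)\vec{f} + \vec{g}(t)$ for given coefficient functions:

```python
import numpy as np

def companion_rhs(t, f, coeffs, g):
    """Right-hand side A(t) f + g_vec(t) of the first-order system
    equivalent to f^(n) + a_{n-1} f^(n-1) + ... + a_0 f = g.

    f      -- state vector (f, f', ..., f^(n-1))
    coeffs -- coefficient functions [a_0, ..., a_{n-1}]
    g      -- inhomogeneity g(t)
    """
    n = len(coeffs)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)            # superdiagonal: d/dt f^(i) = f^(i+1)
    A[-1, :] = [-a(t) for a in coeffs]    # last row: -a_0(t), ..., -a_{n-1}(t)
    g_vec = np.zeros(n)
    g_vec[-1] = g(t)                      # inhomogeneity enters the last row only
    return A @ f + g_vec
```

With `coeffs = [lambda t: 1.0, lambda t: 0.0]` and `g = lambda t: 0.0` this reproduces the oscillator $\ddot{x}(t) + x(t) = 0$ treated in the example below; wrapped as `lambda t, f: companion_rhs(t, f, coeffs, g)`, it can be passed directly to a standard integrator such as `scipy.integrate.solve_ivp`.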

Existence and Uniqueness of Solutions

In general, there is no guarantee that a particular differential equation actually has a solution. And even if a solution exists, one cannot automatically assume that it is unique; there might be other solutions as well. Neither case would be satisfactory in a physical context: a mathematical model of a physical problem is supposed to allow for an unambiguous prediction of the evolution of the physical system. One therefore says that mathematical models of physical problems are supposed to be well-posed:

A mathematical formulation of a physical problem is said to be well-posed if it has the following properties:[8]

  • Existence of a solution,
  • uniqueness of the solution,
  • and continuous dependence of the solution on the given data, e.g. the initial conditions (this requirement might, however, be contested in some cases).

Therefore it is helpful to know the conditions under which a unique solution actually exists. Fortunately, there is a theorem for precisely that:

Existence and Uniqueness Theorem

If the (real- or complex-valued) coefficients $a_i(t), \; i=0,1,\dots, n-1$, and the inhomogeneity $g(t)$ are continuous on some interval $J$ and if $\tau \in J$, then the initial value problem \begin{align} \mathcal{L} f(t) = f^{(n)}(t) + \sum_{i=0}^{n-1} a_i(t) f^{(i)}(t) = g(t) \quad \text{with} \quad f^{(i)}(\tau) = \eta_i , \; i = 0, \dots, n-1 \end{align} has exactly one solution. It exists on all of $J$ and depends continuously on the coefficients $a_i(t)$ and on $g(t)$ on every compact subset of $J$.[9][10][11]

The proof of this theorem is quite involved and goes beyond the scope of this article. Therefore we will just adopt this result and use it for the applications.

An important implication of this theorem is that one needs to specify $n$ initial values $f^{(0)}(\tau) = \eta_0$, $f^{(1)}(\tau) = \eta_1$, $\dots$, $f^{(n-1)}(\tau) = \eta_{n-1}$ (taken at some initial time $\tau$) in order to ensure that there is a unique solution. The underlying reason is that solving the $n$-th order differential equation formally involves $n$ integration steps and hence produces $n$ otherwise undetermined constants of integration. By specifying a corresponding set of initial conditions one removes this ambiguity and obtains a well-posed problem with a unique solution.
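A simple illustration: for the trivial second-order equation $f''(t) = 0$, two integrations give \begin{align} f(t) = c_1 t + c_0 , \end{align} and the two constants of integration are fixed precisely by the two initial values: $f(\tau) = \eta_0$ and $f'(\tau) = \eta_1$ yield $f(t) = \eta_0 + \eta_1 (t - \tau)$.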

Homogeneous Solution

Inhomogeneous differential equations are usually somewhat tricky to solve. Therefore the homogeneous case will be analysed first.

Based on the superposition principle for linear homogeneous equations, one can derive the following implications of the existence and uniqueness theorem for homogeneous solutions:

Corollary

For every $\tau \in J$ the mapping $\vec{\eta} \mapsto \vec{f}(\,\cdot\,;\tau,\vec{\eta})$ defines a linear isomorphism between the vector space $\mathbb{R}^n$ ($\mathbb{C}^n$ for complex $\vec{\eta}$) and the space of solutions. Hence, the set of solution functions forms an $n$-dimensional vector space as well. Any set of $n$ linearly independent solutions $f_1, f_2, \dots, f_n$ provides a so-called fundamental system of this vector space. The general solution of the differential equation is then given by a linear combination of these functions $f_i$.[12][13][14]

So there is a one-to-one correspondence between any vector $\vec{\eta}$ of initial conditions and the corresponding solution $\vec{f}(t;\tau,\vec{\eta})$. What does that imply? If there are $n$ linearly independent vectors of initial conditions $\vec{\eta}_1,\dots,\vec{\eta}_n$ (with corresponding solutions $\vec{f}_1,\dots , \vec{f}_n$), one can form any vector $\vec{\eta}$ of initial conditions by a linear combination \begin{align} \vec{\eta} = \sum_{i=1}^{n} \alpha_i \vec{\eta}_i \label{eq:eta-vector} \end{align} of these. Due to the linearity of the differential equation, the same applies to the corresponding solution: \begin{align} \vec{f}(t;\tau, \vec{\eta}) &\stackrel{\eqref{eq:eta-vector}}{=} \vec{f}\Big(t;\tau, \sum_{i=1}^{n} \alpha_i \vec{\eta}_i\Big) \\ &\stackrel{\text{linearity}}{=} \sum_{i=1}^{n} \alpha_i \vec{f}_i(t) \end{align} The general solution can thus be written as a linear combination of $n$ linearly independent particular solutions.[15]
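This correspondence can be illustrated numerically; the following is only a sketch, using SciPy's `solve_ivp` and the oscillator $\ddot{x} + x = 0$ from the example below as the test equation:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Homogeneous oscillator x'' + x = 0, written as a first-order system.
def rhs(t, f):
    return [f[1], -f[0]]

t_eval = np.linspace(0.0, 10.0, 201)
opts = dict(t_eval=t_eval, rtol=1e-9, atol=1e-12)

# One solve per canonical initial-condition vector ...
f1 = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], **opts).y[0]
f2 = solve_ivp(rhs, (0.0, 10.0), [0.0, 1.0], **opts).y[0]

# ... then any initial condition eta = (0.3, -1.2) is reached by superposition:
x_superposed = 0.3 * f1 - 1.2 * f2
x_direct = solve_ivp(rhs, (0.0, 10.0), [0.3, -1.2], **opts).y[0]
print(np.max(np.abs(x_superposed - x_direct)))   # tiny: superposition holds
```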

Linear Independence of Functions

One can verify the linear independence of a set of functions by computing a determinant called the Wronskian $W$, which is defined by: \begin{align} W := \left| \begin{matrix} f_1^{(0)} & \dots & f_{n}^{(0)} \\ f_1^{(1)} & \dots & f_n^{(1)} \\ \vdots & & \vdots \\ f_1^{(n-1)} & \dots & f_n^{(n-1)} \end{matrix} \right| \end{align} If the Wronskian is non-zero for some $t$, then the functions are linearly independent. (The converse holds for solutions of a common linear ODE; for arbitrary functions, a vanishing Wronskian does not imply linear dependence.)[16] For instance, the Wronskian of the functions $f_1(t) = \sin(t)$ and $f_2(t) = \cos(t)$ \begin{align} W(t) &= \left| \begin{matrix} \sin( t) & \cos( t) \\ \cos( t) & - \sin( t) \\ \end{matrix} \right| \\[1ex] &= - \sin^2(t) - \cos^2(t) = - 1 \end{align} is equal to $-1$. Hence, the trigonometric functions $\sin(t)$ and $\cos(t)$ are linearly independent.
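The same check can be carried out symbolically; a minimal sketch using SymPy's built-in `wronskian` helper:

```python
import sympy as sp

t = sp.symbols('t')
# Wronskian of sin(t) and cos(t); sp.wronskian builds the determinant above.
W = sp.wronskian([sp.sin(t), sp.cos(t)], t)
print(sp.simplify(W))   # -1  ->  sin(t) and cos(t) are linearly independent
```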

Example: Second-Order ODE

Let us illustrate the previous results by means of the following initial value problem: \begin{align} \ddot{x}(t) + x(t) = 0 \qquad x(0) = \eta_0 \quad \dot{x}(0) = \eta_1 \label{eq:example} \end{align} This is a linear homogeneous ODE of order two ($n=2$). According to the existence and uniqueness theorem, one needs to specify two initial values (e.g. $x(0) = \eta_0$ and $\dot{x}(0) = \eta_1$) in order to obtain a unique solution.

Equation \eqref{eq:example} is solved by any function $x(t)$ that is equal to minus its own second derivative. After a short contemplation one may find that the trigonometric functions $\sin(t)$ and $\cos(t)$ meet this requirement and are therefore possible candidates (which you can verify by insertion). Each of them is a particular solution of eq. \eqref{eq:example} for one specific vector of initial conditions: \begin{align} x(t) = \sin(t): \qquad &\begin{pmatrix} x(0) \\ \dot{x}(0) \end{pmatrix} = \begin{pmatrix} \sin(0) \\ \cos(0) \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \\[1ex] x(t) = \cos(t): \qquad &\begin{pmatrix} x(0) \\ \dot{x}(0) \end{pmatrix} = \begin{pmatrix} \cos(0) \\ -\sin(0) \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \end{pmatrix} \end{align} As calculated in the previous section, these are two linearly independent solutions, and they correspond to two linearly independent vectors of initial conditions. According to the corollary, the general solution of the differential equation is obtained by a linear combination of the two linearly independent particular solutions: \begin{align} x(t) = \alpha_1 \sin(t) + \alpha_2 \cos(t) \end{align} The free parameters $\alpha_1$ and $\alpha_2$ can then be set to match a particular pair of initial conditions \begin{align} x(0) = \alpha_2 := \eta_0 \\ \dot{x}(0) = \alpha_1 := \eta_1 \end{align} such that the general solution is given by: \begin{align} x(t) = \eta_1 \sin(t) + \eta_0 \cos(t) \end{align}
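As a cross-check, the whole initial value problem can also be handed to a computer algebra system; a sketch with SymPy's `dsolve`, where the symbols `eta0` and `eta1` stand for the initial values:

```python
import sympy as sp

t, eta0, eta1 = sp.symbols('t eta0 eta1')
x = sp.Function('x')

# Solve x'' + x = 0 with x(0) = eta0, x'(0) = eta1.
sol = sp.dsolve(x(t).diff(t, 2) + x(t),
                ics={x(0): eta0, x(t).diff(t).subs(t, 0): eta1})
print(sol)   # x(t) = eta0*cos(t) + eta1*sin(t)
```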

References

[1] Wolfgang Walter: Gewöhnliche Differentialgleichungen. Springer, 2000 (§ 2).
[2] K. T. Tang: Mathematical Methods for Engineers and Scientists 2. Springer, 2007 (ch. 5.3).
[3] Wolfgang Walter: Gewöhnliche Differentialgleichungen. Springer, 2000 (§ 2).
[4] Wolfgang Walter: Gewöhnliche Differentialgleichungen. Springer, 2000 (§ 19).
[5] K. T. Tang: Mathematical Methods for Engineers and Scientists 2. Springer, 2007 (ch. 5.8.4).
[6] Günther J. Wirsching: Gewöhnliche Differentialgleichungen. Teubner, 2006 (ch. 5.1).
[7] Christian B. Lang, Norbert Pucker: Mathematische Methoden in der Physik. Springer Spektrum, 2016 (ch. 6.4.1).
[8] Wolfgang Walter: Gewöhnliche Differentialgleichungen. Springer, 2000 (§ 12.1).
[9] Wolfgang Walter: Gewöhnliche Differentialgleichungen. Springer, 2000 (§ 19.I).
[10] Günther J. Wirsching: Gewöhnliche Differentialgleichungen. Teubner, 2006 (ch. 5.2).
[11] Christian B. Lang, Norbert Pucker: Mathematische Methoden in der Physik. Springer Spektrum, 2016 (ch. 6.3.1).
[12] Wolfgang Walter: Gewöhnliche Differentialgleichungen. Springer, 2000 (§ 19.II).
[13] Wolfgang Walter: Gewöhnliche Differentialgleichungen. Springer, 2000 (§ 15.I).
[14] Günther J. Wirsching: Gewöhnliche Differentialgleichungen. Teubner, 2006 (ch. 5.2).
[15] Wolfgang Walter: Gewöhnliche Differentialgleichungen. Springer, 2000 (§ 15).
[16] Wolfgang Walter: Gewöhnliche Differentialgleichungen. Springer, 2000 (§ 15.III).
