Method of averaging

Summary

In mathematics, more specifically in dynamical systems, the method of averaging (also called averaging theory) exploits systems in which there is a separation of time scales: a fast oscillation versus a slow drift. It suggests that we average the dynamics over a suitable amount of time in order to iron out the fast oscillations and observe the qualitative behavior of the resulting slower dynamics. The approximate solution is valid on a finite time interval whose length is inversely proportional to the small parameter that sets the slow time scale. This is the customary trade-off of the subject: the accuracy of the approximate solution is balanced against the length of time over which it stays close to the original solution.

More precisely, the system has the form

$$\dot{x} = \varepsilon f^{1}(x,t) + \varepsilon^{2} f^{[2]}(x,t,\varepsilon), \qquad 0 < \varepsilon \ll 1,$$

of a phase space variable $x$. The fast oscillation is given by $f^{1}(x,t)$ versus the slow drift of $\dot{x}$. The averaging method yields an autonomous dynamical system

$$\dot{y} = \varepsilon\,\frac{1}{T}\int_{0}^{T} f^{1}(y,s)\,ds =: \varepsilon\,\bar{f}^{\,1}(y),$$

which approximates the solution curves of $\dot{x}$ inside a connected and compact region of the phase space and over a time interval of order $1/\varepsilon$.

Under the validity of this averaging technique, the asymptotic behavior of the original system is captured by the dynamical equation for $y$. In this way, qualitative methods for autonomous dynamical systems may be employed to analyze the equilibria and more complex structures, such as slow manifolds and invariant manifolds, as well as their stability in the phase space of the averaged system.

In addition, in a physical application it might be reasonable or natural to replace a mathematical model, which is given in the form of the differential equation for $x$, with the corresponding averaged system for $y$, in order to use the averaged system to make a prediction and then test the prediction against the results of a physical experiment.[1]

The averaging method has a long history, which is deeply rooted in perturbation problems that arose in celestial mechanics (see, for example, [2]).

First example

 
Figure 1: Solution to the perturbed logistic growth equation $\dot{x} = \varepsilon\big(x(1-x) + \sin t\big)$ (blue solid line) and the averaged equation $\dot{y} = \varepsilon\, y(1-y)$ (orange solid line).

Consider a perturbed logistic growth

$$\dot{x} = \varepsilon\big(x(1-x) + \sin t\big), \qquad x(0) = x_{0}, \qquad 0 < \varepsilon \ll 1,$$

and the averaged equation

$$\dot{y} = \varepsilon\, y(1-y), \qquad y(0) = x_{0}.$$

The purpose of the method of averaging is to tell us the qualitative behavior of the vector field when we average it over a period of time. It guarantees that the solution $y(t)$ approximates $x(t)$ for times of order $1/\varepsilon$. Exceptionally, in this example the approximation is even better: it is valid for all times. We present this in a section below.
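A minimal numerical sketch of this comparison in Python (the value of $\varepsilon$, the initial condition and the solver tolerances are arbitrary choices made only for the illustration):

    import numpy as np
    from scipy.integrate import solve_ivp

    eps = 0.05          # small parameter
    x0 = 0.1            # shared initial condition
    t_end = 1.0 / eps   # time horizon of order 1/eps

    # Original (perturbed) and averaged vector fields.
    def perturbed(t, x):
        return eps * (x * (1.0 - x) + np.sin(t))

    def averaged(t, y):
        return eps * y * (1.0 - y)

    t_eval = np.linspace(0.0, t_end, 2000)
    sol_x = solve_ivp(perturbed, (0.0, t_end), [x0], t_eval=t_eval, rtol=1e-9, atol=1e-12)
    sol_y = solve_ivp(averaged, (0.0, t_end), [x0], t_eval=t_eval, rtol=1e-9, atol=1e-12)

    # The averaging theorem below predicts an O(eps) discrepancy on this interval.
    print("max |x - y| on [0, 1/eps]:", np.max(np.abs(sol_x.y[0] - sol_y.y[0])))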

Definitions

We assume the vector field $f$ to be of differentiability class $C^{r}$ with $r \geq 1$ (or we simply say smooth), which we denote by $f \in C^{r}(\mathbb{R}^{n}\times\mathbb{R}\times\mathbb{R}^{+};\,\mathbb{R}^{n})$. We expand this time-dependent vector field in a Taylor series (in powers of $\varepsilon$) with remainder $f^{[k+1]}$. We introduce the following notation:[2]

$$f(x,t,\varepsilon) = f^{0}(x,t) + \varepsilon f^{1}(x,t) + \cdots + \varepsilon^{k} f^{k}(x,t) + \varepsilon^{k+1} f^{[k+1]}(x,t,\varepsilon),$$

where $f^{j}(x,t) = \dfrac{1}{j!}\,\dfrac{\partial^{j} f}{\partial \varepsilon^{j}}(x,t,0)$ is the $j$-th Taylor coefficient, with $0 \leq j \leq k$. As we are concerned with averaging problems, in general $f^{0}$ is zero, so it turns out that we will be interested in vector fields given by

$$f(x,t,\varepsilon) = \varepsilon f^{1}(x,t) + \varepsilon^{2} f^{[2]}(x,t,\varepsilon).$$

In addition, we define the following initial value problem to be in the standard form:[2]

$$\dot{x} = \varepsilon f^{1}(x,t) + \varepsilon^{2} f^{[2]}(x,t,\varepsilon), \qquad x(0) = x_{0}.$$

Theorem: averaging in the periodic case

Consider, for every connected and bounded $D \subseteq \mathbb{R}^{n}$ and every $x_{0} \in D$, the original system (a non-autonomous dynamical system) given by

$$\dot{x} = \varepsilon f^{1}(x,t) + \varepsilon^{2} f^{[2]}(x,t,\varepsilon), \qquad x(0) = x_{0},$$

with solution $x(t,\varepsilon)$, where $f^{1}$ is $T$-periodic in $t$ with period $T > 0$ and $f^{1}$, $f^{[2]}$ are both sufficiently smooth and bounded on bounded sets. Then there exist constants $\varepsilon_{0} > 0$, $L > 0$ and $c > 0$ such that the solution $y(t,\varepsilon)$ of the averaged system (an autonomous dynamical system)

$$\dot{y} = \varepsilon\,\bar{f}^{\,1}(y) := \frac{\varepsilon}{T}\int_{0}^{T} f^{1}(y,s)\,ds, \qquad y(0) = x_{0},$$

satisfies

$$\|x(t,\varepsilon) - y(t,\varepsilon)\| < c\,\varepsilon$$

for all $0 < \varepsilon \leq \varepsilon_{0}$ and $0 \leq t \leq \dfrac{L}{\varepsilon}$.
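The estimate can be observed numerically. The following Python sketch (the constant $L = 1$, the initial condition and the tolerances are arbitrary choices) measures the supremum of $|x(t) - y(t)|$ over $0 \leq t \leq L/\varepsilon$ for the perturbed logistic equation of the first example; the ratio of the error to $\varepsilon$ should stay roughly constant as $\varepsilon$ decreases, in line with the bound $c\,\varepsilon$.

    import numpy as np
    from scipy.integrate import solve_ivp

    def sup_error(eps, x0=0.1, L=1.0):
        # Integrate the original and the averaged equation on [0, L/eps]
        # and return the largest pointwise deviation between them.
        t_end = L / eps
        t_eval = np.linspace(0.0, t_end, 4000)
        f = lambda t, x: eps * (x * (1.0 - x) + np.sin(t))   # original vector field
        f_bar = lambda t, y: eps * y * (1.0 - y)             # averaged vector field
        x = solve_ivp(f, (0.0, t_end), [x0], t_eval=t_eval, rtol=1e-10, atol=1e-12).y[0]
        y = solve_ivp(f_bar, (0.0, t_end), [x0], t_eval=t_eval, rtol=1e-10, atol=1e-12).y[0]
        return np.max(np.abs(x - y))

    for eps in [0.1, 0.05, 0.025, 0.0125]:
        err = sup_error(eps)
        print(f"eps = {eps:7.4f}   sup error = {err:.3e}   error/eps = {err / eps:.3f}")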

Remarks

  • There are two approximations in this so-called first approximation estimate: reduction to the average of the vector field and neglect of the $\mathcal{O}(\varepsilon^{2})$ terms.
  • Uniformity with respect to the initial condition $x_{0}$: if we vary $x_{0}$, this affects the estimates of $L$ and $c$. The proof and a discussion of this can be found in J. Murdock's book.[3]
  • Reduction of regularity: there is a more general form of this theorem which requires only $f^{1}$ to be Lipschitz and $f^{[2]}$ continuous. It is a more recent proof and can be found in Sanders et al.[2] The theorem statement presented here follows the proof framework proposed by Krylov and Bogoliubov, which is based on the introduction of a near-identity transformation. The advantage of this method is its extension to more general settings, such as infinite-dimensional systems (partial differential equations or delay differential equations).
  • J. Hale presents generalizations to almost periodic vector fields.[4]

Strategy of the proof

Krylov and Bogoliubov realized that the slow dynamics of the system determine the leading order of the asymptotic solution.

In order to prove this, they proposed a near-identity transformation, which turned out to be a change of coordinates with its own time scale, transforming the original system into the averaged one.

Sketch of the proof

  1. Determination of a near-identity transformation: the smooth mapping $U(y,t) = y + \varepsilon u^{1}(y,t)$, where $u^{1}$ is assumed to be regular enough and $T$-periodic in $t$. The proposed change of coordinates is given by $x = y + \varepsilon u^{1}(y,t)$.
  2. Choose an appropriate $u^{1}$ solving the homological equation of the averaging theory: $\dfrac{\partial u^{1}}{\partial t}(y,t) = f^{1}(y,t) - \bar{f}^{\,1}(y)$ (a worked version of this step is sketched after the list).
  3. The change of coordinates carries the original system to $\dot{y} = \varepsilon\,\bar{f}^{\,1}(y) + \varepsilon^{2} f^{[2]}_{*}(y,t,\varepsilon)$, where $f^{[2]}_{*}$ collects the higher-order remainder terms.
  4. Estimation of the error due to the truncation and comparison to the original variable.
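A brief worked version of steps 1–3, in the notation above (a sketch under the assumption that $f^{1}$ is $T$-periodic and sufficiently regular, not a complete proof). An explicit choice solving the homological equation is

$$u^{1}(y,t) := \int_{0}^{t} \big( f^{1}(y,s) - \bar{f}^{\,1}(y) \big)\, ds,$$

which is $T$-periodic in $t$ because the integrand has zero mean over one period. Substituting $x = y + \varepsilon u^{1}(y,t)$ into the original equation gives

$$\dot{y} + \varepsilon\, D_{y}u^{1}(y,t)\,\dot{y} + \varepsilon\,\partial_{t}u^{1}(y,t) = \varepsilon f^{1}\big(y + \varepsilon u^{1}(y,t), t\big) + \varepsilon^{2} f^{[2]}(x,t,\varepsilon),$$

and since $f^{1}(y + \varepsilon u^{1}, t) = f^{1}(y,t) + \mathcal{O}(\varepsilon)$ and $(I + \varepsilon D_{y}u^{1})^{-1} = I + \mathcal{O}(\varepsilon)$, we obtain

$$\dot{y} = \varepsilon\big( f^{1}(y,t) - \partial_{t}u^{1}(y,t) \big) + \mathcal{O}(\varepsilon^{2}) = \varepsilon\,\bar{f}^{\,1}(y) + \mathcal{O}(\varepsilon^{2}),$$

that is, the oscillating part of the vector field is removed at first order, leaving the averaged system plus an $\mathcal{O}(\varepsilon^{2})$ remainder.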

Non-autonomous class of systems: more examples

Throughout the history of the averaging technique, there is a class of systems that has been extensively studied and that gives us the meaningful examples discussed below. The class of systems is given by

$$\ddot{x} + x = \varepsilon\, g(x, \dot{x}, t),$$

where $g$ is smooth. This system is similar to a linear system (a harmonic oscillator) with a small nonlinear perturbation given by $\varepsilon\, g$:

$$\dot{x}_{1} = x_{2}, \qquad \dot{x}_{2} = -x_{1} + \varepsilon\, g(x_{1}, x_{2}, t),$$

which differs from the standard form. Hence a transformation is needed to put it explicitly into the standard form.[2] We are able to change coordinates using the variation of constants method. We look at the unperturbed system, i.e. $\varepsilon = 0$, given by

$$\dot{x}_{1} = x_{2}, \qquad \dot{x}_{2} = -x_{1},$$

which has the fundamental solution $\Phi(t) = \begin{pmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{pmatrix}$ corresponding to a rotation. The time-dependent change of coordinates is then $x = \Phi(t)\, y$, where $y$ denotes the coordinates with respect to the standard form.

If we take the time derivative on both sides and invert the fundamental matrix we obtain

$$\dot{y} = \varepsilon\, \Phi^{-1}(t) \begin{pmatrix} 0 \\ g\big(\Phi(t) y,\, t\big) \end{pmatrix} = \varepsilon \begin{pmatrix} -\sin t \\ \cos t \end{pmatrix} g\big(y_{1}\cos t + y_{2}\sin t,\; -y_{1}\sin t + y_{2}\cos t,\; t\big).$$
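In particular, when $g$ is $2\pi$-periodic in $t$ (an assumption made here purely for the sake of illustration; more general time dependence is treated in Sanders et al.[2]), this system is in the standard form, and the corresponding averaged system reads

$$\dot{\bar{y}} = \varepsilon\,\bar{f}^{\,1}(\bar{y}), \qquad \bar{f}^{\,1}(\bar{y}) = \frac{1}{2\pi}\int_{0}^{2\pi} \begin{pmatrix} -\sin s \\ \cos s \end{pmatrix} g\big(\bar{y}_{1}\cos s + \bar{y}_{2}\sin s,\; -\bar{y}_{1}\sin s + \bar{y}_{2}\cos s,\; s\big)\, ds.$$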

Remarks

  • The same can be done for time-dependent linear parts. Although the fundamental solution may be non-trivial to write down explicitly, the procedure is similar. See Sanders et al.[2] for further details.
  • If the eigenvalues of the linear part are not all purely imaginary, this is called the hyperbolicity condition. In this case, the perturbation equation may present serious problems even when $g$ is bounded, since the solution grows exponentially fast.[2] However, we may still be able to determine the asymptotic behavior of the solution qualitatively, for instance via Hartman–Grobman type results.[1]
  • Occasionally, polar coordinates may yield standard forms that are simpler to analyze. Consider $x_{1} = r\sin(t+\phi)$, $x_{2} = r\cos(t+\phi)$, which determines the initial condition $(r_{0}, \phi_{0})$ and the system

      $$\dot{r} = \varepsilon\,\cos(t+\phi)\; g\big(r\sin(t+\phi),\, r\cos(t+\phi),\, t\big), \qquad \dot{\phi} = -\frac{\varepsilon}{r}\,\sin(t+\phi)\; g\big(r\sin(t+\phi),\, r\cos(t+\phi),\, t\big).$$

If $g$ is $2\pi$-periodic in $t$, we may apply averaging so long as a neighborhood of the origin is excluded (since the polar coordinates fail at $r = 0$):

$$\bar{g}_{1}(r,\phi) = \frac{1}{2\pi}\int_{0}^{2\pi} \cos(s+\phi)\; g\big(r\sin(s+\phi),\, r\cos(s+\phi),\, s\big)\, ds, \qquad \bar{g}_{2}(r,\phi) = \frac{1}{2\pi}\int_{0}^{2\pi} \sin(s+\phi)\; g\big(r\sin(s+\phi),\, r\cos(s+\phi),\, s\big)\, ds,$$

where the averaged system is

$$\dot{\bar{r}} = \varepsilon\,\bar{g}_{1}(\bar{r}, \bar{\phi}), \qquad \dot{\bar{\phi}} = -\frac{\varepsilon}{\bar{r}}\,\bar{g}_{2}(\bar{r}, \bar{\phi}).$$

A worked instance of this polar standard form is sketched below.
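For instance, for the Duffing-type perturbation $g(x,\dot{x},t) = x^{3}$, i.e. $\ddot{x} + x = \varepsilon x^{3}$ (a standard illustration chosen here as a worked instance, not taken from the references), the averages above give

$$\dot{\bar{r}} = 0, \qquad \dot{\bar{\phi}} = -\frac{3}{8}\,\varepsilon\,\bar{r}^{2},$$

so, to first order, the amplitude is conserved and the perturbation only produces the familiar amplitude-dependent frequency shift.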

Example: Misleading averaging results

 
Figure 2: A simple harmonic oscillator with a small periodic damping term, given by $2\varepsilon\cos^{2}(t)\,\dot{x}$. The numerical simulation of the original equation (blue solid line) is compared with the averaged system (orange dashed line) and the crude averaged system (green dash-dotted line). The left plot displays the solution evolved in time and the right plot represents the phase space. Note that the crude averaging disagrees with the expected solution.

The method involves several assumptions and restrictions. These limitations play an important role when we average an original equation which is not in the standard form, and we can discuss a counterexample of this. The following example illustrates why such hurried averaging is discouraged:[2]

$$\ddot{x} + 2\varepsilon\cos^{2}(t)\,\dot{x} + x = 0,$$

where we put $g(x,\dot{x},t) = -2\,\dot{x}\cos^{2} t$ following the previous notation.

This system corresponds to a damped harmonic oscillator where the damping term oscillates between $0$ and $2\varepsilon$. Averaging the friction term over one cycle of $2\pi$ yields the (crude) equation

$$\ddot{y} + \varepsilon\,\dot{y} + y = 0.$$

The solution is

$$y(t) = R\, e^{-\frac{\varepsilon}{2} t}\,\sin\!\Big(\sqrt{1 - \tfrac{\varepsilon^{2}}{4}}\; t + \varphi\Big),$$

whose rate of convergence to the origin is $e^{-\frac{\varepsilon}{2} t}$. The averaged system obtained from the standard form (in the rotating coordinates $x = y_{1}\cos t + y_{2}\sin t$, $\dot{x} = -y_{1}\sin t + y_{2}\cos t$) instead yields

$$\dot{y}_{1} = -\frac{\varepsilon}{4}\, y_{1}, \qquad \dot{y}_{2} = -\frac{3\varepsilon}{4}\, y_{2},$$

which in the rectangular coordinates shows explicitly that the rate of convergence to the origin is $e^{-\frac{\varepsilon}{4} t}$, differing from the previous crude averaged system:

$$x(t) \approx y_{1}(0)\, e^{-\frac{\varepsilon}{4} t}\cos t + y_{2}(0)\, e^{-\frac{3\varepsilon}{4} t}\sin t.$$
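The discrepancy is easy to reproduce numerically. The following Python sketch (parameter values and tolerances are arbitrary choices) integrates the original equation, the crude averaged equation and the prediction of the averaged standard form from the same initial condition $x(0) = 1$, $\dot{x}(0) = 0$, and compares the late-time amplitudes:

    import numpy as np
    from scipy.integrate import solve_ivp

    eps = 0.05
    t_end = 4.0 / eps
    t_eval = np.linspace(0.0, t_end, 20000)

    def original(t, u):               # x'' + 2*eps*cos(t)^2*x' + x = 0
        x, v = u
        return [v, -x - 2.0 * eps * np.cos(t) ** 2 * v]

    def crude(t, u):                  # y'' + eps*y' + y = 0
        y, w = u
        return [w, -y - eps * w]

    u0 = [1.0, 0.0]
    sol_orig = solve_ivp(original, (0.0, t_end), u0, t_eval=t_eval, rtol=1e-10, atol=1e-12)
    sol_crude = solve_ivp(crude, (0.0, t_end), u0, t_eval=t_eval, rtol=1e-10, atol=1e-12)

    # Averaged standard form: y1(t) = exp(-eps*t/4), y2(t) = 0 for these initial data,
    # so x(t) is approximately exp(-eps*t/4)*cos(t).
    x_avg = np.exp(-eps * t_eval / 4.0) * np.cos(t_eval)

    def amp(x):                       # late-time envelope estimate
        return np.max(np.abs(x[-2000:]))

    print("late-time amplitude, original               :", amp(sol_orig.y[0]))
    print("late-time amplitude, averaged standard form :", amp(x_avg))
    print("late-time amplitude, crude averaging        :", amp(sol_crude.y[0]))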

Example: Van der Pol Equation

 
Figure 3: Phase space of a Van der Pol oscillator for a small value of $\varepsilon$. The stable limit cycle (orange solid line) in the system is captured correctly by the qualitative analysis of the averaged system. For two different initial conditions (black dots) we observe the trajectories (dashed blue line) converging to the periodic orbit.

Van der Pol was concerned with obtaining approximate solutions for equations of the type

$$\ddot{x} + \varepsilon\,(x^{2} - 1)\,\dot{x} + x = 0,$$

where $0 < \varepsilon \ll 1$ and $g(x,\dot{x},t) = (1 - x^{2})\,\dot{x}$ following the previous notation. This system is often called the Van der Pol oscillator. Applying periodic averaging to this nonlinear oscillator provides qualitative knowledge of the phase space without solving the system explicitly.

The averaged system is

$$\dot{\bar{r}} = \frac{\varepsilon\,\bar{r}}{2}\Big(1 - \frac{\bar{r}^{2}}{4}\Big), \qquad \dot{\bar{\phi}} = 0,$$

and we can analyze the fixed points and their stability: there is an unstable fixed point at the origin and a stable limit cycle represented by $\bar{r} = 2$.
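A short Python sketch (with arbitrarily chosen $\varepsilon$, initial condition and tolerances) comparing the amplitude $\sqrt{x^{2} + \dot{x}^{2}}$ of a simulated Van der Pol oscillator with the solution of the averaged radial equation; both approach the predicted limit-cycle amplitude $2$:

    import numpy as np
    from scipy.integrate import solve_ivp

    eps = 0.1
    t_end = 10.0 / eps
    t_eval = np.linspace(0.0, t_end, 20000)

    def van_der_pol(t, u):            # x'' + eps*(x^2 - 1)*x' + x = 0
        x, v = u
        return [v, -x - eps * (x ** 2 - 1.0) * v]

    def averaged_radius(t, r):        # dr/dt = (eps/2)*r*(1 - r^2/4)
        return 0.5 * eps * r * (1.0 - r ** 2 / 4.0)

    u0 = [0.5, 0.0]                   # initial amplitude r(0) = 0.5
    sol = solve_ivp(van_der_pol, (0.0, t_end), u0, t_eval=t_eval, rtol=1e-10, atol=1e-12)
    r_avg = solve_ivp(averaged_radius, (0.0, t_end), [0.5], t_eval=t_eval,
                      rtol=1e-10, atol=1e-12)

    r_full = np.sqrt(sol.y[0] ** 2 + sol.y[1] ** 2)
    print("final amplitude, full simulation :", r_full[-1])      # close to 2
    print("final amplitude, averaged system :", r_avg.y[0][-1])  # tends to 2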

The existence of such a stable limit cycle can be stated as a theorem.

Theorem (Existence of a periodic orbit)[5]: If $p_{0}$ is a hyperbolic fixed point of

$$\dot{y} = \varepsilon\,\bar{f}^{\,1}(y),$$

then there exists $\varepsilon_{0} > 0$ such that for all $0 < \varepsilon \leq \varepsilon_{0}$,

$$\dot{x} = \varepsilon f^{1}(x,t) + \varepsilon^{2} f^{[2]}(x,t,\varepsilon)$$

has a unique hyperbolic $T$-periodic orbit $\gamma_{\varepsilon}(t) = p_{0} + \mathcal{O}(\varepsilon)$ of the same stability type as $p_{0}$.

The proof can be found in Guckenheimer and Holmes,[5] in Sanders et al.,[2] and, for the angle case, in Chicone.[1]

Example: Restricting the time interval

 
Figure 4: The plot depicts two fundamental quantities the averaging technique is based on: the bounded and connected region $D$ of the phase space and the constant $L$ that determines for how long the averaged solution is valid. Note that both solutions blow up in finite time. Hence, $D$ has been chosen accordingly in order to maintain the boundedness of the solution, and the time interval of validity of the approximation is $0 \leq t \leq \frac{L}{\varepsilon}$.

The averaging theorem assumes the existence of a connected and bounded region $D$, which affects the time interval $\frac{L}{\varepsilon}$ on which the result is valid. The following example points this out. Consider

$$\dot{x} = \varepsilon\, x^{2}\,(1 + \sin t), \qquad x(0) = 1,$$

where $0 < \varepsilon \ll 1$. The averaged system consists of

$$\dot{y} = \varepsilon\, y^{2}, \qquad y(0) = 1,$$

which under this initial condition indicates that the original solution behaves like

$$x(t) \approx \frac{1}{1 - \varepsilon t},$$

where it holds on a bounded region over $0 \leq t \leq \frac{L}{\varepsilon}$, as made explicit below.
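Indeed, integrating the averaged equation above gives

$$y(t) = \frac{1}{1 - \varepsilon t},$$

which blows up at $t = 1/\varepsilon$. Keeping the solutions inside a bounded region $D$ therefore forces the constant $L$ in the time estimate to satisfy $L < 1$: the approximation can only be claimed for $0 \leq t \leq \frac{L}{\varepsilon}$ with $L$ strictly below the blow-up constant.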

Damped Pendulum

Consider a damped pendulum whose point of suspension is vibrated vertically by a small amplitude, high frequency signal (this is usually known as dithering). The equation of motion for such a pendulum is given by

$$m\ell^{2}\,\ddot{\theta} + c\,\dot{\theta} + m\ell\,\big(g - a k^{2}\sin(kt)\big)\sin\theta = 0,$$

where $a\sin(kt)$ describes the vertical motion of the suspension point, $c$ describes the damping of the pendulum, and $\theta$ is the angle made by the pendulum with the vertical.

The phase space form of this equation is given by

$$\dot{\theta} = \Omega, \qquad \dot{\Omega} = -\frac{c}{m\ell^{2}}\,\Omega - \frac{1}{\ell}\big(g - a k^{2}\sin(kt)\big)\sin\theta, \qquad \dot{t} = 1,$$

where we have introduced the variable $\Omega = \dot{\theta}$ and written the system as an autonomous, first-order system in $(\theta, \Omega, t)$-space.

Suppose that the angular frequency of the vertical vibrations, $k$, is much greater than the natural frequency of the pendulum, $\sqrt{g/\ell}$. Suppose also that the amplitude of the vertical vibrations, $a$, is much less than the length $\ell$ of the pendulum. The pendulum's trajectory in phase space will trace out a spiral around a curve $C$, moving along $C$ at the slow rate $\sqrt{g/\ell}$ but moving around it at the fast rate $k$. The radius of the spiral around $C$ will be small and proportional to $a/\ell$. The average behaviour of the trajectory, over a timescale much larger than $2\pi/k$, will be to follow the curve $C$.
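A Python sketch of this behaviour (all parameter values, including the dithering amplitude and frequency, are arbitrary choices satisfying $a \ll \ell$ and $k \gg \sqrt{g/\ell}$): the simulated angle wiggles rapidly, with a small amplitude of order $a/\ell$, around a smoothed slow curve.

    import numpy as np
    from scipy.integrate import solve_ivp

    g, ell, m = 9.81, 1.0, 1.0     # gravity, pendulum length, mass
    c = 0.05                       # damping coefficient
    a, k = 0.02, 60.0              # dithering amplitude a << ell, frequency k >> sqrt(g/ell)

    def pendulum(t, u):
        # m*ell^2*theta'' + c*theta' + m*ell*(g - a*k^2*sin(k*t))*sin(theta) = 0
        theta, omega = u
        return [omega,
                -(c / (m * ell ** 2)) * omega
                - ((g - a * k ** 2 * np.sin(k * t)) / ell) * np.sin(theta)]

    t_end = 20.0
    t_eval = np.linspace(0.0, t_end, 100000)   # fine grid to resolve the fast scale
    sol = solve_ivp(pendulum, (0.0, t_end), [0.5, 0.0], t_eval=t_eval,
                    rtol=1e-8, atol=1e-10, max_step=2 * np.pi / (20 * k))

    # A crude moving average over one dithering period 2*pi/k approximates the slow
    # curve around which the trajectory spirals.
    window = max(1, int(len(t_eval) * (2 * np.pi / k) / t_end))
    theta_slow = np.convolve(sol.y[0], np.ones(window) / window, mode="same")
    wiggle = np.max(np.abs(sol.y[0] - theta_slow)[window:-window])
    print("size of the fast wiggle (rad):", wiggle)
    print("a/ell for comparison         :", a / ell)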

Extension of error estimates

The averaging technique for initial value problems has been treated up to now with validity error estimates of order $\varepsilon$ on a time scale of order $1/\varepsilon$. However, there are circumstances where the estimates can be extended to longer times, even to all times in some cases.[2] Below we deal with a system containing an asymptotically stable fixed point. Such a situation recapitulates what is illustrated in Figure 1.

Theorem (Eckhaus,[6] Sanchez-Palencia[7]): Consider the initial value problem

$$\dot{x} = \varepsilon f^{1}(x,t), \quad x(0) = x_{0}, \qquad\qquad \dot{y} = \varepsilon\,\bar{f}^{\,1}(y), \quad y(0) = x_{0}.$$

Suppose

$$\bar{f}^{\,1}(y) = \lim_{T\to\infty} \frac{1}{T}\int_{0}^{T} f^{1}(y,s)\, ds$$

exists and that the averaged system contains an asymptotically stable fixed point $p$ in the linear approximation. Moreover, $f^{1}$ is continuously differentiable with respect to $x$ in $D$, and $p$ has a domain of attraction $D^{0} \subset D$. Then for any compact $K \subset D^{0}$ and for all $x_{0} \in K$,

$$x(t) - y(t) = \mathcal{O}\big(\delta(\varepsilon)\big) \quad \text{uniformly for } 0 \leq t < \infty,$$

with $\delta(\varepsilon) = o(1)$ as $\varepsilon \to 0$ in the general case and $\delta(\varepsilon) = \mathcal{O}(\varepsilon)$ in the periodic case.
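This statement explains the behaviour announced in the first example. For the averaged logistic equation $\dot{y} = \varepsilon\, y(1-y)$, the fixed point $y = 1$ is asymptotically stable in the linear approximation, since

$$\frac{d}{dy}\big(y(1-y)\big)\Big|_{y=1} = -1 < 0,$$

and its domain of attraction is $(0,\infty)$. Hence, for initial conditions in a compact subset of $(0,\infty)$, the averaged solution approximates the solution of the perturbed logistic equation with an error of order $\varepsilon$ uniformly for all $t \geq 0$ (the periodic case), which is the all-time validity observed in Figure 1.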

References

  1. Chicone, Carmen Charles (2006). Ordinary Differential Equations with Applications (2nd ed.). New York: Springer. ISBN 9780387307695. OCLC 288193020.
  2. Sanders, Jan A.; Verhulst, Ferdinand; Murdock, James (2007). Averaging Methods in Nonlinear Dynamical Systems. Applied Mathematical Sciences. Vol. 59. doi:10.1007/978-0-387-48918-6. ISBN 978-0-387-48916-2.
  3. Murdock, James A. (1999). Perturbations: Theory and Methods. Philadelphia: Society for Industrial and Applied Mathematics. ISBN 978-0898714432. OCLC 41612407.
  4. Hale, Jack K. (1980). Ordinary Differential Equations (2nd ed.). Huntington, N.Y.: R.E. Krieger Pub. Co. ISBN 978-0898740110. OCLC 5170595.
  5. Guckenheimer, John; Holmes, Philip (1983). Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Applied Mathematical Sciences. Vol. 42. doi:10.1007/978-1-4612-1140-2. ISBN 978-1-4612-7020-1. ISSN 0066-5452.
  6. Eckhaus, Wiktor (1975). "New approach to the asymptotic theory of nonlinear oscillations and wave-propagation". Journal of Mathematical Analysis and Applications. 49 (3): 575–611. doi:10.1016/0022-247X(75)90200-0. ISSN 0022-247X.
  7. Sanchez-Palencia, Enrique (1976). "Méthode de centrage – estimation de l'erreur et comportement des trajectoires dans l'espace des phases" [Centering method – error estimation and behavior of trajectories in phase space]. International Journal of Non-Linear Mechanics. 11 (4): 251–263. Bibcode:1976IJNLM..11..251S. doi:10.1016/0020-7462(76)90004-4. ISSN 0020-7462.