The **calculus of variations** (or **variational calculus**) is a field of mathematical analysis that uses variations, which are small changes in functions
and functionals, to find maxima and minima of functionals: mappings from a set of functions to the real numbers.^{[a]} Functionals are often expressed as definite integrals involving functions and their derivatives. Functions that maximize or minimize functionals may be found using the Euler–Lagrange equation of the calculus of variations.

A simple example of such a problem is to find the curve of shortest length connecting two points. If there are no constraints, the solution is a straight line between the points. However, if the curve is constrained to lie on a surface in space, then the solution is less obvious, and possibly many solutions may exist. Such solutions are known as *geodesics*. A related problem is posed by Fermat's principle: light follows the path of shortest optical length connecting two points, which depends upon the material of the medium. One corresponding concept in mechanics is the principle of least/stationary action.

Many important problems involve functions of several variables. Solutions of boundary value problems for the Laplace equation satisfy Dirichlet's principle. Plateau's problem requires finding a surface of minimal area that spans a given contour in space: a solution can often be found by dipping a frame in soapy water. Although such experiments are relatively easy to perform, their mathematical formulation is far from simple: there may be more than one locally minimizing surface, and they may have non-trivial topology.

The calculus of variations may be said to begin with Newton's minimal resistance problem in 1687, followed by the brachistochrone curve problem raised by Johann Bernoulli (1696).^{[2]} It immediately occupied the attention of Jakob Bernoulli and the Marquis de l'Hôpital, but Leonhard Euler first elaborated the subject, beginning in 1733. Lagrange was influenced by Euler's work to contribute significantly to the theory. After Euler saw the 1755 work of the 19-year-old Lagrange, Euler dropped his own partly geometric approach in favor of Lagrange's purely analytic approach and renamed the subject the *calculus of variations* in his 1756 lecture *Elementa Calculi Variationum*.^{[3]}^{[4]}^{[b]}

Legendre (1786) laid down a method, not entirely satisfactory, for the discrimination of maxima and minima. Isaac Newton and Gottfried Leibniz also gave some early attention to the subject.^{[5]} To this discrimination Vincenzo Brunacci (1810), Carl Friedrich Gauss (1829), Siméon Poisson (1831), Mikhail Ostrogradsky (1834), and Carl Jacobi (1837) have been among the contributors. An important general work is that of Sarrus (1842) which was condensed and improved by Cauchy (1844). Other valuable treatises and memoirs have been written by Strauch (1849), Jellett (1850), Otto Hesse (1857), Alfred Clebsch (1858), and Lewis Buffett Carll (1885), but perhaps the most important work of the century is that of Weierstrass. His celebrated course on the theory is epoch-making, and it may be asserted that he was the first to place it on a firm and unquestionable foundation. The 20th and the 23rd Hilbert problems, published in 1900, encouraged further development.^{[5]}

In the 20th century David Hilbert, Oskar Bolza, Gilbert Ames Bliss, Emmy Noether, Leonida Tonelli, Henri Lebesgue and Jacques Hadamard among others made significant contributions.^{[5]} Marston Morse applied calculus of variations in what is now called Morse theory.^{[6]} Lev Pontryagin, Ralph Rockafellar and F. H. Clarke developed new mathematical tools for the calculus of variations in optimal control theory.^{[6]} The dynamic programming of Richard Bellman is an alternative to the calculus of variations.^{[7]}^{[8]}^{[9]}^{[c]}

The calculus of variations is concerned with the maxima or minima (collectively called **extrema**) of functionals. A functional maps functions to scalars, so functionals have been described as "functions of functions." Functionals have extrema with respect to the elements $y$ of a given function space defined over a given domain. A functional $J[y]$ is said to have an extremum at the function $f$ if $\Delta J = J[y] - J[f]$ has the same sign for all $y$ in an arbitrarily small neighborhood of $f.$^{[d]} The function $f$ is called an **extremal** function or extremal.^{[e]} The extremum $J[f]$ is called a local maximum if $\Delta J \leq 0$ everywhere in an arbitrarily small neighborhood of $f,$ and a local minimum if $\Delta J \geq 0$ there. For a function space of continuous functions, extrema of corresponding functionals are called **strong extrema** or **weak extrema**, depending on whether the competing functions are required to be close to $f$ only in their values, or in both their values and their first derivatives.^{[11]}

Both strong and weak extrema of functionals are defined for a space of continuous functions, but weak extrema have the additional requirement that the first derivatives of the functions in the space be continuous. Thus a strong extremum is also a weak extremum, but the converse may not hold. Finding strong extrema is more difficult than finding weak extrema.^{[12]} An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation.^{[13]}^{[f]}
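In terms of norms, following the conventions of Gelfand & Fomin, the distinction may be stated as follows: a weak extremum is an extremum with respect to all comparison functions $y$ with
$$ \|y - f\|_1 = \max_x |y(x) - f(x)| + \max_x |y'(x) - f'(x)| < \varepsilon\,, $$
while a strong extremum is an extremum with respect to the larger class of $y$ satisfying only
$$ \|y - f\|_0 = \max_x |y(x) - f(x)| < \varepsilon\,. $$
Since every function within the $\|\cdot\|_1$ ball lies within the $\|\cdot\|_0$ ball, a strong extremum beats more competitors, which is why it is also a weak extremum.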

Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions for which the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation.^{[g]}
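Schematically, the analogy with ordinary calculus can be written as follows, for an integral functional of the form treated in the next section (the Euler–Lagrange form of the functional derivative is derived there):
$$ f'(x) = 0 \quad\text{(functions)} \qquad\longleftrightarrow\qquad \frac{\delta J}{\delta y} = \frac{\partial L}{\partial y} - \frac{d}{dx}\frac{\partial L}{\partial y'} = 0 \quad\text{(functionals)}\,. $$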

Consider the functional
$$ J[y] = \int_{x_1}^{x_2} L\left(x, y(x), y'(x)\right)\,dx\,, $$
where

- $x_1, x_2$ are constants,
- $y(x)$ is twice continuously differentiable, with $y'(x) = \dfrac{dy}{dx}\,,$
- $L\left(x, y(x), y'(x)\right)$ is twice continuously differentiable with respect to its arguments $x,$ $y,$ and $y'.$

If the functional $J[y]$ attains a local minimum at $f,$ and $\eta(x)$ is an arbitrary function that has at least one derivative and vanishes at the endpoints $x_1$ and $x_2,$ then for any number $\varepsilon$ close to 0,
$$ J[f] \leq J[f + \varepsilon\eta]\,. $$

The term $\varepsilon\eta$ is called the **variation** of the function $f$ and is denoted by $\delta f.$^{[1]}^{[h]}

Substituting $f + \varepsilon\eta$ for $y$ in the functional $J[y],$ the result is a function of $\varepsilon,$
$$ \Phi(\varepsilon) = J[f + \varepsilon\eta]\,. $$
Since the functional $J[y]$ has a minimum for $y = f,$ the function $\Phi(\varepsilon)$ has a minimum at $\varepsilon = 0$ and thus,
$$ \Phi'(0) \equiv \left.\frac{d\Phi}{d\varepsilon}\right|_{\varepsilon = 0} = \int_{x_1}^{x_2} \left.\frac{dL}{d\varepsilon}\right|_{\varepsilon = 0} dx = 0\,. $$

Taking the total derivative of $L\left[x, y, y'\right],$ where $y = f + \varepsilon\eta$ and $y' = f' + \varepsilon\eta'$ are considered as functions of $\varepsilon$ rather than $x,$ yields
$$ \frac{dL}{d\varepsilon} = \frac{\partial L}{\partial y}\frac{dy}{d\varepsilon} + \frac{\partial L}{\partial y'}\frac{dy'}{d\varepsilon}\,, $$
and because $\dfrac{dy}{d\varepsilon} = \eta$ and $\dfrac{dy'}{d\varepsilon} = \eta',$
$$ \frac{dL}{d\varepsilon} = \frac{\partial L}{\partial y}\,\eta + \frac{\partial L}{\partial y'}\,\eta'\,. $$

Therefore,
$$ \Phi'(0) = \int_{x_1}^{x_2} \left.\frac{dL}{d\varepsilon}\right|_{\varepsilon = 0} dx = \int_{x_1}^{x_2} \left(\frac{\partial L}{\partial f}\,\eta + \frac{\partial L}{\partial f'}\,\eta'\right) dx = \int_{x_1}^{x_2} \left(\frac{\partial L}{\partial f}\,\eta - \eta\,\frac{d}{dx}\frac{\partial L}{\partial f'}\right) dx + \left.\frac{\partial L}{\partial f'}\,\eta\,\right|_{x_1}^{x_2}\,, $$
where $L\left[x, y, y'\right] \to L\left[x, f, f'\right]$ when $\varepsilon = 0,$ and we have used integration by parts on the second term. The boundary term vanishes because $\eta = 0$ at $x_1$ and $x_2$ by definition, and as previously mentioned the left side of the equation is zero, so that
$$ \int_{x_1}^{x_2} \eta(x) \left(\frac{\partial L}{\partial f} - \frac{d}{dx}\frac{\partial L}{\partial f'}\right) dx = 0\,. $$

According to the fundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e.
$$ \frac{\partial L}{\partial f} - \frac{d}{dx}\frac{\partial L}{\partial f'} = 0\,, $$
which is called the **Euler–Lagrange equation**.

In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal function $f(x).$ The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremum $J[f].$ A sufficient condition for a minimum is given in the section Variations and sufficient condition for a minimum.

In order to illustrate this process, consider the problem of finding the extremal function $y = f(x),$ which is the shortest curve that connects two points $\left(x_1, y_1\right)$ and $\left(x_2, y_2\right).$ The arc length of the curve is given by
$$ A[y] = \int_{x_1}^{x_2} \sqrt{1 + [y'(x)]^2}\,dx\,, $$
with
$$ y'(x) = \frac{dy}{dx}\,, \quad y_1 = f(x_1)\,, \quad y_2 = f(x_2)\,. $$

The Euler–Lagrange equation will now be used to find the extremal function $f(x)$ that minimizes the functional $A[y]:$
$$ \frac{\partial L}{\partial f} - \frac{d}{dx}\frac{\partial L}{\partial f'} = 0 \quad\text{with}\quad L = \sqrt{1 + [f'(x)]^2}\,. $$

Since $f$ does not appear explicitly in $L,$ the first term in the Euler–Lagrange equation vanishes for all $f(x)$ and thus,
$$ \frac{d}{dx}\frac{\partial L}{\partial f'} = 0\,. $$
Substituting for $L$ and taking the derivative,
$$ \frac{d}{dx}\,\frac{f'(x)}{\sqrt{1 + [f'(x)]^2}} = 0\,. $$

Thus
$$ \frac{f'(x)}{\sqrt{1 + [f'(x)]^2}} = c\,, $$
for some constant $c.$ Solving for $f'(x)$ shows that the derivative is constant, $f'(x) = m,$ and therefore the extremal function is the straight line
$$ f(x) = m x + b\,. $$
In other words, the shortest distance between two points is a straight line.
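As a cross-check, the same Euler–Lagrange computation can be carried out symbolically. The following is a minimal sketch using SymPy's `euler_equations` helper (assuming SymPy is available; the names are illustrative):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
y = sp.Function('y')

# Integrand of the arc-length functional: L = sqrt(1 + y'(x)^2)
L = sp.sqrt(1 + y(x).diff(x)**2)

# Euler-Lagrange equation for L; the result is equivalent to y''(x) = 0
eq, = euler_equations(L, y(x), x)
print(sp.simplify(eq))

# The general solution is the straight line y = C1 + C2*x
print(sp.dsolve(sp.Eq(y(x).diff(x, 2), 0), y(x)))
```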

In physics problems it may be the case that $\dfrac{\partial L}{\partial x} = 0,$ meaning the integrand is a function of $f(x)$ and $f'(x)$ but $x$ does not appear separately. In that case, the Euler–Lagrange equation can be simplified to the Beltrami identity^{[16]}
$$ L - f'\frac{\partial L}{\partial f'} = C\,, $$
where $C$ is a constant. The left hand side is the Legendre transformation of $L$ with respect to $f'(x).$

The intuition behind this result is that, if the variable is actually time, then the statement implies that the Lagrangian is time-independent. By Noether's theorem, there is an associated conserved quantity. In this case, this quantity is the Hamiltonian, the Legendre transform of the Lagrangian, which (often) coincides with the energy of the system. This is (minus) the constant in Beltrami's identity.
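The identity itself follows from a short computation: differentiating along an extremal and using the Euler–Lagrange equation,
$$ \frac{d}{dx}\left(L - f'\frac{\partial L}{\partial f'}\right) = \frac{\partial L}{\partial x} + f'\left(\frac{\partial L}{\partial f} - \frac{d}{dx}\frac{\partial L}{\partial f'}\right) = \frac{\partial L}{\partial x}\,, $$
so when $\partial L/\partial x = 0$ the quantity $L - f'\,\partial L/\partial f'$ is constant along extremals.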

If $L$ depends on higher derivatives of $f(x),$ that is, if
$$ J[f] = \int_{x_0}^{x_1} L\left(x, f, f', f'', \dots, f^{(k)}\right) dx\,, $$
then $f$ must satisfy the Euler–Poisson equation^{[17]}
$$ \frac{\partial L}{\partial f} - \frac{d}{dx}\frac{\partial L}{\partial f'} + \frac{d^2}{dx^2}\frac{\partial L}{\partial f''} - \dots + (-1)^k \frac{d^k}{dx^k}\frac{\partial L}{\partial f^{(k)}} = 0\,. $$
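For instance, taking the schematic bending energy of an elastic beam, $L = \tfrac{1}{2}\left(f''\right)^2$ (a standard textbook example, stated here for illustration), the Euler–Poisson equation reduces to
$$ \frac{d^2}{dx^2}\,f'' = f''''(x) = 0\,, $$
so the extremals are cubic polynomials.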

The discussion thus far has assumed that extremal functions possess two continuous derivatives, although the existence of the integral $J$ requires only first derivatives of trial functions. The condition that the first variation vanishes at an extremal may be regarded as a **weak form** of the Euler–Lagrange equation. The theorem of Du Bois-Reymond asserts that this weak form implies the strong form. If $L$ has continuous first and second derivatives with respect to all of its arguments, and if
$$ \frac{\partial^2 L}{\partial f'^2} \neq 0\,, $$
then $f$ has two continuous derivatives, and it satisfies the Euler–Lagrange equation.

Hilbert was the first to give good conditions for the Euler–Lagrange equations to give a stationary solution. Within a convex area and for a positive, thrice-differentiable Lagrangian, the solutions are composed of a countable collection of sections that either go along the boundary or satisfy the Euler–Lagrange equations in the interior.

However, Lavrentiev in 1926 showed that there are circumstances where there is no optimum solution but one can be approached arbitrarily closely by increasing numbers of sections. The Lavrentiev Phenomenon identifies a difference in the infimum of a minimization problem across different classes of admissible functions. For instance, consider the following problem, presented by Manià in 1934:^{[18]} minimize
$$ L[x] = \int_0^1 \left(x(t)^3 - t\right)^2 x'(t)^6\,dt $$
over the admissible class
$$ A = \left\{x \in W^{1,1}(0,1) : x(0) = 0,\ x(1) = 1\right\}. $$

Clearly, $x(t) = t^{\frac{1}{3}}$ minimizes the functional, but we find that any function $x \in W^{1,\infty}$ gives a value bounded away from the infimum.

Examples (in one dimension) are traditionally manifested across $W^{1,1}$ and $W^{1,\infty},$ but Ball and Mizel^{[19]} procured the first functional that displayed Lavrentiev's Phenomenon across $W^{1,p}$ and $W^{1,q}$ for $1 \leq p < q < \infty.$ There are several results that give criteria under which the phenomenon does not occur, for instance 'standard growth', a Lagrangian with no dependence on the second variable, or an approximating sequence satisfying Cesari's Condition (D), but results are often particular, and applicable to a small class of functionals.

Connected with the Lavrentiev Phenomenon is the repulsion property: any functional displaying Lavrentiev's Phenomenon will display the weak repulsion property.^{[20]}

For example, if $\varphi(x, y)$ denotes the displacement of a membrane above the domain $D$ in the $x,y$ plane, then its potential energy is proportional to its surface area:
$$ U[\varphi] = \iint_D \sqrt{1 + \nabla\varphi\cdot\nabla\varphi}\,dx\,dy\,. $$

It is often sufficient to consider only small displacements of the membrane, whose energy difference from no displacement is approximated by
$$ V[\varphi] = \frac{1}{2}\iint_D \nabla\varphi\cdot\nabla\varphi\,dx\,dy\,, $$
which arises by expanding the square root for small gradients, $\sqrt{1 + \nabla\varphi\cdot\nabla\varphi} \approx 1 + \tfrac{1}{2}\nabla\varphi\cdot\nabla\varphi,$ and discarding the constant area of $D.$ The functional $V$ is to be minimized among all trial functions $\varphi$ that assume prescribed values on the boundary of $D.$ If $u$ is the minimizing function and $v$ is an arbitrary smooth function that vanishes on the boundary of $D,$ then the first variation of $V[u + \varepsilon v]$ must vanish:
$$ \left.\frac{d}{d\varepsilon} V[u + \varepsilon v]\right|_{\varepsilon = 0} = \iint_D \nabla u\cdot\nabla v\,dx\,dy = 0\,. $$
Provided that $u$ has two derivatives, we may apply the divergence theorem to obtain
$$ \iint_D \nabla\cdot\left(v\,\nabla u\right) dx\,dy = \iint_D \left(\nabla u\cdot\nabla v + v\,\nabla\cdot\nabla u\right) dx\,dy = \int_C v\,\frac{\partial u}{\partial n}\,ds\,, $$
where $C$ is the boundary of $D,$ $s$ is arclength along $C,$ and $\partial u/\partial n$ is the normal derivative of $u$ on $C.$ Since $v$ vanishes on $C$ and the first variation vanishes, the result is
$$ \iint_D v\,\nabla\cdot\nabla u\,dx\,dy = 0 $$
for all smooth functions $v$ that vanish on the boundary of $D,$ and hence $\nabla\cdot\nabla u = 0$ in $D.$

The difficulty with this reasoning is the assumption that the minimizing function u must have two derivatives. Riemann argued that the existence of a smooth minimizing function was assured by the connection with the physical problem: membranes do indeed assume configurations with minimal potential energy. Riemann named this idea the Dirichlet principle in honor of his teacher Peter Gustav Lejeune Dirichlet. However Weierstrass gave an example of a variational problem with no solution: minimize
$$ W[\varphi] = \int_{-1}^{1} \left(x\,\varphi'\right)^2 dx $$
among all functions $\varphi$ that satisfy $\varphi(-1) = -1$ and $\varphi(1) = 1.$ $W$ can be made arbitrarily small by choosing functions that make the transition between $-1$ and $1$ in a small neighborhood of the origin, but there is no function that makes $W = 0.$^{[k]}
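To see that the infimum is zero, consider the piecewise linear trial functions (one convenient choice among many)
$$ \varphi_\epsilon(x) = \begin{cases} -1 & x < -\epsilon\,, \\ x/\epsilon & |x| \leq \epsilon\,, \\ 1 & x > \epsilon\,, \end{cases} \qquad\text{for which}\qquad W[\varphi_\epsilon] = \int_{-\epsilon}^{\epsilon} \frac{x^2}{\epsilon^2}\,dx = \frac{2\epsilon}{3} \to 0\,, $$
yet $W[\varphi] = 0$ would force $\varphi' = 0$ wherever $x \neq 0,$ contradicting the boundary values.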

A more general expression for the potential energy of a membrane is
$$ V[\varphi] = \iint_D \left[\frac{1}{2}\nabla\varphi\cdot\nabla\varphi + f(x,y)\,\varphi\right] dx\,dy + \int_C \left[\frac{1}{2}\sigma(s)\,\varphi^2 + g(s)\,\varphi\right] ds\,. $$
This corresponds to an external force density $f(x,y)$ in $D,$ an external force $g(s)$ on the boundary $C,$ and elastic forces with modulus $\sigma(s)$ acting on $C.$ The function that minimizes the potential energy with no restriction on its boundary values will be denoted by $u.$ Provided that $f$ and $g$ are continuous, regularity theory implies that the minimizing function $u$ will have two derivatives. In taking the first variation, no boundary condition need be imposed on the increment $v.$ Setting the first variation of $V[u + \varepsilon v]$ to zero and applying the divergence theorem yields
$$ \iint_D v \left(-\nabla\cdot\nabla u + f\right) dx\,dy + \int_C v \left(\frac{\partial u}{\partial n} + \sigma u + g\right) ds = 0\,. $$
Since $v$ is arbitrary, $u$ must satisfy $-\nabla\cdot\nabla u + f = 0$ in $D,$ together with the natural boundary condition $\dfrac{\partial u}{\partial n} + \sigma u + g = 0$ on $C.$

The preceding reasoning is not valid if $\sigma$ vanishes identically on $C.$ In such a case, we could allow a trial function $\varphi \equiv c,$ where $c$ is a constant. For such a trial function,
$$ V[c] = c\left[\iint_D f\,dx\,dy + \int_C g\,ds\right]. $$
By appropriate choice of $c,$ $V$ can assume any value unless the quantity inside the brackets vanishes. Therefore, the variational problem is meaningless unless
$$ \iint_D f\,dx\,dy + \int_C g\,ds = 0\,. $$
This condition implies that the net external forces on the system are in equilibrium. If these forces are in equilibrium, then the variational problem has a solution, but it is not unique, since an arbitrary constant may be added.

Both one-dimensional and multi-dimensional **eigenvalue problems** can be formulated as variational problems.

The Sturm–Liouville eigenvalue problem involves a general quadratic form
$$ Q[y] = \int_{x_1}^{x_2} \left[p(x)\,y'(x)^2 + q(x)\,y(x)^2\right] dx\,, $$
where $y$ is restricted to functions that satisfy the boundary conditions $y(x_1) = 0,$ $y(x_2) = 0,$ together with a normalization integral
$$ R[y] = \int_{x_1}^{x_2} r(x)\,y(x)^2\,dx\,. $$
The functions $p(x)$ and $r(x)$ are required to be everywhere positive and bounded away from zero. The primary variational problem is to minimize the ratio $Q/R$ among all $y$ satisfying the endpoint conditions. The Euler–Lagrange equation for the minimizing $u$ is
$$ -\left(p\,u'\right)' + q\,u - \lambda\,r\,u = 0\,, $$
where $\lambda$ is the quotient
$$ \lambda = \frac{Q[u]}{R[u]}\,. $$
The minimizing $u$ has two derivatives and satisfies the Euler–Lagrange equation; the associated $\lambda,$ denoted $\lambda_1,$ is the lowest eigenvalue for this equation and these boundary conditions, and the corresponding minimizing function is denoted $u_1(x).$

The next smallest eigenvalue and eigenfunction can be obtained by minimizing $Q$ under the additional constraint
$$ \int_{x_1}^{x_2} r(x)\,u_1(x)\,y(x)\,dx = 0\,. $$
This procedure can be extended to obtain the complete sequence of eigenvalues and eigenfunctions for the problem.
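As a numerical illustration (not part of the classical treatment), the simplest case $p = r = 1,$ $q = 0$ on $(0, \pi)$ has exact eigenvalues $\lambda_k = k^2$ with eigenfunctions $\sin kx.$ A minimal finite-difference sketch, with discretization choices that are illustrative only:

```python
import numpy as np

# Discretize -u'' = lambda * u on (0, pi) with u(0) = u(pi) = 0,
# i.e. the Sturm-Liouville problem with p = r = 1 and q = 0.
n = 200                      # number of interior grid points
h = np.pi / (n + 1)          # grid spacing

# Tridiagonal second-difference matrix approximating -d^2/dx^2
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

eigenvalues = np.linalg.eigvalsh(A)
print(eigenvalues[:3])       # close to 1, 4, 9 (= k^2)
```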

The variational problem also applies to more general boundary conditions. Instead of requiring that $y$ vanish at the endpoints, we may not impose any condition at the endpoints, and set
$$ Q[y] = \int_{x_1}^{x_2} \left[p(x)\,y'(x)^2 + q(x)\,y(x)^2\right] dx + a_1\,y(x_1)^2 + a_2\,y(x_2)^2\,, $$
where $a_1$ and $a_2$ are arbitrary. The minimizing function then satisfies the Euler–Lagrange equation as before and, in addition, the **natural boundary conditions**
$$ -p(x_1)\,u'(x_1) + a_1\,u(x_1) = 0\,, \qquad p(x_2)\,u'(x_2) + a_2\,u(x_2) = 0\,, $$
which arise from the minimization itself rather than being imposed in advance.

Eigenvalue problems in higher dimensions are defined in analogy with the one-dimensional case. For example, given a domain $D$ with boundary $B$ in three dimensions we may define
$$ Q[\varphi] = \iiint_D \left[p(X)\,\nabla\varphi\cdot\nabla\varphi + q(X)\,\varphi^2\right] dX + \iint_B \sigma(S)\,\varphi^2\,dS\,, $$
and
$$ R[\varphi] = \iiint_D r(X)\,\varphi(X)^2\,dX\,. $$
Let $u$ be the function that minimizes the quotient $Q[\varphi]/R[\varphi],$ with no condition prescribed on the boundary $B.$ The Euler–Lagrange equation satisfied by $u$ is
$$ -\nabla\cdot\left(p(X)\,\nabla u\right) + q(X)\,u - \lambda\,r(X)\,u = 0\,, $$
where $\lambda = Q[u]/R[u],$ and $u$ satisfies the natural boundary condition
$$ p(S)\,\frac{\partial u}{\partial n} + \sigma(S)\,u = 0 $$
on $B,$ where $n$ is the outward unit normal.

Fermat's principle states that light takes a path that (locally) minimizes the optical length between its endpoints. If the $x$-coordinate is chosen as the parameter along the path, and $y = f(x)$ along the path, then the optical length is given by
$$ A[f] = \int_{x_0}^{x_1} n(x, f(x))\,\sqrt{1 + f'(x)^2}\,dx\,, $$
where the refractive index $n(x, y)$ depends upon the material. If we try $f(x) = f_0(x) + \varepsilon f_1(x),$ then the first variation of $A$ (the derivative of $A$ with respect to $\varepsilon$) is
$$ \delta A[f_0, f_1] = \int_{x_0}^{x_1} \left[\frac{n(x, f_0)\,f_0'(x)\,f_1'(x)}{\sqrt{1 + f_0'(x)^2}} + n_y(x, f_0)\,f_1(x)\,\sqrt{1 + f_0'(x)^2}\right] dx\,. $$

After integration by parts of the first term within brackets, we obtain the Euler–Lagrange equation
$$ -\frac{d}{dx}\left[\frac{n(x, f_0)\,f_0'}{\sqrt{1 + f_0'^2}}\right] + n_y(x, f_0)\,\sqrt{1 + f_0'^2} = 0\,. $$

The light rays may be determined by integrating this equation. This formalism is used in the context of Lagrangian optics and Hamiltonian optics.

There is a discontinuity of the refractive index when light enters or leaves a lens. Let
$$ n(x, y) = \begin{cases} n_{(-)} & \text{if } x < 0\,, \\ n_{(+)} & \text{if } x > 0\,, \end{cases} $$
where $n_{(-)}$ and $n_{(+)}$ are constants. Then the Euler–Lagrange equation holds as before in the regions $x < 0$ and $x > 0,$ and in fact the path is a straight line there, since the refractive index is constant. At $x = 0,$ $f$ must be continuous, but $f'$ may be discontinuous. After integration by parts in the separate regions and using the Euler–Lagrange equations, the first variation takes the form
$$ \delta A[f_0, f_1] = f_1(0) \left[\frac{n_{(-)}\,f_0'(0^-)}{\sqrt{1 + f_0'(0^-)^2}} - \frac{n_{(+)}\,f_0'(0^+)}{\sqrt{1 + f_0'(0^+)^2}}\right]. $$

The factor multiplying $n_{(-)}$ is the sine of the angle of the incident ray with the $x$ axis, and the factor multiplying $n_{(+)}$ is the sine of the angle of the refracted ray with the $x$ axis. Snell's law for refraction requires that these terms be equal. As this calculation demonstrates, Snell's law is equivalent to vanishing of the first variation of the optical path length.
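Setting the bracketed expression to zero for arbitrary $f_1(0)$ gives Snell's law explicitly:
$$ n_{(-)}\sin\theta_{(-)} = n_{(+)}\sin\theta_{(+)}\,, $$
where $\theta_{(-)}$ and $\theta_{(+)}$ are the angles that the incident and refracted rays make with the $x$ axis.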

It is expedient to use vector notation: let $X = \left(x_1, x_2, x_3\right),$ let $t$ be a parameter, let $X(t)$ be the parametric representation of a curve $C,$ and let $\dot X(t)$ be its tangent vector. The optical length of the curve is given by
$$ A[C] = \int_{t_0}^{t_1} n(X)\,\sqrt{\dot X\cdot\dot X}\,dt\,. $$

Note that this integral is invariant with respect to changes in the parametric representation of $C.$ The Euler–Lagrange equations for a minimizing curve have the symmetric form
$$ \frac{dP}{dt} = \sqrt{\dot X\cdot\dot X}\,\nabla n\,, \qquad\text{where}\qquad P = \frac{n(X)\,\dot X}{\sqrt{\dot X\cdot\dot X}}\,. $$

It follows from the definition that $P$ satisfies
$$ P\cdot P = n(X)^2\,. $$

Therefore, the integral may also be written as
$$ A[C] = \int_{t_0}^{t_1} P\cdot\dot X\,dt\,. $$

This form suggests that if we can find a function $\psi$ whose gradient is given by $P,$ then the integral $A$ is given by the difference of $\psi$ at the endpoints of the interval of integration. Thus the problem of studying the curves that make the integral stationary can be related to the study of the level surfaces of $\psi.$ In order to find such a function, we turn to the wave equation, which governs the propagation of light.

The wave equation for an inhomogeneous medium is
$$ u_{tt} = c^2\,\nabla\cdot\nabla u\,, $$
where $c$ is the velocity, which generally depends upon $X.$ Wave fronts for light are characteristic surfaces for this partial differential equation: they satisfy
$$ \varphi_t^2 = c(X)^2\,\nabla\varphi\cdot\nabla\varphi\,. $$

We may look for solutions in the form
$$ \varphi(t, X) = t - \psi(X)\,. $$

In that case, $\psi$ satisfies
$$ \nabla\psi\cdot\nabla\psi = n^2\,, $$
where $n = 1/c.$ According to the theory of first-order partial differential equations, if $P = \nabla\psi,$ then $P$ satisfies
$$ \frac{dP}{ds} = n\,\nabla n\,, $$
along a system of curves (**the light rays**) that are given by
$$ \frac{dX}{ds} = P\,. $$

These equations for solution of a first-order partial differential equation are identical to the Euler–Lagrange equations if we make the identification
$$ \frac{ds}{dt} = \frac{\sqrt{\dot X\cdot\dot X}}{n}\,. $$

We conclude that the function $\psi$ is the value of the minimizing integral $A$ as a function of the upper end point. That is, when a family of minimizing curves is constructed, the values of the optical length satisfy the characteristic equation corresponding to the wave equation. Hence, solving the associated partial differential equation of first order is equivalent to finding families of solutions of the variational problem. This is the essential content of the Hamilton–Jacobi theory, which applies to more general variational problems.

In classical mechanics, the action $S$ is defined as the time integral of the Lagrangian, $L.$ The Lagrangian is the difference of energies,
$$ L = T - U\,, $$
where $T$ is the kinetic energy of a mechanical system and $U$ its potential energy. Hamilton's principle (or the action principle) states that the motion of a conservative holonomic (integrable constraints) mechanical system is such that the action integral
$$ S = \int_{t_0}^{t_1} L(x, \dot x, t)\,dt $$
is stationary with respect to variations in the path $x(t).$ The Euler–Lagrange equations for this system are known as Lagrange's equations:
$$ \frac{d}{dt}\frac{\partial L}{\partial \dot x} = \frac{\partial L}{\partial x}\,, $$
and they are equivalent to Newton's equations of motion (for such systems).

The conjugate momenta are defined by
$$ p = \frac{\partial L}{\partial \dot x}\,. $$
For example, if $T = \tfrac{1}{2} m \dot x^2,$ then $p = m \dot x.$ Hamiltonian mechanics results if the conjugate momenta are introduced in place of $\dot x$ by a Legendre transformation of the Lagrangian $L$ into the Hamiltonian $H$ defined by
$$ H(x, p, t) = p\,\dot x - L(x, \dot x, t)\,. $$
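As a concrete check of Lagrange's equations, take a single particle in a potential $U(x)$ (a standard example):
$$ L = \tfrac{1}{2} m \dot x^2 - U(x)\,, \qquad \frac{d}{dt}\frac{\partial L}{\partial \dot x} = m\ddot x\,, \qquad \frac{\partial L}{\partial x} = -U'(x)\,, $$
so Lagrange's equation reproduces Newton's second law, $m\ddot x = -U'(x).$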

Further applications of the calculus of variations include the following:

- The derivation of the catenary shape
- Solution to Newton's minimal resistance problem
- Solution to the brachistochrone problem
- Solution to the tautochrone problem
- Solution to isoperimetric problems
- Calculating geodesics
- Finding minimal surfaces and solving Plateau's problem
- Optimal control
- Analytical mechanics, or reformulations of Newton's laws of motion, most notably Lagrangian and Hamiltonian mechanics
- Geometric optics, especially Lagrangian and Hamiltonian optics
- Variational method (quantum mechanics), one way of finding approximations to the lowest energy eigenstate or ground state, and some excited states
- Variational Bayesian methods, a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning
- Variational methods in general relativity, a family of techniques using calculus of variations to solve problems in Einstein's general theory of relativity
- Finite element method, a variational method for finding numerical solutions to boundary-value problems in differential equations
- Total variation denoising, an image processing method for filtering high variance or noisy signals

Calculus of variations is concerned with variations of functionals, which are small changes in the functional's value due to small changes in the function that is its argument. The **first variation**^{[l]} is defined as the linear part of the change in the functional, and the **second variation**^{[m]} is defined as the quadratic part.^{[22]}

For example, if $J[y]$ is a functional with the function $y = y(x)$ as its argument, and there is a small change in its argument from $y$ to $y + h,$ where $h = h(x)$ is a function in the same function space as $y,$ then the corresponding change in the functional is^{[n]}
$$ \Delta J[h] = J[y + h] - J[y]\,. $$

The functional $J[y]$ is said to be **differentiable** if
$$ \Delta J[h] = \varphi[h] + \varepsilon\,\|h\|\,, $$
where $\varphi[h]$ is a linear functional,^{[o]} $\|h\|$ is the norm of $h,$^{[p]} and $\varepsilon \to 0$ as $\|h\| \to 0.$ The linear functional $\varphi[h]$ is the first variation of $J[y]$ and is denoted by
$$ \delta J[h] = \varphi[h]\,. $$
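For instance, for the quadratic functional $J[y] = \int_a^b y'(x)^2\,dx$ (a standard illustration, with a norm that also controls $h'$), expanding $J[y + h]$ gives
$$ \Delta J[h] = \int_a^b \left(2\,y'(x)\,h'(x) + h'(x)^2\right) dx\,, $$
so the linear part is $\varphi[h] = \int_a^b 2\,y'(x)\,h'(x)\,dx,$ which is the first variation $\delta J[h];$ the remaining quadratic term is of order $\|h\|^2.$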

The functional $J[y]$ is said to be **twice differentiable** if
$$ \Delta J[h] = \varphi_1[h] + \varphi_2[h] + \varepsilon\,\|h\|^2\,, $$
where $\varphi_1[h]$ is a linear functional (the first variation), $\varphi_2[h]$ is a quadratic functional,^{[q]} and $\varepsilon \to 0$ as $\|h\| \to 0.$ The quadratic functional $\varphi_2[h]$ is the second variation of $J[y]$ and is denoted by
$$ \delta^2 J[h] = \varphi_2[h]\,. $$

The second variation $\delta^2 J[h]$ is said to be **strongly positive** if
$$ \delta^2 J[h] \geq k\,\|h\|^2 $$
for all $h$ and for some constant $k > 0.$

Using the above definitions, especially the definitions of first variation, second variation, and strongly positive, the following sufficient condition for a minimum of a functional can be stated.

**Sufficient condition for a minimum:** The functional $J[y]$ has a minimum at $y = \hat y$ if its first variation $\delta J[h] = 0$ at $y = \hat y$ and its second variation $\delta^2 J[h]$ is strongly positive at $y = \hat y.$^{[30]}^{[r]}^{[s]}
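The reasoning behind this condition parallels the second-derivative test for functions: if $\delta J[h] = 0$ at $\hat y,$ then
$$ \Delta J[h] = \delta^2 J[h] + \varepsilon\,\|h\|^2 \geq \left(k - |\varepsilon|\right)\|h\|^2 > 0 $$
for all sufficiently small $\|h\| \neq 0,$ since $\varepsilon \to 0$ as $\|h\| \to 0$ and hence $|\varepsilon| < k$ eventually; therefore $J[\hat y]$ is a local minimum.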

- First variation
- Isoperimetric inequality
- Variational principle
- Variational bicomplex
- Fermat's principle
- Principle of least action
- Infinite-dimensional optimization
- Finite element method
- Functional analysis
- Ekeland's variational principle
- Inverse problem for Lagrangian mechanics
- Obstacle problem
- Perturbation methods
- Young measure
- Optimal control
- Direct method in calculus of variations
- Noether's theorem
- De Donder–Weyl theory
- Variational Bayesian methods
- Chaplygin problem
- Nehari manifold
- Hu–Washizu principle
- Luke's variational principle
- Mountain pass theorem
- Measures of central tendency as solutions to variational problems
- Stampacchia Medal
- Fermat Prize
- Convenient vector space

- **a.** Whereas elementary calculus is about infinitesimally small changes in the values of functions without changes in the function itself, calculus of variations is about infinitesimally small changes in the function itself, which are called variations.^{[1]}
- **b.** "Euler waited until Lagrange had published on the subject in 1762 ... before he committed his lecture ... to print, so as not to rob Lagrange of his glory. Indeed, it was only Lagrange's method that Euler called Calculus of Variations."^{[3]}
- **c.** See Harold J. Kushner (2004): regarding Dynamic Programming, "The calculus of variations had related ideas (e.g., the work of Caratheodory, the Hamilton-Jacobi equation). This led to conflicts with the calculus of variations community."
- **d.** The neighborhood of $f$ is the part of the given function space where $|y - f| < h$ over the whole domain of the functions, with $h$ a positive number that specifies the size of the neighborhood.^{[10]}
- **e.** Note the difference between the terms extremal and extremum. An extremal is a function that makes a functional an extremum.
- **f.** For a sufficient condition, see section Variations and sufficient condition for a minimum.
- **g.** The following derivation of the Euler–Lagrange equation corresponds to the derivation on pp. 184–185 of Courant & Hilbert (1953).^{[14]}
- **h.** Note that $\eta(x)$ and $f(x)$ are evaluated at the *same* values of $x,$ which is not valid more generally in variational calculus with non-holonomic constraints.
- **i.** The product $\varepsilon\,\Phi'(0)$ is called the first variation of the functional $J$ and is denoted by $\delta J.$ Some references define the first variation differently by leaving out the $\varepsilon$ factor.
- **j.** As a historical note, this is an axiom of Archimedes. See e.g. Kelland (1843).^{[15]}
- **k.** The resulting controversy over the validity of Dirichlet's principle is explained by Turnbull.^{[21]}
- **l.** The first variation is also called the variation, differential, or first differential.
- **m.** The second variation is also called the second differential.
- **n.** Note that $\Delta J[h]$ and the variations below depend on both $y$ and $h.$ The argument $y$ has been left out to simplify the notation. For example, $\Delta J[h]$ could have been written $\Delta J[y; h].$^{[23]}
- **o.** A functional $\varphi[h]$ is said to be **linear** if $\varphi[\alpha h] = \alpha\,\varphi[h]$ and $\varphi[h_1 + h_2] = \varphi[h_1] + \varphi[h_2],$ where $h, h_1, h_2$ are functions and $\alpha$ is a real number.^{[24]}
- **p.** For a function $h = h(x)$ that is defined for $a \leq x \leq b,$ where $a$ and $b$ are real numbers, the norm of $h$ is its maximum absolute value, i.e. $\|h\| = \max_{a \leq x \leq b} |h(x)|.$^{[25]}
- **q.** A functional is said to be **quadratic** if it is a bilinear functional with two argument functions that are equal. A **bilinear functional** is a functional that depends on two argument functions and is linear when each argument function in turn is fixed while the other argument function is variable.^{[27]}
- **r.** For other sufficient conditions, see Gelfand & Fomin 2000: Chapter 5, "The Second Variation. Sufficient Conditions for a Weak Extremum" (sufficient conditions for a weak minimum are given by the theorem on p. 116), and Chapter 6, "Fields. Sufficient Conditions for a Strong Extremum" (sufficient conditions for a strong minimum are given by the theorem on p. 148).

- **s.** One may note the similarity to the sufficient condition for a minimum of a function, where the first derivative is zero and the second derivative is positive.

1. Courant & Hilbert 1953, p. 184
2. Gelfand, I. M.; Fomin, S. V. (2000). Silverman, Richard A. (ed.). *Calculus of Variations* (Unabridged repr. ed.). Mineola, New York: Dover Publications. p. 3. ISBN 978-0486414485.
3. Thiele, Rüdiger (2007). "Euler and the Calculus of Variations". In Bradley, Robert E.; Sandifer, C. Edward (eds.). *Leonhard Euler: Life, Work and Legacy*. Elsevier. p. 249. ISBN 9780080471297.
4. Goldstine, Herman H. (2012). *A History of the Calculus of Variations from the 17th through the 19th Century*. Springer Science & Business Media. p. 110. ISBN 9781461381068.
5. van Brunt, Bruce (2004). *The Calculus of Variations*. Springer. ISBN 978-0-387-40247-5.
6. Ferguson, James (2004). "Brief Survey of the History of the Calculus of Variations and its Applications". arXiv:math/0402357.
7. Bertsekas, Dimitri (2005). *Dynamic Programming and Optimal Control*. Athena Scientific.
8. Bellman, Richard E. (1954). "Dynamic Programming and a new formalism in the calculus of variations". *Proc. Natl. Acad. Sci.* **40** (4): 231–235. Bibcode:1954PNAS...40..231B. doi:10.1073/pnas.40.4.231. PMC 527981. PMID 16589462.
9. "Richard E. Bellman Control Heritage Award". *American Automatic Control Council*. 2004. Retrieved 2013-07-28.
10. Courant, R.; Hilbert, D. (1953). *Methods of Mathematical Physics*. Vol. I (First English ed.). New York: Interscience Publishers, Inc. p. 169. ISBN 978-0471504474.
11. Gelfand & Fomin 2000, pp. 12–13
12. Gelfand & Fomin 2000, p. 13
13. Gelfand & Fomin 2000, pp. 14–15
14. Courant, R.; Hilbert, D. (1953). *Methods of Mathematical Physics*. Vol. I (First English ed.). New York: Interscience Publishers, Inc. ISBN 978-0471504474.
15. Kelland, Philip (1843). *Lectures on the Principles of Demonstrative Mathematics*. p. 58 – via Google Books.
16. Weisstein, Eric W. "Euler–Lagrange Differential Equation". *mathworld.wolfram.com*. Wolfram. Eq. (5).
17. Kot, Mark (2014). "Chapter 4: Basic Generalizations". *A First Course in the Calculus of Variations*. American Mathematical Society. ISBN 978-1-4704-1495-5.
18. Manià, Bernard (1934). "Sopra un esempio di Lavrentieff". *Bollettino dell'Unione Matematica Italiana*. **13**: 147–153.
19. Ball & Mizel (1985). "One-dimensional Variational problems whose Minimizers do not satisfy the Euler-Lagrange equation". *Archive for Rational Mechanics and Analysis*. **90** (4): 325–388. Bibcode:1985ArRMA..90..325B. doi:10.1007/BF00276295. S2CID 55005550.
20. Ferriero, Alessandro (2007). "The Weak Repulsion property". *Journal de Mathématiques Pures et Appliquées*. **88** (4): 378–388. doi:10.1016/j.matpur.2007.06.002.
21. Turnbull. "Riemann biography". UK: U. St. Andrew.
22. Gelfand & Fomin 2000, pp. 11–12, 99
23. Gelfand & Fomin 2000, p. 12, footnote 6
24. Gelfand & Fomin 2000, p. 8
25. Gelfand & Fomin 2000, p. 6
26. Gelfand & Fomin 2000, pp. 11–12
27. Gelfand & Fomin 2000, pp. 97–98
28. Gelfand & Fomin 2000, p. 99
29. Gelfand & Fomin 2000, p. 100
30. Gelfand & Fomin 2000, p. 100, Theorem 2
