3.15. Delta Function

The delta function $\delta(x)$ is defined such that this relation holds:

$$f(t) = \int_{-\infty}^{\infty} f(x)\, \delta(x - t)\, dx \qquad (3.15.1)$$
No such function exists, but one can find many sequences "converging" to a delta function:

$$\lim_{\alpha\to\infty} \delta_\alpha(x - t) = \delta(x - t) \qquad (3.15.2)$$
more precisely:

$$\lim_{\alpha\to\infty} \int_{-\infty}^{\infty} f(x)\, \delta_\alpha(x - t)\, dx = f(t) \qquad (3.15.3)$$

one example of such a sequence is:

$$\delta_\alpha(x - t) = \frac{1}{\pi}\, \frac{\sin\bigl(\alpha (x - t)\bigr)}{x - t} = \frac{1}{2\pi} \int_{-\alpha}^{\alpha} e^{i k (x - t)}\, dk$$
It's clear that (3.15.3) holds for any well-behaved function $f(x)$. Some mathematicians like to say that it's incorrect to use such a notation when in fact the integral (3.15.1) doesn't "exist", but we will not follow their approach, because it is not important whether something "exists" or not, but rather whether it is clear what we mean by our notation: (3.15.1) is a shorthand for (3.15.3), and (3.15.2) gets a mathematically rigorous meaning when you integrate both sides and use (3.15.1) to arrive at (3.15.3). Thus one uses the relations (3.15.1), (3.15.2), (3.15.3) to derive all properties of the delta function.
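As a quick check of (3.15.1) (an illustrative snippet, not part of the original text), SymPy's DiracDelta implements exactly this shorthand:

>>> from sympy import symbols, integrate, DiracDelta, cos, oo
>>> x, t = symbols("x t", real=True)
>>> integrate(cos(x)*DiracDelta(x - t), (x, -oo, oo))
cos(t)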
Let's give an example. Let $\hat{\bf r}$ be the unit vector in 3D and we can label it using spherical coordinates $\hat{\bf r} = (\theta, \phi)$. We can also express it in Cartesian coordinates as $\hat{\bf r} = (\sin\theta\cos\phi, \sin\theta\sin\phi, \cos\theta)$.

$$f(\hat{\bf r}) = \int \delta(\hat{\bf r} - \hat{\bf r}')\, f(\hat{\bf r}')\, d\hat{\bf r}' \qquad (3.15.4)$$

Expressing $f$ as a function of $\theta$ and $\phi$ we have

$$f(\theta, \phi) = \int \delta(\theta - \theta')\, \delta(\phi - \phi')\, f(\theta', \phi')\, d\theta'\, d\phi' \qquad (3.15.5)$$

Expressing (3.15.4) in spherical coordinates we get

$$f(\theta, \phi) = \int \delta(\hat{\bf r} - \hat{\bf r}')\, f(\theta', \phi')\, \sin\theta'\, d\theta'\, d\phi'$$

and comparing to (3.15.5) we finally get

$$\delta(\hat{\bf r} - \hat{\bf r}') = \frac{\delta(\theta - \theta')\, \delta(\phi - \phi')}{\sin\theta}$$

In exactly the same manner we get

$$\delta(\hat{\bf r} - \hat{\bf r}') = \delta(\cos\theta - \cos\theta')\, \delta(\phi - \phi')$$
See also (3.17.4.1) for an example of how to deal with more complex expressions involving the delta function, like $\delta^2(x - t)$.
When integrating over a finite interval, this formula is very useful:

$$\int_a^b f(x)\, \delta(x - t)\, dx = \begin{cases} f(t) & \text{for } a < t < b \\ 0 & \text{otherwise} \end{cases}$$

in other words, the integral vanishes unless $a < t < b$. In the limit $a \to -\infty$ and $b \to \infty$ we get back (3.15.1).
Another integral that converges to a delta function is:
3.16. Distributions

Some mathematicians like to use distributions and the corresponding mathematical notation, which I think makes things less clear; nevertheless it is important to understand it too, so the notation is explained in this section, but I discourage its use: I suggest using only the physical notation as explained below. The math notation below is put into quotation marks, so that it is not confused with the physical notation.
A distribution $T_f$ is a functional, and each function $f(x)$ can be identified with the distribution $T_f$ that it generates using this definition ($\phi(x)$ is a test function):

$$"(T_f, \phi)" \equiv \int_{-\infty}^{\infty} f(x)\, \phi(x)\, dx$$

besides that, one can also define distributions that can't be identified with regular functions; one example is the delta distribution (Dirac delta function):

$$"(\delta, \phi)" \equiv \phi(0) = \int_{-\infty}^{\infty} \delta(x)\, \phi(x)\, dx$$

The last integral is not used in mathematics; in physics, on the other hand, the first expression "$(\delta, \phi)$" is not used, so $\delta(x)$ always means that you have to integrate it, as explained in the previous section, so it behaves like a regular function (except that such a function doesn't exist and the precise mathematical meaning only comes after you integrate it, or through the identification above with distributions).
One then defines common operations via acting on the generating function, then observes the pattern and defines it for all distributions. For example differentiation:

$$"(T_f', \phi)" = \int_{-\infty}^{\infty} f'(x)\, \phi(x)\, dx = -\int_{-\infty}^{\infty} f(x)\, \phi'(x)\, dx = -"(T_f, \phi')"$$

so:

$$"(T', \phi)" = -"(T, \phi')"$$

Multiplication:

$$"(g T_f, \phi)" = \int_{-\infty}^{\infty} g(x)\, f(x)\, \phi(x)\, dx = "(T_f, g\phi)"$$

so:

$$"(g T, \phi)" = "(T, g\phi)"$$

Fourier transform:

$$"(F T_f, \phi)" = \int_{-\infty}^{\infty} \tilde f(x)\, \phi(x)\, dx = \int_{-\infty}^{\infty} f(x)\, \tilde\phi(x)\, dx = "(T_f, F\phi)"$$

so:

$$"(F T, \phi)" = "(T, F\phi)"$$
But as you can see, the notation is just making things more complex, since it’s enough to just work with the integrals and forget about the rest. One can then even omit the integrals, with the understanding that they are implicit.
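For instance (an illustrative SymPy snippet, not from the original text), the differentiation rule "$(T', \phi)$" $= -$"$(T, \phi')$" is what SymPy implements for the delta distribution: integrating $\delta'(x)$ against a function picks out minus its derivative at the origin:

>>> from sympy import symbols, integrate, DiracDelta, exp, oo
>>> x = symbols("x", real=True)
>>> integrate(DiracDelta(x, 1)*exp(2*x), (x, -oo, oo))
-2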
Some more examples:
Proof of :
Proof of :
Proof of :
To prove that $\delta_\alpha(x) = \frac{\sin(\alpha x)}{\pi x}$ converges to the delta function, we do the following calculation:

$$\lim_{\alpha\to\infty} \int_{-\infty}^{\infty} \phi(x)\, \frac{\sin(\alpha x)}{\pi x}\, dx = \phi(0) \lim_{\alpha\to\infty} \int_{-\infty}^{\infty} \frac{\sin(\alpha x)}{\pi x}\, dx + \lim_{\alpha\to\infty} \int_{-\infty}^{\infty} \frac{\phi(x) - \phi(0)}{\pi x}\, \sin(\alpha x)\, dx = \phi(0)$$

where the function $\frac{\phi(x) - \phi(0)}{x}$ is bounded and its limit at $x = 0$ is finite since the test function $\phi(x)$ is infinitely differentiable. From the Riemann–Lebesgue lemma, the second integral then converges to zero as $\alpha \to \infty$.
3.17. Variations and Functional Derivatives

Variations and functional derivatives are generalizations of differentials and partial derivatives to functionals. It is important to master this subject just like regular differentials/derivatives in calculus.
3.17.1. Functions of One Variable

Let's first review differentials and derivatives of functions of one variable. We will use an approach that directly generalizes to multivariable functions and functionals. The differential is defined as:

$$d f(x)\, h = \lim_{\epsilon\to0} \frac{f(x + \epsilon h) - f(x)}{\epsilon} = \left[\frac{d}{d\epsilon}\, f(x + \epsilon h)\right]_{\epsilon=0} = f'(x)\, h$$

The last equality follows from the fact that the limit is a linear function of $h$:

$$\lim_{\epsilon\to0} \frac{f(x + \epsilon h) - f(x)}{\epsilon} = \lim_{\Delta x\to0} \frac{f(x + \Delta x) - f(x)}{\Delta x}\, h = f'(x)\, h$$

where we used the substitution $\Delta x = \epsilon h$. We define the derivative $\frac{d f}{d x}$ as:

$$d f(x)\, h = \frac{d f}{d x}\, h$$

To get a formula for $\frac{d f}{d x}$, we set $h = 1$ and get:

$$\frac{d f}{d x} = d f(x)\, 1 = \lim_{\epsilon\to0} \frac{f(x + \epsilon) - f(x)}{\epsilon}$$

Using the formulas above we get an equivalent expression for the differential:

$$d f(x)\, h = f'(x)\, h = \frac{d f}{d x}\, h$$

So we get a general formula (the analogy of which we will use later):

$$d f = \frac{d f}{d x}\, d x$$
The variable $x$ can be treated as a function (a very simple one):

$$g(x) = x$$

So we define $d x$ as:

$$d x \equiv d g(x)\, h = \lim_{\epsilon\to0} \frac{(x + \epsilon h) - x}{\epsilon} = h$$

As such, $d x = h$ can have two meanings: either $\Delta x$ (a finite change in the variable $x$) or a differential (if $x$ depends on another variable, thanks to the chain rule everything will work). With this understanding, for all calculations, we only need the following two formulas — the definition of the differential (using a limit):

$$d f(x)\, h = \left[\frac{d}{d\epsilon}\, f(x + \epsilon h)\right]_{\epsilon=0}$$

and the definition of the derivative (using the differential):

$$d f(x)\, h = \frac{d f}{d x}\, h$$

where $h$ is either a differential or a finite change in the variable $x$.

If for example $x = x(t)$ is a function of $t$, then $d x$ in the above is a differential and we get:

$$d f = \frac{d f}{d x}\, d x$$

Thanks to the chain rule, this can also be written as:

$$d f = \frac{d f}{d x}\, \frac{d x}{d t}\, d t = \frac{d f}{d t}\, d t$$

and so the notation is consistent.
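The limit definition is easy to check symbolically (an illustration of mine, using SymPy): for $f(x) = x^3$ the differential comes out as $f'(x)\, h = 3x^2 h$:

>>> from sympy import symbols, limit
>>> x, h, eps = symbols("x h eps")
>>> f = lambda v: v**3
>>> limit((f(x + eps*h) - f(x))/eps, eps, 0)
3*h*x**2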
3.17.2. Functions of several variables

Let's have $\mathbf{x} = (x_1, \dots, x_n)$. The function $f(\mathbf{x})$ assigns a number to each $\mathbf{x}$. We define a differential of $f$ in the direction of $\mathbf{h}$ as:

$$d f(\mathbf{x})\, \mathbf{h} = \left[\frac{d}{d\epsilon}\, f(\mathbf{x} + \epsilon \mathbf{h})\right]_{\epsilon=0} = \lim_{\epsilon\to0} \frac{f(\mathbf{x} + \epsilon \mathbf{h}) - f(\mathbf{x})}{\epsilon} = \mathbf{a} \cdot \mathbf{h}$$

The last equality follows from the fact that $d f(\mathbf{x})\, \mathbf{h}$ is a linear function of $\mathbf{h}$. We define the partial derivative $\frac{\partial f}{\partial x_i}$ of $f$ with respect to $x_i$ as the $i$-th component of the vector $\mathbf{a}$:

$$\mathbf{a} \equiv \left(\frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n}\right) = \nabla f$$

This also gives a formula for computing $\frac{\partial f}{\partial x_i}$: we set $\mathbf{h} = \mathbf{e}_i$ and

$$\frac{\partial f}{\partial x_i} = d f(\mathbf{x})\, \mathbf{e}_i = \lim_{\epsilon\to0} \frac{f(\mathbf{x} + \epsilon \mathbf{e}_i) - f(\mathbf{x})}{\epsilon} = \left[\frac{d}{d\epsilon}\, f(x_1, \dots, x_i + \epsilon, \dots, x_n)\right]_{\epsilon=0}$$

The usual way to define partial derivatives is to use the last formula as the definition, but here this formula is a consequence of our definition in terms of the components of $\mathbf{a}$. Every variable can be treated as a function (a very simple one):

$$g(x_1, \dots, x_n) = x_i$$

and so we define

$$d x_i \equiv d g(\mathbf{x})\, \mathbf{h} = \mathbf{e}_i \cdot \mathbf{h} = h_i$$

and thus we write $\mathbf{h} = (d x_1, \dots, d x_n)$ and

$$d f = \nabla f \cdot \mathbf{h} = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\, d x_i$$

So $d x_i$ has two meanings — it's either $h_i = \Delta x_i$ (a finite change in the independent variable $x_i$) or a differential, depending on the context. The above is a detailed explanation of why things are defined the way they are and what the exact meaning is. With this understanding, the only things that are actually needed for any calculations are the following – the definition of a differential:

$$d f(\mathbf{x})\, \mathbf{h} = \left[\frac{d}{d\epsilon}\, f(\mathbf{x} + \epsilon \mathbf{h})\right]_{\epsilon=0}$$

Only a regular derivative (defined in the previous section) is needed for this definition. The definition of a partial derivative (and a gradient):

$$d f(\mathbf{x})\, \mathbf{h} = \nabla f \cdot \mathbf{h} = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\, h_i$$

And finally the understanding that $d x_i$ means either $h_i = \Delta x_i$ or a differential depending on the context. That's all there is to it.
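The same check works for several variables (again an illustration, not from the text): for $f(x_1, x_2) = x_1^2 x_2$ the limit yields $\nabla f \cdot \mathbf{h} = 2 x_1 x_2\, h_1 + x_1^2\, h_2$:

>>> from sympy import symbols, limit
>>> x1, x2, h1, h2, eps = symbols("x1 x2 h1 h2 eps")
>>> f = lambda u, v: u**2*v
>>> limit((f(x1 + eps*h1, x2 + eps*h2) - f(x1, x2))/eps, eps, 0)
2*h1*x1*x2 + h2*x1**2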
3.17.3. Functionals

Let's now define functional derivatives and variations. The functional $F[f]$ assigns a number to each function $f(x)$. The variation is defined as

$$\delta F[f]\, h = \lim_{\epsilon\to0} \frac{F[f + \epsilon h] - F[f]}{\epsilon} = \left[\frac{d}{d\epsilon}\, F[f + \epsilon h]\right]_{\epsilon=0}$$

We define $\frac{\delta F}{\delta f(x)}$ as

$$\delta F[f]\, h = \int \frac{\delta F}{\delta f(x)}\, h(x)\, dx$$

This also gives a formula for computing $\frac{\delta F}{\delta f(x)}$: we set $h(y) = \delta(y - x)$ and

$$\frac{\delta F}{\delta f(x)} = \delta F[f]\, \delta(y - x) = \lim_{\epsilon\to0} \frac{F[f(y) + \epsilon\, \delta(y - x)] - F[f(y)]}{\epsilon} \qquad (3.17.3.1)$$

Sometimes the functional derivative is defined using the last formula; here this formula just follows from our definition. Every function can be treated as a functional (although a very simple one):

$$G[f] = f(x)$$

and so we define

$$\delta f(x) \equiv \delta G[f]\, h = \left[\frac{d}{d\epsilon}\, \bigl(f(x) + \epsilon\, h(x)\bigr)\right]_{\epsilon=0} = h(x)$$

thus we write $h = \delta f$ and

$$\delta F = \int \frac{\delta F}{\delta f(x)}\, \delta f(x)\, dx$$

so $\delta f$ has two meanings — it's either $h(x) = \Delta f(x)$ (a finite change in the function $f$) or a variation of a functional, depending on the context. It is completely analogous to $d x$. Let's summarize the only formulas needed in actual calculations – the definition of a variation (using a regular derivative):

$$\delta F[f]\, \delta f = \left[\frac{d}{d\epsilon}\, F[f + \epsilon\, \delta f]\right]_{\epsilon=0} \qquad (3.17.3.2)$$

the definition of the functional derivative:

$$\delta F[f]\, \delta f = \int \frac{\delta F}{\delta f(x)}\, \delta f(x)\, dx$$

and the understanding that $\delta f$ means either $\Delta f$ or a variation. The last equation is the best way to calculate a functional derivative — apply the variation until you get the integral into the form $\int \bigl(\cdots\bigr)\, \delta f(x)\, dx$ and then you read off the functional derivative $\frac{\delta F}{\delta f(x)}$ from the expression in the parentheses.
The correspondence between the finite- and infinite-dimensional case can be summarized using a functional $F[f]$, a function $f(x)$ of a continuous parameter $x$ (which can be a scalar or a vector) and its discretized version $f_i = f(x_i)$, together with a function $F(f_1, \dots, f_n)$ of the discrete values:

$$f(x) \longleftrightarrow f_i \qquad \delta F \longleftrightarrow d F \qquad \frac{\delta F}{\delta f(x)} \longleftrightarrow \frac{\partial F}{\partial f_i}$$

In other words, the basic difference is that the continuous parameter $x$ has been replaced with a discrete parameter $i$. Then the function $f(x)$ becomes a vector of values $f_i$, a variation becomes a differential and the functional derivative becomes a partial derivative. To minimize a functional, one must search for a zero functional derivative, while in the discrete case one searches for zero partial derivatives (the gradient).
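Here is a small sketch of this correspondence (my illustration): discretizing $F[f] = \int f(x)^2\, dx$ as $F = \sum_i f_i^2\, \Delta x$, the partial derivatives $\partial F / \partial f_i = 2 f_i\, \Delta x$ are the discrete counterpart of $\frac{\delta F}{\delta f(x)} = 2 f(x)$ (the extra $\Delta x$ comes from the integration measure):

>>> from sympy import symbols, diff, Rational
>>> dx = Rational(1, 10)                # grid spacing Delta x
>>> f = symbols("f0:4")                 # discretized values f_i
>>> F = sum(fi**2 for fi in f)*dx       # F = sum_i f_i**2 * dx
>>> [diff(F, fi) for fi in f]           # gradient: 2*f_i*dx
[f0/5, f1/5, f2/5, f3/5]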
We now extend the $\delta$-variation notation to any function $g$ which contains the function $f$ being varied: you just need to replace $f$ by $f + \epsilon\, \delta f$ and apply $\left[\frac{d}{d\epsilon}\right]_{\epsilon=0}$ to the whole $g$, for example (here $g = \partial_\mu f$):

$$\delta(\partial_\mu f) = \left[\frac{d}{d\epsilon}\, \partial_\mu (f + \epsilon\, \delta f)\right]_{\epsilon=0} = \partial_\mu\, \delta f$$

As such, the $F$ in (3.17.3.2) can be either a functional or any expression that contains the function $f$. This notation allows very convenient computations, as shown in the following examples.
First, when computing a variation of some integral, we can interchange $\delta$ and $\int$:

$$\delta F = \delta \int g\, dx = \left[\frac{d}{d\epsilon} \int g[f + \epsilon\, \delta f]\, dx\right]_{\epsilon=0} = \int \left[\frac{d}{d\epsilon}\, g[f + \epsilon\, \delta f]\right]_{\epsilon=0} dx = \int \delta g\, dx$$

In the expression $\delta \int g\, dx$ we must understand from the context which function we are treating it as a functional of. In our case it's a functional of $f$, so we have $\delta g = \left[\frac{d}{d\epsilon}\, g[f + \epsilon\, \delta f]\right]_{\epsilon=0}$.
The second very important note is when taking the variation of an expression like:

$$\delta \int f(x)\, f(y)\, dx\, dy$$

then when $f$ is replaced by $f + \epsilon\, \delta f$, one has to keep track of the independent variable, so $f(x)$ gets replaced by $f(x) + \epsilon\, \delta f(x)$ and $f(y)$ gets replaced by $f(y) + \epsilon\, \delta f(y)$. Thus the two variations $\delta f(x)$ and $\delta f(y)$ are different (independent). If there is only one independent variable, one can simply write $\delta f$, as it is clear what the independent variable is. This is analogous to using differentials, e.g. $d(x y) = y\, dx + x\, dy$, where one has to keep track of the independent variable as well for each $d$.
Another useful formula is the differentiation of a functional $F[f]$ where the function $f(x, t)$ depends on a parameter $t$:

$$\frac{d}{d t}\, F[f] = \int \frac{\delta F}{\delta f(x)}\, \frac{\partial f(x, t)}{\partial t}\, dx$$

where we used the definition of a variation and a functional derivative with $h(x) = \frac{\partial f(x, t)}{\partial t}$:

$$\frac{d}{d t}\, F[f] = \lim_{\epsilon\to0} \frac{F[f(x, t + \epsilon)] - F[f(x, t)]}{\epsilon} = \lim_{\epsilon\to0} \frac{F\left[f + \epsilon\, \frac{\partial f}{\partial t}\right] - F[f]}{\epsilon} = \delta F[f]\, \frac{\partial f}{\partial t} = \int \frac{\delta F}{\delta f(x)}\, \frac{\partial f(x, t)}{\partial t}\, dx$$
3.17.4. Examples

Some of these examples show how to use the delta function definition of the functional derivative in equation (3.17.3.1). However, the simplest way is to calculate the variation first and then read off the functional derivative from the result, as explained above.
The next example shows that when taking the variation of an expression containing the function $f$ of different independent variables, one has to keep track of these variables in the variations:

$$\delta \int f(x)\, K(x, y)\, f(y)\, dx\, dy = \int \delta f(x)\, K(x, y)\, f(y)\, dx\, dy + \int f(x)\, K(x, y)\, \delta f(y)\, dx\, dy = 2 \int K(x, y)\, f(y)\, \delta f(x)\, dx\, dy$$

The last equality follows from $K(x, y) = K(y, x)$ (any antisymmetrical part of $K$ would not contribute to the symmetrical integration).
Another example is the derivation of the Euler-Lagrange equations for the Lagrangian density $\mathcal{L} = \mathcal{L}(\phi, \partial_\mu \phi)$:

$$0 = \delta S = \delta \int \mathcal{L}\, d^4 x = \int \left(\frac{\partial \mathcal{L}}{\partial \phi}\, \delta\phi + \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi)}\, \delta(\partial_\mu \phi)\right) d^4 x = \int \left(\frac{\partial \mathcal{L}}{\partial \phi} - \partial_\mu \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi)}\right) \delta\phi\, d^4 x$$

We can also write it using a functional derivative as:

$$\frac{\delta S}{\delta \phi(x)} = \frac{\partial \mathcal{L}}{\partial \phi} - \partial_\mu \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi)} = 0$$
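SymPy can carry out this variation mechanically; as an illustrative cross-check (this example is mine, not the text's), euler_equations applied to a harmonic oscillator Lagrangian gives the expected equation of motion:

>>> from sympy import symbols, Function, Rational
>>> from sympy.calculus.euler import euler_equations
>>> t, m, k = symbols("t m k")
>>> x = Function("x")
>>> L = Rational(1, 2)*m*x(t).diff(t)**2 - Rational(1, 2)*k*x(t)**2
>>> euler_equations(L, x(t), t)
[Eq(-k*x(t) - m*Derivative(x(t), (t, 2)), 0)]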
Another example:

$$\frac{\delta}{\delta f(t)} \int f(x)^2\, dx = \lim_{\epsilon\to0} \frac{\int \bigl(f(x) + \epsilon\, \delta(x - t)\bigr)^2 dx - \int f(x)^2\, dx}{\epsilon} = \lim_{\epsilon\to0} \int \Bigl(2 f(x)\, \delta(x - t) + \epsilon\, \delta^2(x - t)\Bigr)\, dx = 2 f(t)$$

One might think that the above calculation is incorrect, because $\delta^2(x - t)$ is undefined. In case of such problems the above notation automatically implies working with some sequence (for example $\delta_\alpha(x) = \frac{\sin(\alpha x)}{\pi x}$) and taking the limit $\alpha \to \infty$:

$$\frac{\delta}{\delta f(t)} \int f(x)^2\, dx = \lim_{\alpha\to\infty} \lim_{\epsilon\to0} \frac{\int \bigl(f(x) + \epsilon\, \delta_\alpha(x - t)\bigr)^2 dx - \int f(x)^2\, dx}{\epsilon} = \lim_{\alpha\to\infty} \int \Bigl(2 f(x)\, \delta_\alpha(x - t)\Bigr)\, dx = 2 f(t) \qquad (3.17.4.1)$$

As you can see, we got the same result, with the same rigor, but using an obfuscating notation. That's why such obvious manipulations with $\delta(x)$ are tacitly implied. However, the best method is to first calculate the variation:

$$\delta \int f(x)^2\, dx = \int 2 f(x)\, \delta f(x)\, dx$$

and immediately read off the functional derivative:

$$\frac{\delta F}{\delta f(x)} = 2 f(x)$$
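The regularized calculation (3.17.4.1) can also be checked with SymPy (an illustration; the Gaussian sequence below is my choice, any sequence satisfying (3.15.2) works), using the test function $f(x) = e^{-x^2}$; the limit indeed gives $2 f(t)$:

>>> from sympy import symbols, exp, sqrt, pi, oo, integrate, limit
>>> x, t = symbols("x t", real=True)
>>> a = symbols("a", positive=True)
>>> d_a = sqrt(a/pi)*exp(-a*(x - t)**2)          # delta sequence
>>> I = integrate(2*exp(-x**2)*d_a, (x, -oo, oo))
>>> limit(I, a, oo)
2*exp(-t**2)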
Another example, with a metric $g_{\mu\nu}(x)$ as a function of coordinates:
And an example of varying with respect to a metric:
Another example (varying energy functional):
Another example (Hartree energy):

$$E_H[n] = \frac{1}{2} \int \frac{n(\mathbf{r})\, n(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, d^3 r\, d^3 r'$$

we calculate the variation first:

$$\delta E_H = \frac{1}{2} \int \frac{\delta n(\mathbf{r})\, n(\mathbf{r}') + n(\mathbf{r})\, \delta n(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, d^3 r\, d^3 r' = \int \frac{n(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, \delta n(\mathbf{r})\, d^3 r\, d^3 r'$$

so the functional derivative is:

$$\frac{\delta E_H}{\delta n(\mathbf{r})} = \int \frac{n(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|}\, d^3 r'$$
Another example (functional with gradients):

$$F[f] = \int |\nabla f(\mathbf{r})|^2\, d^3 r$$

the variation is:

$$\delta F = \int 2\, \nabla f \cdot \nabla \delta f\, d^3 r = -\int 2\, \nabla^2 f\, \delta f\, d^3 r$$

(we integrated by parts, assuming the boundary term vanishes), from which we read off the functional derivative:

$$\frac{\delta F}{\delta f(\mathbf{r})} = -2\, \nabla^2 f(\mathbf{r})$$
3.18. Dirac Notation
The Dirac notation allows a very compact and powerful way of writing equations that describe a function expansion into a basis, both discrete (e.g. a Fourier series expansion) and continuous (e.g. a Fourier transform) and related things. The notation is designed so that it is very easy to remember and it just guides you to write the correct equation.
Let's have a function $f(x)$. We define

$$\langle x | f \rangle \equiv f(x) \qquad \langle x | x' \rangle \equiv \delta(x - x')$$

The following equation

$$f(x) = \int \delta(x - x')\, f(x')\, dx'$$

then becomes

$$\langle x | f \rangle = \int \langle x | x' \rangle \langle x' | f \rangle\, dx'$$

and thus we can interpret $|f\rangle$ as a vector, $|x\rangle$ as a basis and $\langle x | f \rangle$ as the coefficients in the basis expansion:

$$|f\rangle = \hat 1\, |f\rangle = \int |x'\rangle\, dx'\, \langle x' | f \rangle$$
That's all there is to it. Take the above rules as the operational definition of the Dirac notation. It's like with the delta function: written alone it doesn't have any meaning, but there are clear and non-ambiguous rules to convert any expression with $\delta$ to an expression which even mathematicians understand (i.e. integrating, applying test functions and using other relations to get rid of all $\delta$ symbols in the expression – but the result is usually much more complicated than the original formula). It's the same with the ket $|f\rangle$: written alone it doesn't have any meaning, but you can always use the above rules to get an expression that makes sense to everyone (i.e. attaching any bra to the left and rewriting all brackets with their equivalent expressions) – but it will be more complex and harder to remember and – that is important – less general.
Now, let's look at the spherical harmonics:

$$Y_{lm}(\hat{\bf r}) \equiv \langle \hat{\bf r} | l m \rangle$$

on the unit sphere, we have

$$\langle \hat{\bf r} | \hat{\bf r}' \rangle = \delta(\hat{\bf r} - \hat{\bf r}') = \delta(\cos\theta - \cos\theta')\, \delta(\phi - \phi')$$

thus

$$\langle l' m' | l m \rangle = \int \langle l' m' | \hat{\bf r} \rangle \langle \hat{\bf r} | l m \rangle\, d\hat{\bf r} = \int Y_{l'm'}^*(\hat{\bf r})\, Y_{lm}(\hat{\bf r})\, d\hat{\bf r}$$

and from (3.30.1) we get

$$\langle l' m' | l m \rangle = \delta_{l l'}\, \delta_{m m'}$$

now

$$\sum_{lm} \langle \hat{\bf r} | l m \rangle \langle l m | \hat{\bf r}' \rangle = \sum_{lm} Y_{lm}(\hat{\bf r})\, Y_{lm}^*(\hat{\bf r}')$$

from (3.30.3) we get

$$\sum_{lm} \langle \hat{\bf r} | l m \rangle \langle l m | \hat{\bf r}' \rangle = \delta(\hat{\bf r} - \hat{\bf r}') = \langle \hat{\bf r} | \hat{\bf r}' \rangle$$

so we have

$$\sum_{lm} | l m \rangle \langle l m | = \hat 1$$

so $|lm\rangle$ forms an orthonormal basis. Any function $f(\hat{\bf r})$ defined on the sphere can be written using this basis:

$$f(\hat{\bf r}) = \langle \hat{\bf r} | f \rangle = \sum_{lm} \langle \hat{\bf r} | l m \rangle \langle l m | f \rangle = \sum_{lm} f_{lm}\, Y_{lm}(\hat{\bf r})$$

where

$$f_{lm} = \langle l m | f \rangle = \int Y_{lm}^*(\hat{\bf r})\, f(\hat{\bf r})\, d\hat{\bf r}$$
If we have a function $f(\mathbf{r})$ in 3D, we can write it as a function of the radial coordinate $\rho$ and the unit vector $\hat{\bf r}$ and expand only with respect to the variable $\hat{\bf r}$:

$$f(\mathbf{r}) = f(\rho \hat{\bf r}) = \sum_{lm} f_{lm}(\rho)\, Y_{lm}(\hat{\bf r})$$

In Dirac notation we are doing the following: we decompose the space into the angular and radial part

$$|\mathbf{r}\rangle = |\hat{\bf r}\rangle\, |\rho\rangle$$

and write

$$f(\mathbf{r}) = \langle \mathbf{r} | f \rangle = \sum_{lm} \langle \hat{\bf r} | l m \rangle \langle l m | \langle \rho |\, f \rangle = \sum_{lm} Y_{lm}(\hat{\bf r})\, f_{lm}(\rho)$$

where

$$f_{lm}(\rho) = \langle l m | \langle \rho |\, f \rangle$$

Let's calculate $\langle \mathbf{r} | \mathbf{r}' \rangle$:

$$\langle \mathbf{r} | \mathbf{r}' \rangle = \langle \hat{\bf r} | \langle \rho |\, |\hat{\bf r}'\rangle |\rho'\rangle = \langle \hat{\bf r} | \hat{\bf r}' \rangle \langle \rho | \rho' \rangle$$

so

$$\langle \mathbf{r} | \mathbf{r}' \rangle = \delta(\hat{\bf r} - \hat{\bf r}')\, \delta(\rho - \rho') = \frac{\delta(\theta - \theta')\, \delta(\phi - \phi')}{\sin\theta}\, \delta(\rho - \rho')$$

We must stress that $\langle l m |$ only acts in the $|\hat{\bf r}\rangle$ space (not the $|\rho\rangle$ space), which means that

$$\langle l m |\, |\hat{\bf r}\rangle |\rho\rangle = \langle l m | \hat{\bf r} \rangle\, |\rho\rangle = Y_{lm}^*(\hat{\bf r})\, |\rho\rangle$$

and leaves $|\rho\rangle$ intact. Similarly,

$$\sum_{lm} | l m \rangle \langle l m | = \hat 1$$

is a unity in the $|\hat{\bf r}\rangle$ space only (i.e. on the unit sphere).
Let's rewrite the equation (3.30.4):

$$P_l(\hat{\bf r} \cdot \hat{\bf r}') = \frac{4\pi}{2l + 1} \sum_{m=-l}^{l} Y_{lm}(\hat{\bf r})\, Y_{lm}^*(\hat{\bf r}')$$

Using the completeness relation (3.29.1):

$$\sum_{l=0}^{\infty} \frac{2l + 1}{2}\, P_l(x')\, P_l(x) = \delta(x - x')$$

we can now derive a very important formula true for every function $f(\hat{\bf r} \cdot \hat{\bf r}')$:

$$f(\hat{\bf r} \cdot \hat{\bf r}') = \sum_{lm} f_l\, Y_{lm}(\hat{\bf r})\, Y_{lm}^*(\hat{\bf r}')$$

where

$$f_l = 2\pi \int_{-1}^{1} f(x)\, P_l(x)\, dx$$

or written explicitly

$$f(\hat{\bf r} \cdot \hat{\bf r}') = \sum_{lm} \left(2\pi \int_{-1}^{1} f(x)\, P_l(x)\, dx\right) Y_{lm}(\hat{\bf r})\, Y_{lm}^*(\hat{\bf r}') \qquad (3.18.1)$$
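The Legendre orthogonality behind the completeness relation above is easy to verify with SymPy (an illustration, not from the original text): $\int_{-1}^{1} P_l(x)\, P_{l'}(x)\, dx = \frac{2}{2l+1}\, \delta_{l l'}$:

>>> from sympy import symbols, legendre, integrate
>>> x = symbols("x")
>>> integrate(legendre(2, x)**2, (x, -1, 1))
2/5
>>> integrate(legendre(2, x)*legendre(3, x), (x, -1, 1))
0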
3.19. Homogeneous Functions (Euler's Theorem)

A function of several variables $f(x_1, \dots, x_n)$ is homogeneous of degree $k$ if

$$f(\lambda x_1, \dots, \lambda x_n) = \lambda^k f(x_1, \dots, x_n)$$

By differentiating with respect to $\lambda$:

$$\sum_{i=1}^{n} x_i\, \frac{\partial f(\lambda x_1, \dots, \lambda x_n)}{\partial (\lambda x_i)} = k\, \lambda^{k-1} f(x_1, \dots, x_n)$$

and setting $\lambda = 1$ we get the so called Euler equation:

$$\sum_{i=1}^{n} x_i\, \frac{\partial f}{\partial x_i} = k f$$

in 3D this can also be written as:

$$\mathbf{x} \cdot \nabla f(\mathbf{x}) = k f(\mathbf{x})$$
3.19.1. Example 1

The function $f(x, y, z) = \sqrt{x^2 + y^2 + z^2}$ is homogeneous of degree 1, because:

$$f(\lambda x, \lambda y, \lambda z) = \sqrt{\lambda^2 x^2 + \lambda^2 y^2 + \lambda^2 z^2} = \lambda\, f(x, y, z)$$

and the Euler equation is:

$$x\, \frac{\partial f}{\partial x} + y\, \frac{\partial f}{\partial y} + z\, \frac{\partial f}{\partial z} = f$$

or

$$\frac{x^2 + y^2 + z^2}{\sqrt{x^2 + y^2 + z^2}} = \sqrt{x^2 + y^2 + z^2}$$

Which is true.
3.19.2. Example 2

The function $f(x, y, z) = \frac{1}{\sqrt{x^2 + y^2 + z^2}}$ is homogeneous of degree -1, because:

$$f(\lambda x, \lambda y, \lambda z) = \frac{1}{\sqrt{\lambda^2 x^2 + \lambda^2 y^2 + \lambda^2 z^2}} = \lambda^{-1} f(x, y, z)$$

and the Euler equation is:

$$x\, \frac{\partial f}{\partial x} + y\, \frac{\partial f}{\partial y} + z\, \frac{\partial f}{\partial z} = -f$$

or

$$-\frac{x^2 + y^2 + z^2}{\left(x^2 + y^2 + z^2\right)^{3/2}} = -\frac{1}{\sqrt{x^2 + y^2 + z^2}}$$

Which is true.
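Euler's equation is easy to verify symbolically for the degree $-1$ example above (an illustrative SymPy check): $x f_x + y f_y + z f_z + f$ simplifies to zero:

>>> from sympy import symbols, sqrt, simplify
>>> x, y, z = symbols("x y z", positive=True)
>>> f = 1/sqrt(x**2 + y**2 + z**2)
>>> simplify(x*f.diff(x) + y*f.diff(y) + z*f.diff(z) + f)
0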
3.20. Green Functions

Green functions are an excellent tool for working with a solution to any ODE or PDE. In this text we explain how they work and then show how one can calculate them using FEM.
3.20.1. Introduction

Let's put any ODE or PDE in the form:

$$L\, u(\mathbf{x}) = f(\mathbf{x}) \qquad (3.20.1.1)$$

Here $L$ is a differential operator and $\mathbf{x}$ can have any dimension, e.g. 1D (ODE), 2D, 3D or more (PDE). Then we can express the solution as

$$u(\mathbf{x}) = \int G(\mathbf{x}, \mathbf{x}')\, f(\mathbf{x}')\, d\mathbf{x}' \qquad (3.20.1.2)$$

where $G(\mathbf{x}, \mathbf{x}')$ is a Green function that needs to satisfy the equation:

$$L\, G(\mathbf{x}, \mathbf{x}') = \delta(\mathbf{x} - \mathbf{x}') \qquad (3.20.1.3)$$

Remember that $L$ acts on $\mathbf{x}$ only, so we can check that (3.20.1.2) indeed solves the PDE (3.20.1.1):

$$L\, u(\mathbf{x}) = \int L\, G(\mathbf{x}, \mathbf{x}')\, f(\mathbf{x}')\, d\mathbf{x}' = \int \delta(\mathbf{x} - \mathbf{x}')\, f(\mathbf{x}')\, d\mathbf{x}' = f(\mathbf{x})$$
3.20.2. Boundary Conditions

The equation (3.20.1.3) doesn't determine the Green function uniquely, because one can add to it any solution of the homogeneous equation $L\, u(\mathbf{x}) = 0$. We can use this freedom to solve (3.20.1.3) for any boundary condition. So we prescribe a boundary condition and find the Green function (by solving (3.20.1.3)) that satisfies the boundary condition. It can be shown that $u(\mathbf{x})$ determined from (3.20.1.2) then also satisfies the same boundary condition.
3.20.3. Symmetry

We write the equation for Green functions at two different points $\mathbf{x}_1$ and $\mathbf{x}_2$:

$$L\, G(\mathbf{x}, \mathbf{x}_1) = \delta(\mathbf{x} - \mathbf{x}_1)$$

$$L\, G(\mathbf{x}, \mathbf{x}_2) = \delta(\mathbf{x} - \mathbf{x}_2)$$

and multiply the first equation by $G(\mathbf{x}, \mathbf{x}_2)$, the second by $G(\mathbf{x}, \mathbf{x}_1)$:

$$G(\mathbf{x}, \mathbf{x}_2)\, L\, G(\mathbf{x}, \mathbf{x}_1) = G(\mathbf{x}, \mathbf{x}_2)\, \delta(\mathbf{x} - \mathbf{x}_1)$$

$$G(\mathbf{x}, \mathbf{x}_1)\, L\, G(\mathbf{x}, \mathbf{x}_2) = G(\mathbf{x}, \mathbf{x}_1)\, \delta(\mathbf{x} - \mathbf{x}_2)$$

subtract them and integrate over $\mathbf{x}$:

$$\int \Bigl(G(\mathbf{x}, \mathbf{x}_2)\, L\, G(\mathbf{x}, \mathbf{x}_1) - G(\mathbf{x}, \mathbf{x}_1)\, L\, G(\mathbf{x}, \mathbf{x}_2)\Bigr)\, d\mathbf{x} = G(\mathbf{x}_1, \mathbf{x}_2) - G(\mathbf{x}_2, \mathbf{x}_1)$$

Assuming that the operator $L$ is Hermitean (the left hand side then vanishes), we get:

$$G(\mathbf{x}_1, \mathbf{x}_2) = G(\mathbf{x}_2, \mathbf{x}_1)$$

So the Green function is symmetric for Hermitean operators $L$.
3.20.4. Examples

Poisson Equation in 1D

Poisson equation:

$$u''(x) = f(x)$$

We calculate the Green function using the Fourier transform:

$$G''(x - x') = \delta(x - x')$$

$$-k^2\, \tilde G(k) = 1$$

$$G(x - x') = -\frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{e^{i k (x - x')}}{k^2}\, dk = \frac{|x - x'|}{2}$$

(the divergent constant part of the integral is a solution of the homogeneous equation and can be dropped). Check:

$$G''(x - x') = \left(\frac{|x - x'|}{2}\right)'' = \left(\frac{\mathrm{sign}(x - x')}{2}\right)' = \delta(x - x')$$

Then:

$$u(x) = \int_{-\infty}^{\infty} \frac{|x - x'|}{2}\, f(x')\, dx'$$

The Green function can also be written using $x_< \equiv \min(x, x')$ and $x_> \equiv \max(x, x')$:

$$G(x, x') = \frac{|x - x'|}{2} = \frac{x_> - x_<}{2}$$
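The check $G'' = \delta$ can be reproduced in SymPy (an illustration), which differentiates $|x|$ through $\mathrm{sign}(x)$ to the delta function:

>>> from sympy import symbols, Abs, diff
>>> x = symbols("x", real=True)
>>> diff(Abs(x)/2, x, 2)
DiracDelta(x)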
Radial Poisson Equation

Let's write $\rho_< \equiv \min(\rho, \rho')$ and $\rho_> \equiv \max(\rho, \rho')$ using the Heaviside step function:

$$\rho_< = \rho\, H(\rho' - \rho) + \rho'\, H(\rho - \rho')$$

and:

$$\rho_> = \rho'\, H(\rho' - \rho) + \rho\, H(\rho - \rho')$$

Then we can differentiate:

$$\frac{d \rho_<}{d \rho} = H(\rho' - \rho) + (\rho' - \rho)\, \delta(\rho - \rho') = H(\rho' - \rho)$$

$$\frac{d \rho_>}{d \rho} = H(\rho - \rho') + (\rho - \rho')\, \delta(\rho - \rho') = H(\rho - \rho')$$

Given:

$$u(\rho) = \int_0^{\infty} G(\rho, \rho')\, f(\rho')\, \rho'^2\, d\rho' \qquad (3.20.4.1)$$

The Green function is

$$G(\rho, \rho') = -\frac{1}{\rho_>}$$

Let's differentiate:

$$\frac{d}{d\rho}\, G(\rho, \rho') = \frac{1}{\rho_>^2}\, \frac{d \rho_>}{d \rho} = \frac{H(\rho - \rho')}{\rho^2}$$

and

$$\frac{d}{d\rho}\left(\rho^2\, \frac{d}{d\rho}\, G(\rho, \rho')\right) = \frac{d}{d\rho}\, H(\rho - \rho') = \delta(\rho - \rho')$$

So we get:

$$\frac{1}{\rho^2}\, \frac{d}{d\rho}\left(\rho^2\, \frac{d G}{d\rho}\right) = \frac{\delta(\rho - \rho')}{\rho^2}$$

So $u(\rho)$ from (3.20.4.1) is a solution to the radial Poisson equation:

$$\frac{1}{\rho^2}\, \frac{d}{d\rho}\left(\rho^2\, \frac{d u}{d\rho}\right) = f(\rho)$$
Helmholtz Equation in 1D

$$u''(x) + u(x) = f(x)$$

with boundary conditions $u(0) = u\left(\frac{\pi}{2}\right) = 0$. We use the Fourier transform:

$$G''(x) + G(x) = \delta(x)$$

$$(1 - k^2)\, \tilde G(k) = 1$$

$$G(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{e^{i k x}}{1 - k^2}\, dk = \frac{1}{2} \sin|x|$$

Check:

$$\left(\frac{1}{2} \sin|x|\right)'' + \frac{1}{2} \sin|x| = \left(\frac{1}{2}\, \mathrm{sign}(x)\, \cos x\right)' + \frac{1}{2} \sin|x| = \delta(x) - \frac{1}{2} \sin|x| + \frac{1}{2} \sin|x| = \delta(x)$$

The general solution of the homogeneous equation $u'' + u = 0$ is:

$$u(x) = C_1 \cos x + C_2 \sin x$$

so the general Green function is:

$$G(x, x') = \frac{1}{2} \sin|x - x'| + C_1(x') \cos x + C_2(x') \sin x$$

Satisfying the boundary conditions (for all $x'$):

$$G(0, x') = \frac{1}{2} \sin x' + C_1(x') = 0 \qquad G\left(\frac{\pi}{2}, x'\right) = \frac{1}{2} \cos x' + C_2(x') = 0$$

we get:

$$C_1(x') = -\frac{1}{2} \sin x' \qquad C_2(x') = -\frac{1}{2} \cos x'$$

and:

$$G(x, x') = \frac{1}{2} \sin|x - x'| - \frac{1}{2} \sin x' \cos x - \frac{1}{2} \cos x' \sin x$$

and

$$G(x, x') = \begin{cases} -\cos x \sin x' & x' < x \\ -\sin x \cos x' & x' > x \end{cases}$$

To show that this really works, let's take for example $f(x) = 3 \sin 2x$. Then

$$u(x) = \int_0^{\pi/2} G(x, x')\, f(x')\, dx' = -\cos x \int_0^{x} 3 \sin 2x' \sin x'\, dx' - \sin x \int_x^{\pi/2} 3 \sin 2x' \cos x'\, dx'$$
We can use SymPy to evaluate the integrals (in an isympy session, where the symbols $x$, $y$ and the SymPy functions are already imported):

In [1]: u = -cos(x)*integrate(3*sin(2*y)*sin(y), (y, 0, x)) - \
   ...:     sin(x)*integrate(3*sin(2*y)*cos(y), (y, x, pi/2))

In [2]: u
Out[2]: -(cos(x)*sin(2*x) - 2*cos(2*x)*sin(x))*cos(x) - (sin(x)*sin(2*x) + 2*cos(x)*cos(2*x))*sin(x)

In [3]: simplify(u)
Out[3]:
     2               2
- cos (x)*sin(2*x) - sin (x)*sin(2*x)

In [4]: trigsimp(_)
Out[4]: -sin(2*x)
And we get

$$u(x) = -\sin 2x$$

We can easily check that $u'' + u = 3 \sin 2x$:

>>> u = -sin(2*x)
>>> u.diff(x, 2) + u
3*sin(2*x)

and since $u(0) = u\left(\frac{\pi}{2}\right) = 0$, we have verified that $u(x) = -\sin 2x$ is the correct solution.
Poisson Equation in 2D

Let $\mathbf{x} = (x, y)$ and we want to solve:

$$\nabla^2 u(\mathbf{x}) = f(\mathbf{x})$$

So we have:

$$\nabla^2 G(\mathbf{x}, \mathbf{x}') = \delta(\mathbf{x} - \mathbf{x}')$$

The solution is:

$$G(\mathbf{x}, \mathbf{x}') = \frac{1}{2\pi} \ln |\mathbf{x} - \mathbf{x}'|$$
Poisson Equation in 3D

$$\nabla^2 u(\mathbf{x}) = f(\mathbf{x})$$

with the boundary condition $G \to 0$ at infinity. Then:

$$G(\mathbf{x}, \mathbf{x}') = -\frac{1}{4\pi\, |\mathbf{x} - \mathbf{x}'|}$$

and

$$u(\mathbf{x}) = -\int \frac{f(\mathbf{x}')}{4\pi\, |\mathbf{x} - \mathbf{x}'|}\, d^3 x'$$
Helmholtz Equation in 3D

$$(\nabla^2 + k^2)\, u(\mathbf{x}) = f(\mathbf{x})$$

with a boundary condition at infinity. Then:

$$G(\mathbf{x}, \mathbf{x}') = -\frac{e^{i k |\mathbf{x} - \mathbf{x}'|}}{4\pi\, |\mathbf{x} - \mathbf{x}'|}$$
Finite Element Method

Let's show it on the Laplace equation. We want to solve:

$$\nabla^2 G(\mathbf{x}, \mathbf{x}') = \delta(\mathbf{x} - \mathbf{x}')$$

We will treat $\mathbf{x}'$ as a parameter, so we define $G_{\mathbf{x}'}(\mathbf{x}) \equiv G(\mathbf{x}, \mathbf{x}')$:

$$\nabla^2 G_{\mathbf{x}'}(\mathbf{x}) = \delta(\mathbf{x} - \mathbf{x}')$$

We set $G_{\mathbf{x}'}(\mathbf{x}) = 0$ on the boundary and we get the weak formulation (multiplying by a test function $v$ and integrating by parts):

$$-\int \nabla G_{\mathbf{x}'}(\mathbf{x}) \cdot \nabla v(\mathbf{x})\, d\mathbf{x} = v(\mathbf{x}')$$

So we choose $\mathbf{x}'$ and then solve for $G_{\mathbf{x}'}(\mathbf{x})$ using FEM and we get the Green function for all $\mathbf{x}$ and one particular $\mathbf{x}'$. We can then evaluate the integral (3.20.1.2) numerically – one would have to use FEM for all $\mathbf{x}'$ that are needed in the integral, so that is not efficient, but it should work. One will then be able to play with Green functions and be able to calculate them numerically for any boundary condition (which is not possible analytically).
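As a minimal sketch of this procedure (assuming the legacy FEniCS/dolfin package; the mesh, point $\mathbf{x}'$ and element choice are illustrative, not from the original text), the delta function right-hand side can be applied as a point source:

from dolfin import (UnitSquareMesh, FunctionSpace, TrialFunction, TestFunction,
                    Function, DirichletBC, Constant, PointSource, Point,
                    assemble, dot, grad, dx, solve)

mesh = UnitSquareMesh(64, 64)                        # domain for the Laplace problem
V = FunctionSpace(mesh, "P", 1)
u, v = TrialFunction(V), TestFunction(V)
bc = DirichletBC(V, Constant(0.0), "on_boundary")    # G = 0 on the boundary

# weak form of laplace(G) = delta(x - x'): -(grad G, grad v) = v(x')
A = assemble(-dot(grad(u), grad(v))*dx)
b = assemble(Constant(0.0)*v*dx)
PointSource(V, Point(0.3, 0.4), 1.0).apply(b)        # delta source at x' = (0.3, 0.4)
bc.apply(A, b)

G = Function(V)                                      # G(x, x') for this fixed x'
solve(A, G.vector(), b)

One such solve gives $G(\mathbf{x}, \mathbf{x}')$ for a single $\mathbf{x}'$, so evaluating (3.20.1.2) requires one solve per quadrature point, as noted above.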
3.21. Binomial Coefficients

For $n$ and $k$ integers, the binomial coefficients are defined by:

$$\binom{n}{k} = \frac{n!}{k!\, (n - k)!} = \frac{n (n - 1) \cdots (n - k + 1)}{k!}$$

For $n$ real, one just uses the second formula as a definition:

$$\binom{n}{k} = \frac{n (n - 1) \cdots (n - k + 1)}{k!}$$
Example I:
Example II:
The binomial formula for $n$ a positive integer is:

$$(x + y)^n = \sum_{k=0}^{n} \binom{n}{k} x^{n-k} y^k$$

and for $n$ real and $|x| < 1$ this can be generalized to:

$$(1 + x)^n = \sum_{k=0}^{\infty} \binom{n}{k} x^k$$
Example (for $|x| < 1$):
so:
Another example:

$$\frac{1}{\sqrt{1 - 2xt + t^2}} = \bigl(1 - t(2x - t)\bigr)^{-\frac{1}{2}} = \sum_{k=0}^{\infty} \binom{-\frac{1}{2}}{k} (-1)^k\, t^k (2x - t)^k = \sum_{l=0}^{\infty} P_l(x)\, t^l$$

where we used (3.22.2) and

$$P_l(x) = \sum_{k=0}^{\lfloor l/2 \rfloor} (-1)^l \binom{-\frac{1}{2}}{l - k} \binom{l - k}{k}\, (2x)^{l - 2k}$$

The $P_l(x)$ are Legendre Polynomials.
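The expansion can be cross-checked with SymPy (an illustration, not part of the text): the Taylor coefficients of $(1 - 2xt + t^2)^{-1/2}$ in $t$ are the Legendre polynomials:

>>> from sympy import symbols, sqrt, series, legendre, simplify
>>> x, t = symbols("x t")
>>> s = series(1/sqrt(1 - 2*x*t + t**2), t, 0, 4).removeO()
>>> [simplify(s.coeff(t, l) - legendre(l, x)) for l in range(4)]
[0, 0, 0, 0]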
3.22. Double Sums

When evaluating double sums, one can use triangular summation to reorder them:

$$\sum_{n=0}^{\infty} \sum_{k=0}^{\infty} a_{k n} = \sum_{n=0}^{\infty} \sum_{k=0}^{n} a_{k,\, n-k} \qquad (3.22.1)$$

Also it holds that

$$\sum_{n=0}^{\infty} \sum_{k=0}^{n} a_{k n} = \sum_{n=0}^{\infty} \sum_{k=0}^{\lfloor n/2 \rfloor} a_{k,\, n-k} \qquad (3.22.2)$$
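A quick numeric check of (3.22.1) (my illustration): with a finitely supported $a_{kn}$, both orderings give the same exact finite sum:

>>> a = lambda k, n: (k + 1)*(n + 1) if k < 5 and n < 5 else 0
>>> sum(a(k, n) for n in range(5) for k in range(5))
225
>>> sum(a(k, n - k) for n in range(10) for k in range(n + 1))
225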
3.23. Triangle Inequality

The triangle inequality (condition) means that none of the three quantities $a$, $b$, $c$ is greater than the sum of the other two:

$$a \le b + c \qquad b \le c + a \qquad c \le a + b \qquad (3.23.1)$$

This is equivalent to just one equation:

$$|a - b| \le c \le a + b \qquad (3.23.2)$$

we can do any permutation of the symbols, i.e. the above equation is equivalent to any of these:

$$|b - c| \le a \le b + c$$

$$|c - a| \le b \le c + a$$

So instead of stating the three inequalities (3.23.1) it is more convenient to just write (3.23.2), using any permutation that we like.

To show that (3.23.1) implies (3.23.2) we rewrite (3.23.1):

$$a - b \le c \qquad b - a \le c \qquad c \le a + b$$

so

$$\pm(a - b) \le c \le a + b$$

and we get (3.23.2). To show that (3.23.2) implies (3.23.1) we rewrite (3.23.2) for $a \ge b$ first:

$$a - b \le c \le a + b$$

so:

$$a \le b + c \qquad c \le a + b$$

rearranging:

$$b \ge a - c$$

since $c$ is positive, if $a \ge b$ then also $b \le a \le a + c$ and we get (3.23.1). Finally, for $a < b$:

$$b - a \le c \le a + b$$

so:

$$b \le a + c \qquad c \le a + b$$

rearranging:

$$a \ge b - c$$

since $c$ is positive, if $a < b$ then also $a \le b \le b + c$ and we get (3.23.1).
3.24. Gamma Function

The Gamma function $\Gamma(x)$ is defined by the following properties for $x > 0$:

$$\Gamma(1) = 1 \qquad (3.24.1)$$

$$\Gamma(x + 1) = x\, \Gamma(x) \qquad (3.24.2)$$

$$\log \Gamma(x) \text{ is convex, i.e. } \Gamma\left(\tfrac{x}{p} + \tfrac{y}{q}\right) \le \Gamma(x)^{\frac{1}{p}}\, \Gamma(y)^{\frac{1}{q}} \text{ for } \tfrac{1}{p} + \tfrac{1}{q} = 1 \qquad (3.24.3)$$

It can be shown that this determines the function uniquely for $x > 0$ (this is called the Bohr-Mollerup theorem) and then it can be extended analytically to the whole complex plane.

The most common formula for $\Gamma(x)$ that satisfies (3.24.1), (3.24.2) and (3.24.3) is:

$$\Gamma(x) = \int_0^{\infty} t^{x-1} e^{-t}\, dt \qquad (3.24.4)$$

It satisfies (3.24.1) because:

$$\Gamma(1) = \int_0^{\infty} e^{-t}\, dt = 1$$

It satisfies (3.24.2) by integrating by parts:

$$\Gamma(x + 1) = \int_0^{\infty} t^{x} e^{-t}\, dt = \left[-t^{x} e^{-t}\right]_0^{\infty} + x \int_0^{\infty} t^{x-1} e^{-t}\, dt = x\, \Gamma(x)$$

Finally, it satisfies (3.24.3) by verifying the convexity condition directly ($\frac{1}{p} + \frac{1}{q} = 1$, and the inequality below is Hölder's):

$$\Gamma\left(\frac{x}{p} + \frac{y}{q}\right) = \int_0^{\infty} \left(t^{\frac{x-1}{p}} e^{-\frac{t}{p}}\right) \left(t^{\frac{y-1}{q}} e^{-\frac{t}{q}}\right) dt \le \left(\int_0^{\infty} t^{x-1} e^{-t}\, dt\right)^{\frac{1}{p}} \left(\int_0^{\infty} t^{y-1} e^{-t}\, dt\right)^{\frac{1}{q}} = \Gamma(x)^{\frac{1}{p}}\, \Gamma(y)^{\frac{1}{q}}$$

And thus (3.24.4) uniquely determines the Gamma function. We can use (3.24.4) to calculate $\Gamma\left(\frac{1}{2}\right)$:

$$\Gamma\left(\tfrac{1}{2}\right) = \int_0^{\infty} t^{-\frac{1}{2}} e^{-t}\, dt = 2 \int_0^{\infty} e^{-u^2}\, du = \sqrt\pi$$

From this and the definition of the Gamma function we get for integer $n$:

$$\Gamma(n + 1) = n! \qquad (3.24.5)$$

and

$$\Gamma\left(n + \tfrac{1}{2}\right) = \frac{(2n - 1)!!}{2^n}\, \sqrt\pi \qquad (3.24.6)$$
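The value of $\Gamma\left(\frac{1}{2}\right)$ and the relation (3.24.5) are quick to confirm with SymPy (an illustration):

>>> from sympy import gamma, sqrt, pi, Rational, factorial
>>> gamma(Rational(1, 2))
sqrt(pi)
>>> gamma(6), factorial(5)
(120, 120)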
3.25. Incomplete Gamma Function

The upper incomplete gamma function is defined by:

$$\Gamma(x, a) = \int_a^{\infty} t^{x-1} e^{-t}\, dt$$

Integrating by parts we get:

$$\Gamma(x + 1, a) = x\, \Gamma(x, a) + a^x e^{-a}$$

Some special values are:

$$\Gamma(x, 0) = \Gamma(x) \qquad \Gamma(1, a) = e^{-a} \qquad \Gamma\left(\tfrac{1}{2}, a\right) = \sqrt\pi\, \operatorname{erfc}(\sqrt a)$$

For integer $n$ we get:

$$\Gamma(n + 1, a) = n!\, e^{-a} \sum_{k=0}^{n} \frac{a^k}{k!}$$

and

$$\Gamma\left(n + \tfrac{1}{2}, a\right) = \frac{(2n - 1)!!}{2^n} \left(\sqrt\pi\, \operatorname{erfc}(\sqrt a) + e^{-a} \sum_{k=1}^{n} \frac{2^k\, a^{k - \frac{1}{2}}}{(2k - 1)!!}\right)$$
The lower incomplete gamma function is defined by:

$$\gamma(x, a) = \int_0^{a} t^{x-1} e^{-t}\, dt = \Gamma(x) - \Gamma(x, a)$$

and as such all expressions can be easily derived using the gamma and upper incomplete gamma functions. The recursion relation is then:

$$\gamma(x + 1, a) = x\, \gamma(x, a) - a^x e^{-a}$$

Some special values are:

$$\gamma(1, a) = 1 - e^{-a} \qquad \gamma\left(\tfrac{1}{2}, a\right) = \sqrt\pi\, \operatorname{erf}(\sqrt a)$$

By repeated application of the recursion formula we get:

$$\gamma(x, a) = \frac{\gamma(x + n, a)}{x (x + 1) \cdots (x + n - 1)} + a^x e^{-a} \sum_{k=0}^{n-1} \frac{a^k}{x (x + 1) \cdots (x + k)}$$

and taking the limit $n \to \infty$:

$$\gamma(x, a) = a^x e^{-a} \sum_{k=0}^{\infty} \frac{\Gamma(x)}{\Gamma(x + k + 1)}\, a^k \qquad (3.25.1)$$

where we used:

$$\lim_{n\to\infty} \frac{\gamma(x + n, a)}{x (x + 1) \cdots (x + n - 1)} = 0$$

which can be proven by the following inequality, which uses the fact that the function $t^{x+n-1} e^{-t}$ is an increasing function for $t < x + n - 1$, so as long as $a < x + n - 1$ we get:

$$\frac{\gamma(x + n, a)}{x (x + 1) \cdots (x + n - 1)} = \frac{\Gamma(x)}{\Gamma(x + n)} \int_0^{a} t^{x+n-1} e^{-t}\, dt \le \frac{\Gamma(x)\, a \cdot a^{x+n-1} e^{-a}}{\Gamma(x + n)} \to 0$$

Using (3.25.1) we can now write $\gamma(x, a)$ using the Kummer confluent hypergeometric function $M(a, b, z) \equiv {}_1F_1(a; b; z)$ as follows:

$$\gamma(x, a) = \frac{a^x e^{-a}}{x}\, M(1, x + 1, a) = \frac{a^x}{x}\, M(x, x + 1, -a)$$
3.25.1. Example

Consider the class of integrals:

$$F_m(t) = \int_0^1 u^{2m} e^{-t u^2}\, du$$

We write them using the lower incomplete gamma function (substituting $s = t u^2$) as:

$$F_m(t) = \frac{\gamma\left(m + \frac{1}{2}, t\right)}{2\, t^{m + \frac{1}{2}}}$$

We can also write it using the confluent hypergeometric function as follows:

$$F_m(t) = \frac{e^{-t}}{2m + 1}\, M\left(1, m + \tfrac{3}{2}, t\right) = \frac{1}{2m + 1}\, M\left(m + \tfrac{1}{2}, m + \tfrac{3}{2}, -t\right)$$

For $m = 0$ we get:

$$F_0(t) = \frac{\gamma\left(\frac{1}{2}, t\right)}{2 \sqrt t} = \frac{\sqrt\pi\, \operatorname{erf}(\sqrt t)}{2 \sqrt t}$$

Using the recursion relation we get:

$$F_{m+1}(t) = \frac{(2m + 1)\, F_m(t) - e^{-t}}{2t}$$

By expressing $F_m(t)$ from the equation we obtain the inverse relation:

$$F_m(t) = \frac{2t\, F_{m+1}(t) + e^{-t}}{2m + 1}$$

From (3.25.1) we get:

$$F_m(t) = e^{-t} \sum_{k=0}^{\infty} \frac{(2m - 1)!!}{(2m + 2k + 1)!!}\, (2t)^k$$
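As an illustrative SymPy cross-check of the lower-incomplete-gamma form of these integrals (the check is mine, not the text's), for $m = 1$:

>>> from sympy import symbols, integrate, exp, lowergamma, Rational, simplify
>>> u, t = symbols("u t", positive=True)
>>> F1 = integrate(u**2*exp(-t*u**2), (u, 0, 1))
>>> simplify(F1 - lowergamma(Rational(3, 2), t)/(2*t**Rational(3, 2)))
0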
3.26. Factorial

The factorial $n!$ is defined as

$$n! = n\, (n - 1)\, (n - 2) \cdots 3 \cdot 2 \cdot 1$$

By (3.24.5) it can be written using the Gamma function as:

$$n! = \Gamma(n + 1)$$
3.27. Double Factorial

The double factorial $n!!$ is defined as:

$$n!! = \begin{cases} n\, (n-2)\, (n-4) \cdots 5 \cdot 3 \cdot 1 & \text{for odd } n \\ n\, (n-2)\, (n-4) \cdots 6 \cdot 4 \cdot 2 & \text{for even } n \\ 1 & \text{for } n = 0, -1 \end{cases}$$

One can rewrite the double factorial using a factorial as:

$$(2n)!! = 2^n\, n!$$

$$(2n - 1)!! = \frac{(2n)!}{2^n\, n!}$$

For odd $n$ it can be written using the Gamma function, see (3.24.6):

$$(2n - 1)!! = \frac{2^n}{\sqrt\pi}\, \Gamma\left(n + \tfrac{1}{2}\right)$$
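A quick SymPy check of the last identity (an illustration), for $n = 4$:

>>> from sympy import factorial2, gamma, sqrt, pi, Rational
>>> factorial2(2*4 - 1)
105
>>> 2**4*gamma(4 + Rational(1, 2))/sqrt(pi)
105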
3.27.1. Example
3.28. Fermi-Dirac Integral

The Fermi-Dirac integral (sometimes just called a Fermi integral) is defined as:

$$F_j(x) = \frac{1}{\Gamma(j + 1)} \int_0^{\infty} \frac{t^j}{e^{t - x} + 1}\, dt$$

Examples:

$$F_0(x) = \log\left(1 + e^x\right)$$

$$F_{-1}(x) = \frac{d}{dx}\, F_0(x) = \frac{1}{1 + e^{-x}}$$
The Fermi-Dirac integral can also be written using the polylogarithm, see The Series pFq for details.
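A numeric check of $F_0(x) = \log(1 + e^x)$ at $x = 1$ with mpmath (an illustration, not from the original text):

>>> from mpmath import quad, exp, log, inf
>>> float(quad(lambda t: 1/(exp(t - 1) + 1), [0, inf]))
1.3132616875182228
>>> float(log(1 + exp(1)))
1.3132616875182228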