Several features of the eigenfunctions of the Laplacian on an annulus with homogeneous Neumann boundary conditions are discussed. The distribution of the eigenvalues is analyzed in detail with the help of a phase angle function $\theta$, and the limiting cases of a disk and a circle are examined.
Suppose there is an annulus defined by the region $\{(\rho,\varphi) \mid R_\text{in} < \rho < R_\text{out}\}$. What are the functions $\Phi$ defined on this region that satisfy
$$\lambda\Phi = \nabla^2\Phi = \frac{1}{\rho}\partial_\rho\!\left(\rho\,\partial_\rho\Phi\right) + \frac{1}{\rho^2}\partial_\varphi^2\Phi$$
and the boundary conditions
$$\partial_\rho\Phi(\rho = R_\text{in}, \varphi) = \partial_\rho\Phi(\rho = R_\text{out}, \varphi) = 0,$$
i.e., eigenfunctions of the Laplacian on the annulus with homogeneous Neumann boundary conditions?
For any boundary condition with azimuthal symmetry, an easy separation of variables gives solutions of the general form
$$\Phi(\rho,\varphi) = \left(\Xi^{(J)} J_m(z\rho/R_\text{out}) + \Xi^{(Y)} Y_m(z\rho/R_\text{out})\right) e^{im\varphi},$$
where $\Xi^{(J)}$, $\Xi^{(Y)}$, and $z$ are constants fixed by the boundary conditions, and $m$ is an integer, the azimuthal quantum number. The functions $J_m$ and $Y_m$ are Bessel functions of the first and second kind, respectively, which are two linearly independent solutions of the Bessel equation
$$z^2 y''(z) + z y'(z) + \left(z^2 - m^2\right) y(z) = 0 \tag{1}$$
with some particular normalization. Plugging the general form into the boundary conditions, we have
$$\Xi^{(J)} J_m'(z) + \Xi^{(Y)} Y_m'(z) = 0, \quad \Xi^{(J)} J_m'(rz) + \Xi^{(Y)} Y_m'(rz) = 0,$$
where $r := R_\text{in}/R_\text{out}$. This is a homogeneous linear system of equations for $\Xi^{(J)}$ and $\Xi^{(Y)}$. In order for it to have nontrivial solutions, the determinant of the coefficient matrix must vanish, which gives
$$g(z) := J_m'(rz)\,Y_m'(z) - Y_m'(rz)\,J_m'(z) = 0. \tag{2}$$
We can then assign a radial quantum number $n$ to the eigenfunctions by taking the value of $z$ to be the $n$th root of $g$, which we may casually call $z_{mn}$.
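To make this concrete, here is a minimal numerical sketch (my own, not part of the original derivation) of how one might evaluate $g(z)$ and locate its first few roots, using SciPy's Bessel-derivative routines; the function and parameter names are my own choices.

```python
# A sketch: evaluate g(z) of Equation 2 and bracket its roots by scanning for sign
# changes on a grid, then refine each bracket with Brent's method.
import numpy as np
from scipy.special import jvp, yvp      # first derivatives J_m'(z), Y_m'(z)
from scipy.optimize import brentq

def g(z, m, r):
    """g(z) = J_m'(rz) Y_m'(z) - Y_m'(rz) J_m'(z); its roots are the z_mn."""
    return jvp(m, r * z) * yvp(m, z) - yvp(m, r * z) * jvp(m, z)

def roots_of_g(m, r, z_max=60.0, samples=4000):
    """Roots of g on (0, z_max], from sign changes on a fine grid (may miss
    closely spaced roots if the grid is too coarse)."""
    zs = np.linspace(1e-6, z_max, samples)
    vals = g(zs, m, r)
    roots = []
    for a, b, va, vb in zip(zs[:-1], zs[1:], vals[:-1], vals[1:]):
        if va * vb < 0:                  # a sign change brackets a simple root
            roots.append(brentq(g, a, b, args=(m, r)))
    return np.array(roots)

print(roots_of_g(m=4, r=1/3)[:5])        # first few roots for m = 4, r = 1/3
```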
We then have the final solution (up to normalization)
$$\Phi_{mn}(\rho,\varphi) = \left(Y_m'(z_{mn})\,J_m(z_{mn}\rho/R_\text{out}) - J_m'(z_{mn})\,Y_m(z_{mn}\rho/R_\text{out})\right) e^{im\varphi},$$
where $m$ is the azimuthal quantum number, $n$ is the radial quantum number, and $z_{mn}$ is defined as the $n$th root of the function $g$. The eigenvalue is $\lambda_{mn} = -z_{mn}^2/R_\text{out}^2$. In order for the eigenfunctions to be complete, we need to allow $m$ to be any integer, but we will restrict ourselves to non-negative $m$ in the rest of the article because the negative-$m$ modes are essentially complex conjugates of the positive-$m$ ones.
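Continuing the sketch above, the eigenfunction itself is straightforward to evaluate once a root $z_{mn}$ is known; the helper below is only an illustration with my own naming, and it leaves out normalization.

```python
# A sketch: the (unnormalized) eigenfunction Phi_mn on the annulus, for a given root
# z_mn of g and a given outer radius R_out.
import numpy as np
from scipy.special import jv, yv, jvp, yvp

def phi_mn(rho, phi, m, z_mn, R_out=1.0):
    """Y_m'(z_mn) J_m(z_mn rho/R_out) - J_m'(z_mn) Y_m(z_mn rho/R_out), times e^{i m phi}."""
    radial = (yvp(m, z_mn) * jv(m, z_mn * rho / R_out)
              - jvp(m, z_mn) * yv(m, z_mn * rho / R_out))
    return radial * np.exp(1j * m * phi)
```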
Distribution of eigenvalues
From the well-known asymptotic form of the Bessel functions (source)
$$J_m(z\to\infty) = \sqrt{\frac{2}{\pi z}}\cos\!\left(z - \frac{m\pi}{2} - \frac{\pi}{4}\right), \quad Y_m(z\to\infty) = \sqrt{\frac{2}{\pi z}}\sin\!\left(z - \frac{m\pi}{2} - \frac{\pi}{4}\right),$$
we can easily conclude that the asymptotic form of $g$ is
$$g(z\to\infty) = \frac{2}{\pi\sqrt{r}\,z}\sin\!\left((1-r)z\right).$$
This inspires us to define a function in companion with $g$ as
$$f(z) := J_m'(rz)\,J_m'(z) + Y_m'(rz)\,Y_m'(z), \tag{3}$$
which will have the asymptotic form
$$f(z\to\infty) = \frac{2}{\pi\sqrt{r}\,z}\cos\!\left((1-r)z\right).$$
Therefore, $f \sim \cos\theta(z)$ and $g \sim \sin\theta(z)$ are like the cosine and sine pair of oscillatory functions, with a phase angle $\theta(z\to\infty) = (1-r)z$ that is asymptotically directly proportional to $z$.
Here is a plot that shows how θ behaves:
Plot of $\theta(z)$ and $(1-r)z$ for $m=4$, $r=1/3$
The blue line is $\arctan(g(z)/f(z))$, and the orange line is $\arctan\tan((1-r)z)$. We can see that the two functions are asymptotically equal.
A good thing about this description is that we can now see that $z_{mn}$, as a root of $g$, is just the solution to the equation $\theta(z) = n\pi$. In other words, the $z_{mn}$ are just the points at which the blue line crosses the horizontal axis in the plot above. Therefore, if we can find the inverse function $z(\theta)$ (maybe as a series expansion), we can directly get $z_{mn} = z(n\pi)$. We can see from the plot that $z(\theta) = \theta/(1-r)$ is probably not a good enough approximation, so we would like to seek higher-order terms.
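Numerically, one convenient way to get an unwrapped $\theta(z)$ (a sketch of my own, not the original plotting code) is to take the phase of $f + ig$ with NumPy's arctan2 and remove the branch jumps with unwrap; the $z_{mn}$ are then found where this phase crosses $n\pi$.

```python
# A sketch: an unwrapped numerical phase theta(z); roots z_mn sit where it crosses n*pi.
import numpy as np
from scipy.special import jvp, yvp

def theta(zs, m, r):
    """Unwrapped phase of f(z) + i g(z) on an increasing array of z values."""
    f = jvp(m, r * zs) * jvp(m, zs) + yvp(m, r * zs) * yvp(m, zs)
    g = jvp(m, r * zs) * yvp(m, zs) - yvp(m, r * zs) * jvp(m, zs)
    return np.unwrap(np.arctan2(g, f))   # remove the 2*pi jumps of the raw arctangent

m, r = 4, 1 / 3
zs = np.linspace(0.1, 60.0, 5000)
th = theta(zs, m, r)
idx = np.argmax(th >= 3 * np.pi)         # first grid point past theta = 3*pi
print(zs[idx])                           # approximate z_{m,3}
```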
Kummer’s equation
For this part, we employ a method similar to the one used in a paper by Bremer.
First, for a general homogeneous second-order linear ODE of the form $y'' + 2\chi y' + \psi y = 0$, where $\chi$ and $\psi$ are known functions of $z$, we can define
$$y_\text{n} := p y, \quad p := C\exp\int\chi\,\mathrm{d}z, \quad q := \psi - \frac{p''}{p},$$
and the ODE becomes the normal form $y_\text{n}'' + q y_\text{n} = 0$. Therefore, for any second-order linear ODE, we only need to consider those without a first-derivative term.
Then, one can prove that, for the ODE $y'' + q y = 0$, two linearly independent solutions can be expressed as
$$u = \frac{\cos\alpha}{\sqrt{\alpha'}}, \quad v = \frac{\sin\alpha}{\sqrt{\alpha'}}, \tag{4}$$
where $\alpha$ satisfies Kummer's equation
$$\alpha'^2 + \frac{\alpha'''}{2\alpha'} - \frac{3\alpha''^2}{4\alpha'^2} = q. \tag{5}$$
This may seem like converting a problem into a more complicated one, but the advantage is that $\alpha$ is not oscillatory while $u$ and $v$ are, which means that it is easier to expand $\alpha$ as a series for large $z$. Notice that $\alpha' = 1/(u^2 + v^2)$, which provides a means of finding $\alpha$ if the solutions of the original ODE are known.
Derivation
For the ODE $y'' + q y = 0$, substitute the trial solution $y = e^{\beta + i\alpha}$, where $\alpha$ and $\beta$ are real functions of $z$. We can derive the following equations:
$$\alpha'' + 2\alpha'\beta' = 0, \quad \beta'' + \beta'^2 - \alpha'^2 + q = 0.$$
The first equation gives $\beta = -\frac{1}{2}\ln\alpha'$, which we can substitute into the second equation to get Equation 5. On the other hand, the real and imaginary parts of the trial solution give us Equation 4. Technically speaking, they can be called the real and imaginary parts only if $\alpha' > 0$, but they are still solutions to the ODE.
Another way to prove Kummer's equation is to substitute $\alpha' = 1/(u^2 + v^2)$ and check that it satisfies Kummer's equation whenever $u$ and $v$ satisfy the ODE and have Wronskian equal to $1$.
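As a quick sanity check of Equations 4 and 5 (a sketch of my own, not part of the original derivation), one can verify symbolically that $u = \cos\alpha/\sqrt{\alpha'}$ solves $y'' + q y = 0$ when $q$ is given by Kummer's equation:

```python
# A sketch: symbolic check that u = cos(alpha)/sqrt(alpha') satisfies y'' + q y = 0
# when q is given by Kummer's equation (5).
import sympy as sp

z = sp.symbols("z")
alpha = sp.Function("alpha")(z)
a1, a2, a3 = alpha.diff(z), alpha.diff(z, 2), alpha.diff(z, 3)

q = a1**2 + a3 / (2 * a1) - sp.Rational(3, 4) * (a2 / a1) ** 2   # Equation 5
u = sp.cos(alpha) / sp.sqrt(a1)                                   # Equation 4

print(sp.simplify(u.diff(z, 2) + q * u))                          # prints 0
```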
Now, we can try to apply this to $J_m'$ and $Y_m'$. First, we need to find the second-order ODE that those two functions satisfy. We already know that $J_m$ and $Y_m$ are solutions to Equation 1, so it is straightforward to derive that $J_m'$ and $Y_m'$ satisfy
$$y''(z) + \frac{1}{z}\frac{z^2 - 3m^2}{z^2 - m^2}\,y'(z) + \left(1 - \frac{m^2 + 1}{z^2} - \frac{2m^2}{z^2(z^2 - m^2)}\right) y(z) = 0.$$
To reduce this into the normal form without the first-derivative term, define
$$p(z) := \sqrt{\frac{\pi}{2}}\frac{z^{3/2}}{\sqrt{z^2 - m^2}}, \quad q(z) := 1 - \frac{4m^2 - 1}{4z^2} - \frac{2m^2 + z^2}{(m^2 - z^2)^2}, \tag{6}$$
and we have that $p(z)J_m'(z)$ and $p(z)Y_m'(z)$ satisfy $y''(z) + q(z)y(z) = 0$. Equation 4 then gives
$$Y_m'(z) = \frac{\cos\alpha(z)}{p(z)\sqrt{\alpha'(z)}}, \quad -J_m'(z) = \frac{\sin\alpha(z)}{p(z)\sqrt{\alpha'(z)}} \tag{7}$$
(the normalization of $p$ and the initial value of $\alpha$ are chosen to make sure these are correct). You may worry that $p(z < m)$ is imaginary, but $\alpha'(z)$ will be negative in that region, so that everything works out and is real in the end. Substituting Equation 7 into Equations 3 and 2, we have
$$f(z) = \frac{\cos\theta(z)}{p(rz)p(z)\sqrt{\alpha'(rz)\alpha'(z)}}, \quad g(z) = \frac{\sin\theta(z)}{p(rz)p(z)\sqrt{\alpha'(rz)\alpha'(z)}},$$
where $\theta(z) := \alpha(z) - \alpha(rz)$. With this, we have a workable expression for $\theta(z)$, so we now just need to turn our attention to $\alpha(z)$.
We can substitute $q(z)$ from Equation 6 into Equation 5, expand both sides as a series in $z$ at infinity, and solve for the series coefficients to get
$$\alpha'(z) = 1 - \frac{4m^2 + 3}{8z^2} - \frac{16m^4 + 184m^2 - 63}{128z^4} + \cdots, \tag{8}$$
which has a certain radius of convergence that is not important at this stage.
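For readers who want to reproduce Equation 8, here is a small symbolic sketch (my own, using sympy) that plugs the ansatz $\alpha'(z) = 1 + a_2/z^2 + a_4/z^4$ into Kummer's equation with $q(z)$ from Equation 6 and matches powers of $1/z$:

```python
# A sketch: recover the first two correction coefficients of Equation 8 by matching
# the large-z expansion of Kummer's equation (5) with q(z) from Equation 6.
import sympy as sp

z, w, m = sp.symbols("z w m", positive=True)
a2, a4 = sp.symbols("a2 a4")

q = 1 - (4 * m**2 - 1) / (4 * z**2) - (2 * m**2 + z**2) / (m**2 - z**2) ** 2
ap = 1 + a2 / z**2 + a4 / z**4                     # ansatz for alpha'(z)

# Left-hand side of Kummer's equation, written in terms of alpha' only.
lhs = ap**2 + ap.diff(z, 2) / (2 * ap) - sp.Rational(3, 4) * (ap.diff(z) / ap) ** 2

# Expand lhs - q at z -> infinity (via z = 1/w) and kill the 1/z^2 and 1/z^4 terms.
residual = sp.expand(sp.series((lhs - q).subs(z, 1 / w), w, 0, 6).removeO())
print(sp.solve([residual.coeff(w, 2), residual.coeff(w, 4)], [a2, a4]))
# expected: a2 = -(4 m^2 + 3)/8, a4 = -(16 m^4 + 184 m^2 - 63)/128, as in Equation 8
```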
Asymptotic expansion
Here we will work out another way of expanding $\alpha(z)$ as a series, based on properties of the Bessel functions. From Equation 7, we have
$$\frac{1}{\alpha'(z)} = p(z)^2\left(J_m'(z)^2 + Y_m'(z)^2\right).$$
If we were working with $J_m(z)^2 + Y_m(z)^2$ instead, we would be able to employ the handy Nicholson's integral, but the same method cannot be applied here.
Why it does not work
Nicholson's integral is
$$J_m(z)^2 + Y_m(z)^2 = \frac{8}{\pi^2}\int_0^\infty \cosh(2mt)\,K_0(2z\sinh t)\,\mathrm{d}t,$$
where $K_0$ is the modified Bessel function. A derivation is given in Section 13.73 of Watson's A Treatise on the Theory of Bessel Functions.
To apply this to $J_m'(z)^2 + Y_m'(z)^2$, we need to use (source)
$$J_m'(z) = \frac{1}{2}\left(J_{m-1}(z) - J_{m+1}(z)\right), \quad Y_m'(z) = \frac{1}{2}\left(Y_{m-1}(z) - Y_{m+1}(z)\right).$$
Then, we just need to work out the cross terms $J_{m-1}(z)J_{m+1}(z) + Y_{m-1}(z)Y_{m+1}(z)$. By the same method of deriving Nicholson's integral, we can get
$$J_{m_1}(z)J_{m_2}(z) + Y_{m_1}(z)Y_{m_2}(z) = \frac{4}{\pi^2}\int_0^\infty \left(e^{(m_1+m_2)t}\cos((m_1 - m_2)\pi) + e^{-(m_1+m_2)t}\right) K_{m_2-m_1}(2z\sinh t)\,\mathrm{d}t,$$
which is a more general version of Nicholson's integral. However, this formula is only valid for $|\operatorname{Re}(m_1 - m_2)| < 1$ because otherwise the integral diverges near $t = 0$, while we need $m_2 - m_1 = 2$.
One can also try to employ the same method of deriving Nicholson's integral directly on $J_m'(z)^2 + Y_m'(z)^2$, but it turns out to have the same divergence problem. Briefly speaking, there is a term that is an infinitesimal quantity times the integral of $\partial_\mu^2 K_\mu(2z\sinh t)$, which can only be thrown away without any problem if $|\operatorname{Re}\mu| < 1$, which is not the case here.
We can, however, use the asymptotic expansion of the Bessel functions directly (source):
$$\begin{aligned} Y_m'(z) &= \sqrt{\frac{2}{\pi z}}\left(\cos\!\left(z - \frac{m\pi}{2} - \frac{\pi}{4}\right)\sum_{k=0}^\infty \frac{(-1)^k c_{2k}}{z^{2k}} - \sin\!\left(z - \frac{m\pi}{2} - \frac{\pi}{4}\right)\sum_{k=0}^\infty \frac{(-1)^k c_{2k+1}}{z^{2k+1}}\right), \\ -J_m'(z) &= \sqrt{\frac{2}{\pi z}}\left(\cos\!\left(z - \frac{m\pi}{2} - \frac{\pi}{4}\right)\sum_{k=0}^\infty \frac{(-1)^k c_{2k+1}}{z^{2k+1}} + \sin\!\left(z - \frac{m\pi}{2} - \frac{\pi}{4}\right)\sum_{k=0}^\infty \frac{(-1)^k c_{2k}}{z^{2k}}\right), \end{aligned}$$
where
$$c_k := a_k + \left(k - \tfrac{1}{2}\right) a_{k-1}, \quad a_k := \frac{(1/2 - m)_k\,(1/2 + m)_k}{(-2)^k\,k!},$$
where the raising factorial (Pochhammer) notation is used. By combining them, we get
$$J_m'(z)^2 + Y_m'(z)^2 = \frac{2}{\pi}\sum_{k=0}^\infty \frac{r_k}{z^{2k+1}}, \quad r_k := (-1)^k \sum_{l=0}^{2k} (-1)^l c_l c_{2k-l}.$$
Derivation
First, expand the squares; the cross terms cancel, and we have
$$\frac{\pi z}{2}\left(J_m'(z)^2 + Y_m'(z)^2\right) = \left(\sum_k \frac{(-1)^k c_{2k}}{z^{2k}}\right)^2 + \left(\sum_k \frac{(-1)^k c_{2k+1}}{z^{2k+1}}\right)^2.$$
For the first term, combine a $1/z^{2l}$ and a $1/z^{2(k-l)}$ to get a $1/z^{2k}$, so that we can resum it as
$$\left(\sum_k \frac{(-1)^k c_{2k}}{z^{2k}}\right)^2 = \sum_{k=0}^\infty \frac{1}{z^{2k}}\sum_{l=0}^k (-1)^l c_{2l}\,(-1)^{k-l} c_{2k-2l}.$$
Similarly, for the other term, combine a $1/z^{2l-1}$ and a $1/z^{2(k-l)+1}$ to get a $1/z^{2k}$, so that we can resum it as
$$\left(\sum_k \frac{(-1)^k c_{2k+1}}{z^{2k+1}}\right)^2 = \sum_{k=0}^\infty \frac{1}{z^{2k}}\sum_{l=1}^k (-1)^{l-1} c_{2l-1}\,(-1)^{k-l} c_{2k-2l+1}.$$
Combine them to get
$$\frac{\pi z}{2}\left(J_m'(z)^2 + Y_m'(z)^2\right) = \sum_{k=0}^\infty \frac{1}{z^{2k}}\left(\sum_{l=0}^k (-1)^k c_{2l} c_{2k-2l} + \sum_{l=1}^k (-1)^{k-1} c_{2l-1} c_{2k-2l+1}\right) = \sum_{k=0}^\infty \frac{(-1)^k}{z^{2k}}\sum_{l=0}^{2k} (-1)^l c_l c_{2k-l}.$$
Then, take the reciprocal of the series, and we have
$$\frac{1}{J_m'(z)^2 + Y_m'(z)^2} = \frac{\pi}{2}\sum_{k=0}^\infty \frac{r_k^\text{r}}{z^{2k-1}},$$
where $r_k^\text{r}$ is defined recursively as (noticing that $r_0 = 1$)
$$r_0^\text{r} := 1, \quad r_{k>0}^\text{r} := -\sum_{l=1}^k r_l r_{k-l}^\text{r}.$$
Series reciprocation
For a series $\sum_{k=0}^\infty r_k z^k$, what is the reciprocal $\left(\sum_{k=0}^\infty r_k z^k\right)^{-1}$? Assume that it is another series $\sum_{k=0}^\infty r_k^\text{r} z^k$. Then, we have
$$1 = \left(\sum_{k=0}^\infty r_k z^k\right)\left(\sum_{k=0}^\infty r_k^\text{r} z^k\right) = \sum_{k=0}^\infty z^k \sum_{l=0}^k r_l r_{k-l}^\text{r}.$$
Comparing the coefficients of $z^k$ on both sides, we have
$$\delta_{k,0} = \sum_{l=0}^k r_l r_{k-l}^\text{r} = r_0 r_k^\text{r} + \sum_{l=1}^k r_l r_{k-l}^\text{r}.$$
Therefore, we get the recursion relation
$$r_0^\text{r} = \frac{1}{r_0}, \quad r_{k>0}^\text{r} = -\frac{1}{r_0}\sum_{l=1}^k r_l r_{k-l}^\text{r}.$$
I will use the superscript r to denote series reciprocation in the rest of this article.
There are non-recursive ways of writing those coefficients, but they are either very long or need notations that require a bit of explanation, so I refrained from using them.
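Here is how the reciprocation recursion might look in code (a generic helper of my own, not tied to the specific coefficients above); exact rational arithmetic keeps the coefficients clean:

```python
# A sketch: series reciprocation. Given coefficients [r_0, r_1, ...] of sum_k r_k x^k
# with r_0 != 0, return the coefficients of the reciprocal series to the same order.
from fractions import Fraction

def reciprocal(coeffs):
    rec = [Fraction(1) / coeffs[0]]
    for k in range(1, len(coeffs)):
        rec.append(-sum(coeffs[l] * rec[k - l] for l in range(1, k + 1)) / coeffs[0])
    return rec

# Quick check: the reciprocal of 1 - x is 1 + x + x^2 + ...
print(reciprocal([Fraction(1), Fraction(-1), Fraction(0), Fraction(0)]))  # all ones
```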
Therefore, we get the asymptotic expansion of $\alpha(z)$ as
$$\alpha(z) = \int\frac{\mathrm{d}z}{p(z)^2\left(J_m'(z)^2 + Y_m'(z)^2\right)} = -\frac{m\pi}{2} - \frac{\pi}{4} + \sum_{k=0}^\infty \frac{s_k}{z^{2k-1}}, \quad s_k := \frac{m^2 r_{k-1}^\text{r} - r_k^\text{r}}{2k-1}$$
(with $r_{-1}^\text{r}$ understood as $0$; in the rest of this article, I will assume that any reference to expansion coefficients outside their designated range is to be interpreted as zero). The additive constant in $\alpha(z)$ is properly chosen so that Equation 7 is satisfied without being off by a phase (it is actually unimportant anyway because it gets canceled in the expression for $\theta$). One can check that this agrees with Equation 8.
We can now find an expansion for $\theta$:
$$\theta(z) = \alpha(z) - \alpha(rz) = (1-r)\sum_{k=0}^\infty \frac{d_k}{z^{2k-1}}, \quad d_k := \left(1 - \frac{1}{r^{2k-1}}\right)\frac{s_k}{1-r}.$$
The final step is to find an expansion for the inverse function $z(\theta)$. Take the reciprocal of $\theta(z)$:
$$\frac{1}{\theta(z)} = \frac{1}{1-r}\sum_{k=0}^\infty \frac{d_k^\text{r}}{z^{2k+1}}, \quad d_0^\text{r} := 1, \quad d_{k>0}^\text{r} := -\sum_{l=1}^k d_l d_{k-l}^\text{r}.$$
Then, invert the series:
$$\frac{1}{z(\theta)} = \sum_{k=0}^\infty \frac{b_k^\text{r}}{\theta^{2k+1}}, \quad b_k^\text{r} := \frac{(1-r)^{2k+1}}{(2k+1)!}\begin{cases}\displaystyle\sum_{l=1}^{2k} (-1)^l (2k+1)_l\, B_{2k,l}\!\left(0,\, 2!\,d_1^\text{r},\, 0,\, 4!\,d_2^\text{r},\, 0,\, \ldots\right), & k > 0, \\ 1, & k = 0,\end{cases}$$
where $B_{2k,l}$ is the Bell polynomial and $(2k+1)_l$ is a raising factorial. Finally, take the reciprocal:
$$z(\theta) = \sum_{k=0}^\infty \frac{b_k}{\theta^{2k-1}}, \quad b_0 := \frac{1}{1-r}, \quad b_{k>0} := -\frac{1}{1-r}\sum_{l=1}^k b_l^\text{r} b_{k-l}.$$
The first few terms are
$$z(\theta) = \frac{\theta}{1-r} + \frac{(4m^2+3)(1-r)}{8r\theta} + \cdots$$
(I only write two terms here because the next term starts to be very long; it will turn out that only the first two terms are useful anyway).
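As a quick check of these first two terms (a sketch of my own, with parameters chosen to match the plots), one can compare $z_\text{trunc}(n\pi)$ with a nearby root of $g$ refined numerically:

```python
# A sketch: compare the two-term truncation of z(theta) at theta = n*pi with a nearby
# root of g, refined by scipy's newton (secant method when no derivative is given).
import numpy as np
from scipy.special import jvp, yvp
from scipy.optimize import newton

m, r = 4, 1 / 3
g = lambda z: jvp(m, r * z) * yvp(m, z) - yvp(m, r * z) * jvp(m, z)
z_trunc = lambda th: th / (1 - r) + (4 * m**2 + 3) * (1 - r) / (8 * r * th)

for n in range(2, 7):
    approx = z_trunc(n * np.pi)
    refined = newton(g, approx)          # converges to the nearby root for these parameters
    print(f"n = {n}: series {approx:.4f}, refined {refined:.4f}")
```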
(The capitalized R is the parameter r in this article; I have made some poor choices of variable names…)
The result is shown in the plot below.
Plot of $\theta$ vs. $z$ with different numbers of terms for $m=4$, $r=1/3$
The black line is the exact function $\theta(z)$ for $m=4$, $r=1/3$, and the blue, green, orange, and red lines are the truncated asymptotic series of the inverse function $z(\theta)$ with 1, 2, 3, and 4 terms, respectively. The horizontal grid lines are the integer multiples of $\pi$, whose intersections with the black line give the wanted roots $z_{mn}$. To be honest, the result is a bit anticlimactic because it turns out that truncating the series to only the first two terms gives the best approximation. This is within expectation, though, because an asymptotic series is never guaranteed to give better approximations with more terms.
The n=0 mode
For all the previous plots, I have only shown $m=4$. For $m$ being other positive integers, the plots all look similar, but there is a special feature for $m=0$. To see this, first study the limiting forms of $f(z)$ and $g(z)$ for small $z$. The limiting forms of the Bessel functions are (source)
$$J_0'(z\to 0) = -\frac{z}{2}, \quad Y_0'(z\to 0) = \frac{2}{\pi z}, \quad J_{m>0}'(z\to 0) = \frac{z^{m-1}}{2^m (m-1)!}, \quad Y_{m>0}'(z\to 0) = \frac{2^m m!}{\pi z^{m+1}}. \tag{9}$$
We can then get
$$f(z\to 0) = \begin{cases}\dfrac{4}{\pi^2 r z^2}, & m = 0, \\[2ex] \dfrac{1}{r^{m+1}}\left(\dfrac{2^m m!}{\pi}\right)^2 \dfrac{1}{z^{2m+2}}, & m > 0,\end{cases} \qquad g(z\to 0) = \begin{cases}\dfrac{1 - r^2}{\pi r}, & m = 0, \\[2ex] \dfrac{m}{\pi}\left(r^{m-1} - \dfrac{1}{r^{m+1}}\right)\dfrac{1}{z^2}, & m > 0.\end{cases}$$
From this, we can get the limiting form of $\theta(z)$:
$$\theta(z\to 0) = \begin{cases}\dfrac{\pi(1 - r^2)}{4} z^2, & m = 0, \\[2ex] -\dfrac{\pi m (1 - r^{2m})}{(2^m m!)^2} z^{2m}, & m > 0.\end{cases}$$
The particularly interesting thing to note is the negative sign in the case of $m > 0$. Since $\theta(z\to\infty) = (1-r)z$ is positive, we conclude that $\theta(z)$ must cross zero at some positive $z$ if $m > 0$, while that need not be the case for $m = 0$. Indeed, there is no positive $z$ with $\theta(z) = 0$ if $m = 0$, as we can see from the plot below.
Plot of $\theta(z)$ for $m=0$ and $m=1$.
The red line is $\theta(z)$ for $m=1$, which we can see crosses the horizontal axis, while the blue line is $\theta(z)$ for $m=0$, which does not. The lighter-colored lines are the limiting forms of $\theta(z)$ for small $z$.
The implication is that the $n=0$ mode does not exist for $m=0$ but does exist for $m>0$. However, traditionally, when people talk about eigenmodes of the Laplacian, the quantum number $n$ refers to the 1-based numbering of the roots of the Bessel functions ($g$ in our case). This means that what is referred to as the $(m>0, n)$ mode in this article would traditionally be called the $(m>0, n+1)$ mode, while the $(m=0, n)$ mode in this article keeps its traditional name. In other words, the traditional names of the modes for different values of $m, n$ are
$$\begin{array}{c|cccc} & n=0 & n=1 & n=2 & \cdots \\ \hline m=0 & & (0,1) & (0,2) & \\ m=1 & (1,1) & (1,2) & (1,3) & \\ m=2 & (2,1) & (2,2) & (2,3) & \\ \vdots & & & & \ddots \end{array}$$
Another way to see this is that, according to Equation 7, in order for $J_m'(z)$ and $Y_m'(z)$ to be real-valued, whenever $\alpha'(z) < 0$, we need $p(z)$ to be purely imaginary, and vice versa. From Equation 6, we can see that $p(z)$ is real when $z \ge m$ and purely imaginary when $z < m$. Therefore, $\alpha'(z)$ goes from negative to positive as $z$ crosses $m$. This means that $\alpha(z)$ is monotonically decreasing for $z < m$ but monotonically increasing for $z > m$, with a minimum at $z = m$. It follows that $rz < z < m \implies \alpha(rz) > \alpha(z)$, i.e., $\theta(z < m) < 0$. Therefore, when $m > 0$, the root $z_{m0}$ must exist, and we also get a lower bound $z_{m0} > m$. However, when $m = 0$, $\alpha(z)$ is monotonically increasing for all $z$, and thus $\theta(z)$ is positive for all $z$, which means that $z_{00}$ does not exist. By a similar method, we can also derive an upper bound for $z_{m0}$. We have $m < rz < z \implies \alpha(rz) < \alpha(z)$, i.e., $\theta(z > m/r) > 0$. This means that $z_{m0} < m/r$. This also implies the nonexistence of $z_{00}$.
Maybe we can still formally define $z_{00} = 0$, and the eigenfunction would be the trivial $\Phi_{00}(\rho,\varphi) = \text{const}$ with a zero eigenvalue.
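As a numerical sanity check of the bounds just derived (a sketch of my own), one can locate the first positive root of $g$ for a few values of $m$ and confirm that it indeed lies in $(m, m/r)$:

```python
# A sketch: check m < z_m0 < m/r by locating the first positive root of g numerically.
import numpy as np
from scipy.special import jvp, yvp
from scipy.optimize import brentq

r = 1 / 3

def first_root(m, z_max=50.0, samples=20000):
    g = lambda z: jvp(m, r * z) * yvp(m, z) - yvp(m, r * z) * jvp(m, z)
    zs = np.linspace(1e-6, z_max, samples)
    vals = g(zs)
    i = np.nonzero(vals[:-1] * vals[1:] < 0)[0][0]   # first sign change of g
    return brentq(g, zs[i], zs[i + 1])

for m in range(1, 5):
    z_m0 = first_root(m)
    print(m, z_m0, m < z_m0 < m / r)
```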
Numerical root-finding
A key thing to note about the asymptotic series truncated to the first two terms is that it cannot produce $z$ values smaller than a certain bound:
$$z_\text{trunc}(\theta) = \frac{\theta}{1-r} + \frac{(4m^2+3)(1-r)}{8r\theta} \ge z_* := \sqrt{\frac{4m^2+3}{2r}},$$
where the equality holds when $\theta = \theta_* := (1-r)\sqrt{\frac{4m^2+3}{8r}}$. This means that if we use $z_{mn} \approx z_\text{trunc}(n\pi)$ to approximate the roots, we will miss all the roots with $z < z_*$. The number of missed roots is $n_* := \lceil \theta_*/\pi \rceil$ (this includes the $n=0$ mode, so for $m=0$, the number of missed roots is $n_* - 1$). According to the lower bound on $z_{m0}$ derived in the previous section, all the missing roots are in the interval $(m, z_*)$. In order to find those roots numerically, we can sample more than $n_*$ equally spaced points in the interval (maybe $3(n_*+1)$ points to be safe) as initial guesses and use usual root-finding methods such as Newton's method to find the roots.
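A sketch of this procedure (my own implementation, not the original code) using SciPy's newton, which falls back to the secant method when no derivative is supplied:

```python
# A sketch: recover the roots missed by the truncated series by seeding Newton/secant
# iterations with equally spaced guesses in (m, z_*) and deduplicating the results.
import numpy as np
from scipy.special import jvp, yvp
from scipy.optimize import newton

def missed_roots(m, r):
    g = lambda z: jvp(m, r * z) * yvp(m, z) - yvp(m, r * z) * jvp(m, z)
    z_star = np.sqrt((4 * m**2 + 3) / (2 * r))
    theta_star = (1 - r) * np.sqrt((4 * m**2 + 3) / (8 * r))
    n_star = int(np.ceil(theta_star / np.pi))
    guesses = np.linspace(m, z_star, 3 * (n_star + 1) + 2)[1:-1]  # strictly inside (m, z_*)
    roots = []
    for x0 in guesses:
        try:
            root = newton(g, x0)
        except RuntimeError:            # a guess may fail to converge; just skip it
            continue
        if m < root < z_star and not any(abs(root - s) < 1e-8 for s in roots):
            roots.append(root)
    return sorted(roots)

print(missed_roots(m=4, r=1/3))
```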
Limiting cases
The disk limit
When $r \to 0$, the annulus becomes a disk. We will see how the eigenmodes tend to the eigenmodes on a disk, which are given by
$$\Phi_{mn}^\text{disk}(\rho,\varphi) = J_m(z_{mn}^\text{disk}\rho/R_\text{out})\,e^{im\varphi},$$
where $z_{mn}^\text{disk}$ are the roots of $J_m'$.
In this case, we can regard $rz$ as small, so using Equation 9 gives
$$f(z) = \begin{cases}\dfrac{2}{\pi r z} Y_0'(z), & m = 0, \\[2ex] \dfrac{2^m m!}{\pi (rz)^{m+1}} Y_m'(z), & m > 0,\end{cases} \qquad g(z) = \begin{cases}-\dfrac{2}{\pi r z} J_0'(z), & m = 0, \\[2ex] -\dfrac{2^m m!}{\pi (rz)^{m+1}} J_m'(z), & m > 0.\end{cases}$$
This means that
$$\tan\theta(z) = \frac{g(z)}{f(z)} = \frac{-J_m'(z)}{Y_m'(z)} = \tan\alpha(z),$$
or just $\theta(z) = \alpha(z)$. We can also see this by noting that $\alpha(rz) = 0$ when $r = 0$, so $\theta(z) = \alpha(z) - \alpha(rz) = \alpha(z)$. Therefore, the solutions to $\theta(z) = n\pi$ are just the solutions to $\alpha(z) = n\pi$, which are just the roots of $J_m'$. This recovers the eigenmodes on a disk.
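A small numerical illustration of this limit (my own sketch): for a very small $r$, the roots of $g$ land very close to the zeros of $J_m'$, which SciPy provides via jnp_zeros:

```python
# A sketch: in the disk limit (small r), roots of g approach the zeros of J_m'.
import numpy as np
from scipy.special import jvp, yvp, jnp_zeros
from scipy.optimize import brentq

m, r = 4, 1e-3
g = lambda z: jvp(m, r * z) * yvp(m, z) - yvp(m, r * z) * jvp(m, z)

for zero in jnp_zeros(m, 4):                   # first four zeros of J_4'
    root = brentq(g, zero - 0.5, zero + 0.5)   # root of g in a small bracket around it
    print(f"J_m' zero {zero:.6f}   root of g {root:.6f}")
```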
The circle limit
When $r \to 1$, the annulus becomes a circle. In this case, we can suppress the radial dimension to make the problem one-dimensional. The eigenmodes would be $\Phi_m^\text{circle}(\varphi) = e^{im\varphi}$, and the eigenvalues are $\lambda_m^\text{circle} = -m^2/R^2$ (there is no need to distinguish between $R_\text{in}$ and $R_\text{out}$ here).
This case is interesting in that the radial dimension becomes "invisible" in the physics of the system. Note that $\theta(z\to\infty) = (1-r)z$ would be a very flat line, which means that $z$ would need to increase by a very large amount in order for $\theta$ to increase by $\pi$. Therefore, the radial quantum number $n$ is very hard to increase. Formally, the eigenvalues of the modes with $n > 0$ become infinite, so we practically only need to consider $n = 0$ for the "low-energy description" of the system.
We have previously derived lower and upper bounds for $z_{m0}$, which give $z_{m0} \in (m, m/r)$. When $r \to 1$, we then get $z_{m0} = m$ by the squeeze theorem. The eigenvalue of the Laplacian is then
$$\lambda_{m0} = -z_{m0}^2/R^2 = -m^2/R^2 = \lambda_m^\text{circle}.$$
For the case of $m = 0$, there are two ways to look at it. If we do not regard the trivial mode $\Phi_0(\varphi) = \text{const}$ as a mode, then we can say that the $m=0$ mode does not exist on a circle, which fits nicely with the fact that the $n=0$ mode does not exist for $m=0$ on the annulus. Alternatively, if we regard the $(m=0, n=0)$ mode on the annulus as the trivial mode $\Phi_{00}(\rho,\varphi) = \text{const}$, then it also nicely tends to the trivial $m=0$ mode on the circle in the limit $r \to 1$.