Discrete path integral

As we all know, the Schrödinger equation is
$$\frac{d}{dt}\ket{\psi(t)}=-iH(t)\ket{\psi(t)},$$
where $\ket\psi$ is the state vector and $H$ is the Hamiltonian operator (which may be time-dependent). This is an ordinary differential equation (ODE), so we can express its solution as a sum
$$\ket{\psi(t)}=\sum_{n=0}^\infty\ket{\psi^{(n)}(t)},$$
where each term in the sum is defined iteratively by (see also my past article)
$$\ket{\psi^{(0)}(t)}\coloneqq\ket{\psi(0)},\quad\ket{\psi^{(n+1)}(t)}\coloneqq-i\int_0^t dt'\,H(t')\ket{\psi^{(n)}(t')}.$$
This iteration is called the Picard iteration, which is best known as a method for proving the Picard–Lindelöf theorem.
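As a quick numerical sanity check (a hypothetical two-level system; NumPy assumed), we can run the Picard iteration for a constant Hamiltonian, where each integral can be done in closed form, and compare the partial sums against the exact solution:

```python
import numpy as np

# Picard iteration for d/dt |psi> = -i H |psi> with a constant 2x2 H.
# Each iterate adds one more term of the series; for constant H the
# integral of -i H psi^{(n)} gives term_{n+1} = (-i t / (n+1)) H term_n.
H = np.array([[0.0, 1.0], [1.0, 0.0]])       # Pauli-X (hypothetical choice)
psi0 = np.array([1.0, 0.0], dtype=complex)
t = 0.9

psi = np.zeros(2, dtype=complex)
term = psi0.copy()                           # psi^{(0)}(t) = psi(0)
for n in range(40):
    psi += term
    term = (-1j * t / (n + 1)) * (H @ term)  # next Picard term

# Exact solution: e^{-i X t} |0> = (cos t, -i sin t)
assert np.allclose(psi, [np.cos(t), -1j * np.sin(t)])
```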

Let us actually write out the general term in this sum as
$$\ket{\psi^{(n)}(t)}=K^{(n)}(t)\ket{\psi(0)},$$
where
$$K^{(n)}(t)=(-i)^n\int_0^t dt_n\,H(t_n)\int_0^{t_n}dt_{n-1}\,H(t_{n-1})\cdots\int_0^{t_2}dt_1\,H(t_1).$$
Now, with the trick of time-ordering, we can rewrite this as
$$\begin{aligned}K^{(n)}(t)&=\frac{(-i)^n}{n!}\int_0^t dt_n\int_0^t dt_{n-1}\cdots\int_0^t dt_1\,\mathcal T\!\left[H(t_n)H(t_{n-1})\cdots H(t_1)\right]\\&=\frac{(-i)^n}{n!}\mathcal T\left(\int_0^t dt'\,H(t')\right)^n,\end{aligned}$$
where $\mathcal T[\cdots]$ means ordering the operators inside according to their time arguments. The factor of $1/n!$ appears because there are $n!$ ways to order $n$ time variables; another way to see this is to note that the domain of integration is an $n$-simplex, whose volume is $1/n!$ that of the corresponding $n$-parallelotope. When we then sum over all $n$, we get the time-ordered exponential
$$\ket{\psi(t)}=K(t)\ket{\psi(0)},\quad K(t)=\sum_n K^{(n)}(t)=\mathcal T\exp\left(-i\int_0^t dt'\,H(t')\right).\tag{1}$$
The operator $K(t)$ has a bunch of equivalent names, such as the time evolution operator, the propagator, the Green's function, the Dyson operator, and the S-matrix (well, they are not entirely equivalent because they are used in different contexts).
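For a time-independent $H$, the time-ordered exponential reduces to the ordinary matrix exponential, so the sum $\sum_n K^{(n)}(t)$ can be checked numerically against an exact $e^{-iHt}$ (a hypothetical $2\times2$ Hermitian matrix; NumPy assumed):

```python
import numpy as np

# Sum the series K(t) = sum_n (-i H t)^n / n! for a constant Hamiltonian
# and compare with the exact propagator from an eigendecomposition.
H = np.array([[1.0, 0.5], [0.5, -1.0]])      # hypothetical Hermitian H
t = 1.3

K = np.zeros((2, 2), dtype=complex)
term = np.eye(2, dtype=complex)              # n = 0 term
for n in range(1, 60):
    K += term
    term = term @ (-1j * H * t) / n          # build (-i H t)^n / n!
K += term

w, V = np.linalg.eigh(H)                     # exact e^{-iHt} via eigenbasis
K_exact = (V * np.exp(-1j * w * t)) @ V.T.conj()
assert np.allclose(K, K_exact)
assert np.allclose(K @ K.conj().T, np.eye(2))   # the propagator is unitary
```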

The interesting part comes when we consider the matrix elements of $K(t)$ and how they relate to the matrix elements of $H(t)$. We choose an orthonormal basis $\{\ket x\}$, and then insert a $\sum_x\ket x\bra x$ between each pair of Hamiltonian operators in the expression for $K^{(n)}(t)$:
$$\bra{x_n}K^{(n)}(t)\ket{x_0}=\frac{(-i)^n}{n!}\int_0^t d^nt\sum_{x_1,\ldots,x_{n-1}}h_{x_nx_{n-1}}(t_n)\,h_{x_{n-1}x_{n-2}}(t_{n-1})\cdots h_{x_1x_0}(t_1),$$
where we have abbreviated $h_{xy}(t)\coloneqq\bra x H(t)\ket y$, and $t_1\le\cdots\le t_n$ is a specific ordering of the integrated time variables. We can pull out the sum over intermediate basis states and call the summand the contribution from a walk¹ from $x_0$ to $x_n$. In other words,
$$\bra y K^{(n)}(t)\ket x=\sum_{\{x_i\}}^{\substack{x_n=y\\x_0=x}}K(\{x_i\},t),$$
where the sum is over all walks of length $n$ from $x$ to $y$, and
$$K(\{x_i\},t)\coloneqq\frac{(-i)^n}{n!}\int_0^t d^nt\,\prod_{i=1}^n h_{x_ix_{i-1}}(t_i).$$
The matrix elements of the full propagator are then the sum over contributions from all walks:
$$\bra y K(t)\ket x=\sum_{\text{walks }x\to y}K(\text{walk},t).$$
We can imagine a “Hamiltonian graph” formed by taking the basis states as vertices and the Hamiltonian matrix elements as edge weights (the weight of the directed edge from $x$ to $y$ is $-i\bra y H(t)\ket x$). Note that an edge can be a self-loop. Then, the propagator contribution from a walk is given by integrating the product of all the edge weights along the walk (for a trivial walk, which has zero length, the contribution is simply $1$). This formulation may be called the discrete path integral. There is a paper on arXiv that develops the idea of the Hamiltonian graph.
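The walk sum can be verified directly on a small Hamiltonian graph. Below is a brute-force enumeration of all walks up to a cutoff length for a hypothetical time-independent three-state $H$ (for constant $H$, the contribution of a length-$n$ walk is $(-it)^n/n!$ times the product of its hop amplitudes), compared against an exact propagator element:

```python
import numpy as np
from math import factorial
from itertools import product

# Sum contributions of all walks x -> y on the Hamiltonian graph of a
# small three-state system (hypothetical matrix elements).
H = np.array([[0.0, 1.0, 0.2],
              [1.0, 0.5, 0.3],
              [0.2, 0.3, -0.4]])
t, x, y, d = 0.5, 0, 2, 3

K_yx = 1.0 + 0j if x == y else 0j            # trivial (length-0) walk
for n in range(1, 12):                       # truncate the walk-length sum
    coeff = (-1j * t) ** n / factorial(n)
    for mids in product(range(d), repeat=n - 1):  # intermediate vertices
        walk = (x, *mids, y)
        amp = 1.0
        for i in range(n):                   # product of hop amplitudes
            amp *= H[walk[i + 1], walk[i]]
        K_yx += coeff * amp

w, V = np.linalg.eigh(H)                     # exact propagator element
K_exact = (V * np.exp(-1j * w * t)) @ V.T.conj()
assert np.isclose(K_yx, K_exact[y, x])
```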
Its difference from the present article is that the paper focuses only on time-independent Hamiltonians and treats self-loops separately instead of just like normal edges. The following two sections (on excluding self-loops and on the Feynman path integral) largely follow the ideas from this paper.

Excluding self-loops

In some cases, it may be hard to handle self-loops on the Hamiltonian graph. We may then benefit from counting only walks without self-loops. However, the contribution from each walk will now be different, because we have to account for the same walk with self-loops inserted at various positions. In other words, for a walk $\{x_i\}$ without self-loops, instead of contributing $K(\{x_i\},t)$, we now want to find the contribution $L(\{x_i\},t)$ that sums over all ways to insert self-loops into $\{x_i\}$:
$$L(\{x_i\},t)=\sum_{m_0,\ldots,m_n=0}^\infty K(\{x_i^{m_i}\},t),$$
where $m_i$ is the number of self-loops inserted at the vertex $x_i$, and $\{x_i^{m_i}\}$ is an abbreviation of this walk:
$$\underbrace{x_0,\ldots,x_0}_{m_0},\underbrace{x_1,\ldots,x_1}_{m_1},\ldots,\underbrace{x_n,\ldots,x_n}_{m_n}.$$

I will show that we can find an expression for $L(\{x_i\},t)$ in the case where the Hamiltonians at different times commute with each other. In this case, we have
$$K(\{x_i\},t)=\frac1{n!}\prod_{i=1}^n\left(-i\int_0^t dt'\,h_{x_ix_{i-1}}(t')\right).$$
Therefore, by the definition of $L(\{x_i\},t)$, we have
$$L(\{x_i\},t)=K(\{x_i\},t)\sum_{m_0,\ldots,m_n}\frac{n!}{\left(n+\sum_i m_i\right)!}\prod_i\left(-i\int_0^t dt'\,h_{x_ix_i}(t')\right)^{m_i}.$$
For abbreviation, for the rest of this section, we denote $S_i\coloneqq-i\int_0^t dt'\,h_{x_ix_i}(t')$.

Now, we use a trick to replace the factorial in the denominator with a contour integral:
$$\frac1{N!}=\frac1{2\pi i}\oint dz\,\frac{e^z}{z^{N+1}},$$
where the contour is a counterclockwise simple closed curve around the origin in the complex plane. We then have
$$\frac{L(\{x_i\},t)}{n!\,K(\{x_i\},t)}=\frac1{2\pi i}\sum_{m_0,\ldots,m_n}\oint dz\,\frac{e^z}{z^{n+\sum_i m_i+1}}\prod_i S_i^{m_i}=\frac1{2\pi i}\oint dz\,\frac{e^z}{z^{n+1}}\prod_i\sum_{m_i}\frac{S_i^{m_i}}{z^{m_i}}.$$
We have thus separated the sums over the different $m_i$. Each sum is a geometric series that converges when we choose the contour large enough, so
$$\frac{L(\{x_i\},t)}{n!\,K(\{x_i\},t)}=\frac1{2\pi i}\oint dz\,\frac{e^z}{z^{n+1}}\prod_i\frac1{1-S_i/z}=\sum_i\frac{e^{S_i}}{\prod_{j\ne i}(S_i-S_j)},$$
where the last step used the residue theorem on the poles at $z=S_i$. This expression is exactly the expanded form of the divided difference of $\{e^{S_i}\}$, often denoted $e^{[S_0,\ldots,S_n]}$. Therefore,
$$L(\{x_i\},t)=n!\,K(\{x_i\},t)\,e^{[S_0,\ldots,S_n]}.\tag{2}$$
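The identity behind Equation 2 (the multi-series over loop counts equals the divided difference of the exponential) can be checked numerically by comparing a truncated sum over the insertion counts $m_i$ with the expanded residue form (hypothetical node values; NumPy assumed):

```python
import numpy as np
from math import factorial
from itertools import product

# Check  sum_{m_0..m_n} prod_i S_i^{m_i} / (n + sum_i m_i)!  =  e^{[S_0..S_n]}
# where the right-hand side is  sum_i e^{S_i} / prod_{j != i} (S_i - S_j).
S = [0.3, -0.2, 0.5, 0.1]                    # distinct nodes, n = 3
n = len(S) - 1

series = sum(                                # truncated multi-series
    np.prod([s ** mi for s, mi in zip(S, ms)]) / factorial(n + sum(ms))
    for ms in product(range(12), repeat=n + 1)
)
expanded = sum(                              # expanded divided difference
    np.exp(S[i]) / np.prod([S[i] - S[j] for j in range(n + 1) if j != i])
    for i in range(n + 1)
)
assert np.isclose(series, expanded)
```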

The discrete path integral can then be rewritten in a form that only collects contributions from walks without self-loops:
$$\bra y K(t)\ket x=\sum_{\substack{\text{walks }x\to y\\\text{no self-loops}}}L(\text{walk},t).$$

Feynman path integral

This discrete path integral formulation of the propagator already looks similar to the Feynman path integral, but we have to go a step further and take the continuum limit to actually get there. For simplicity, I will only consider a particle of constant mass $m$ moving in a time-independent potential in one dimension. Its Hamiltonian is $H=p^2/2m+V(x)$, and the orthonormal basis is chosen to be the position basis, also denoted $\{\ket x\}$.

The more standard way to derive the Feynman path integral is to slice the time integral in Equation 1, express the total exponential as a product of many small exponentials, and then insert $\int dx\,\ket x\bra x$ between each pair of exponentials (see, e.g., chapter 6 of Quantum Field Theory by Mark Srednicki). However, this approach does not make its connection to the discrete path integral clear. Instead, we will discretize position space into a lattice with spacing $a$, use the discrete path integral formulation on this lattice, and then take the continuum limit $a\to0$ at the end. Now, instead of $x\in\mathbb R$, we have $x\in a\mathbb Z$. Each basis vector $\ket x$ now has two nearest neighbors, $\ket{x-a}$ and $\ket{x+a}$.

In the position basis, the kinetic part of the Hamiltonian $p^2/2m$ is a second-derivative operator. From numerical differentiation, we can approximate it on the lattice as
$$\frac{p^2}{2m}=\sum_x\frac1{2ma^2}\left(2\ket x-\ket{x+a}-\ket{x-a}\right)\bra x.$$
Therefore, the discretized Hamiltonian is
$$H=\sum_x\left(\frac1{2ma^2}\left(2\ket x-\ket{x+a}-\ket{x-a}\right)+V(x)\ket x\right)\bra x.$$
Its matrix elements are then
$$h_{yx}=\frac1{2ma^2}\left(2\delta_{y,x}-\delta_{y,x+a}-\delta_{y,x-a}\right)+V(x)\,\delta_{y,x}.$$
Conceptually, it consists of an on-site energy $1/ma^2+V(x)$ and nearest-neighbor hops. The on-site energy looks bothersome, but we can remove it if we only consider walks without self-loops. Equation 2 becomes
$$\bra y K(t)\ket x=\sum_{n=0}^\infty\left(\frac\beta{ma^2}\right)^n\sum_{\{x_i\}}^{\substack{x_n=y\\x_0=x}}e^{[S_0,\ldots,S_n]}\prod_{i=1}^n\left(\frac12\delta_{x_i,x_{i-1}+a}+\frac12\delta_{x_i,x_{i-1}-a}\right),\tag{3}$$
where $\beta=it$ is the imaginary time, and $S_i\coloneqq-\beta\left(1/ma^2+V(x_i)\right)$ is defined for the same abbreviation reason as in the previous section. The terms proportional to $\delta_{x_i,x_{i-1}}$ in the product are omitted because we only consider walks without self-loops. The rest of this section is done under a Wick rotation, so that $\beta$ is assumed to be a positive real parameter.
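The discretized Hamiltonian is easy to build explicitly. As a sanity check (NumPy assumed, with a hypothetical harmonic potential $V(x)=x^2/2$ and $m=1$), its lowest eigenvalue should be close to the known ground-state energy $1/2$:

```python
import numpy as np

# Lattice Hamiltonian h_{yx}: on-site energy 1/(m a^2) + V(x) on the
# diagonal, hop amplitude -1/(2 m a^2) to the two nearest neighbors.
a, m = 0.05, 1.0
xs = a * np.arange(-200, 201)                # a finite window of the lattice
d = len(xs)

H = np.diag(1.0 / (m * a ** 2) + 0.5 * xs ** 2)   # V(x) = x^2 / 2
idx = np.arange(d - 1)
H[idx, idx + 1] = H[idx + 1, idx] = -1.0 / (2 * m * a ** 2)

E0 = np.linalg.eigvalsh(H)[0]                # harmonic ground state: 1/2
assert abs(E0 - 0.5) < 1e-3
```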

First, let us tackle the divided difference $e^{[S_0,\ldots,S_n]}$. Define $\Delta S_i\coloneqq S_i-\bar S$, where $\bar S\coloneqq\sum_i S_i/(n+1)$ is the mean of $\{S_i\}$. Then, $\Delta S_i$ is of order unity (while $S_i$ is of order $\beta/ma^2$, which is much larger than unity for small $a$). From the expanded form of the divided difference, we can easily get
$$e^{[S_0,\ldots,S_n]}=e^{\bar S}e^{[\Delta S_0,\ldots,\Delta S_n]}.$$
Recalling how we initially derived the divided difference, we have
$$e^{[\Delta S_0,\ldots,\Delta S_n]}=\sum_{m_0,\ldots,m_n}\frac1{\left(n+\sum_i m_i\right)!}\prod_i\Delta S_i^{m_i}.$$
When $n$ is large (the reason for which will be explained in a minute), we have $n!\ll(n+1)!\ll(n+2)!$ and so on, while $\Delta S_i$ is of order unity, so we only need to consider the terms with the lowest $\sum_i m_i$. The leading term is the one with $\sum_i m_i=0$, which is simply $1/n!$, so we have
$$e^{[S_0,\ldots,S_n]}=\frac1{n!}e^{\bar S}=\frac1{n!}\exp\left(-\frac\beta{ma^2}-\frac\beta{n+1}\sum_i V(x_i)\right).\tag{4}$$
This contributes the potential part of the action.

Substituting Equation 4 into Equation 3, we have
$$\bra y K(t)\ket x=\sum_{n=0}^\infty\frac{e^{-\lambda}\lambda^n}{n!}\sum_{\{x_i\}}^{\substack{x_n=y\\x_0=x}}\exp\left(-\frac\beta{n+1}\sum_i V(x_i)\right)\prod_{i=1}^n\left(\frac12\delta_\cdots+\frac12\delta_\cdots\right),$$
where $\lambda\coloneqq\beta/ma^2$ is a large positive number when $a$ is small. Observe that the factor $e^{-\lambda}\lambda^n/n!$ is the probability mass function of the Poisson distribution with mean $\lambda$, evaluated at $n$. When $\lambda$ is very large, the Poisson distribution can be approximated by a delta distribution, because the standard deviation $\sqrt\lambda$ is much smaller than $\lambda$. In other words, $e^{-\lambda}\lambda^n/n!\approx\delta_{n,\lambda}$. Therefore,
$$\bra y K(t)\ket x=\sum_{\{x_i\}}^{\substack{x_\lambda=y\\x_0=x}}\exp\left(-\frac\beta{n+1}\sum_i V(x_i)\right)\prod_{i=1}^\lambda\left(\frac12\delta_{x_i,x_{i-1}+a}+\frac12\delta_{x_i,x_{i-1}-a}\right).$$
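The delta-function replacement rests on this concentration of the Poisson weights. A quick numerical check (NumPy assumed): for large $\lambda$, averaging a slowly varying function against the Poisson weights is close to evaluating it at $n=\lambda$ — here $f(n)=1/(n+1)$, whose exact Poisson average is $(1-e^{-\lambda})/\lambda$:

```python
import numpy as np
from math import lgamma

# Poisson weights e^{-lam} lam^n / n! concentrate near n = lam when lam is
# large (standard deviation sqrt(lam) << lam).
lam = 4000.0
ns = np.arange(0, 8001)
log_pmf = -lam + ns * np.log(lam) - np.array([lgamma(k + 1) for k in ns])
pmf = np.exp(log_pmf)

f = 1.0 / (ns + 1.0)                         # slowly varying test function
assert np.isclose(pmf.sum(), 1.0)            # weights sum to 1
assert np.isclose(pmf @ f, 1.0 / (lam + 1.0), rtol=1e-3)   # ~ f(lam)
```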

Imaginary parameter Poisson distribution

It was this point that got me thinking the most when I originally tried to derive the Feynman path integral without the Wick rotation. While the approximation $e^{-\lambda}\lambda^n/n!\approx\delta_{n,\lambda}$ is valid, the problem is whether we can likewise say $e^{-i\lambda}(i\lambda)^n/n!\approx\delta_{n,\lambda}$ (or $\delta_{n,i\lambda}$). While it is true that the left-hand side has a very large magnitude when $n=\lambda$, so that it dominates the sum, it does not actually approximate the right-hand side, because the right-hand side is of order unity and is real. In fact, the summand oscillates rapidly when $n$ is near $\lambda$, so the terms with different phases cancel each other out and leave a number of small magnitude in the end.
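This cancellation is easy to see numerically (NumPy assumed): the individual terms $(i\lambda)^n/n!$ near $n=\lambda$ are enormous, yet the full sum is exactly $e^{i\lambda}$, of magnitude $1$:

```python
import numpy as np
from math import lgamma

# Terms of sum_n (i lam)^n / n! have magnitude lam^n / n!, peaking hugely
# at n ~ lam, but their phases cancel: the total is e^{i lam}, |total| = 1.
lam = 25.0
ns = np.arange(0, 200)
log_mag = ns * np.log(lam) - np.array([lgamma(k + 1) for k in ns])
terms = np.exp(log_mag) * np.exp(1j * ns * np.pi / 2)    # (i lam)^n / n!

assert np.exp(log_mag.max()) > 1e9           # individual terms are huge
assert np.isclose(terms.sum(), np.exp(1j * lam), atol=1e-4)
```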

If you actually try to walk through the calculation without the Wick rotation, you will find that what you need to justify in the end is something like this (there are some other factors dependent on $n$ in the summand, but we can remove them by some techniques, so let us ignore them for simplicity):
$$e^{-i\lambda}\sum_{n=0}^\infty\frac{(i\lambda)^n}{n!}e^{-M/n}\approx e^{iM/\lambda},$$
where $\lambda$ and $M$ are both large positive numbers. This is unfortunately false, in neither magnitude nor phase, and not even up to an overall factor.

While it is true that a lot of things can be carried over by analytic continuation, which is the reason why the Wick rotation gives the correct result in many cases, you can do the analytic continuation only if every step you take is actually analytic. An approximation based on the magnitude of each summand is not analytic, because a fast oscillation can change the result drastically. Therefore, I am not satisfied with this derivation using the Wick rotation, but I have not found a better way to do it yet.

If the factor involving $V$ were not there in the summand, the sum of products would be exactly the probability that a random walk starting at $x$ ends at $y$ after $\lambda$ steps, where at each step the walk moves to the left or right nearest neighbor with equal probability $1/2$. Instead of considering one random walk with $\lambda$ steps, we can consider $N$ consecutive random walks with $l\coloneqq\lambda/N$ steps each, where both $l$ and $N$ are large. Because $l$ is large, we can approximate the position at the end of the $j$th random walk as a normally distributed random variable $q_j$ (centered at $q_{j-1}$) with variance $la^2$. Note that even though $l$ is large, $la^2$ is still very small, so the majority of the contribution to the sum comes only from those paths where $x_i$ does not differ too much from the $q_j$ of its corresponding part of the random walk. Therefore, if we fix a set of $\{q_j\}$, the factor involving $V$ in the summand can be approximated by replacing $V(x_i)$ with $V(q_j)$ for all $x_i$ in the $j$th random walk segment. We can then pull this factor out of the sum over $\{x_i\}$ (but keep it inside the integral over $\{q_j\}$). Therefore, we get
$$\begin{aligned}\bra y K(t)\ket x&=\int_{\{q_j\}}^{\substack{q_N=y\\q_0=x}}dq_1\cdots dq_{N-1}\sum_{\{x_i\}}^{x_{jl}=q_j}\exp\left(-\frac\beta{n+1}\sum_i V(x_i)\right)\prod_{i=1}^\lambda\left(\frac12\delta_{x_i,x_{i-1}+a}+\frac12\delta_{x_i,x_{i-1}-a}\right)\\&=\int_{\{q_j\}}^{\substack{q_N=y\\q_0=x}}dq_1\cdots dq_{N-1}\exp\left(-\frac\beta{N+1}\sum_j V(q_j)\right)\prod_{j=1}^N\left(a\,\frac1{\sqrt{2\pi la^2}}\exp\left(-\frac{(q_j-q_{j-1})^2}{2la^2}\right)\right),\end{aligned}$$
where the extra factor of $a$ in the product comes from converting a probability density to a probability (since the probability that the position ends up at the lattice site $y$ is $a$ times the probability density at $y$).
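The Gaussian approximation for each $l$-step segment can be checked by simulation (NumPy assumed; the parameter values are made up): the endpoint of a symmetric $\pm a$ walk of $l$ steps has mean $0$, variance $la^2$, and nearly Gaussian kurtosis:

```python
import numpy as np

# Endpoint statistics of a symmetric nearest-neighbor random walk:
# l steps of +-a give variance l a^2 and an approximately normal shape.
rng = np.random.default_rng(0)
a, l, trials = 0.1, 200, 50_000
steps = a * (2 * rng.integers(0, 2, size=(trials, l)) - 1)
endpoints = steps.sum(axis=1)

assert abs(endpoints.mean()) < 0.03
assert abs(endpoints.var() - l * a ** 2) < 0.1         # l a^2 = 2.0
kurt = np.mean(endpoints ** 4) / endpoints.var() ** 2  # ~ 3 for a Gaussian
assert abs(kurt - 3.0) < 0.1
```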

Combining the product of exponentials into the exponential of a sum, we have
$$\prod_{j=1}^N\left(a\,\frac1{\sqrt{2\pi la^2}}\exp\left(-\frac{(q_j-q_{j-1})^2}{2la^2}\right)\right)=\frac1{\sqrt{2\pi l}^{\,N}}\exp\left(-\sum_{j=1}^N\frac{(q_j-q_{j-1})^2}{2la^2}\right).$$
If we introduce the time step $\Delta t\coloneqq\beta/N=mla^2$, then for large $N$ we have
$$\sum_{j=1}^N\frac{(q_j-q_{j-1})^2}{2la^2}=\sum_{j=1}^N\Delta t\,\frac12m\left(\frac{q_j-q_{j-1}}{\Delta t}\right)^2=\int_0^\beta dt'\,\frac12m\,\dot q(t')^2.$$
Similarly, for the potential part, we introduce the time step $\Delta t=\beta/(N+1)$:²
$$\frac\beta{N+1}\sum_{j=0}^N V(q_j)=\sum_{j=0}^N\Delta t\,V(q_j)=\int_0^\beta dt'\,V(q(t')).$$
Here, $q(t')$ and $\dot q(t')$ are the position and velocity at time $t'$ of a particle undergoing these random walks. Therefore, we have
$$\bra y K(t)\ket x=\sqrt{\frac{ma^2}{2\pi\Delta t}}^{\,N}\int_{\{q_j\}}^{\substack{q_N=y\\q_0=x}}dq_1\cdots dq_{N-1}\exp\left(-\int_0^\beta dt'\left(\frac12m\,\dot q(t')^2+V(q(t'))\right)\right).$$
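As a check of the Wick-rotated result (NumPy assumed; hypothetical lattice parameters), the imaginary-time lattice propagator for a free particle ($V=0$) should approach the continuum heat kernel, up to the factor of $a$ converting a density to a lattice-site probability:

```python
import numpy as np

# <y| e^{-beta H} |x> on the lattice with V = 0, versus the continuum
# kernel a * sqrt(m / (2 pi beta)) * exp(-m (y - x)^2 / (2 beta)).
a, m, beta = 0.05, 1.0, 0.5
d = 601                                      # lattice sites, x = 0 at center

H = np.diag(np.full(d, 1.0 / (m * a ** 2)))  # on-site energy, V = 0
idx = np.arange(d - 1)
H[idx, idx + 1] = H[idx + 1, idx] = -1.0 / (2 * m * a ** 2)

w, V = np.linalg.eigh(H)
K = (V * np.exp(-beta * w)) @ V.T            # imaginary-time propagator

i, j = d // 2, d // 2 + 10                   # x = 0, y = 10 a
exact = a * np.sqrt(m / (2 * np.pi * beta)) * np.exp(-m * (10 * a) ** 2 / (2 * beta))
assert abs(K[j, i] - exact) / exact < 0.01
```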

Finally, simply revert the Wick rotation by substituting $\beta=it$ and rewrite the integral over $\{q_j\}$ as a path integral over all paths $q(t')$. Then, we get the Feynman path integral expression for the propagator:
$$\bra y K(t)\ket x=\int^{\substack{q(t)=y\\q(0)=x}}\mathcal Dq\,\exp\left(i\int_0^t dt'\left(\frac12m\,\dot q(t')^2-V(q(t'))\right)\right).$$
This completes the derivation.


  1. You may wonder why I use the word “walk” while the resulting object is called a “path” integral. This is just because “walk” is the correct term in graph theory for the object we use here, while the sum over all walks is called a “path” integral because that is what physicists call it. In graph theory, however, a path is a walk in which all vertices are distinct.↩︎

  2. Do not ask me why it is $N+1$. It is not important.↩︎