Therefore, a natural idea that comes to my mind is to simply let people know when I am asleep so that they do not expect me to respond immediately.

Discord and GitHub have been among my most used platforms for some time, and they both have a feature to let users set a custom status (with a custom text and an emoji). This opens up a possibility of using a program to automatically set the status to indicate that I am asleep.

For Discord, this is as simple as invoking a REST API (**Notice that this is against Discord’s ToS**):

```
# Set sleeping status
curl -X PATCH \
  -H "Content-Type: application/json" \
  -H "Authorization: YoUr.DiScOrD.ToKeN" \
  -d '{"custom_status":{"text":"Sleeping...","emoji_id":null,"emoji_name":"😴","expires_at":null},"status":"dnd"}' \
  https://discordapp.com/api/v8/users/@me/settings

# Clear sleeping status
curl -X PATCH \
  -H "Content-Type: application/json" \
  -H "Authorization: YoUr.DiScOrD.ToKeN" \
  -d '{"custom_status":null,"status":"online"}' \
  https://discordapp.com/api/v8/users/@me/settings
```

For GitHub, there is no REST API for that, but you can install the gh-user-status extension for the GitHub CLI:

```
gh extension install vilmibm/gh-user-status
```

Then, you can set the status with:

```
# Set sleeping status
gh user-status set 'Sleeping...' --emoji='sleeping' --limited

# Clear sleeping status
gh user-status set 'null' --expiry=1s
```

Now, the next step is to run these commands automatically when I fall asleep and wake up. This can be done with MacroDroid, which can fire actions in response to various triggers. To run arbitrary commands, you can use the Tasker plugin for Termux. MacroDroid can also trigger on the Android Sleep API, but this tends to be quite unreliable on my device, so I use it in conjunction with a quick settings tile that I can toggle manually. The macro looks like this:

Triggers:

- Fell Asleep / Woke Up (Android sleep API)
- Quick Tile On/Off

Actions:

```
If Trigger Fired: Woke Up, or Quick Tile Off
    If Sleeping = True
        Clear sleeping status on Discord and GitHub
        # Include other waking-up logic here, such as turning off DND mode
    End If
    Sleeping = False
Else If Trigger Fired: Fell Asleep, or Quick Tile On
    If Sleeping = False
        Set GitHub and Discord user status to sleeping
        # Include other falling-asleep logic here, such as turning on DND mode
    End If
    Sleeping = True
End If
```
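On the Termux side, the two status actions can live in one small script invoked through the Termux:Tasker plugin. The following is only a sketch: the `build_payload`/`set_status` function names, the `set`/`clear` argument convention, and the `DISCORD_TOKEN` environment variable are assumptions of mine, and automating a user account remains against Discord's ToS.

```shell
#!/data/data/com.termux/files/usr/bin/sh
# Sketch of a status-toggling helper for the Termux:Tasker plugin.

# build_payload prints the Discord settings JSON for a given state.
build_payload() {
  case "$1" in
    set)   printf '%s' '{"custom_status":{"text":"Sleeping...","emoji_id":null,"emoji_name":"😴","expires_at":null},"status":"dnd"}' ;;
    clear) printf '%s' '{"custom_status":null,"status":"online"}' ;;
  esac
}

# set_status pushes the status to both Discord and GitHub.
# Assumes DISCORD_TOKEN is exported (hypothetical variable name).
set_status() {
  curl -sf -X PATCH \
    -H "Content-Type: application/json" \
    -H "Authorization: $DISCORD_TOKEN" \
    -d "$(build_payload "$1")" \
    https://discordapp.com/api/v8/users/@me/settings
  if [ "$1" = set ]; then
    gh user-status set 'Sleeping...' --emoji='sleeping' --limited
  else
    gh user-status set 'null' --expiry=1s
  fi
}
```

A MacroDroid Termux action would then run `set_status set` or `set_status clear` depending on which trigger fired.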

By the way, I have a bunch of topics that I want to write blog articles about, but I have been quite busy recently, so I may have to pause updating this blog for a while. I hope I can get back to writing soon!

Save the CA certificate to `/YOUR/PATH/TO/ca.pem`:

```
-----BEGIN CERTIFICATE-----
MIIEMjCCAxqgAwIBAgIBATANBgkqhkiG9w0BAQUFADB7MQswCQYDVQQGEwJHQjEb
MBkGA1UECAwSR3JlYXRlciBNYW5jaGVzdGVyMRAwDgYDVQQHDAdTYWxmb3JkMRow
GAYDVQQKDBFDb21vZG8gQ0EgTGltaXRlZDEhMB8GA1UEAwwYQUFBIENlcnRpZmlj
YXRlIFNlcnZpY2VzMB4XDTA0MDEwMTAwMDAwMFoXDTI4MTIzMTIzNTk1OVowezEL
MAkGA1UEBhMCR0IxGzAZBgNVBAgMEkdyZWF0ZXIgTWFuY2hlc3RlcjEQMA4GA1UE
BwwHU2FsZm9yZDEaMBgGA1UECgwRQ29tb2RvIENBIExpbWl0ZWQxITAfBgNVBAMM
GEFBQSBDZXJ0aWZpY2F0ZSBTZXJ2aWNlczCCASIwDQYJKoZIhvcNAQEBBQADggEP
ADCCAQoCggEBAL5AnfRu4ep2hxxNRUSOvkbIgwadwSr+GB+O5AL686tdUIoWMQua
BtDFcCLNSS1UY8y2bmhGC1Pqy0wkwLxyTurxFa70VJoSCsN6sjNg4tqJVfMiWPPe
3M/vg4aijJRPn2jymJBGhCfHdr/jzDUsi14HZGWCwEiwqJH5YZ92IFCokcdmtet4
YgNW8IoaE+oxox6gmf049vYnMlhvB/VruPsUK6+3qszWY19zjNoFmag4qMsXeDZR
rOme9Hg6jc8P2ULimAyrL58OAd7vn5lJ8S3frHRNG5i1R8XlKdH5kBjHYpy+g8cm
ez6KJcfA3Z3mNWgQIJ2P2N7Sw4ScDV7oL8kCAwEAAaOBwDCBvTAdBgNVHQ4EFgQU
oBEKIz6W8Qfs4q8p74Klf9AwpLQwDgYDVR0PAQH/BAQDAgEGMA8GA1UdEwEB/wQF
MAMBAf8wewYDVR0fBHQwcjA4oDagNIYyaHR0cDovL2NybC5jb21vZG9jYS5jb20v
QUFBQ2VydGlmaWNhdGVTZXJ2aWNlcy5jcmwwNqA0oDKGMGh0dHA6Ly9jcmwuY29t
b2RvLm5ldC9BQUFDZXJ0aWZpY2F0ZVNlcnZpY2VzLmNybDANBgkqhkiG9w0BAQUF
AAOCAQEACFb8AvCb6P+k+tZ7xkSAzk/ExfYAWMymtrwUSWgEdujm7l3sAg9g1o1Q
GE8mTgHj5rCl7r+8dFRBv/38ErjHT1r0iWAFf2C3BUrz9vHCv8S5dIa2LX1rzNLz
Rt0vxuBqw8M0Ayx9lt1awg6nCpnBBYurDC/zXDrPbDdVCYfeU0BsWO/8tqtlbgT2
G9w84FoVxp7Z8VlIMCFlA2zs6SFz7JsDoeA3raAVGI/6ugLOpyypEBMs1OUIJqsi
l2D4kF501KKaU73yqWjgom7C12yxow+ev+to51byrvLjKzg6CYG1a4XXvi3tPxq3
smPi9WIsgtRqAEFQ8TmDn5XpNpaYbg==
-----END CERTIFICATE-----
```

Then, run

```
nmcli con mod eduroam 802-1x.eap peap
nmcli con mod eduroam 802-11-wireless-security.key-mgmt wpa-eap
nmcli con mod eduroam 802-11-wireless-security.proto rsn
nmcli con mod eduroam 802-11-wireless-security.pairwise ccmp
nmcli con mod eduroam 802-11-wireless-security.group ccmp,tkip
nmcli con mod eduroam 802-1x.ca-cert /YOUR/PATH/TO/ca.pem
nmcli con mod eduroam 802-1x.phase2-autheap mschapv2
nmcli con mod eduroam 802-1x.anonymous-identity anonymous@ucsb.edu
nmcli con mod eduroam 802-1x.identity YOUR_EDU_EMAIL_ADDRESS
nmcli con mod eduroam 802-1x.password YOUR_PASSWORD
```
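Note that these `nmcli con mod` commands assume a connection profile named `eduroam` already exists. If it does not, you can create a bare Wi-Fi profile first. This is a sketch; the interface name `wlan0` is an assumption, so check yours first:

```shell
# List devices to find your Wi-Fi interface name.
nmcli device
# Create an empty eduroam Wi-Fi profile for the modifications above.
# "wlan0" is an assumed interface name.
nmcli con add type wifi ifname wlan0 con-name eduroam ssid eduroam
```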

Now you can finally connect to eduroam!

This may also apply to eduroam on other campuses, but I haven't tested it.

**Theorem.** The conformal map $\fc wz$ transforms the trajectory with energy $-B$ in potential $\fc Uz\ceq A\v{\d w/\d z}^2$ into the trajectory with energy $-A$ in potential $\fc Vw\ceq B\v{\d z/\d w}^2$.

This result is pretty amazing in that it reveals a quite implicit duality between the two potentials, and it looks very symmetric as written.

This theorem, as far as I know, was first introduced in the appendix of V. I. Arnold's book *Huygens and Barrow, Newton and Hooke*. Part of this article is already covered in the relevant part of that book.

Before I show the proof of it, let me first introduce it by a much more well-known example.

As we all know, Bertrand's theorem states that the only two types of central-force potentials in which all bound orbits are closed are $U\propto r^{-1}$ (the Kepler problem) and $U\propto r^2$ (the harmonic oscillator). That exactly these two potentials are singled out among all central-force potentials makes one wonder whether there is any connection between them. Fortunately, there is one, and it is obvious once we notice that complex squaring transforms center-at-origin ellipses into focus-at-origin ellipses. From this, it is easy to see that trajectories of harmonic oscillators are transformed into trajectories of the Kepler problem under complex squaring.

You may ask: how can one notice that complex squaring performs this transformation on ellipses? The key observation is the simple algebraic identity

$\p{z+\fr1z}^2=z^2+\fr1{z^2}+2,$

which means that under complex squaring, the Joukowski transform $z\mapsto z+1/z$ of a circle is simply translated. We can then generalize from the unit circle to circles of other radii, whose Joukowski images are exactly ellipses! (If you remember, this is the second time the Joukowski transformation appears on my blog. The first time was here.)
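To spell this out, the Joukowski image of the circle $z=\rho\e^{\i\tht}$ is

$z+\fr1z=\p{\rho+\fr1\rho}\cos\tht+\i\p{\rho-\fr1\rho}\sin\tht,$

an ellipse centered at the origin with semi-axes $\rho\pm1/\rho$. Its square, $z^2+1/z^2+2$, is the Joukowski image of the circle of radius $\rho^2$ (semi-axes $\rho^2\pm\rho^{-2}$, focal distance $\sqrt{\p{\rho^2+\rho^{-2}}^2-\p{\rho^2-\rho^{-2}}^2}=2$) translated by $2$, which puts one focus exactly at the origin.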

Then, are the Kepler problem and the harmonic oscillator the only two central-force potentials whose trajectories can be transformed into each other by a complex function? The answer is no. In fact, for any trajectory in almost any power-law central-force potential, we can take some power of it to get a trajectory in another power-law central-force potential.

This result can be summarized as follows. Taking the $\p{\alp/2+1}$th power of a trajectory with energy $E$ in the potential $U=ar^\alp$ ($\alp\ne-2$) gives a trajectory with energy $F$ in the potential $V=br^\beta$, where

$\p{\alp+2}\p{\beta+2}=4,\quad b=-\fr14\p{\beta+2}^2E,\quad F=-\fr14\p{\beta+2}^2a.$
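As a consistency check against the Kepler–oscillator duality, take $\alp=2$, i.e., $U=ar^2$ with energy $E$, and $w=z^2$. Since $\v{\d w/\d z}^2=4\v z^2$, the theorem at the beginning applies with $A=a/4$ and $B=-E$, and the dual potential is

$\fc Vw=B\v{\fr{\d z}{\d w}}^2=\fr B{4\v w}=-\fr E{4\v w},$

so $\beta=-1$ and $b=-E/4$, and the dual energy is $F=-A=-a/4$: an attractive Kepler potential with negative energy whenever the oscillator orbit is bound ($a,E>0$).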

To prove this, we just need to reparameterize the transformed trajectory with a new time coordinate $\tau$ defined by $\d\tau=\v z^\alp\,\d t$, where $z$ is the complex position on the original trajectory. Then, by some calculation and by using energy conservation, we can show that the parametric equation in terms of the new time coordinate satisfies the equation of motion we expect. I will not show the details here because they would be redundant once I prove the more general case using the same method.

There is an interesting special case, $\alp=-2$: no potential is dual to $U\propto r^{-2}$. Another interesting case is $\alp=-4$, which is dual to itself ($\beta=-4$). This means, in a sense, that the coefficient of the potential is "interchangeable" with the energy, and the trajectories can be derived from each other by taking the complex reciprocal.

We can get some interesting results with $a=0$, which is just the case of a free particle, whose trajectories are all straight lines. Since in this case we necessarily have $F=0$, we can say that the zero-energy trajectory in any power-law potential is related to a straight line by a power. From this result, we can derive some interesting corollaries. For example, the zero-energy trajectory in the Kepler problem is a parabola (the square of a straight line), which is well-known. The zero-energy trajectory in $U\propto-r^{-4}$ is a circle passing through the origin (the reciprocal of a straight line), which is an interesting and not-so-well-known result.
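The last claim can be verified directly: the reciprocal of the straight line $z=1+\i t$, $t\in\bR$, is

$w=\fr1{1+\i t}=\fr{1-\i t}{1+t^2},$

which satisfies $\v{w-\fr12}=\fr12$, i.e., it is the circle of radius $1/2$ centered at $1/2$, passing through the origin (the limit $t\to\pm\infty$).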

Another interesting result is that the deflection angle of an incident zero-energy particle scattered by the potential $U\propto-r^\alp$ is $\tht$ in the paraxial limit, if

$\alp=\fr{2\vphi}{\pi-\vphi},\quad\vphi=\pm\tht-2k\pi,\quad k\in\bN.$

This result can be easily derived by using the conformal transform of the real line (actually, of a straight line that approaches the real line). The crucial part here is that $k$ cannot be a negative integer because we need $\alp>-2$. The reason is that, when $\alp\le-2$, paraxial zero-energy particles are bound to sink into the origin, so no scattering actually happens. This small pitfall indicates that in that limit, the trajectory in the dual potential is not a two-sided infinite straight line either, in contrast to what the seemingly free motion would suggest.

Let’s go back to the theorem I stated at the beginning of this article.

*Proof.* Consider a new time coordinate $\tau$ defined as $\d\tau=\v{\d w/\d z}^2\,\d t$. Then, the motion of $w$ satisfies

$\begin{align*} m\fr{\d^2w}{\d\tau^2} &=m\fr{\d t}{\d\tau}\fr{\d}{\d t}\p{\fr{\d t}{\d\tau}\fr{\d w}{\d t}}\\ &=m\v{\fr{\d z}{\d w}}^2\fr{\d}{\d t}\p{\v{\fr{\d z}{\d w}}^2\fr{\d w}{\d z}\fr{\d z}{\d t}}\\ &=m\fr{\d z}{\d w}\p{\fr{\d z}{\d w}}^*\p{\p{\fr{\d^2z}{\d w^2}\fr{\d w}{\d z}\fr{\d z}{\d t}}^*\fr{\d z}{\d t} +\p{\fr{\d z}{\d w}}^*\fr{\d^2 z}{\d t^2}}. \end{align*}$

Here we need to substitute the equation of motion for $z$ into $\d^2 z/\d t^2$. By computing the real and imaginary parts separately, one can show that for any holomorphic function $f$, the gradient of $\v f^2$, expressed as a complex number, is $\nabla\v f^2=2\p{\d f/\d z}^*f$. Therefore, the equation of motion for $z$ is

$m\fr{\d^2z}{\d t^2}=-2A\fr{\d w}{\d z}\p{\fr{\d^2w}{\d z^2}}^*.$

By differentiating the identity $\p{\d z/\d w}\p{\d w/\d z}=1$ (or by series reversion), we have $\d^2 w/\d z^2=-\p{\d w/\d z}^3\d^2 z/\d w^2$. Therefore, the equation of motion for $z$ can also be written as

$m\fr{\d^2z}{\d t^2}=2A\v{\fr{\d w}{\d z}}^2\p{\fr{\d w}{\d z}}^{*2}\p{\fr{\d^2 z}{\d w^2}}^*.$

Substitute this, and we have

$m\fr{\d^2w}{\d\tau^2}=\fr{\d z}{\d w}\p{\fr{\d^2z}{\d w^2}}^* \p{m\v{\fr{\d z}{\d t}}^2+2A\v{\fr{\d w}{\d z}}^2}.$

Substitute the energy conservation of the motion of $z$:

$\fr12m\v{\fr{\d z}{\d t}}^2+A\v{\fr{\d w}{\d z}}^2=-B,$

and we have

$m\fr{\d^2w}{\d\tau^2}=-2B\fr{\d z}{\d w}\p{\fr{\d^2z}{\d w^2}}^*,$

which is the equation of motion for $w$ that we expect.

To get the energy of the motion of $w$, we calculate

$\begin{align*} \fr12m\v{\fr{\d w}{\d\tau}}^2+B\v{\fr{\d z}{\d w}}^2 &=\fr12m\v{\fr{\d w}{\d z}\fr{\d z}{\d t}\fr{\d t}{\d\tau}}^2+B\v{\fr{\d z}{\d w}}^2\\ &=\v{\fr{\d w}{\d z}}^2\p{-B-A\v{\fr{\d w}{\d z}}^2}\v{\fr{\d z}{\d w}}^4+B\v{\fr{\d z}{\d w}}^2\\ &=-A, \end{align*}$

which is the energy conservation of the motion of $w$ in the potential $V$ that we expect. $\square$

Noticing that we are only interested in the trajectory, we can just use Maupertuis’ principle to get a simpler proof.

*Proof.*

$\mcal S_0=\int\v{\d z}\sqrt{2m\p{-B-A\v{\fr{\d w}{\d z}}^2}}=\int\v{\d w}\sqrt{2m\p{-A-B\v{\fr{\d z}{\d w}}^2}}.$
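The first equality is the abbreviated action for the motion of $z$ with $E=-B$ and $U=A\v{\d w/\d z}^2$; the second follows by substituting $\v{\d z}=\v{\d z/\d w}\,\v{\d w}$ and pulling $\v{\d z/\d w}$ into the square root:

$\v{\d z}\sqrt{2m\p{-B-A\v{\fr{\d w}{\d z}}^2}} =\v{\d w}\sqrt{2m\p{-A-B\v{\fr{\d z}{\d w}}^2}}.$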

The abbreviated action is then exactly the same for the motion of $z$ and the motion of $w$. Therefore, by Maupertuis’ principle, for any physical trajectory of $z$, the trajectory of $w$ is also physical. $\square$

There are two different definitions of a conformal transformation in two dimensions. One is that a function defined on an open subset of $\bC$ is conformal iff it is holomorphic and its derivative is nowhere zero. The other is that a function is conformal iff it is biholomorphic (is bijective and has a holomorphic inverse).

You may think here I have adopted the second definition because when I say $\fc Vw\ceq B\v{\d z/\d w}^2$, I am implicitly assuming that I can take the inverse of $\fc wz$ to get the function $\fc zw$ and then take the derivative of it. However, if that is the case, an immediate problem is that then the duality between the Kepler problem and the harmonic oscillator, from which I introduced the more general result in the first place, would not be actually covered by the “more general” result. This is because $z\mapsto z^2$ is not biholomorphic (because it is not injective).

Then, why did this never become a problem when we studied the duality between the Kepler problem and the harmonic oscillator? All we have talked about is how to derive a trajectory in the Kepler problem by squaring a trajectory of the harmonic oscillator, but we have not discussed how to reverse this process, which is an essential part of the duality. You may think the reverse would be totally natural given how symmetric our theorem is in the two potentials. However, the reverse is not actually well-defined, since the inverse of squaring, i.e., taking the square root, is not a single-valued function. Nevertheless, it is still well-defined in some sense: starting with whichever branch we like, tracing one point on the trajectory of the Kepler problem, and moving it along this trajectory for two cycles, we end up with a trajectory of the harmonic oscillator if we take the square root of the position and always choose the branch so that the mapping is continuous.

What about other power-law central potentials? In those cases, we have non-closed trajectories, so we cannot just move along the trajectory for two cycles. For example, if we take $w=z^3$, then the potential would be $U=9A\v z^4$. For any non-closed trajectory, we can uniquely map it to a trajectory of the potential $V=B\v w^{-4/3}/9$. However, we cannot uniquely do the reverse mapping. There would be three different trajectories in the potential $U$ that can be mapped to the same trajectory in $V$, and we can in turn map the trajectory in $V$ to any of the three trajectories in $U$ depending on which branch we choose.

Therefore, to generalize this to more general potentials, we can use similar arguments. Because $z\mapsto w$ has a non-zero derivative everywhere in the considered region, it is everywhere locally invertible by the inverse function theorem. We can then bijectively map the trajectories in the two dual potentials locally, for every small (but finite) segment, and patch the segments together to get the global correspondence between the two trajectories. This mapping may not be well-defined globally, but the trajectories can still be considered dual to each other. If the potential also becomes multi-valued because the mapping $w\mapsto z$ is multi-valued, then we should picture the situation like this: the potential at some point may be different when the particle visits it for the second time. This does not happen if we only look at power-law potentials, but it does happen in more general cases.

What makes this sense of duality weaker is that one trajectory can be dual to multiple different trajectories. A case worth noting is that sometimes one trajectory can be mapped to infinitely many different trajectories; this happens when the trajectory runs around a logarithmic branch point. However, we can regain the sense of duality if we also allow $z\mapsto w$ to be multi-valued. The notion of a conformal transformation is too limited to cover this case; a better notion is that of a global analytic function, which generalizes analytic functions to allow multiple branches.

Not every potential can be expressed as $A\v{\d w/\d z}^2$. How can we determine whether a given potential can be expressed in this form?

**Theorem.** A continuous potential $U$ can be expressed in the form of $A\v{\d w/\d z}^2$ (where $\fc wz$ is a conformal transformation) iff one of the following conditions is met:

- $U$ is zero everywhere, or
- $\ln\v U$ is a harmonic function on the domain of $U$.

*Proof.* First, prove the necessity.

An obvious requirement is that the potential must be positive everywhere or negative everywhere (or zero everywhere, but that is trivial). The sign is determined by the sign of $A$. Therefore, without loss of generality, we can assume $A=1$ because we can always absorb a factor of $\sqrt{\v A}$ into $w$ and adjust the overall sign of $U$ accordingly.

We can decompose $\p{\d w/\d z}^2$ in the polar form

$\p{\d w/\d z}^2=\v{\d w/\d z}^2\e^{\i\vphi}=U\e^{\i\vphi},$

where $\vphi$ is a real function of $z$. Applying the Cauchy–Riemann equations to $\p{\d w/\d z}^2$ gives

$\i\partial_x\p{\fr{\d w}{\d z}}^2=\partial_y\p{\fr{\d w}{\d z}}^2 \implies\i\p{\e^{\i\vphi}\partial_xU+\i U\e^{\i\vphi}\partial_x\vphi} =\e^{\i\vphi}\partial_yU+\i U\e^{\i\vphi}\partial_y\vphi.$

Equate the real and imaginary parts, and we have

$\begin{cases}U\partial_x\vphi=-\partial_yU,\\U\partial_y\vphi=\partial_xU.\end{cases}$

Use the symmetry of second derivatives on $\vphi$, and we have

$\partial_x\partial_y\vphi-\partial_y\partial_x\vphi=0 \implies\partial_x\fr{\partial_xU}U+\partial_y\fr{\partial_yU}U=0.$

In the language of vector analysis, this is just $\nabla^2\ln U=0$.

Treating the case where $U$ is negative everywhere in the same way, we conclude that $\ln\v U$ is a harmonic function.

Then, prove the sufficiency.

The case where $U$ is zero everywhere is trivial. Otherwise, because $\ln\v U$ is defined everywhere on the domain of $U$, $U$ must be non-zero everywhere. Because $U$ is continuous, it is either positive everywhere or negative everywhere.

Without loss of generality, assume $U$ is positive everywhere. Let $\vphi$ be the harmonic conjugate of $\ln U$. Then, $\ln U+\i\vphi$ is a holomorphic function. We can then define

$\fr{\d w}{\d z}=\sqrt U\e^{\i\vphi/2},$

which is also a holomorphic function. $\square$
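For example, every power-law potential is log-harmonic away from the origin: for $U=ar^\alp$ we have $\ln\v U=\ln\v a+\alp\ln r$, and

$\nabla^2\ln r=\partial_x\fr x{x^2+y^2}+\partial_y\fr y{x^2+y^2} =\fr2{r^2}-\fr{2\p{x^2+y^2}}{r^4}=0,$

so $\ln\v U$ is harmonic there, consistent with $U=A\v{\d w/\d z}^2$ for $w\propto z^{\p{\alp+2}/2}$.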

From now on, we will call this requirement on $U$ being **log-harmonic**, for obvious reasons.

We should notice that the log-harmonicity test does not respect the fact that a potential is essentially unchanged by adding a constant. An immediate example: a function that is positive everywhere may become negative somewhere once we add a constant to it. We may then ask whether $U$ can be made log-harmonic by allowing an additive constant. This is easy to test: apply the same test to $U+C$ and see whether some $C$ makes it work. Concretely, solve the equation $\nabla^2\ln\v{U+C}=0$ for $C$, and then check whether the solution is a constant over the whole complex plane.

A property of log-harmonic functions is that the product of two log-harmonic functions is also log-harmonic, because $\ln\v{UV}=\ln\v U+\ln\v V$ and sums of harmonic functions are harmonic.

Trajectories often run out of the domain of the potential. For example, in the discussions about power-law potentials before, though not emphasized, the origin is outside the domain of the potential because it is either a pole or a zero of $\d w/\d z$ (except the trivial case where $w$ is simply proportional to $z$). Another example that is rather overlooked is that unbound trajectories go to infinity while infinity is often not in the domain of the potential, either.

What we need to take care of is that, when a trajectory runs out of the domain, it is cut off there, and the rest of the trajectory is never considered (even if it may come back into the domain later). Take the Kepler problem and the harmonic oscillator as an example. If a trajectory of the harmonic oscillator passes through the origin, which is outside the domain, the trajectory degrades from a closed ellipse to a segment. If you take the square of a segment passing through the origin, you get a broken line folded onto itself, which looks as if a particle in the Coulomb field could sink into the origin and then come back along the exact path it came along. This would be confusing if it were physical.

The construction of $z\mapsto w$ is not unique for a given $U$.

First, we can observe that the substitution $w\to w'\ceq w\e^{\i\tht}+w_0$ does not change $\v{\d w/\d z}$ (and thus does not change $U$). The real number $\tht$ is in principle a function of $z$, but if we want $w'$ to be holomorphic on a connected region, then $\tht$ must be a constant (except in the trivial case where $w=0$).

The dual trajectory does change, though, and the dual potential $V$ changes accordingly. Because $\v{\d z/\d w'}=\v{\d z/\d w}$, we have

$\fc{V'}{w'}=\fc Vw=\fc V{\p{w'-w_0}\e^{-\i\tht}}.$

Therefore, the dual trajectory and the dual potential are also rotated and translated by the same amount.

Before introducing scaling, I need to say a few words about unit systems. In the above discussions, I never mentioned what units or dimensions $z,w,A,B$ have. The natural choice is to let $z,w$ have the dimension of length and let $A,B$ have the dimension of energy. However, this is not the only choice. We will later see that the $z$-space and the $w$-space can have totally different dimensions.

The dimensions or units of variables in a physical formula can be totally different from what they were originally intended to be. For example, when a particle is rotating, its motion satisfies $\dot{\mbf r}=\bs\omg\times\mbf r$, where $\bs\omg$ is the angular velocity. Although $\mbf r$ has the dimension of length when first introduced, this formula is satisfied by any rotating vector; a typical example is the angular momentum of a rigid body doing precession, which changes according to the same formula. For another example, in classical mechanics and general relativity, the coordinates used to describe the motion of a particle often do not have the dimension of length but carry all sorts of dimensions. A less well-known example: because the Berry connection has the same gauge transformation as the electromagnetic potential, many formulas from electromagnetic theory can be applied to the Berry connection to define interesting quantities with rich physical implications; the units of the Berry connection, however, hardly matter because they are essentially arbitrary.

Therefore, what does a unit system actually bring us in a physical theory? The only thing it brings is the ability to see conveniently in what respects our theories are invariant under the scaling of some quantities. For example, in classical mechanics, we can scale the mass and the potential of any system by the same factor, and the system will still behave the same in terms of the time-dependent, length-based motion. This is because the part of the dimension of energy that is independent of length and time is the first power of the dimension of mass. For similar reasons, we can derive another two scaling invariances, one for length-scaling and the other for time-scaling. In quantum mechanics, we have one fewer such scaling invariance because of the existence of $\hbar$; in special relativity, we likewise have one fewer because of the existence of $c$; and in general relativity, we have two fewer because of the existence of $G$ and $c$. This is the motivation for introducing natural units in physics: they give us a clearer picture of how a theory can be scaled while leaving the physics invariant.

As for dimensional analysis, its essence is to find the form a theory must take in order to satisfy some sort of scaling invariance. For example, we can use dimensional analysis to derive that the frequency of a harmonic oscillator is proportional to the square root of the ratio of the stiffness to the mass. We know this must be correct because it is the only form consistent with the three scaling invariances that any theory within the framework of classical mechanics must satisfy.

Now, consider the scaling in $w$, i.e., $w\to w'\ceq w/C$ for some non-zero real number $C$. The potential $U$ can be kept invariant by scaling $A\to A'\ceq C^2A$. However, we cannot change $B$ if we want to leave the trajectory of $z$ unchanged because it is determined by the energy of the trajectory of $z$. Therefore, the dual potential $V$ would be scaled to

$\fc{V'}{w'}=C^2\fc Vw=C^2\fc V{Cw'}.$

This means that physics is unchanged if length is scaled by $C$ and energy and potential are both scaled by $C^2$. This corresponds to one of the three scaling invariances in classical mechanics that we talked about before.

What is interesting here is that the length-scaling in the $w$-space is done independently of that in the $z$-space. This means that the length dimensions of the two systems are independent of each other, so the two systems can have totally different unit systems.

The transformation from $z$ to $w$ seems like a coordinate transformation, which is covered by canonical transformations. However, here we have an additional requirement about the form of the Hamiltonian:

$H=\fr{p_z^2}{2m}+\fc Uz,\quad K=\fr{p_w^2}{2m}+\fc Vw,$

where $K$ is the transformed Hamiltonian (also called the Kamiltonian in the jargon of canonical transformations). This is not generally true because the transformation of the generalized momentum is fully determined once the transformation of the generalized coordinate is given. From the proof of the original theorem, we can see that a transformation of time is a must, which is given by $\d\tau=\v{\d w/\d z}^2\,\d t$.

The problem is that the canonical transformations covered in most textbooks do not allow for a transformation of time, but only of the canonical variables. Therefore, I first need to address the problem of integrating the transformation of time into the theory of canonical transformations. I will not do this in the most general case, but only in a case general enough for the purpose of this article.

Before diving into the general canonical transformation, let’s first consider the case where the transformation is only in the time variable.

Consider a system with the Lagrangian $\fc L{q,\dot q}$ (not explicitly dependent on time). Then, the action can be expressed as

$S=\int_{t_1}^{t_2}\fc L{q,\dot q}\d t.$

The same integral can be expressed in terms of a new time variable $\tau$ as

$S=\int_{\tau_1}^{\tau_2}\fc L{q,\mathring q\dot\tau}\fr{\d\tau}{\dot\tau},$

where $\mathring q\ceq\d q/\d\tau$ is the generalized velocity in the new time variable. The transformed Lagrangian, or what I want to call the **Magrangian**^{1}, is then

$\fc M{q,\mathring q}\ceq\fc L{q,\mathring q\dot\tau}\fr1{\dot\tau}.$ $(1)$

For the case that concerns us, $\dot\tau$ is a positive real function of $q$ that does not (explicitly) depend on $t$. The limits $\tau_1,\tau_2$ satisfy the condition

$\tau_2-\tau_1=\int_{t_1}^{t_2}\fc{\dot\tau}q\,\d t.$

This relation is crucial. When finding the variation $\dlt S$, we are fixing $t_1,t_2$. However, we cannot fix both $\tau_1,\tau_2$ because their difference is dependent on the path $\fc qt$. What we can do is to fix $\tau_1$ and to let $\tau_2$ have a variation given by

$\dlt\tau_2=\int_{t_1}^{t_2}\fc{\dot\tau'}q\dlt q\,\d t =\int_{\tau_1}^{\tau_2}\fr{\fc{\dot\tau'}q}{\fc{\dot\tau}q}\dlt q\,\d\tau,$

where $\dot\tau'$ is the derivative (or gradient, in higher dimensions) of $\dot\tau$ as a function of $q$. As can be seen, only if $\dot\tau$ is a constant (i.e., $\tau$ is simply an affine transform of $t$) does $\dlt\tau_2$ vanish for any $\dlt q$.

Using the well-known variation of the action when there is variation in the time coordinate, we have

$\dlt S=\int_{\tau_1}^{\tau_2} \p{\fr{\partial M}{\partial q}-\fr{\d}{\d\tau}\fr{\partial M}{\partial\mathring q}}\dlt q\,\d\tau -\fc{K}{\fc q{\tau_2},\fc{\mathring q}{\tau_2}}\dlt\tau_2,$

where

$\fc K{q,\mathring q}\ceq\mathring q\fr{\partial M}{\partial\mathring q}-M$

is the energy (or the Kamiltonian, but as a function of generalized coordinates and velocities) of the system.

Because $\fc q{\tau_2}$ is fixed, we have

$\fc q{\tau_2}=\fc q{\tau_2+\dlt\tau_2}+\fc{\dlt q}{\tau_2+\dlt\tau_2} =\fc q{\tau_2}+\fc{\mathring q}{\tau_2}\dlt\tau_2+\fc{\dlt q}{\tau_2} \implies\fc{\dlt q}{\tau_2}=-\fc{\mathring q}{\tau_2}\dlt\tau_2.$

Now, calculate the variation of the action:

$\dlt S=\int_{\tau_1}^{\tau_2} \p{\fr{\partial M}{\partial q}\dlt q+\fr{\partial M}{\partial\mathring q}\dlt\mathring q}\d\tau +\fc M{\fc q{\tau_2},\fc{\mathring q}{\tau_2}}\dlt\tau_2.$

Recall the derivation of the Euler–Lagrange equation. For the second term in the integrand, we can integrate by parts to get

$\int_{\tau_1}^{\tau_2}\fr{\partial M}{\partial\mathring q}\dlt\mathring q\,\d\tau =\abar{\fr{\partial M}{\partial\mathring q}\dlt q}{\tau_1}^{\tau_2} -\int_{\tau_1}^{\tau_2}\fr{\d}{\d\tau}\fr{\partial M}{\partial\mathring q}\dlt q\,\d\tau =\abar{-\fr{\partial M}{\partial\mathring q}\mathring q}{\tau_2}\dlt\tau_2 -\int_{\tau_1}^{\tau_2}\fr{\d}{\d\tau}\fr{\partial M}{\partial\mathring q}\dlt q\,\d\tau.$

Substitute this back into the expression for $\dlt S$, and we have the desired result.

If we let the first term in $\dlt S$ vanish, we would get the well-known Euler–Lagrange equation:

$\fr{\partial M}{\partial q}-\fr{\d}{\d\tau}\fr{\partial M}{\partial\mathring q}=0.$ $(2)$

However, that term is not zero because there is another term in $\dlt S$. If we want the Euler–Lagrange equation to be satisfied, we need the second term to be zero. This means that either $K$ is zero or $\dlt\tau_2$ is zero. The latter case will lead us to the trivial case because we have just derived that $\dlt\tau_2$ is zero only if $\dot\tau$ is a constant. The former case can be satisfied, however. If the Euler–Lagrange equation is satisfied, then $K$ is a conserved quantity due to the symmetry of $M$ in $\tau$-translation. Then, if $K$ happens to be zero at some point, it will be zero over the whole motion, and the stationary-action principle will be satisfied by the motion between any two points.

We can explicitly show that Equation 2 can be derived from the original Euler–Lagrange equation under the zero-energy condition.

*Proof.* We need to first derive the condition of zero energy in the old time variable. Take derivatives of Equation 1 with respect to $\mathring q$, and we have

$\fr{\partial M}{\partial\mathring q}=\fr{\partial L}{\partial\dot q}\dot\tau\fr1{\dot\tau} =\fr{\partial L}{\partial\dot q}.$

Therefore, the Kamiltonian is

$K=\fr{\partial M}{\partial\mathring q}\mathring q-M=\fr{\partial L}{\partial\dot q}\fr{\dot q}{\dot\tau}-\fr L{\dot\tau} =\fr H{\dot\tau},$ $(3)$

where $H\ceq\dot q\partial L/\partial\dot q-L$ is the original Hamiltonian. This relation means that the condition $K=0$ is equivalent to the condition $H=0$.

Then, use Equation 1 to explicitly calculate the left-hand side of Equation 2:

$\begin{align*} \fr{\partial M}{\partial q}-\fr{\d}{\d\tau}\fr{\partial M}{\partial\mathring q} &=\p{\fr{\partial L}{\partial q}+\fr{\partial L}{\partial\dot q}\mathring q\fc{\dot\tau'}q}\fr1{\fc{\dot\tau}q} -L\fr{\fc{\dot\tau'}q}{\fc{\dot\tau}q^2}-\fr1{\fc{\dot\tau}q}\fr{\d}{\d t}\fr{\partial L}{\partial\dot q}\\ &=\p{\fr{\partial L}{\partial q}-\fr{\d}{\d t}\fr{\partial L}{\partial\dot q}}\fr1{\fc{\dot\tau}q} +\p{\fr{\partial L}{\partial\dot q}\dot q-L}\fr{\fc{\dot\tau'}q}{\fc{\dot\tau}q^2}\\ &=0. \end{align*}$ $\square$

We will see that specifying $\dot\tau$, which is what we have done in the above discussion, is pretty different from specifying $\tau$. The latter is much simpler, but the former is the one that is used for the conformal duality between potentials. Although I do not have to discuss what the transformation looks like when we specify $\tau$ instead of $\dot\tau$, I will still do so to point out how different it is from the case we have discussed.

Recall that a canonical transformation is just a transformation of coordinates in the phase space that preserves the canonical one-form up to a total differential. Incorporating the idea of time transformation is difficult because time is not a coordinate in the phase space. Including the time coordinate, the actual one-form that needs to be preserved is

$\d S=p\,\d q-H\,\d t,$

which is exactly the total differential of the action. Therefore, we have

$p\,\d q-H\,\d t=P\,\d Q-K\,\d\tau+\d G,$ $(4)$

where $P,Q$ are the new canonical variables, $K$ is the transformed Hamiltonian, and $G$ is called the generating function of the canonical transformation. Assume $\tau$ and $G$ are both functions of $q,Q,t$. Then, we have

$p\,\d q-H\,\d t=P\,\d Q -K\p{\fr{\partial\tau}{\partial q}\,\d q+\fr{\partial\tau}{\partial Q}\,\d Q+\fr{\partial\tau}{\partial t}\,\d t} +\fr{\partial G}{\partial q}\,\d q+\fr{\partial G}{\partial Q}\,\d Q+\fr{\partial G}{\partial t}\,\d t.$

Compare the coefficients of $\d q,\d Q,\d t$ on both sides, and we have

$p+K\fr{\partial\tau}{\partial q}-\fr{\partial G}{\partial q}=0,\quad P-K\fr{\partial\tau}{\partial Q}+\fr{\partial G}{\partial Q}=0,\quad H-K\fr{\partial\tau}{\partial t}+\fr{\partial G}{\partial t}=0.$ $(5)$

These equations determine $Q,P,K$. The new variables satisfy Hamilton’s equations:

$\fr{\d Q}{\d\tau}=\fr{\partial K}{\partial P},\quad \fr{\d P}{\d\tau}=-\fr{\partial K}{\partial Q}.$

Consider the Hamiltonian $H=p+q$. The motion is

$q=q_0+t,\quad p=p_0-t.$

Consider the new time variable $\tau=t/q$ and the generating function $G=Qq$. With Equation 5 and the expression for $H$ and $\tau$, we have a set of five equations:

$\begin{dcases} p-K\fr1{q^2}t-Q=0,\\ P+q=0,\\ H-K\fr1q=0,\\ \tau=\fr tq,\\ H=p+q \end{dcases}\implies\begin{dcases} q=-P,\\ p=\fr{Q-P\tau}{1-\tau},\\ K=\fr{\p{P-Q}P}{1-\tau},\\ t=-P\tau,\\ H=\fr{Q-P}{1-\tau}. \end{dcases}$

With the expression for the Kamiltonian $K$, we get the motion of $Q,P$:

$Q=\fr{\p{2-\tau}\tau}{1-\tau}P_0+\p{1-\tau}Q_0,\quad P=\fr{P_0}{1-\tau}.$

This is consistent with the motion of $q,p$, as can be verified by direct calculation.
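The consistency can also be checked numerically; here is a quick sketch in Ruby (the initial values $q_0=2$, $p_0=3$ and the time $t=1$ are arbitrary choices of mine):

```ruby
q0, p0, t = 2.0, 3.0, 1.0
q = q0 + t                 # motion generated by H = p + q
p = p0 - t
tau = t / q                # the new time variable tau = t/q
k = (p + q) * q            # K = H q, from the third equation
big_q = p - k * t / q**2   # Q = p - K t/q^2, from the first equation
big_p = -q                 # P = -q, from the second equation
# The claimed motion of Q, P, with Q0 = p0 and P0 = -q0:
q_expected = (2 - tau) * tau / (1 - tau) * -q0 + (1 - tau) * p0
p_expected = -q0 / (1 - tau)
ok = (big_q - q_expected).abs < 1e-9 && (big_p - p_expected).abs < 1e-9
puts ok
```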

It seems that specifying $\tau$ is much easier than specifying $\dot\tau$. We can easily discuss the most general case and perfectly recover the equation of motion without having to impose a bizarre condition like zero energy. This is because specifying $\dot\tau$ is, in some sense, more general than specifying $\tau$: we can always find the total derivative of $\tau$ for any form of it, but we cannot always find $\tau$ given the form of $\dot\tau$ because of integrability constraints.

Now, we can discuss the conformal transformation as a canonical transformation. The procedure is pretty analogous to that in the previous section, but this time the conclusion would only be valid under the zero-energy condition.

Denote the real and imaginary parts of $z$ as $x,y$, and the real and imaginary parts of $w$ as $X,Y$. The Cauchy–Riemann equations give

$u\ceq\fr{\partial X}{\partial x}=\fr{\partial Y}{\partial y},\quad v\ceq\fr{\partial X}{\partial y}=-\fr{\partial Y}{\partial x}.$

Here $u,v$ are two real functions defined for convenience. They can be regarded either as functions of $x,y$ or as functions of $X,Y$, whichever is more convenient. With $u,v$, we have

$\d X=u\,\d x+v\,\d y,\quad\d Y=-v\,\d x+u\,\d y.$

The time transformation is given by

$\dot\tau=\v{\fr{\d w}{\d z}}^2=u^2+v^2.$
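As a concrete sanity check, take the map $w=z^2$ (my own choice for illustration), for which $X=x^2-y^2$, $Y=2xy$, $u=2x$, and $v=-2y$:

```ruby
x, y = 1.3, -0.7
u = 2 * x                  # u = dX/dx
v = -2 * y                 # v = dX/dy
dwdz = 2 * Complex(x, y)   # dw/dz = 2z for w = z^2
# |dw/dz|^2 equals u^2 + v^2, as claimed:
check = (dwdz.abs2 - (u**2 + v**2)).abs < 1e-12
puts check
```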

The original Hamiltonian is

$H=\fr{p_x^2+p_y^2}{2m}+A\p{u^2+v^2}+B$

(the last term is added because we want it to be zero during the motion). Substitute these into Equation 4, and we have ($\d G=0$)

$\begin{align*} &p_x\,\d x+p_y\,\d y-\p{\fr{p_x^2+p_y^2}{2m}+A\p{u^2+v^2}+B}\d t\\ ={}&P_X\p{u\,\d x+v\,\d y}+P_Y\p{-v\,\d x+u\,\d y}-K\p{u^2+v^2}\d t. \end{align*}$

Then, after some calculations, we obtain exactly the expected result

$p_x=uP_X-vP_Y,\quad p_y=vP_X+uP_Y,\quad K=\fr{P_X^2+P_Y^2}{2m}+\fr{B}{u^2+v^2}+A.$

The condition $K=0$ specifies the energy of the dual trajectory.

For unknown reasons, the transformed Hamiltonian is called the Kamiltonian just because we often use the symbol $K$ to represent it. However, there is no similar convention for the transformed Lagrangian, so I would like to use the letter $M$ and call it the Magrangian. The surname “Lagrange” originates from the French phrase *la grange* (meaning “the barn”), and correspondingly “Magrange” may refer to the French phrase *ma grange* (meaning “my barn”). This pun then makes “Magrangian” kind of mean “my Lagrangian”.

You can also embed a Twitter post like this:

Twitter post from UlyssesZhan

However, there is no official way of embedding a Mastodon timeline. At most, you can embed a specific Mastodon post like this:

[Embedded Mastodon post]

This just embeds a specific Mastodon post instead of dynamically grabbing the latest posts. Also, this embed requires JavaScript on the client side, which I have been trying to avoid. Another downside of this embed is that it does not have a light-theme version.

Thanks to Mastodon’s API, the community implemented various ways of embedding Mastodon timelines or posts. I then decided to develop my own way of embedding Mastodon posts. Here was the roadmap:

- The home page of my website shows my latest Mastodon post.
- It should be rendered server-side, without the necessity of client-side JavaScript.
- Blend the post into the webpage with a style consistent with the rest of the page.

How do I ensure the embedded post is always the latest one if it is rendered server-side? This means I have to somehow trigger the building and deployment of my website automatically whenever a new Mastodon post is created. Thanks to the Huginn instance deployed on my self-hosted server, I can monitor my Mastodon account and trigger a GitHub Actions workflow whenever there is a new post.

Here is then the idea of implementing the roadmap:

- On the Jekyll side:
  - Write a Jekyll hook at `:site` `:after_init` that reads the RSS feed of my Mastodon account to get all the information I need.
  - Write a Liquid template that can be populated with the collected information.
  - Include the Liquid template in the home page of my website and write some SCSS to style it.
- On the GitHub side:
  - Use GitHub Actions to build and deploy on GitHub Pages and make the GitHub Actions triggered by `workflow_dispatch`.
  - Create a GitHub personal access token. It will then be used to trigger GitHub Actions through the REST API.
- On the Huginn side:
  - Create an agent to monitor the RSS feed of my Mastodon account.
  - Create an agent to send HTTP requests to invoke GitHub’s REST API. It receives events from the first agent and triggers the GitHub Actions workflow.
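The request that the second agent sends is essentially GitHub’s `workflow_dispatch` REST API call. A sketch with curl (the repository, workflow file name, branch, and token here are placeholders):

```
curl -X POST \
-H "Accept: application/vnd.github+json" \
-H "Authorization: Bearer YoUr.GiThUb.PaT" \
-d '{"ref":"master"}' \
https://api.github.com/repos/OWNER/REPO/actions/workflows/build.yml/dispatches
```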

Great! Now, my website can show my latest Mastodon post on the home page.


**Exercise 3.9.** Show that five-fold rotation symmetry is inconsistent with lattice translation symmetry in 2D. Since 3D lattices can be formed by stacking 2D lattices, this conclusion holds in 3D as well.

Before I saw this problem, I had never thought about whether a plane lattice can have $m$-fold symmetry for any positive integer $m$. I was surprised at first that one cannot have a translationally symmetric lattice with 5-fold symmetry. After some thinking, I realized that I could not imagine a 5-fold symmetric plane lattice, so intuitively such a lattice cannot exist.

Actually, the only allowed rotational symmetries are 2-fold, 3-fold, 4-fold, and 6-fold. This result is known as the crystallographic restriction theorem. Then, how do we prove it?

After playing around with the possible structure of the symmetry group of a plane lattice, I finally proved it. I found that this proof is actually a simple and good example of how algebraic number theory can be used in physics.

Before diving into the proof, we need to first prove a simple lemma from real analysis:

**Lemma 1.** If $G$ is a subgroup of $(\mathbb R^2,+)$ that is discrete and spans $\mathbb R^2$, then there exist two linearly independent elements of $G$ that generate $G$.

*Proof.* Because $G$ spans $\mathbb R^2$, there exist two linearly independent elements $g_1,g_2\in G$.

Consider the vector subspace $V_1\coloneqq g_1\mathbb R$ and the subgroup $G_1\coloneqq G\cap V_1$. Obviously, $G_1$ should be generated by some element $h_1\in G_1$ (this is because $V_1\simeq\mathbb R$, and $G_1$ as a discrete set must have a smallest positive element under that isomorphism, which must be the generator of $G_1$ because it would otherwise not be the smallest positive element). Therefore, $G_1=h_1\mathbb Z$. Also, because $h_1\ne0$, $\left\{h_1,g_2\right\}$ must span $\mathbb R^2$.

Let

$T\coloneqq\left\{ah_1+bg_2\in G\,\middle|\,a\in\left[0,1\right),b\in\left[0,1\right]\right\}.$

Then, $T$ must be discrete (because $G$ is) and bounded, and contains at least the element $g_2$. Express every element in $T$ as $ah_1+bg_2$ and pick out the one element with the smallest non-zero $b$, and denote it as $h_2=a^\star h_1+b^\star g_2$. Certainly, $\left\{h_1,h_2\right\}$ span $\mathbb R^2$.

Now, for any $g\in G$, we can express it uniquely as $g=ah_1+bg_2$. Define

$c_2\coloneqq\left\lfloor\frac{b}{b^\star}\right\rfloor,\quad c_1\coloneqq\left\lfloor a-a^\star c_2\right\rfloor,\quad g'\coloneqq g-c_1h_1-c_2h_2.$

Then, $g'\in T$, and if we express it as $g'=a'h_1+b'g_2$, then $b'$ is smaller than $b^\star$. By definition of $b^\star$, $b'=0$, so $g'\in G_1$. Hence, $\left\{h_1,h_2\right\}$ generates $G$. $\square$

Now, we are ready to prove our main result:

**Theorem.** There is a discrete subset of $\mathbb R^2$ that has both translational symmetry and $m$-fold symmetry iff $\varphi(m)\le2$, where $\varphi$ is Euler’s totient function.

*Proof.* For the necessity, I prove the contrapositive: a set that has the said symmetries with $\varphi(m)>2$ must not be discrete.

Denote the plane as $\mathbb C$. Assume that there is an $m$-fold symmetry around point $0$. Then, for any lattice site $z$, the point $Rz\coloneqq\alpha z$ (where $\alpha\coloneqq\mathrm e^{2\pi\mathrm i/m}$) is also a lattice site. Assume that there is a translational symmetry with translation $a$, then the point $Tz\coloneqq z+a$ is also a lattice site. Without loss of generality, we can adjust the orientation of our coordinate system and the length unit so that $a=1$.

The group $G$ generated by $\{R,T\}$ is a subgroup of the symmetry group of the lattice. Its action

$S\coloneqq\left\{g0\,\middle|\,g\in G\right\}$

on the point $0$ is a subset of all the lattice sites (this is only true when $0$ is a lattice site; I will discuss later the other case). Notice that for any $z\in S,n\in\mathbb Z$, we have $T^nRz=n+\alpha z\in S$. Therefore, by expanding any polynomial with integer coefficients using Horner’s rule, we can see that $\mathbb Z[\alpha]\subseteq S$.

Because $\alpha$ is an algebraic integer of degree $\varphi(m)$ (the minimal polynomial of $\alpha$ is the $m$th cyclotomic polynomial), $\mathbb Z[\alpha]$ is a free abelian group of rank $\varphi(m)$, so any generating set of it must have at least $\varphi(m)$ elements. Therefore, according to Lemma 1, $\mathbb Z[\alpha]$ cannot be discrete unless $\varphi(m)\le2$.

For the case where $0$ is not a lattice site, we can generate $S$ by letting $G$ act on any lattice site $z_0$. We can then easily prove that $z_0+\mathbb Z[\alpha]\subseteq S$. To see this, note that we can act with $R^{-k}$ on $z_0$ before acting with $T^nR$ on it $k$ times. All the other steps are the same and still valid.

For the sufficiency, note that there are only finitely many $m$’s that satisfy $\varphi\!\left(m\right)\le2$, namely $m\in\left\{1,2,3,4,6\right\}$. We can enumerate these $m$’s and see that we can easily construct a plane lattice with both translational symmetry and $m$-fold symmetry for each of them. $\square$
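As a quick numerical illustration of the necessity for $m=5$ (my own sketch, not part of the original exercise): the element $\beta\coloneqq\alpha+\alpha^4=2\cos(2\pi/5)\approx0.618$ lies in $\mathbb Z[\alpha]$, so all of its powers do too, and they approach $0$:

```ruby
# beta = alpha + alpha^4 is a real element of Z[alpha] with 0 < |beta| < 1,
# so its powers are nonzero elements of Z[alpha] arbitrarily close to 0.
alpha = Complex.polar(1, 2 * Math::PI / 5) # primitive 5th root of unity
beta = alpha + alpha**4                    # = 2cos(2*pi/5), about 0.618
small = (beta**30).abs                     # tiny, yet beta^30 is in Z[alpha]
puts beta.abs
puts small
```

Lattice points would therefore accumulate near $0$, contradicting discreteness.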

I know the original problem in the book was probably not intended to be solved in this way, but it is really amazing how some seemingly purely mathematical areas can have their applications in physics, especially in an exercise problem of a physics textbook where pure mathematics is pretty unexpected.

Unfortunately, this proof, which is based on algebraic properties of certain complex numbers, does not generalize to higher dimensions because we cannot use the complex plane to represent a high-dimensional space.

I have been using MathJax as a client-side equation renderer to render equations on my blog for a long time.

The main problem with client-side rendering is that it makes people who turn off JavaScript in their browsers (e.g. for privacy reasons) unable to see the equations in my articles. Another problem is that it is annoying to wait for the browser to render all the equations, especially if the site owner could have rendered them for you.

I actually have had some experience in server-side equation rendering in Jekyll. In a past post, I talked about how I used Jekyll and KaTeX to render equations in emails server-side. For the website of Sunniesnow (see here for a related post), I use jekyll-katex to render the equations server-side.

Then, I thought, what is stopping me from rendering equations server-side on my blog? I then started the migration.

The easiest way to switch to server-side equation rendering is just to use kramdown-math-katex. Install the gem, add the option `math_engine: katex` to the Kramdown configuration in `_config.yml`, add the needed CSS to the theme, and… What is my computer doing? It is just stuck at building the site!

By adding the `--verbose` option to the `jekyll serve` command, I could see what it was doing. It was never stuck on any single step, but rendering each article that has equations (especially those with a ton of them) takes seconds. Because I have dozens of articles with equations, it takes minutes to build the site. It seems that although KaTeX has always advertised itself as the fastest math typesetting library for the web, it is not fast enough for me to use it to render equations server-side.

A way to mitigate this issue is to use the `--incremental` option of `jekyll serve`. This makes the building much faster except for the first time. I can also expect Jekyll to support lazy building in the future, which will entirely skip the building phase and build the files as needed on the fly.

I found another way to partially mitigate this issue. On my blog, I have been extensively utilizing the `markdownify` filter to render Markdown inside the templates, including the titles of the posts, the excerpts of the posts, and more. Those are rendered in multiple places, including the homepage, the archive page, the RSS feed, and the search page. Since rendering Markdown is now very slow, I decided to cache the rendered Markdown snippets. A very simple strategy is as follows:

```
def markdownify input
  UlyssesZhan.markdown_snippet_cache[input] ||= Filters.instance_method(:markdownify).bind_call self, input
end
```

Also, most of the time I do not actually need to see the Markdown styling in the titles and excerpts, so I can also disable the `markdownify` filter depending on the site configuration, like this:

```
def markdownify input
  return input if @context.registers[:site].config['avoid_markdown']
  UlyssesZhan.markdown_snippet_cache[input] ||= Filters.instance_method(:markdownify).bind_call self, input
end
```

If I do not want to modify the site configuration file, I can also utilize an environment variable. I can use an `:after_init` hook to set the configuration item based on the environment variable.
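A sketch of such a hook as a Jekyll plugin file (the environment variable name `JEKYLL_AVOID_MARKDOWN` and the file name here are my own assumptions):

```
# _plugins/env_config.rb (sketch)
# Set a site configuration item from an environment variable before building.
Jekyll::Hooks.register :site, :after_init do |site|
  site.config['avoid_markdown'] = true if ENV['JEKYLL_AVOID_MARKDOWN']
end
```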

Rendering archives has also been very slow even with this Markdown disabling trick (for some reason that I do not know). I decided to use another environment variable to disable the rendering of archives. Change the line `gem 'jekyll-archives'` in the Gemfile to this:

```
gem 'jekyll-archives', install_if: !ENV['JEKYLL_NO_ARCHIVE']
```

By using `--incremental` and these two tricks together, I can finally build the site in seconds if I only modify one post during `jekyll serve`.

It seems that I cannot cross-reference equations using server-side means. First, KaTeX does not support cross-referencing, and the current workarounds are not acceptable for my use cases.

I then looked at kramdown-math-mathjaxnode, which uses the MathJax Node library to render equations server-side. The MathJax Node library itself does support rendering equation numbers, but kramdown-math-mathjaxnode does not support cross-referencing either. What is worse is that it has not been maintained for years, which means I probably had to rewrite the plugin myself, but I did not have spare time.

Even worse, Kramdown is just not suitable for implementing cross-referencing. I briefly looked at Kramdown’s source code, and I realized that if I were to write a math engine for Kramdown that supports cross-referencing, I would have to refactor Kramdown a bit. Actually, cross-referencing is quite a non-trivial feature for markup languages because of references that cannot be resolved during the first compilation. In $\LaTeX$, those references are resolved in the second compilation. I would need to refactor Kramdown to support a similar workflow to make it possible to implement cross-referencing.

Then, I looked at other Markdown engines. For Ruby, the only successful Markdown engine besides Kramdown that I know of is Redcarpet (it used to be the default Markdown engine of Jekyll), and it was not designed with cross-referencing in mind either. Its developer even refused to support math-related features a long time ago.

This is why I looked at non-Ruby Markdown engines. The first option that I came up with and also the option that I finally chose is Pandoc.

Pandoc is powerful in that its form of customization is *filters*, which transform the whole parsed AST of the document. Because the whole AST is visible at once to a filter, it is possible to implement cross-referencing with a filter. Fortunately, someone has already written such a filter, and it is called pandoc-crossref. What is good about this approach is that it is independent of the math engine that I use: I can use MathJax or KaTeX, client-side or server-side, and it does not matter. The only drawback is that it does not support cross-referencing a particular line in an `align` or `eqnarray` environment, which is a feature that I have used in some of my posts. I had to reword those posts to avoid using that feature.

Now that we have a filter, we then need a way to let Pandoc render the math expressions server-side. Fortunately (again), someone has already written a filter for this purpose, and it is called pandoc-katex. Append this filter after the pandoc-crossref filter, and we are done.

The drawback of Pandoc is that it has no Ruby implementation, which means the only way to utilize Pandoc in Jekyll is to write a wrapper of it in Ruby and develop a Jekyll plugin for using that wrapper of Pandoc as the Markdown engine. Fortunately, someone has already done this: the wrapper is called pandoc-ruby, and the Jekyll plugin is called jekyll-pandoc.

Although the math rendering problem is solved, a somewhat unrelated problem arises: Pandoc does not use Rouge to highlight code blocks, but I like Rouge. Unfortunately, no one has written a Pandoc filter to use Rouge to highlight code blocks for me; but fortunately, I can write one myself quickly because it is easy enough, especially if I utilize Paru, which contains an API library to help me with writing Pandoc filters in Ruby.
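To give an idea of what such a filter does, here is a minimal stand-alone sketch in plain Ruby (my actual filter uses Paru’s API rather than raw JSON, and the emitted HTML below is just a placeholder for real Rouge output): a Pandoc filter walks the JSON AST and replaces each `CodeBlock` element with something else, here a `RawBlock` of HTML.

```ruby
# Walk a Pandoc AST (as parsed from its JSON form) and replace every
# CodeBlock with a RawBlock of HTML -- the spot where Rouge-highlighted
# markup would go. A real filter reads the JSON AST from stdin and writes
# the transformed AST to stdout.
def highlight_code_blocks node
  case node
  when Array
    node.map { |child| highlight_code_blocks child }
  when Hash
    if node['t'] == 'CodeBlock'
      _attr, code = node['c'] # CodeBlock contents are [Attr, String]
      { 't' => 'RawBlock', 'c' => ['html', "<pre class=\"highlight\"><code>#{code}</code></pre>"] }
    elsif node.key?('c')
      node.merge('c' => highlight_code_blocks(node['c']))
    else
      node
    end
  else
    node
  end
end

# A tiny AST fragment: one paragraph and one Ruby code block.
blocks = [
  { 't' => 'Para', 'c' => [{ 't' => 'Str', 'c' => 'Hello' }] },
  { 't' => 'CodeBlock', 'c' => [['', ['ruby'], []], 'puts 1'] }
]
transformed = highlight_code_blocks blocks
puts transformed[1]['t'] # the CodeBlock became a RawBlock
```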

Paru is actually an alternative to pandoc-ruby. Now that I also use Paru, I started to wonder if I should use pandoc-ruby at all. Considering that jekyll-pandoc has not been maintained for years, I decided to write my own Jekyll plugin to use Paru as the Markdown engine, and the simple plugin is called jekyll-paru.

Using kramdown-math-katex was the only option for which I would not need to adjust most of my posts. Another option, jekyll-katex, is not compatible with the markup that I use to write equations. I could not just wrap the whole `{{ content }}` inside the `{% katexmm %}` block either (due to some errors that I do not know the cause of), and the error messages were impossible to utilize to help me locate the incompatibilities.

For the option that I finally chose, Pandoc, I also had to adjust most of my posts. The major incompatibility is that I needed to change all `\label` and `\ref` commands to the format recognizable by pandoc-crossref. Another incompatibility is that I needed to use `{target=_blank}` instead of `{:target="_blank"}` to indicate a link to be opened in a new tab (as well as the other HTML attributes that I embed in Markdown with this syntax). Also, Pandoc does not allow blank lines inside display math blocks, which I had used in some of my posts (by the way, $\LaTeX$ does not allow those blank lines either, which is pretty annoying).

I then wrote a simple script that uses regular expressions to help me with this refactoring task. However, because of the diversity of the syntaxes that I used, I still needed to check the posts manually after running the script. This made the refactoring task still very tedious.

Now, to build my site, the machine needs pandoc, pandoc-crossref, and pandoc-katex, none of which are Ruby gems. I need to set up a Haskell environment and a Rust environment to install them. In GitHub Actions, I can use haskell-actions/setup to set up the Haskell environment and cargo-install to install Cargo packages.

I do not know how I managed to make the GitHub Actions workflow file work expectedly at one shot, but I did.

I have been using jekyll-toc to generate the table of contents for each post. The problem with using it now is that it strips the HTML in headings and only keeps the text, so headings with math expressions will not be rendered with nice math typesetting. It was not a problem previously because the client-side math rendering script will render the math expressions in the table of contents. Now that I switched to server-side math rendering, I had to patch jekyll-toc to make it work.

The search functionality was implemented by myself. It is a simple client-side searching powered by Lunr. I also had to refactor the search functionality a bit to make the search results be rendered with math expressions (which were previously also handled by the client-side math rendering script).

The reason that I updated the theme is actually quite dramatic. It originated from me trying to use kramdown-math-katex. To ensure that the KaTeX CSS has the correct version for the KaTeX renderer used by katex-ruby, I decided to `@import` the SCSS file found in the repo of katex-ruby into my theme. I found that the SCSS file uses a function `asset-path` to load the fonts, but my CSS pre-processor does not support it, so I tried to extend my CSS pre-processor.

Jekyll uses jekyll-sass-converter to render CSS files, which once (v2) used sassc, but now (v3) uses sass-embedded. The former does not support extension of custom SCSS functions, but the latter does. Therefore, I need to upgrade my jekyll-sass-converter to v3. I actually could have upgraded it earlier because I have been using Jekyll v4 for a long time, but I deliberately kept using jekyll-sass-converter v2 because jekyll-action, which I used, had an issue about using sass-embedded. However, I have long ago migrated from jekyll-action to GitHub’s official upload-pages-artifact, so I can now upgrade jekyll-sass-converter to v3.

Then why does this have anything to do with the theme I used (which is Minima)? After I upgraded jekyll-sass-converter to v3, I found that there are some deprecation warnings in the SCSS files (they are actually already fixed, but I do not know why the issue is still open). This was also when I noticed that Minima has not released a new version **for 4 years**, and the last stable release is v2.5.1.

Then, how did I upgrade to Minima v3? I actually just tried to use the master branch of the Git repo of Minima, and I found that it was great.

I am glad to see that Minima v3 introduced the include `custom-head.html`, which allows for custom additional HTML metadata, and the SCSS files `minima/custom-variables.scss` and `minima/custom-styles.scss`, which allow for custom SCSS rules to override the default ones.

Although it took me some time to migrate my already present SCSS files and HTML metadata to the new structure, I am glad that Minima adopted this new structure that is more useful and more modern.

Another feature that I really like about Minima v3 is the support for skins. Minima now comes with several pre-defined skins which I can choose from. The default skin, called `classic`, is the one that originated from Minima v2, and it is the one on which I based my own skin.

I still remember that a long time ago I tried to make my site support a dark theme. It was such a pain because there are so many colors hardcoded in the theme that I had to rewrite a large part of the SCSS files provided by Minima. Now, Minima v3 has a pre-defined skin called `auto`, which adaptively looks the same as `classic` or `dark` based on the browser’s `prefers-color-scheme`. I can now implement my skin based on `auto` (select my skin in the site’s configuration file and `@import` the `auto` skin in my skin’s SCSS file), and the code is now much cleaner.

## Halloween Challenge

It’s the weekend and you’ve just completed a seance with friends. After communing with the dead, you realize a mysterious message was left behind.

```
3🍬4🎃04🎃6👻00🎃62🎃6👻32👻5🎃4🍬42🎃4🎃2🎃6🍬3🎃52🍬3🎃6💀0🎃2🎃6🍬13🍬0🎃432👻4👻4🎃230🎃62🎃1🍬03🎃2🍬6
🍬4👻5👻3🎃220🎃5👻0👻5🎃4🎃6👻42🎃4👻01🎃60🍬1🎃2👻3👻30🎃6💀0👻0🍬3🎃5👻0👻5🎃6👻0🍬30🎃61🍬0🎃1🎃2🎃6🎃42👻3🍬03🎃2💀3
🎃0🎃2👻5🎃22🍬3🎃5🎃6🍬3🎃5🎃2🎃6🎃52🍬4👻5🍬3🎃2🎃1🎃6👻4🍬0🍬0👻5🍬6🎃6👻0🍬30🎃604👻5🍬32💀1🎃6💀0🎃2🎃6🎃2💀1🍬1👻3🍬03🎃2🍬6
4🎃2🍬3🎃6👻0👻5🎃6🍬3🎃5🎃2🎃6💀0🍬03👻3🎃1🎃6🍬0🎃3🎃6🍬13🍬0🎃432👻4👻4👻0👻5🎃4🍬6🎃6👻0🍬3🎃6🍬3🎃53👻0🍬5🎃20🎃6🎃2🍬5🎃2👻5🎃6👻4🍬03🎃2💀3
👻0👻5🎃6🍬3🎃5🎃2🎃6134🍬1🍬3👻01🎃6👻4🎃2🍬3🎃5🍬0🎃10🍬6🎃6💀0🎃5🎃23🎃2🎃6🎃2🎃23👻0🎃2🎃6🎃0🍬4🎃40🎃6👻424🎃6🍬33🎃22🎃1🍬6
0👻2🎃2👻3🎃2🍬3🍬0👻50🎃6🍬0🎃3🎃6🎃233🍬030🎃6👻0👻5🎃6🍬3🎃5🎃2🎃6🎃123👻2🎃623🎃2🎃6💀0👻0🎃1🎃20🍬13🎃22🎃1💀3
👻5🍬0🍬3🎃62🎃6👻32👻5🎃4🍬42🎃4🎃2🎃6🍬0🎃3🎃6🍬3🎃5🎃2🎃6🍬120🍬3🍬6🎃6🎃0🍬4🍬3🎃6🍬0👻5🎃2🎃6💀0🎃2🎃6🎃5🍬0👻3🎃1🎃6🍬0🍬43🎃6🎃5🎃22🎃10🍬6
🍬0🍬5🎃23🎃61🍬0🍬4👻5🍬3👻3🎃200🎃6🍬13🍬0👻1🎃21🍬30🍬6🎃6💀0🎃5🎃23🎃2🎃6👻0🍬30🎃6🍬1🍬0💀0🎃23🎃6🎃520🎃60🍬13🎃22🎃1💀3
🍬3🎃5🎃2🎃6🍬33👻01👻20🎃62👻5🎃1🎃6🍬33🎃22🍬30🎃6🍬0🎃3🎃63🍬4🎃04🍬6🎃6👻3👻0👻2🎃2🎃6💀0👻0🍬31🎃5🎃20👻6🎃61🎃523👻40🍬6🎃61🍬0👻5🍬5🎃2👻5🎃2🍬6
🎃132💀0👻0👻5🎃4🎃6🍬40🎃6👻0👻5🍬3🍬0🎃6👻0🍬30🎃6💀0🍬03👻3🎃1🍬6🎃6💀0🎃5🎃23🎃2🎃6🍬3🎃5🎃2🎃60🍬4🎃0👻3👻0👻4🎃2🎃6👻00🎃60🎃2🎃2👻5💀3
🎃2👻51🎃52👻5🍬3👻0👻5🎃4🎃6🍬40🎃6💀0👻0🍬3🎃5🎃6🎃4🎃2👻40🍬6🎃6👻0🍬3👻60🎃6🎃2🍬5🎃234🎃61🍬0🎃1🎃23👻60🎃6🎃13🎃22👻4🍬6
2🎃1🍬5🎃2👻5🍬3🍬43🎃20🎃6👻0👻5🎃6🍬3🎃5🎃2🎃61🍬0🎃1🎃2🍬6🎃6💀0🎃5🎃23🎃2🎃6🍬3🎃5🎃2🎃6🎃2🎃23👻0🎃2👻60🎃63🍬0🍬4🍬3👻0👻5🎃2💀3
🎃1🎃22🍬3🎃5🎃6👻424🎃60🎃2🎃2👻4🎃6🍬3🍬0🎃6👻3🍬43👻2🍬6🎃6🎃0🍬4🍬3🎃6🎃3🍬03🎃63🍬4🎃04🎃6👻0🍬3👻60🎃6🎃52👻3👻3🍬0💀0🎃2🎃2👻56
```

However, with your unique Ouija board you should have no problem deciphering what they left!

## Objective

Your Ouija board looks like the following straddling checkerboard:

```
==================================
|    | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
|    | S | C | A | R | Y | ? | ! |
| 🎃 | B | D | E | F | G | H |   |
| 👻 | I | J | K | L | M | N | ' |
| 🍬 | O | P | Q | T | U | V | , |
| 💀 | W | X | Z | . | # | $ | : |
==================================
```

Use your Ruby skills and the board above to decrypt the message. I have attached a file to help get you started. You don’t need to use it if you don’t want to.

You may also find this link helpful.

## Requirements

- Must use Ruby
- Decrypt the message
- Determine the hidden message *within* the decrypted message

I was too stupid to think of regular expressions at first, so I wrote this:

```
" 0123456\n SCARY?!\n🎃BDEFGH \n👻IJKLMN'\n🍬OPQTUV,\n💀WXZ.\#$:".split(?\n).map(&:chars).tap{|b|<<M.chars.reduce(nil){|r,e|e==?\n?print(e): r ?print(b[r][b[0].index e]): b.index{_1[0]==e}||print(b[1][b[0].index e])}}
3🍬4🎃04...
M
```

I did not want to code golf, but I did intend to write a one-liner. It seems hard to understand, but it is pretty straightforward once expanded:

```
board = [' ', '0', '1', '2', '3', '4', '5', '6'],
        [' ', 'S', 'C', 'A', 'R', 'Y', '?', '!'],
        ['🎃', 'B', 'D', 'E', 'F', 'G', 'H', ' '],
        ['👻', 'I', 'J', 'K', 'L', 'M', 'N', "'"],
        ['🍬', 'O', 'P', 'Q', 'T', 'U', 'V', ','],
        ['💀', 'W', 'X', 'Z', '.', '#', '$', ':']
message = <<MESSAGE
3🍬4🎃04...
MESSAGE
message.chars.reduce nil do |row, encoded_char|
  if encoded_char == ?\n # newline in the message
    print encoded_char
  elsif row # last char was an emoji, corresponding to a row in the board
    print board[row][board[0].index encoded_char]
    nil
  elsif new_row = board.index { _1[0] == encoded_char }
    new_row
  else
    print board[1][board[0].index encoded_char]
    nil
  end
end
```

Then I realized that I could have used regular expressions, so I wrote a cleaner version:

```
puts <<M.gsub(/([🎃👻🍬💀])?([0-6])/){|s|{nil=>'SCARY?!',🎃:'BDEFGH ',👻:"IJKLMN'",🍬:'OPQTUV,',💀:'WXZ.#$:'}[$1&.to_sym][$2.to_i]}
3🍬4🎃04...
M
```

Then I suddenly became creative and realized that I could use another regular expression to implement string-based indexing, and that I could use the `-p` option of the Ruby command line to save even more characters (here I smelled code golfing):

```
#!/usr/bin/env ruby -p
gsub(/(\D?)(\d)/){'SCARY?!🎃BDEFGH 👻IJKLMN\'🍬OPQTUV,💀WXZ.#$:'[/#$1.{#$2}(.)/,1]}
```

Here are some other solutions. Check them out!

- The `-p` option basically wraps the code in a `while gets` loop, and you can access the current line with `$_`. Ruby will output the contents of `$_` after each iteration.
- The method `Kernel#gsub` modifies `$_` (the current input line being processed). It is only available when running Ruby with the `-p` option.
- The method `String#[]` returns a substring. What is good about this method is that, if you use a regular expression to find the substring, you can use the second argument to specify which capture group in the regular expression you want to return.
- In a string literal, you can use `#$some_global_variable` as a shortcut for `#{$some_global_variable}`. This is also true for instance variables and class variables.

The decoded message is:^{©}

```
RUBY IS A LANGUAGE THAT WE PROGRAMMERS ADORE,
UNLEASHING MAGIC SPELLS WITHIN ITS CODE GALORE.
BENEATH THE HAUNTED MOON, ITS SYNTAX WE EXPLORE,
YET IN THE WORLD OF PROGRAMMING, IT THRIVES EVEN MORE.
IN THE CRYPTIC METHODS, WHERE EERIE BUGS MAY TREAD,
SKELETONS OF ERRORS IN THE DARK ARE WIDESPREAD.
NOT A LANGUAGE OF THE PAST, BUT ONE WE HOLD OUR HEADS,
OVER COUNTLESS PROJECTS, WHERE ITS POWER HAS SPREAD.
THE TRICKS AND TREATS OF RUBY, LIKE WITCHES' CHARMS, CONVENE,
DRAWING US INTO ITS WORLD, WHERE THE SUBLIME IS SEEN.
ENCHANTING US WITH GEMS, IT'S EVERY CODER'S DREAM,
ADVENTURES IN THE CODE, WHERE THE EERIE'S ROUTINE.
DEATH MAY SEEM TO LURK, BUT FOR RUBY IT'S HALLOWEEN!
```

(The contents do not share the license of this blog.)

Did you spot the hidden message?

Hi! I said in today’s class that it is just a random choice whether we use $\mathrm i$ or $-\mathrm i$. Here is the justification:

First, mathematically, conjugation is an automorphism of $\mathbb C$ (in the sense of being a field). This fact can be easily verified. It can be easily understood by considering $\mathbb C$ as the extension field $\mathbb R[X]/(X^2+1)$. Furthermore, due to this fact, all theorems in complex analysis are still valid if we replace every number by its conjugate.

Then, consider replacing $-\mathrm i$ with $\mathrm i$ in the SE, namely changing $\psi' = -\mathrm iH\psi$ into $\psi' = \mathrm iH\psi$. Due to the mathematical fact above, the new SE leads to exactly the same theory as our familiar QM because all physically meaningful quantities are real (so their conjugates are still themselves). The solution to the SE becomes $\psi = \psi_0\exp(\mathrm iHt)$ instead of $\psi = \psi_0\exp(-\mathrm iHt)$, and the two are exactly the same except for an opposite phase (which does not matter), given that $\psi_0$ in the new theory is the conjugate of its counterpart in the original theory (where by “conjugate” I mean taking the conjugate of all of its coordinates in the basis of eigenvectors of $H$).
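The key step here can be written out in one line. Working in the eigenbasis of $H$, where the Hermitian $H$ has real entries so that $\overline{H\psi} = H\overline\psi$, conjugating both sides of $\psi' = -\mathrm iH\psi$ gives

$$\overline{\psi'} = \overline{-\mathrm iH\psi} = \mathrm iH\overline\psi,$$

so $\overline\psi$ satisfies the sign-flipped SE $\psi' = \mathrm iH\psi$. Conjugation thus maps solutions of one convention onto solutions of the other.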

What about time reversal? Time reversal is $t\to-t$ in the SE, which is actually slightly different from $\mathrm i\to-\mathrm i$, because in the latter I also assume that $\psi_0$ is replaced by its conjugate, while $t\to-t$ leaves $\psi_0$ unchanged. However, the close connection between conjugation and time reversal does give us a hint about what T-symmetry looks like in QM: QM does have T-symmetry, but $T$ cannot be a linear operator because it unavoidably involves conjugation. Indeed, conjugation often looks like time reversal. For example, $[X,P]=\mathrm i$ becoming $[X,P]=-\mathrm i$ can be due either to conjugation (the $\mathrm i\to-\mathrm i$ here) or to time reversal ($P\to-P$ while $X$ is unchanged).

Other than saving some minus signs here and there, there is actually a minor benefit to replacing our familiar QM with its conjugate: it makes equations in QM follow the same convention as in electrical engineering. Specifically, QM uses $\exp(-\mathrm iEt)$ while EE uses $\exp(\mathrm i\omega t)$. I don’t know why, but conventions in EM seem to follow QM, since EM also uses $\exp(-\mathrm i\omega t)$. It seems strange that EE does not use the same convention as EM.

Back to where this topic was brought up: why is the infinitesimal translation the identity minus $\mathrm iP\varepsilon$ instead of plus? The answer is the choice we made when we wrote the SE, which is just a matter of convention. The question that can be genuinely asked is this: why is the infinitesimal translation the identity minus $\varepsilon\,\mathrm d/\mathrm dx$ instead of plus? The arguments made in class are valid answers to this question.

Best regards,

Ulysses Zhan

Replace `overleaf.your-domain.com` with your own domain name.
```
server {
    listen 80;
    listen [::]:80;
    server_name overleaf.your-domain.com;
    return 301 https://$host$request_uri;
}
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name overleaf.your-domain.com;
    ssl_certificate /path/to/your/cert/fullchain.pem;
    ssl_certificate_key /path/to/your/cert/privkey.pem;
    location / {
        proxy_set_header Host $host;
        proxy_pass http://localhost:8444; # use a port you like
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header X-Scheme $scheme;
        proxy_buffering off;
    }
    location ~ /.well-known {
        allow all;
    }
}
```
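One caveat about the configuration above (this concerns your wider NGINX setup, which I am assuming here): `$connection_upgrade` is not a built-in NGINX variable. If your `nginx.conf` does not already define it, add a `map` like the following to the `http {}` block, or NGINX will refuse to start and WebSocket upgrades (which Overleaf uses) will not work:

```nginx
# Derive $connection_upgrade from the client's Upgrade header, so that
# WebSocket requests get "Connection: upgrade" and plain requests "Connection: close".
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
```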

*Refer to*: quick start guide.

Run the following:

```
OVERLEAF_HOME=./overleaf # set to whatever you want; use sudo in the following if you do not have write access
git clone https://github.com/overleaf/toolkit.git $OVERLEAF_HOME
cd $OVERLEAF_HOME
bin/init
```

*Notice*: You may look at `git log -n 1 --pretty=format:"%H"` to record the toolkit commit you are on. Mine was `cc4d01bb46d4e0d7c08124372ff69a4578e7333d`. I can guarantee that the following steps work with this version of the Overleaf toolkit, but they may fail with future versions.
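If you want to reproduce this setup exactly, you can pin your checkout to that commit after cloning (a sketch; the hash is the one I recorded above):

```shell
# Optional: pin the freshly cloned toolkit to the commit this post was
# written against, so later upstream changes cannot break the steps below.
cd $OVERLEAF_HOME
git checkout cc4d01bb46d4e0d7c08124372ff69a4578e7333d
```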

Edit `config/variables.env` and add the following:

```
# See https://github.com/overleaf/overleaf/issues/1044#issuecomment-1741289459
PATH=/usr/local/texlive/2023/bin/x86_64-linux:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
SHARELATEX_BEHIND_PROXY=true
# Do NOT set SHARELATEX_SECURE_COOKIE to true, see https://github.com/overleaf/overleaf/issues/388#issuecomment-1741162658
SHARELATEX_SITE_URL=https://overleaf.your-domain.com
# If you want, set SHARELATEX_APP_NAME, SHARELATEX_NAV_TITLE, SHARELATEX_HEADER_IMAGE_URL, SHARELATEX_ADMIN_EMAIL.
# Email settings, see https://github.com/overleaf/overleaf/issues/816#issuecomment-864665071
SHARELATEX_EMAIL_FROM_ADDRESS=Overleaf <your.email@domain.com>
# SHARELATEX_EMAIL_REPLY_TO does not seem to work.
SHARELATEX_EMAIL_SMTP_HOST=smtp.domain.com
SHARELATEX_EMAIL_SMTP_PORT=465
SHARELATEX_EMAIL_SMTP_SECURE=true
SHARELATEX_EMAIL_SMTP_USER=your.email@domain.com
SHARELATEX_EMAIL_SMTP_PASS=yourpassword
SHARELATEX_EMAIL_SMTP_TLS_REJECT_UNAUTH=false
SHARELATEX_EMAIL_SMTP_IGNORE_TLS=false
# Uncomment this when you want to debug:
#LOG_LEVEL=debug
# Search for process.env in github.com/overleaf/overleaf to see more options (shame for not documenting them):
# https://github.com/search?q=repo%3Aoverleaf%2Foverleaf+process.env&type=code
```

Edit these entries in `config/overleaf.rc`:

```
# Match the port in NGINX conf
SHARELATEX_PORT=8444
```

Create a file `config/docker-compose.override.yml` and write:

```
---
version: '2.2'
services:
  mongo:
    restart: unless-stopped
    container_name: overleaf-mongo
  redis:
    restart: unless-stopped
    container_name: overleaf-redis
  sharelatex:
    restart: unless-stopped
    #image: sharelatex/sharelatex:with-texlive-full # will be uncommented later
    container_name: overleaf-sharelatex
    stop_grace_period: 10s # see https://github.com/overleaf/overleaf/issues/1156
```

Run `bin/up` and wait for the containers to be up. Then, go to `https://overleaf.your-domain.com/launchpad` and set up the admin account. Use `bin/logs` in another shell to check the logs if there are any issues.

*Refer to*: upgrading TeXLive.

While the containers are up, run `bin/shell` and then run `tlmgr install scheme-full`. You need to wait for a long time.

After that, run `docker commit overleaf-sharelatex sharelatex/sharelatex:with-texlive-full`. Then, edit `config/docker-compose.override.yml` and uncomment the line `image: sharelatex/sharelatex:with-texlive-full`. Then, run

```
bin/stop
bin/docker-compose rm -f sharelatex
bin/up
```

When you upgrade later, you need to re-comment the line in `config/docker-compose.override.yml`, delete the container, and do the above steps again.

I have been using Windows since I was a child. The first computer I ever used ran Windows XP. It was the one my grandpa and grandma used for stock trading, and I got to use it after the stock market closed every day. I do not remember much about its specs, but I remember how slow it was and how much fun I had with it.

My second computer was a Dell laptop that had been used by my mom, with Windows 7 installed. My mom gave it to me because I was interested in digital drawing at the time, and she was afraid that the old computer my grandparents used was not good enough to run Photoshop.

I used that laptop for a long time. Due to the rapid development of technology, it gradually failed to keep up. Then, when I was in middle school, my parents bought me a new laptop: a Dell Inspiron 15 7570 with Windows 10 installed. It was my choice, mainly because of its color—pink. Despite that, its specs were not bad at all, and the price was also very low.

For a long time, I equated desktop OS with Windows. I first learned about Linux when I came across OI (Olympiad in Informatics) in 2019. OI is a programming competition for high school students, and the programs submitted by contestants are judged on Linux computers. I never dug far into OI, but what matters is that I then knew Linux existed. I installed VirtualBox on my Inspiron 7570 and installed Ubuntu 19.04 in it.

My first impression of Linux was that package managers are awesome. I was amazed by how easy it is to install software on Linux. I then also learned what FOSS is, what GNU is, etc., and I realized that GNU/Linux is a very good and important OS. I tried to move all my workflows (including studying and developing) to Linux in the virtual machine, and I enjoyed it. To have a better experience with virtual machines, I replaced the 8 GB of memory in my Inspiron 7570 with two 16 GB memory sticks.

However, virtual machines have their limitations. In 2020 and 2021, I stopped using Linux for a while. Fortunately, in 2022, WSL became very popular. I installed WSL on my Inspiron 7570 and started to use Linux again. This time, I only used Linux for development because I did not install a desktop environment in WSL.

In 2022, I graduated from high school and was going to study abroad. Although my Inspiron 7570 was still working fine, I felt that I needed a new computer for my upcoming four years of college life and that I should try Linux as my daily driver on the new machine. The laptop I got was a Lenovo Legion R7000 2021, which is the one I am using now: a gaming laptop with Windows 11 installed. I swapped the memory sticks between my Inspiron 7570 and my Legion R7000 so that the Legion R7000 has 32 GB of memory.

To install Linux on my Legion R7000, I bought an additional 2 TB SSD and put it in the second M.2 slot. I installed Ubuntu 22.04 on the SSD and set up dual boot. However, I found that Ubuntu 22.04 was too new and that the newly introduced Wayland session was not very stable. I then reinstalled Ubuntu 20.04 on the SSD and set up dual boot again, and that is the OS I am still using on this computer.

Before I installed Linux on my Legion R7000, I made a list of the software that I needed on Linux. I listed every single piece of software I was using on Windows and tried to find Linux alternatives for them. When one existed, I would try the Windows version of the alternative first; if I liked it, I would continue to use it on Linux after installing Linux later.

Not everything went smoothly, but I made it: I transitioned to Linux as my daily driver on my Legion R7000.

There are several things that I do on my computer every day. How do they fare on the Linux desktop?

I have played many games on Linux. The games I mainly play are rhythm games, puzzle games, PvZ series games, and Celeste.

I have to say, thanks to Steam and Lutris, gaming on Linux is not as bad as I thought. It is just as good as gaming on Windows. Sometimes it is even better, because some very old games cannot run perfectly on Windows but can on Linux with certain versions of Wine or Proton. Another edge case where Linux wins was when I tried playing Genshin Impact with my Wacom tablet (I was just testing whether it was playable; I do not actually play that game). It was playable on Linux but not on Windows.

To play Android games, I use Genymotion as an Android emulator. It is not a full replacement for NoxPlayer or MuMu on Windows, but it is good enough. For more information about Android gaming on Linux, see my article on Zhihu (Chinese).

This is not exactly an aspect of my daily life, but I want to mention it here. I use my drawing tablet as the pointing device on my computer instead of a mouse because I find it more comfortable, flexible, and precise. However, because it is not a very common input device, I guessed that it might not be well supported on Linux.

I have a Wacom Intuos CTL-690 drawing tablet. It is very old, but it still works fine, and I am using it right now. It rather surprised me that Ubuntu comes preinstalled with drivers for Wacom tablets; I did not need to install any drivers for mine.

The only annoyance is that Qt applications do not work well with Wacom tablets on Linux. For example, VLC, Olive (video editor), and OBS Studio all have issues. Although Krita, itself a Qt application, works well with Wacom tablets on Linux, that is because Krita’s binary releases contain several patches to Qt as well as to other dependencies.

I am a college student, so I need to read many papers and textbooks, take notes, and write homework assignments. To be honest, there is not much difference between OSes in this respect.

The default PDF reader of GNOME is Evince, and it is good enough. However, if SumatraPDF had a Linux version, I would definitely use it instead. I also read many PDFs online (from my own instance of Kavita), which happens in the browser, so it is not very OS-dependent.

I use Joplin (synchronized using my own instance of Nextcloud) and Write (not open-source) as my note-taking apps. They are very convenient.

For writing homework, I mostly use $\LaTeX$. It is good enough and convenient enough for that. If I have to work with Microsoft Office documents, I use LibreOffice. These things are not very OS-dependent either, but what is good about Linux here is that installing $\LaTeX$ is very easy.

I write programs. Surprisingly, developing on Linux is not actually much better than developing on Windows. First, development tools that are available on Windows are mostly available on Linux, so I do not need to worry about having different experiences due to using different tools. Second, WSL and Windows Terminal are actually decent for development.

The only advantage of developing on Linux, in my opinion, is that I can integrate development with my life more easily. Many of my everyday workflows depend on tools developed on my own, so their development environments are the same as the production environments. I cannot do that on Windows because most of my development environments are in WSL while my everyday workflows are done outside WSL.

I have to say, even with all the efforts of the Linux community, Linux desktop is still not good enough. Although it is a better desktop OS than Windows for me, there are still many things that are not as good as Windows.

For example, specific to my computer (Legion R7000), Linux has a very serious vsync issue. It can be solved by tweaking an option in NVIDIA X Server Settings, but that makes the frame rate too low to be usable.

There are many features that the Linux community has been actively working on but that are still unavailable or very buggy. As typical examples, HDR, fractional scaling, etc. are still not reliably available on the Linux desktop.

There are also other commonly complained-about Linux issues that I have run into, such as lacking driver support, lacking application/game support, audio issues, etc., as well as many issues for which I have never found a solution or anyone else with the same problem. To solve these issues, I have to spend hours tweaking my OS or even write programs myself (for example, I wrote a Python script just to make my Smartisan stylus usable).

You may wonder what happened to my Inspiron 7570. Well, I did not bring it to college, though I wanted to (my mom did not allow me to, and that is another story). If I had brought it, I would be using it as a self-hosting server. Actually, I had already installed Arch Linux on it and set up a few services before I left for college.

However, I still got to do self-hosting because I brought another computer. Thanks to my friend Xiang, who gave me his old laptop (a Lenovo Legion Y7000P), I got another computer for self-hosting. I installed Arch Linux on it and set up a few services. I have to say, self-hosting is very fun and useful.

What is awesome about self-hosting at my university (UC Santa Barbara) is that its dorms are very friendly to it: every ethernet port in the dorm has a public IPv4 address. Previously I thought that I would have to use tunneling services such as ngrok, but it turns out I do not need to.

Unfortunately, due to the low availability of campus housing and various other reasons, I decided to move out of the dorms and live off-campus. I have not moved into my new apartment yet, but I have to investigate which ISPs in the area are friendly to self-hosting (providing a public IP address and not blocking inbound traffic).

Self-hosting is very fun and useful. I use my self-hosted services every day, and I keep setting up more and more services on the Legion Y7000P. Right now, there are more than 20 services running on it.

Among all the services, the most useful one, in my opinion, is my Nextcloud instance. Nextcloud is a cloud storage service, and what is good about it is that its desktop client is the best cloud-storage client available on Linux. I use it to synchronize my study notes and homework.

Another useful service is my Kavita instance. I have already mentioned that I read many PDFs online. I often download PDFs from the Internet on the Legion Y7000P so that I can read them in Kavita. This way, I do not have to download separate copies on every device I want to read a PDF on.

I also use my ntfy instance to send notifications to my phone. I have previously written articles about how I use ntfy to send notifications to help with my life (this and this).

I do not want to make this article an endless list of my useful services, so I will stop here.

Self-hosting has also made me learn a lot. First, because I do not use a desktop environment on my self-hosting server, I got to learn how to do various things without a GUI. Then, to make my services secure yet accessible, I learned a lot about networking.

I have to admit that self-hosting is not necessarily better than using cloud services provided by companies.

First, although self-hosting may be cheaper than commercial solutions if you only consider the electricity cost, it is not necessarily cheaper if you consider the time spent on it. You need to maintain your server, and that takes time. You need to regularly update the software on it, and every time you upgrade a service, you need to check its changelog for breaking changes.

Second, not every place is friendly to self-hosting. Although my university is, this does not apply everywhere. If your server is at home, your ISP may not give you a public IP address (CGNAT), may block certain ports (such as ports 22, 80, and 443), or may block all inbound traffic. Tunneling services such as Cloudflare Tunnel and ngrok may mitigate this issue, but they are not as convenient.
