The value of `\pi`, whose digits never repeat, can be approximated by `22/7 = 3.142857`
to within 0.04%. These pages artistically and mathematically explore rational approximations to `\pi`. This 22/7 ratio is celebrated each year on July 22nd. If you like hand waving or back-of-envelope mathematics, this day is for you: `\pi` approximation day!
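To put a number on "within 0.04%", the relative error of the approximation is

$$\frac{22/7 - \pi}{\pi} = \frac{3.1428571\ldots - 3.1415927\ldots}{3.1415927\ldots} \approx 4.0 \times 10^{-4} \approx 0.04\%.$$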
What would circles look like if `\pi`=22/7?
Imagine that the circle had a tiny loop at one of its points. The circumference of this loop would be added to the circumference of the circle, but the loop would be so small that we would never notice it.
This is reminiscent of how string theories describe higher dimensions—as tiny loops at each point in space, except in my example the loop is only at one point.
This idea originated with Klein, who explained the fourth dimension as a circle curled up to a very small radius. Another way in which this curling-up is used is to say that the fifth dimension is curled up to the Planck length, as explained in this Imagining 10 Dimensions video.
If this idea is difficult to wrap your head around, you're not alone. We cannot think of additional dimensions in the regular spatial sense, since we have no means of experiencing such phenomena. We can, however, imagine how flatlanders might explain the 3rd dimension, since we can perceive it. They would draw the curled-up circles in their plane, because they would not have the experience of drawing with perspective to mimic our 3rd dimension.
We would draw their explanation as shown on the right in the figure above, borrowing from our concept of the 3rd spatial dimension. Now imagine showing our explanation to a flatlander. They would not see the same thing as you—the circles would not intuitively imply the higher dimension to them.
This is analogous to why we cannot draw folded up dimensions. We are merely solidlanders—flatlanders in 3d space. Creatures that can perceive more spatial dimensions would use us as examples of diminished perceptual ability.
Another way to imagine how a circle might look is a little more realistic. The theory of special relativity tells us that when we travel at speed relative to another object the dimensions of that object appear contracted to us in the direction of motion.
This contraction is always present, but essentially imperceptible unless we're travelling fast enough. For example, in order for a 1 meter object to appear contracted by the length of a hydrogen molecule (0.3 nm) we would have to be travelling at 7.3 km/s (Wolfram Alpha calculation)!
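For the curious, the arithmetic behind that speed: motion contracts lengths by the factor `\sqrt{1 - v^2/c^2}`, so a contraction of `3 \times 10^{-10}` m on a 1 m object requires

$$v = c\sqrt{1 - \left(1 - 3\times 10^{-10}\right)^2} \approx c\sqrt{6\times 10^{-10}} \approx 7.3\ \text{km/s}.$$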
How fast would we have to be going to compress the circle enough for its circumference-to-radius ratio to embody the `22/7` approximation of `\pi`? Pretty fast, it turns out. If we travel at just over 12,000 km/s (0.04 times the speed of light, Wolfram Alpha calculation), the circle will compress as shown in the figure above, and the ratio of its circumference to the radius along the direction of motion will make `\pi` appear to be `22/7`.
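To sketch where that speed comes from: contraction by the factor `f = \sqrt{1 - v^2/c^2}` along the direction of motion turns the circle into an ellipse with semi-axes `r` and `fr`. After rescaling by `1/f`, demanding that its perimeter divided by `fr` equal `2 \times 22/7` is the same condition that defines the small radial deformation `\delta` worked out below, with `f = 1/(1+\delta)`. That gives

$$v = c\sqrt{1 - \frac{1}{(1+\delta)^2}} \approx c\sqrt{2\delta} \approx 0.040\,c \approx 12{,}000\ \text{km/s}.$$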
This compression in length would be barely perceptible to us. Below are both circles, shown overlapping, with `\delta` being the extra length in radius required.
The value of `\delta`, which is 0.0008049179155 (if `r = 1`), can be calculated by considering the perimeter of an ellipse. The fact that `\delta` is small shouldn't be surprising since `22/7` is an excellent approximation of `\pi`, good to 0.04%.
Calculating the perimeter of an ellipse is more complicated than calculating it for a circle because it involves something called an elliptic integral. This integral has no closed-form solution and must be evaluated numerically. Luckily, we have computers.
We can use the expression shown above for the perimeter of the ellipse to determine how much the circle needs to be deformed. Let's write `a = r + \delta` (original radius with slight deformation `\delta`) and `b=r`. Since `22/7 > \pi` we know that `\delta > 0`.
It remains to solve the equation below for a value of `\delta` that will yield a ratio of circumference to `r` of `2 \times 22/7`.
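For reference, written out in one common convention, where `E(\phi, e) = \int_0^{\phi}\sqrt{1 - e^2\sin^2 t}\,dt` is the incomplete elliptic integral of the second kind, the equation is

$$\left(r + \delta\right) E\!\left(2\pi,\, e\right) = 2 \times \frac{22}{7}\, r, \qquad e^2 = 1 - \frac{r^2}{(r+\delta)^2}.$$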
To make things simpler, let's set `r = 1`. Solving the equation numerically, I find $$\delta = 0.0008049179155$$
You can verify this solution at Wolfram Alpha.
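Or, if you prefer to check it locally, here is a minimal sketch of the same numerical solution in Python using scipy, whose `ellipe(m)` is the complete elliptic integral of the second kind with parameter `m = e^2`, so that `E(2\pi, e) = 4\,\mathrm{ellipe}(e^2)`:

```python
# Minimal numerical check of delta (r = 1): find the deformation for which
# an ellipse with semi-axes a = 1 + delta and b = 1 has perimeter 2 * 22/7.
from scipy.special import ellipe   # complete elliptic integral E(m), m = e^2
from scipy.optimize import brentq

TARGET = 2 * 22 / 7                # desired circumference-to-radius ratio

def perimeter(delta):
    a, b = 1 + delta, 1.0
    m = 1 - (b / a) ** 2           # squared eccentricity
    return 4 * a * ellipe(m)       # perimeter of the ellipse

delta = brentq(lambda d: perimeter(d) - TARGET, 0, 0.01)
print(delta)                       # ~0.0008049179
```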
After all this, we come full-circle to the meaning of full-circle.
You might ask why I didn't change the definition of `\pi` to `22/7` in the upper limit of the integral. After all, why not make the approximation exercise more faithful to the approximation?
It turns out that if I did that I would get `\delta=0`, which brings us back to the original circle. How is this possible?
Technically, this is because the integral returns the upper limit as its answer if the eccentricity is zero (i.e., `E(x,0)=x`).
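Spelled out: with zero eccentricity the integrand collapses to 1, so

$$E(x, 0) = \int_0^x \sqrt{1 - 0\cdot\sin^2 t}\;dt = \int_0^x dt = x.$$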
Intuitively, this is because changing the upper limit of the integral actually redefines the angle of a full revolution. Now, full-circle isn't `2 \pi` radians, but `2 \times 22/7`. Given that the ratio of the circumference of a circle to its radius is exactly the size, in radians, of a full revolution, we don't need to change the shape of the circle if we're willing to change what a full revolution means.
Nature uses only the longest threads to weave her patterns, so that each small piece of her fabric reveals the organization of the entire tapestry. – Richard Feynman
Following up on our Neural network primer column, this month we explore a different kind of network architecture: a convolutional network.
The convolutional network replaces the hidden layer of a fully connected network (FCN) with one or more filters (a kind of neuron that looks at the input within a narrow window).
Even though convolutional networks have far fewer neurons than an FCN, they can perform substantially better for certain kinds of problems, such as sequence motif detection.
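As a toy illustration of the filter idea (this is not code from the column; the motif and weights are made up), here is a single filter scanning a one-hot encoded DNA sequence with a 3-base window:

```python
# Toy example: one convolutional "filter" sliding a 3-base window along a
# one-hot encoded DNA sequence; a high score marks a match to the motif TAC.
import numpy as np

bases = "ACGT"
seq = "GATTACAGGG"
onehot = np.array([[b == base for base in bases] for b in seq], dtype=float)

motif_filter = np.array([[0, 0, 0, 1],   # T
                         [1, 0, 0, 0],   # A
                         [0, 1, 0, 0]])  # C

scores = [float(np.sum(onehot[i:i + 3] * motif_filter))
          for i in range(len(seq) - 2)]
print(scores)  # the score peaks at 3 where TAC occurs (position 3)
```

A real convolutional network learns the filter weights from data and stacks many such filters, but the windowed scan is the essential idea.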
Derry, A., Krzywinski, M. & Altman, N. (2023) Points of significance: Convolutional neural networks. Nature Methods 20.
Derry, A., Krzywinski, M. & Altman, N. (2023) Points of significance: Neural network primer. Nature Methods 20:165–167.
Lever, J., Krzywinski, M. & Altman, N. (2016) Points of significance: Logistic regression. Nature Methods 13:541–542.
Nature is often hidden, sometimes overcome, seldom extinguished. —Francis Bacon
In the first of a series of columns about neural networks, we introduce them with an intuitive approach that draws from our discussion about logistic regression.
Simple neural networks are just a chain of linear regressions. And, although neural network models can get very complicated, their essence can be understood in terms of relatively basic principles.
We show how neural network components (neurons) can be arranged in the network and discuss the idea of hidden layers. Using a simple data set, we show how even a 3-neuron neural network can already model relatively complicated data patterns.
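As a rough sketch of that idea (not the column's code; the weights here are hand-picked for illustration), a 3-neuron network is just two sigmoid neurons feeding a third, each one a linear combination of its inputs passed through a squashing function:

```python
# Hand-built 3-neuron network: two hidden sigmoid neurons feed one output
# neuron; together they carve out a diagonal band in the plane.
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def tiny_net(x):
    w1 = np.array([[4.0, -4.0], [4.0, -4.0]])  # hidden-layer weights
    b1 = np.array([-2.0, 6.0])                 # hidden-layer biases
    w2 = np.array([5.0, 5.0])                  # output weights
    b2 = -7.5                                  # output bias
    hidden = sigmoid(x @ w1 + b1)              # two hidden neurons
    return sigmoid(hidden @ w2 + b2)           # one output neuron

points = np.array([[0.0, 0.0], [0.5, 0.5], [2.0, 2.0]])
print(tiny_net(points))  # only the middle point, inside the band, scores high
```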
Derry, A., Krzywinski, M. & Altman, N. (2023) Points of significance: Neural network primer. Nature Methods 20:165–167.
Lever, J., Krzywinski, M. & Altman, N. (2016) Points of significance: Logistic regression. Nature Methods 13:541–542.
Our cover for the 11 January 2023 issue of Cell Genomics depicts the process of determining parent-of-origin using differential methylation of alleles at imprinted regions (iDMRs), imagined as a circuit.
Designed in collaboration with Carlos Urzua.
Akbari, V. et al. Parent-of-origin detection and chromosome-scale haplotyping using long-read DNA methylation sequencing and Strand-seq (2023) Cell Genomics 3(1).
Browse my gallery of cover designs.
My cover design on the 6 January 2023 Science Advances issue depicts DNA sequencing read translation in high-dimensional space. The image shows how 672 bases of sequencing barcodes, generated by three different single-cell RNA sequencing platforms, are encoded as oriented triangles on the faces of three 7-dimensional cubes.
More details about the design.
Kijima, Y. et al. A universal sequencing read interpreter (2023) Science Advances 9.
Browse my gallery of cover designs.
If you sit on the sofa for your entire life, you’re running a higher risk of getting heart disease and cancer. —Alex Honnold, American rock climber
In a follow-up to our Survival analysis — time-to-event data and censoring article, we look at how regression can be used to account for additional risk factors in survival analysis.
We explore accelerated failure time regression (AFTR) and the Cox Proportional Hazards model (Cox PH).
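For a flavour of what fitting such a model looks like in practice, here is an illustrative sketch using the lifelines package (the data and column names are hypothetical, not from the column):

```python
# Illustrative Cox proportional hazards fit on made-up data: "time" is the
# follow-up time, "event" flags whether the event was observed (1) or
# censored (0), and "age"/"treat" are candidate risk factors.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time":  [5, 8, 12, 3, 9, 15, 6, 11],
    "event": [1, 0, 1, 1, 0, 1, 1, 0],
    "age":   [61, 45, 70, 52, 66, 58, 73, 49],
    "treat": [0, 1, 0, 1, 0, 1, 0, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()  # hazard ratios (exp(coef)) for age and treat
```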
Dey, T., Lipsitz, S.R., Cooper, Z., Trinh, Q., Krzywinski, M. & Altman, N. (2022) Points of significance: Regression modeling of time-to-event data with censoring. Nature Methods 19:1513–1515.
My 5-dimensional animation sets the visual stage for Max Cooper's Ascent from the album Unspoken Words. I have previously collaborated with Max on telling a story about infinity for his Yearning for the Infinite album.
I provide a walkthrough of the video, describe the animation system I created to generate the frames, and show you all the keyframes.
The video recently premiered on YouTube.
Renders of the full scene are available as NFTs.