latest news

Distractions and amusements, with a sandwich and coffee.

Tango is a sad thought that is danced.
• think & dance
• more quotes

The never-repeating digits of `\pi` can be approximated by `22/7 = 3.142857` to within 0.04%. These pages artistically and mathematically explore rational approximations to `\pi`. This 22/7 ratio is celebrated each year on July 22nd. If you like hand waving or back-of-envelope mathematics, this day is for you: `\pi` approximation day!

Want more math + art? Discover the Accidental Similarity Number. Find humor in my poster of the first 2,000 4s of `\pi`.

There are two kinds of `\pi` Approximation Day posters.

The first uses the Archimedean spiral for its design, which I've used before for other numerical art. The second packs warped circles, whose ratio of circumference to average diameter is `22/7`, into what I call `\pi`-approximate circular packing.

As you probably know, the ratio of the circumference of a circle to its diameter is `\pi`. $$ C / d = \pi $$

For `\pi` approximation day, let's ask what would happen if $$ C / d = 22/7 $$

where now `C` is the circumference of some shape other than a circle. What could this shape be?

A good place to start is to think about an ellipse. I've done this before in the 22/7 Universe article, in which I considered an ellipse with a semi-major axis of `r+\delta` and a semi-minor axis of `r` and solved for `\delta` such that the circumference of the ellipse divided by `2r` would be `22/7`. Doing so means numerically solving the equation $$ \frac{C(r,r+\delta)}{2r} = 22/7 $$

where `r + \delta` is the semi-major axis, `r` is the semi-minor axis and `C(r,r+\delta)` is the circumference of the ellipse. Substituting the expression for the circumference, $$ 4(r+\delta) \int_0^{\pi/2} \sqrt { 1 - \left(1-\frac{r^2}{(r+\delta)^2}\right)\sin^2 \theta } \, d\theta = 2 r \, \frac{22}{7}$$

If we set `r=1` and solve, it turns out that only a very minor deformation is required: `\delta = 0.0008`. You can verify this at Wolfram Alpha.
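If you'd rather verify this numerically yourself, here is a minimal sketch, assuming SciPy is available; the bracket `[0, 0.01]` is my own guess based on the expected scale of `\delta`.

```python
# Minimal sketch (assumes SciPy): solve C(1, 1+delta) / 2 = 22/7 for delta.
from scipy.special import ellipe   # complete elliptic integral E(m)
from scipy.optimize import brentq

def circumference(delta):
    # Perimeter of an ellipse with semi-axes a = 1 + delta and b = 1:
    # C = 4 a E(m), with parameter m = 1 - b^2/a^2.
    a = 1 + delta
    return 4 * a * ellipe(1 - 1 / a**2)

delta = brentq(lambda d: circumference(d) - 2 * 22 / 7, 0, 0.01)
print(delta)  # ~0.0008
```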

I wanted to make some art based on the shape of this ellipse, but a deformation of 0.08% is not perceptible. So I came up with a slightly different approach to defining the original circumference-to-diameter ratio.

Instead of treating the diameter as `2r` and using `r + \delta` as the semi-major axis, I now define the diameter as twice the average radius, or `2r + \delta`. This means that the equation to solve is $$ \frac{C(r,r+\delta)}{2r+\delta} = 22/7 $$

As before, setting `r=1` and substituting the expression for the circumference of an ellipse, we get $$ 4(1+\delta) \int_0^{\pi/2} \sqrt { 1 - \left(1-\frac{1}{(1+\delta)^2}\right)\sin^2 \theta } d \theta = (2+\delta) \frac{22}{7}$$

and solving this for `\delta`, we find $$ \delta = 0.083599769... $$

You can verify this at Wolfram Alpha.
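The same sketch adapts directly to the average-diameter equation (again assuming SciPy; the bracket `[0, 0.2]` is a guess that comfortably contains the root):

```python
# Minimal sketch (assumes SciPy): solve C(1, 1+delta) / (2 + delta) = 22/7.
from scipy.special import ellipe
from scipy.optimize import brentq

def circumference(delta):
    a = 1 + delta                        # semi-major axis; semi-minor is 1
    return 4 * a * ellipe(1 - 1 / a**2)  # 4 a E(1 - b^2/a^2)

delta = brentq(lambda d: circumference(d) - (2 + d) * 22 / 7, 0, 0.2)
print(delta)  # ~0.083599769
```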

This is a more usable approach, since an 8% warping of a circle can be easily perceived.

Below is a matrix of perfect circles alongside the 8% deformed circles.

The art posters are based on a packing of these deformed circles.

By superimposing perfect circles on the warped circles, fun patterns appear.

If you pack perfect circles on a square grid, the fraction of the area occupied by the circles is `\pi/4 \approx 78.5%`.

What is the area occupied by perfect packing of warped and randomly rotated (like in the posters) circles?

To motivate the choice of colors, I chose images with a 1970s feel.

Using my color summarizer, I analyzed each image for its representative colors. With these colors and their proportions, I colored the perfect and warped circles.

For each of these color schemes, two poster versions are available. In one, the perfect circles are shown with the warped circles as a clip mask. In the other, the warped circles are shown, clipped by the perfect circles.

We discuss the many ways in which analysis can be confounded when data has a large number of dimensions (variables). Collectively, these are called the "curses of dimensionality".

Some of these are unintuitive, such as the fact that the volume of the unit hypersphere increases with dimension and then shrinks beyond about 5 dimensions, while the volume of the enclosing hypercube always increases. This means that high-dimensional space is "mostly corners" and the distance between points increases greatly with dimension. This has consequences for correlation and classification.
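A quick way to see the hypersphere behavior, using only the Python standard library, is to evaluate the unit-ball volume `V_n = \pi^{n/2}/\Gamma(n/2+1)` directly:

```python
# Volume of the unit-radius n-ball: V_n = pi^(n/2) / Gamma(n/2 + 1).
from math import pi, gamma

for n in range(1, 11):
    print(n, round(pi ** (n / 2) / gamma(n / 2 + 1), 3))
# V_n peaks at n = 5 (~5.264) and shrinks thereafter, while the volume
# of the enclosing side-2 hypercube grows as 2^n -- "mostly corners".
```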

Altman, N. & Krzywinski, M. (2018) Points of Significance: Curse(s) of dimensionality. Nature Methods 15:399–400.

Inference creates a mathematical model of the data-generation process to formalize understanding or test a hypothesis about how the system behaves. Prediction aims at forecasting unobserved outcomes or future behavior. Typically we want to do both: know how biological processes work and what will happen next. Inference and ML are complementary in pointing us to biologically meaningful conclusions.

Statistics asks us to choose a model that incorporates our knowledge of the system, and ML requires us to choose a predictive algorithm by relying on its empirical capabilities. Justification for an inference model typically rests on whether we feel it adequately captures the essence of the system. The choice of pattern-learning algorithms often depends on measures of past performance in similar scenarios.

Bzdok, D., Krzywinski, M. & Altman, N. (2018) Points of Significance: Statistics versus machine learning. Nature Methods 15:233–234.

Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. Nature Methods 14:1119–1120.

Bzdok, D., Krzywinski, M. & Altman, N. (2018) Points of Significance: Machine learning: supervised methods. Nature Methods 15:5–6.

Celebrate `\pi` Day (March 14th) and go to brand new places. Together with Jake Lever, this year we shrink the world and play with road maps.

Streets from across the world are seamlessly stitched together. Finally, a halva shop on the same block!

Intriguing and personal patterns of urban development for each city appear in the Boonies, Burbs and Boutiques series.

No color—just lines. Lines from Marrakesh, Prague, Istanbul, Nice and other destinations for the mind and the heart.

The art is featured in the Pi City post on the Scientific American SA Visual blog.

Check out art from previous years: 2013 `\pi` Day, 2014 `\pi` Day, 2015 `\pi` Day, 2016 `\pi` Day and 2017 `\pi` Day.

We examine two very common supervised machine learning methods: linear support vector machines (SVM) and k-nearest neighbors (kNN).

SVM is often less computationally demanding than kNN and is easier to interpret, but it can identify only a limited set of patterns. On the other hand, kNN can find very complex patterns, but its output is more challenging to interpret.

We illustrate SVM using a data set in which points fall into two categories, which are separated in SVM by a straight line "margin". SVM can be tuned using a parameter that influences the width and location of the margin, permitting points to fall within the margin or on the wrong side of the margin. We then show how kNN relaxes explicit boundary definitions, such as the straight line in SVM, and how kNN too can be tuned to create more robust classification.
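As a concrete (if simplified) illustration, here is a minimal sketch assuming scikit-learn; the toy blob data and the particular values of `C` and `n_neighbors` are my own choices for demonstration, not those used in the column.

```python
# Minimal sketch (assumes scikit-learn): linear SVM vs kNN on toy data.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

X, y = make_blobs(n_samples=200, centers=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Linear SVM: C tunes the margin (small C = wider margin that tolerates
# more points inside it or on the wrong side).
svm = SVC(kernel="linear", C=1.0).fit(X_train, y_train)

# kNN: k tunes smoothness (larger k = smoother, more robust boundary,
# with no explicit boundary definition like SVM's straight line).
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

print("SVM accuracy:", svm.score(X_test, y_test))
print("kNN accuracy:", knn.score(X_test, y_test))
```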

Bzdok, D., Krzywinski, M. & Altman, N. (2018) Points of Significance: Machine learning: supervised methods. Nature Methods 15:5–6.

Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. Nature Methods 14:1119–1120.

In a Nature graphics blog article, I present my process behind designing the stark black-and-white Nature 10 cover.

Nature 10, 18 December 2017

In this primer, we focus on essential ML principles: a modeling strategy that lets the data speak for themselves, to the extent possible.

The benefits of ML arise from its use of a large number of tuning parameters or weights, which control the algorithm’s complexity and are estimated from the data using numerical optimization. Often ML algorithms are motivated by heuristics such as models of interacting neurons or natural evolution—even if the underlying mechanism of the biological system being studied is substantially different. The utility of ML algorithms is typically assessed empirically by how well extracted patterns generalize to new observations.

We present a data scenario in which we fit a model with 5 predictors using polynomials and show what to expect from ML when noise and sample size vary. We also demonstrate the consequences of excluding an important predictor or including a spurious one.
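To get a feel for how noise and sample size affect a fit, here is a minimal sketch assuming NumPy and scikit-learn; the single-predictor cubic model and the noise levels are illustrative stand-ins, not the column's actual five-predictor setup.

```python
# Illustrative sketch: test error of a polynomial fit as noise and
# sample size vary.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

def run(n_samples, noise_sd, degree=3):
    # Training data from a known cubic, corrupted by Gaussian noise.
    x = rng.uniform(-1, 1, size=(n_samples, 1))
    y = 2 * x[:, 0] ** 3 - x[:, 0] + rng.normal(0, noise_sd, n_samples)
    # Noise-free test set to measure how well the fit generalizes.
    x_test = rng.uniform(-1, 1, size=(1000, 1))
    y_test = 2 * x_test[:, 0] ** 3 - x_test[:, 0]
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x, y)
    return mean_squared_error(y_test, model.predict(x_test))

for n in (20, 100, 1000):
    for sd in (0.1, 0.5):
        print(f"n={n:5d} noise={sd}: test MSE={run(n, sd):.4f}")
```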

Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. Nature Methods 14:1119–1120.