latest news

Distractions and amusements, with a sandwich and coffee.

Twenty minutes, maybe more.

On March 14th celebrate `\pi` Day. Hug `\pi`—find a way to do it.

Those who favour `\tau=2\pi` will have to postpone celebrations until June 28th. That's what you get for thinking that `\pi` is wrong.

If you're not into details, you may opt to party on July 22nd, which is `\pi` Approximation Day (`\pi` ≈ 22/7). It's 20% more accurate than the official `\pi` Day!
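A quick check of that figure, taking the "official" value to be the date digits 3.14: `22/7 - \pi ≈ 0.00126`, `\pi - 3.14 ≈ 0.00159`, and `1 - 0.00126/0.00159 ≈ 0.21`, so the 22/7 error is indeed about 20% smaller.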

Finally, if you believe that `\pi = 3`, you should read why `\pi` is not equal to 3.

For the 2014 `\pi` day, two styles of posters are available: folded paths and frequency circles.

The folded-path posters lay out the digits of `\pi` along a path that maximizes adjacent prime digits, and were created using a protein-folding algorithm.

The frequency circles colourfully depict the ratio of digits in groupings of 3 or 6. Oh, look, there's the Feynman Point!

Some of the posters for this year's `\pi` Day art expand on last year's work, which showed `\pi` as coloured circles on a grid.

For those of you who really liked this minimalist depiction of `\pi`, I've created something slightly more complicated, but still stylish: `\pi` digit frequency circles. These are pretty and easy to understand. If you like randomly distributed colours (and circles), these are your thing.

Briefly, each set of concentric rings corresponds to a sequence of digits in `\pi`, grouped in 3s (`314 159 265 ...`) or 6s (`314159 265358 ...`). The number of times a given digit is seen within a sequence is encoded by the thickness of the ring. Rings are ordered outward in numerical order of their digits (i.e. 0 on the inside, 9 on the outside).
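If you want to play with the encoding yourself, here is a minimal sketch of the counting step (my own scaffolding in Python, not the code used to generate the posters):

```python
from collections import Counter

# First 30 digits of pi, leading 3 included, as on the posters.
PI_DIGITS = "314159265358979323846264338327"

def ring_counts(digits, group_size):
    """Split the digit string into consecutive groups of group_size
    and count how often each digit 0-9 occurs in each group; each
    count sets the thickness of one concentric ring."""
    groups = [digits[i:i + group_size]
              for i in range(0, len(digits), group_size)]
    return [Counter(g) for g in groups]

for i, counts in enumerate(ring_counts(PI_DIGITS, 6), start=1):
    # Rings run outward from digit 0 (innermost) to 9 (outermost).
    print(f"group {i}:", [counts.get(str(d), 0) for d in range(10)])
```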

For some posters, the first digit (3) is offset from the rest of the groups. Look for the high count of 9s at the end of posters showing `\pi` up to the Feynman Point (six 9s starting at digit 762). For posters that show more digits, try to find the Feynman Point somewhere among the groups.

The Feynman point is at an extremely interesting location. If we group the digits of `\pi` into groups of 6, then the first `999999` falls exactly into the 128th group. But if we group the digits by 3s, then the two groups `999` and `999` fall exactly into groups 255 and 256 (a power of 2!), which can be arranged into a perfect square of 16 × 16 groups.
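To check the arithmetic, count the leading 3 as digit 1: the six 9s then occupy digits 763–768, and `768/6 = 128` exactly, while `⌈763/3⌉ = 255` and `768/3 = 256`.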

The Feynman point is a specific instance of the general pattern in which a digit *d* appears *n* times in a row. I call this the (*d*=9, *n*=6) point and provide a list of all such points in the first 1,000,000 digits. Points with a large *n* value contribute significantly to the frequency distribution of the digit group they fall in; if the sequence is split across groups, its impact is lower.
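Scanning for such runs is easy to sketch. The snippet below is my own illustration, not the code behind the published list: it pulls digits from mpmath and, for brevity, scans only the first 1,000 digits with a run-length cutoff of `n ≥ 4`.

```python
import re
from mpmath import mp

# Digits of pi, leading 3 included; any digit source would do.
mp.dps = 1001
digits = str(mp.pi).replace(".", "")[:1000]

# Report every maximal run of a digit d repeated n >= 4 times.
for m in re.finditer(r"(\d)\1{3,}", digits):
    d, n = m.group(1), len(m.group(0))
    # Positions are 1-based and count the leading 3 as digit 1.
    print(f"(d={d}, n={n}) at digit {m.start() + 1}")
```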

We examine two very common supervised machine learning methods: linear support vector machines (SVM) and k-nearest neighbors (kNN).

SVM is often less computationally demanding than kNN and is easier to interpret, but it can identify only a limited set of patterns. On the other hand, kNN can find very complex patterns, but its output is more challenging to interpret.

We illustrate SVM using a data set in which points fall into two categories, which SVM separates with a straight-line boundary and its surrounding "margin". SVM can be tuned with a parameter that influences the width and location of the margin, permitting points to fall within the margin or on the wrong side of it. We then show how kNN relaxes explicit boundary definitions, such as the straight line in SVM, and how kNN, too, can be tuned to create more robust classification.
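As a toy illustration of these tuning knobs, here is a scikit-learn sketch; the data set and parameter values are invented for the example and are not those used in the column.

```python
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Two overlapping point clouds, one per category.
X, y = make_blobs(n_samples=200, centers=2, cluster_std=2.0,
                  random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    random_state=0)

# Linear SVM: C trades margin width against points that fall
# within the margin or on the wrong side of it.
svm = SVC(kernel="linear", C=0.1).fit(X_train, y_train)

# kNN: no explicit boundary; larger k smooths the implicit one.
knn = KNeighborsClassifier(n_neighbors=15).fit(X_train, y_train)

print("SVM accuracy:", svm.score(X_test, y_test))
print("kNN accuracy:", knn.score(X_test, y_test))
```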

Bzdok, D., Krzywinski, M. & Altman, N. (2018) Points of Significance: Machine learning: supervised methods. *Nature Methods* **15**:5–6.

Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. *Nature Methods* **14**:1119–1120.

In a Nature graphics blog article, I present my process behind designing the stark black-and-white Nature 10 cover.

Nature 10, 18 December 2017

In this primer, we focus on essential ML principles: a modeling strategy that lets the data speak for themselves, to the extent possible.

The benefits of ML arise from its use of a large number of tuning parameters or weights, which control the algorithm's complexity and are estimated from the data using numerical optimization. Often ML algorithms are motivated by heuristics such as models of interacting neurons or natural evolution, even if the underlying mechanism of the biological system being studied is substantially different. The utility of ML algorithms is typically assessed empirically by how well extracted patterns generalize to new observations.

We present a data scenario in which we fit a model with 5 predictors using polynomials, and show what to expect from ML when noise and sample size vary. We also demonstrate the consequences of excluding an important predictor or including a spurious one.
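In the same spirit, the sketch below simulates such a scenario with scikit-learn; the coefficients, noise level, and polynomial degree are my own placeholder choices, not the column's actual setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
n, noise = 500, 1.0

# Five real predictors plus one spurious noise column (column 5).
X = rng.normal(size=(n, 6))
coef = np.array([1.0, -2.0, 0.5, 3.0, -1.5])   # placeholder effects
y = X[:, :5] @ coef + rng.normal(0, noise, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def poly_fit_score(cols):
    """Fit a degree-2 polynomial regression on the chosen columns
    and return the test-set R^2."""
    model = make_pipeline(PolynomialFeatures(degree=2),
                          LinearRegression())
    model.fit(X_tr[:, cols], y_tr)
    return model.score(X_te[:, cols], y_te)

print("all 5 predictors:     ", poly_fit_score(slice(0, 5)))
print("one predictor dropped:", poly_fit_score(slice(0, 4)))
print("spurious included:    ", poly_fit_score(slice(0, 6)))
```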

Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. *Nature Methods* **14**:1119–1120.

Just in time for the season, I've simulated a snow-pile of snowflakes based on the Gravner-Griffeath model.

The work is described in a wintertime tale, In Silico Flurries: Computing a world of snow, co-authored with Jake Lever on the Scientific American SA Blog.

Gravner, J. & Griffeath, D. (2007) Modeling Snow Crystal Growth II: A mesoscopic lattice map with plausible dynamics.

My illustration of the location of genes in the human genome that are implicated in disease appears in The Objects that Power the Global Economy, a book by Quartz.

We introduce two common ensemble methods: bagging and random forests. Both methods repeat a statistical analysis on many bootstrap samples and combine the results to improve the accuracy of the predictor. Our column shows these methods as applied to Classification and Regression Trees.

For example, bagging with regression trees lets us sample the space of predictor values more finely, because each bootstrap sample produces a tree with potentially different split boundaries.

Random forests generate a large number of trees by not only generating bootstrap samples but also randomly choosing which predictor variables are considered at each split in the tree.
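For concreteness, here is how the two ensembles can be set up in scikit-learn, on an invented toy regression problem rather than the column's examples:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

# Toy regression data standing in for the column's examples.
X, y = make_regression(n_samples=300, n_features=8, noise=10.0,
                       random_state=0)

# Bagging: many trees, each grown on its own bootstrap sample,
# with predictions averaged across trees.
bag = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100,
                       random_state=0)

# Random forest: bootstrap samples plus a random subset of
# predictors considered at every split.
forest = RandomForestRegressor(n_estimators=100, max_features="sqrt",
                               random_state=0)

for name, model in [("bagged trees", bag), ("random forest", forest)]:
    print(name, "mean CV R^2:", cross_val_score(model, X, y, cv=5).mean())
```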

Krzywinski, M. & Altman, N. (2017) Points of Significance: Ensemble methods: bagging and random forests. *Nature Methods* **14**:933–934.

Krzywinski, M. & Altman, N. (2017) Points of Significance: Classification and regression trees. *Nature Methods* **14**:757–758.