latest news

Distractions and amusements, with a sandwich and coffee.


On March 14th celebrate `\pi` Day. Hug `\pi`—find a way to do it.

Those who favour `\tau=2\pi` will have to postpone celebrations until June 28th. That's what you get for thinking that `\pi` is wrong.

If you're not into details, you may opt to party on July 22nd, which is `\pi` approximation day (`\pi` ≈ 22/7). It's 20% more accurate than the official `\pi` day!

Finally, if you believe that `\pi = 3`, you should read why `\pi` is not equal to 3.

Not a circle in sight in the 2015 `\pi` day art. Try to figure out how up to 612,330 digits are encoded before reading about the method. `\pi`'s transcendental friends `\phi` and `e` are there too—golden and natural. Get it?

This year's `\pi` day is particularly special. The digits of `\pi` spell out a precise date and time when the date is written in the North American month-day-year convention: 3-14-15 9:26:53.

The art has been featured in Ana Swanson's Wonkblog article at the Washington Post—10 Stunning Images Show The Beauty Hidden in `\pi`.

We begin with a square and progressively divide it. At each stage, the next digit of `\pi` determines how many lines are used in the division. The thickness of the division lines can be attenuated at deeper levels to give the treemap some texture.

This method of encoding data is known as treemapping. Typically, it is used to encode hierarchical information, such as hard disk space usage, where the divisions correspond to the total size of files within directories.

This kind of treemap can be made from any number. Below I show 6-level maps for `\pi`, `\phi` (the golden ratio) and `e` (the base of the natural logarithm).
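For concreteness, here is a minimal Python sketch of this kind of digit-driven subdivision. It is not the code used to generate the art; it assumes that a digit `d` introduces `d` division lines (so `d+1` sub-rectangles) and that the split direction alternates between levels, which is consistent with the per-level digit counts in the table below.

```python
# Sketch: digit-driven treemap subdivision (assumption: digit d -> d division
# lines, i.e. d+1 sub-rectangles; split direction alternates between levels).
def treemap(digits, levels):
    """Return one list of (x, y, w, h) rectangles per level, starting with the unit square."""
    it = iter(digits)
    current = [(0.0, 0.0, 1.0, 1.0)]
    result = [current]
    for level in range(1, levels + 1):
        horizontal = level % 2 == 1              # alternate the split direction
        children = []
        for (x, y, w, h) in current:
            parts = int(next(it)) + 1            # next digit drives this rectangle
            for k in range(parts):
                if horizontal:
                    children.append((x + w * k / parts, y, w / parts, h))
                else:
                    children.append((x, y + h * k / parts, w, h / parts))
        result.append(children)
        current = children
    return result

pi_digits = "31415926535897932384"               # leading digits of pi, decimal point dropped
print([len(level) for level in treemap(pi_digits, 3)])   # [1, 4, 15, 98], matching n_i below
```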

The number of digits per level, `n_i`, and the total number of digits, `N_i`, in the treemaps for `\pi`, `\phi` and `e` are shown below for each level `i`.

| `i` | `n_i` (`\pi`) | `N_i` (`\pi`) | `n_i` (`\phi`) | `N_i` (`\phi`) | `n_i` (`e`) | `N_i` (`e`) |
|---:|---:|---:|---:|---:|---:|---:|
| 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 2 | 4 | 5 | 2 | 3 | 3 | 4 |
| 3 | 15 | 20 | 9 | 12 | 19 | 23 |
| 4 | 98 | 118 | 59 | 71 | 96 | 119 |
| 5 | 548 | 666 | 330 | 401 | 574 | 693 |
| 6 | 2962 | 3628 | 1857 | 2258 | 3162 | 3855 |
| 7 | 16616 | 20244 | 10041 | 12299 | 17541 | 21396 |
| 8 | 91225 | 111469 | | | | |
| 9 | 500861 | 612330 | | | | |

In all the treemaps above, the divisions were made uniformly for each rectangle. With uniform division, the lines that divide a shape are evenly spaced. With randomized division, the placement of lines is randomized, while still ensuring that lines do not coincide.

A multiplier, such as `\phi` (the golden ratio), can be used to control the division. In this case, the first division is made at 1/`\phi` (a 0.62/0.38 split) and the remaining rectangle (0.38) is further divided at 1/`\phi` (a 0.24/0.14 split).
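As a sketch of the arithmetic (assuming each successive cut takes the fraction 1/`\phi` of whatever remains), the cut positions for a given multiplier can be computed like this:

```python
# Sketch: non-uniform cut positions where each cut is placed at fraction m
# of the remaining interval (here m = 1/phi).
def multiplier_cuts(n_cuts, m):
    cuts, pos = [], 0.0
    for _ in range(n_cuts):
        pos += (1.0 - pos) * m
        cuts.append(pos)
    return cuts

phi = (1 + 5 ** 0.5) / 2
print([round(c, 2) for c in multiplier_cuts(2, 1 / phi)])   # [0.62, 0.85]
```

Two cuts split the unit interval into segments of roughly 0.62, 0.24 and 0.15, matching the proportions above.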

Using a non-uniform multiplier is one way to embed another number in the art.

When a multiplier like `\phi` is used, divisions at the top levels create very large rectangles. To attenuate this, the effect of the multiplier can be weighted by the level. Regardless of what multiplier is used, the first level is always uniformly divided. Division at subsequent levels incorporates more of the multiplier effect.
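One way to realize this weighting (an assumption on my part, not necessarily the scheme used for the art) is to interpolate each cut position between its uniform and its multiplier-based value, with a weight that grows with the level:

```python
# Sketch: blend uniform and multiplier-based cut positions; the weight w is 0 at
# the first level (purely uniform) and approaches 1 at the deepest level.
def blended_cuts(n_cuts, m, level, max_level):
    w = (level - 1) / max(1, max_level - 1)
    uniform = [(k + 1) / (n_cuts + 1) for k in range(n_cuts)]
    nonuniform, pos = [], 0.0
    for _ in range(n_cuts):
        pos += (1.0 - pos) * m
        nonuniform.append(pos)
    return [(1 - w) * u + w * v for u, v in zip(uniform, nonuniform)]
```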

The orientation of the division can be uniform (same for a layer and alternating across layers), alternating (alternating across and within a layer) or random. This modification has an effect only if the divisions are not uniform.

To emphasize the layers, a different line thickness can be used. When lines are drawn progressively thinner with each layer, detail is controlled and the map has more texture.

When all lines have the same thickness, it is harder to distinguish levels.

You could see this as a challenge! Below I show the treemaps for `\pi`, `\phi` and `e` with and without stroke modulation.

When displayed at a low resolution (the image below is 620 pixels across), shapes at higher levels appear darker because the distance between the lines within them is close to (or smaller than) a pixel. By matching the line thickness to the image resolution, you can control how dark the smallest divisions appear.

Adding color can make things better, or worse. Dropping color randomly, without respect for the level structure of the treemap, creates a mess.

We can rescue things by increasing the probability that a given rectangle will be made transparent—this will allow the color of the rectangle below to show through. Additionally, by drawing the layers in increasing order, smaller rectangles are drawn on top of bigger ones, giving a sense of recursive subdivision.

Because the color is assigned randomly, various instances of the treemap can be made. The maps below have the same proportion of colors and transparency (the same as the first image in the second row of the figure above) and vary only in the random seed used to pick colors.

The color assignments above were random. For each shape the probability of choosing a given color (transparent, white, yellow, red, blue) was the same.

Color choice for a shape can also be influenced by the color of neighbouring shapes. To do this, we need to create a graph that captures the adjacency relationships between all the shapes at each level. Below I show the first 4 levels of the `\pi` treemap and their adjacency graphs. In each graph, a node corresponds to a shape and an edge between nodes indicates that the shapes share part of an edge. Shapes that touch only at a corner are not considered adjacent.
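A minimal sketch of such an adjacency test for the axis-aligned rectangles of one level (shapes count as adjacent only if their shared border has non-zero length):

```python
# Sketch: adjacency graph for axis-aligned rectangles given as (x, y, w, h).
# Shapes touching only at a corner are not adjacent (zero-length shared border).
def overlap(a0, a1, b0, b1):
    return min(a1, b1) - max(a0, b0)             # length of the 1-D overlap

def adjacent(r, s, eps=1e-9):
    (rx, ry, rw, rh), (sx, sy, sw, sh) = r, s
    vertical = (abs(rx + rw - sx) < eps or abs(sx + sw - rx) < eps) \
        and overlap(ry, ry + rh, sy, sy + sh) > eps
    horizontal = (abs(ry + rh - sy) < eps or abs(sy + sh - ry) < eps) \
        and overlap(rx, rx + rw, sx, sx + sw) > eps
    return vertical or horizontal

def adjacency_graph(rects):
    return {i: {j for j in range(len(rects)) if j != i and adjacent(rects[i], rects[j])}
            for i in range(len(rects))}
```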

One way in which the graphs can be used is to attempt to color each layer using at most 4 colors. The 4 color theorem tells us that only 4 unique colors are required to color maps such as these in a way that no two neighbouring shapes have the same color.

It turns out that a full algorithm for coloring a map with only 4 colors is complicated, but reasonably simple options exist. For the maps here, I used the DSATUR (degree of saturation) approach.

The DSATUR algorithm works well, but does not guarantee a 4-color solution. It performs no backtracking. If you look carefully, one of the rectangles in the 4th layer (top right quadrant in the graph) required a 5th color (shown black).
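For reference, here is a compact sketch of greedy DSATUR coloring on such an adjacency graph (a sketch, not the exact code behind the art); because it is greedy and performs no backtracking, it can occasionally require a 5th color:

```python
# Sketch: greedy DSATUR coloring of a graph given as {node: set(neighbours)}.
# Repeatedly color the uncolored node whose neighbours use the most distinct
# colors (ties broken by degree); no backtracking is performed.
def dsatur(adj):
    colors = {}
    while len(colors) < len(adj):
        def saturation(v):
            return len({colors[u] for u in adj[v] if u in colors})
        v = max((u for u in adj if u not in colors),
                key=lambda u: (saturation(u), len(adj[u])))
        used = {colors[u] for u in adj[v] if u in colors}
        colors[v] = next(c for c in range(len(adj)) if c not in used)
    return colors

# A 4-cycle needs only 2 colors.
print(dsatur({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}))
```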

We examine two very common supervised machine learning methods: linear support vector machines (SVM) and k-nearest neighbors (kNN).

SVM is often less computationally demanding than kNN and is easier to interpret, but it can identify only a limited set of patterns. On the other hand, kNN can find very complex patterns, but its output is more challenging to interpret.

We illustrate SVM using a data set in which points fall into two categories, which SVM separates with a straight-line boundary and its margin. SVM can be tuned with a parameter that influences the width and location of the margin, permitting points to fall within the margin or on the wrong side of it. We then show how kNN relaxes explicit boundary definitions, such as the straight line in SVM, and how kNN too can be tuned to create more robust classification.
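As an illustration only (not the column's code), here is how the two classifiers might be compared in scikit-learn on synthetic two-class data; `C` controls how strongly margin violations are penalized in the SVM and `n_neighbors` controls the smoothing in kNN:

```python
# Sketch: linear SVM vs. k-nearest neighbours on synthetic two-class data.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = make_blobs(n_samples=200, centers=2, cluster_std=2.0, random_state=0)

svm = SVC(kernel="linear", C=1.0)            # straight-line boundary with a margin
knn = KNeighborsClassifier(n_neighbors=15)   # boundary follows the local data

for name, clf in [("SVM", svm), ("kNN", knn)]:
    print(name, round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```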

Bzdok, D., Krzywinski, M. & Altman, N. (2018) Points of Significance: Machine learning: supervised methods. *Nature Methods* **15**:5–6.

Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. *Nature Methods* **14**:1119–1120.

In a Nature graphics blog article, I present my process behind designing the stark black-and-white Nature 10 cover.

Nature 10, 18 December 2017

In this primer, we focus on essential ML principles: a modeling strategy that lets the data speak for themselves, to the extent possible.

The benefits of ML arise from its use of a large number of tuning parameters or weights, which control the algorithm's complexity and are estimated from the data using numerical optimization. Often ML algorithms are motivated by heuristics such as models of interacting neurons or natural evolution, even if the underlying mechanism of the biological system being studied is substantially different. The utility of ML algorithms is typically assessed empirically by how well extracted patterns generalize to new observations.

We present a data scenario in which we fit a model with 5 predictors using polynomials and show what to expect from ML when noise and sample size vary. We also demonstrate the consequences of excluding an important predictor or including a spurious one.
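As a toy illustration only (not the column's scenario), the effect of noise and sample size on a flexible polynomial fit can be seen in a few lines of NumPy; the ground-truth curve and the polynomial degree are arbitrary choices here:

```python
# Sketch: test error of a deliberately flexible polynomial fit as noise and
# sample size vary (toy example, not the column's data scenario).
import numpy as np

rng = np.random.default_rng(0)
truth = lambda x: 1 + 2 * x - 3 * x ** 2     # assumed ground-truth curve

for n, sigma in [(20, 0.1), (20, 1.0), (200, 1.0)]:
    x = rng.uniform(-1, 1, n)
    y = truth(x) + rng.normal(0, sigma, n)
    coef = np.polyfit(x, y, deg=5)           # more flexible than the truth requires
    grid = np.linspace(-1, 1, 200)
    mse = np.mean((np.polyval(coef, grid) - truth(grid)) ** 2)
    print(f"n={n:3d} sigma={sigma:.1f} test MSE={mse:.3f}")
```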

Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. *Nature Methods* **14**:1119–1120.

Just in time for the season, I've simulated a snow-pile of snowflakes based on the Gravner-Griffeath model.

The work is described in the wintertime tale In Silico Flurries: Computing a world of snow, co-authored with Jake Lever, on the Scientific American SA Blog.

Gravner, J. & Griffeath, D. (2007) Modeling Snow Crystal Growth II: A mesoscopic lattice map with plausible dynamics.

My illustration of the location of genes in the human genome that are implicated in disease appears in The Objects that Power the Global Economy, a book by Quartz.

We introduce two common ensemble methods: bagging and random forests. Both of these methods repeat a statistical analysis on many bootstrap samples to improve the accuracy of the predictor. Our column shows these methods as applied to Classification and Regression Trees.

For example, we can sample the space of values more finely when using bagging with regression trees because each bootstrap sample yields a tree with potentially different split boundaries.

Random forests generate a large number of trees by not only generating bootstrap samples but also randomly choosing which predictor variables are considered at each split in the tree.
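A hedged sketch (not the column's code) of how a single tree, bagged trees and a random forest might be compared in scikit-learn on a synthetic regression problem:

```python
# Sketch: single regression tree vs. bagging vs. random forest, compared by
# cross-validated R^2 on a synthetic regression data set.
from sklearn.datasets import make_friedman1
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import BaggingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = make_friedman1(n_samples=300, noise=1.0, random_state=0)

models = {
    "single tree": DecisionTreeRegressor(random_state=0),
    "bagged trees": BaggingRegressor(DecisionTreeRegressor(), n_estimators=100, random_state=0),
    "random forest": RandomForestRegressor(n_estimators=100, max_features="sqrt", random_state=0),
}
for name, model in models.items():
    print(name, round(cross_val_score(model, X, y, cv=5).mean(), 3))
```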

Krzywinski, M. & Altman, N. (2017) Points of Significance: Ensemble methods: bagging and random forests. *Nature Methods* **14**:933–934.

Krzywinski, M. & Altman, N. (2017) Points of Significance: Classification and regression trees. *Nature Methods* **14**:757–758.