






`\pi` Day 2015 Art Posters


`\pi` art series: 2017 `\pi` day, 2016 `\pi` approximation day, 2016 `\pi` day, 2015 `\pi` day, 2014 `\pi` approximation day, 2014 `\pi` day, 2013 `\pi` day, and circular `\pi` art.

On March 14th celebrate `\pi` Day. Hug `\pi`—find a way to do it.

Those who favour `\tau=2\pi` will have to postpone celebrations until July 26th. That's what you get for thinking that `\pi` is wrong.

If you're not into details, you may opt to party on July 22nd, which is `\pi` approximation day (`\pi` ≈ 22/7). It's 20% more accurate than the official `\pi` day!

Finally, if you believe that `\pi = 3`, you should read why `\pi` is not equal to 3.

All art posters are available for purchase.
I take custom requests.

Not a circle in sight in the 2015 `\pi` day art. Try to figure out how up to 612,330 digits are encoded before reading about the method. `\pi`'s irrational friends `\phi` and `e` are there too—golden and natural. Get it?

This year's `\pi` day is particularly special. The first ten digits of `\pi` spell out a precise date and time when the date is written in the North American month-day-year convention: 3-14-15 9:26:53.

The art has been featured in Ana Swanson's Wonkblog article at the Washington Post—10 Stunning Images Show The Beauty Hidden in `\pi`.

We begin with a square and progressively divide it. At each stage, the next digit of `\pi` determines how many lines are used in the division. The thickness of the division lines can be attenuated at deeper levels to give the treemap some texture.
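Here is a minimal sketch (not the author's code) of the division scheme, assuming that a shape receiving digit `d` is cut by `d` evenly spaced lines into `d+1` children and that the cut orientation alternates between levels:

def divide(rect, digit, level):
    """Cut rect = (x, y, w, h) with `digit` evenly spaced lines into digit+1 children."""
    x, y, w, h = rect
    n = digit + 1
    if level % 2 == 0:                                         # even level: vertical cuts
        return [(x + i * w / n, y, w / n, h) for i in range(n)]
    return [(x, y + i * h / n, w, h / n) for i in range(n)]    # odd level: horizontal cuts

def treemap(digits, levels):
    """Return the list of rectangles in each layer for a digit sequence."""
    layers, layer, pos = [], [(0.0, 0.0, 1.0, 1.0)], 0         # start with the unit square
    for level in range(levels):
        children = []
        for rect in layer:
            children.extend(divide(rect, digits[pos], level))
            pos += 1
        layers.append(children)
        layer = children
    return layers

pi20 = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9, 3, 2, 3, 8, 4]   # first 20 digits of pi
print([len(layer) for layer in treemap(pi20, 3)])              # [4, 15, 98]

Each layer's shape count equals the number of digits consumed by the next level, which is why 4, 15 and 98 reappear as `n_i` values in the table below.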


Figure: Representing a number using a tree map. Each digit of the number is used to successively divide a shape, such as a square.

This method of encoding data is known as treemapping. Typically, it is used to encode hierarchical information, such as hard disk space usage, where the divisions correspond to the total size of files within directories.


Figure: At each level of the tree map, more digits are encoded. Shown here are tree maps for `pi` for the first 6 levels of division.

This kind of treemap can be made from any number. Below I show 6 level maps for `pi`, `phi` (Golden ratio) and `e` (base of natural logarithm).


Figure: Six-level tree maps for `pi`, `phi` (Golden ratio) and `e` (base of natural logarithm).

The number of digits per level, `n_i`, and the total number of digits, `N_i`, in the tree maps for `\pi`, `\phi` and `e` are shown below for each level `i`.

           PI             PHI              e
i     n_i    N_i      n_i    N_i      n_i    N_i
1       1      1        1      1        1      1
2       4      5        2      3        3      4
3      15     20        9     12       19     23
4      98    118       59     71       96    119
5     548    666      330    401      574    693
6    2962   3628     1857   2258     3162   3855
7   16616  20244    10041  12299    17541  21396
8   91225 111469
9  500861 612330
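The counts follow directly from the division rule: a shape that receives digit `d` is cut into `d+1` children, each of which consumes one digit at the next level. A short sketch (assuming the mpmath library is available for generating the digits) reproduces the `n_i` column for `\pi`:

from mpmath import mp

def digits_of(name, n):
    """First n decimal digits of a named mpmath constant, e.g. 'pi', 'phi', 'e'."""
    mp.dps = n + 10                                    # working precision with guard digits
    value = +getattr(mp, name)                         # evaluate the constant
    return [int(c) for c in mp.nstr(value, n + 5).replace('.', '')[:n]]

def digits_per_level(digits, levels):
    """Digits consumed at each level of the treemap: n_1, n_2, ..."""
    counts, used, shapes = [], 0, 1                    # a single square at the top
    for _ in range(levels):
        counts.append(shapes)
        level_digits = digits[used:used + shapes]
        used += shapes
        shapes = sum(d + 1 for d in level_digits)      # digit d -> d+1 children
    return counts

print(digits_per_level(digits_of('pi', 5000), 6))      # [1, 4, 15, 98, 548, 2962]

Swapping 'pi' for 'phi' or 'e' reproduces the other columns.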

Dividing the map

In all the treemaps above, the divisions were made uniformly for each rectangle. With uniform division, the lines that divide a shape are evenly spaced. With randomized division, the placement of lines is randomized, while still ensuring that lines do not coincide.

A multiplier, such as `\phi` (the Golden Ratio), can be used to control the division. In this case, the first division is made at 1/`\phi` (a 0.62/0.38 split) and the remaining rectangle (0.38) is further divided at 1/`\phi` (a 0.24/0.14 split).
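As a sketch of this rule (an interpretation, not necessarily the exact code used), each successive piece is taken as 1/`\phi` of whatever remains:

PHI = (1 + 5 ** 0.5) / 2                               # the Golden Ratio, ~1.618

def golden_splits(n_cuts):
    """Widths of the n_cuts+1 pieces of a unit interval, cutting the remainder at 1/phi each time."""
    widths, remaining = [], 1.0
    for _ in range(n_cuts):
        piece = remaining / PHI
        widths.append(piece)
        remaining -= piece
    widths.append(remaining)
    return widths

print([round(w, 2) for w in golden_splits(2)])         # [0.62, 0.24, 0.15]

This corresponds to the 0.62/0.38 and then 0.24/0.14 splits quoted above (the last value differs only by rounding).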


Figure: The divisions of each shape can be influenced by another number and the level at which the division is performed.

Using a non-uniform multiplier is one way to embed another number in the art.

When a multiplier like `phi` is used, divisions at the top levels create very large rectangles. To attenuate this, the effect of the multiplier can be weighted by the level. Regardless of what multiplier is used, the first level is always uniformly divided. Division at subsequent levels incorporates more of the multiplier effect.
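The exact weighting is not spelled out here; one plausible reading, offered only as a sketch, is to blend the uniform cut position `u` with the multiplier-based position `1/\phi` using a weight `w_i` that grows with the level `i`, for example `f_i = (1 - w_i) u + w_i / \phi` with `w_i = (i - 1)/(L - 1)` over `L` levels, so that the first level is purely uniform (`w_1 = 0`) and the deepest level feels the full multiplier effect (`w_L = 1`).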

The orientation of the division can be uniform (same for a layer and alternating across layers), alternating (alternating across and within a layer) or random. This modification has an effect only if the divisions are not uniform.


Figure: The divisions of each shape can be influenced by another number and the level at which the division is performed.

Adjusting line thickness

To emphasize the layers, a different line thickness can be used. When lines are drawn progressively thinner with each layer, detail is controlled and the map has more texture.

When all lines have the same thickness, it is harder to distinguish levels.
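One simple schedule, offered as an assumption rather than the settings used for these posters, is a geometric falloff of line width with level:

def line_width(level, base=4.0, falloff=0.5):
    """Illustrative line-width schedule: each level's lines are `falloff` times
    as thick as the previous level's (base width and falloff are arbitrary)."""
    return base * falloff ** (level - 1)

print([line_width(i) for i in range(1, 6)])            # [4.0, 2.0, 1.0, 0.5, 0.25]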


Figure: The divisions of each shape can be influenced by another number and the level at which the division is performed.

You could see this as a challenge! Below I show the treemaps for `pi`, `phi` and `e` with and without stroke modulation.


Figure: The divisions of each shape can be influenced by another number and the level at which the division is performed.

When displayed at a low resolution (the image below is 620 pixels across), shapes at higher levels appear darker because the distance between the lines within them is close to, or smaller than, a pixel. By matching the line thickness to the image resolution, you can control how dark the smallest divisions appear.


Figure: The divisions of each shape can be influenced by another number and the level at which the division is performed.

Adding color

Adding color can make things better, or worse. Dropping color randomly, without respect for the level structure of the treemap, creates a mess.

We can rescue things by increasing the probability that a given rectangle will be made transparent—this allows the color of the rectangle below to show through. Additionally, by drawing the layers in order of increasing depth, smaller rectangles are drawn on top of bigger ones, giving a sense of recursive subdivision.
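A minimal sketch of this random assignment; the 20:1:1:1:1 transparent:white:yellow:red:blue weighting quoted in the figure caption below is used as the default:

import random

COLORS  = [None, "white", "yellow", "red", "blue"]     # None = transparent
WEIGHTS = [20, 1, 1, 1, 1]                             # favour transparency

def assign_colors(layers, seed=0):
    """Pick a color (or transparency) for every rectangle, layer by layer.
    Layers are assumed to be drawn from the largest shapes to the smallest,
    so deeper, smaller shapes paint over the ones beneath them."""
    rng = random.Random(seed)
    return [[rng.choices(COLORS, weights=WEIGHTS)[0] for _ in layer] for layer in layers]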


Figure: The divisions of each shape can be influenced by another number and the level at which the division is performed.

Because the color is assigned randomly, different instances of the treemap can be made. The maps below have the same proportion of colors and transparency (the same as the first image in the second row of the figure above) and differ only in the random seed used to pick the colors.


Figure: Different instances of 5 level `pi` treemaps. The proportion of transparent, white, yellow, red and blue shapes is 20:1:1:1:1.

Coloring using an adjacency graph

The color assignments above were random: every shape drew its color (transparent, white, yellow, red, blue) from the same probabilities, independently of its neighbours.

Color choice for a shape can also be influenced by the colors of neighbouring shapes. To do this, we need a graph that captures the adjacency relationships between all the shapes at each level. Below I show the first 4 levels of the `pi` treemap and their adjacency graphs. In each graph, a node corresponds to a shape and an edge between two nodes indicates that the shapes share part of an edge. Shapes that touch only at a corner are not considered adjacent.
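A sketch of how such a graph can be built for the axis-aligned rectangles produced by the earlier division sketch (a naive pairwise test; `eps` absorbs floating-point error):

def touching(a, b, eps=1e-9):
    """True if rectangles a and b, given as (x, y, w, h), share a boundary
    segment of positive length; touching only at a corner does not count."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    if abs(ax + aw - bx) < eps or abs(bx + bw - ax) < eps:       # vertical sides meet
        if min(ay + ah, by + bh) - max(ay, by) > eps:
            return True
    if abs(ay + ah - by) < eps or abs(by + bh - ay) < eps:       # horizontal sides meet
        if min(ax + aw, bx + bw) - max(ax, bx) > eps:
            return True
    return False

def adjacency_graph(layer):
    """Set of (i, j) edges between all pairs of touching rectangles in one layer."""
    return {(i, j) for i in range(len(layer)) for j in range(i + 1, len(layer))
            if touching(layer[i], layer[j])}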


Figure: The first 4 levels of the `pi` treemap and their adjacency graphs. Each node corresponds to a shape; edges connect shapes that share part of an edge.

One way in which the graphs can be used is to attempt to color each layer using at most 4 colors. The 4 color theorem tells us that only 4 unique colors are required to color maps such as these in a way that no two neighbouring shapes have the same color.

It turns out that an algorithm guaranteed to color a map with only 4 colors is complicated, but reasonably simple heuristics exist. For the maps here, I used the DSATUR (maximum degree of saturation) approach.
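DSATUR is a greedy heuristic: at each step it colors the uncolored node that sees the largest number of distinct colors among its neighbours, giving it the smallest color not already taken. A compact sketch (not the author's implementation) that works on the `adjacency_graph` output above:

def dsatur_coloring(n, edges):
    """Color nodes 0..n-1 greedily by decreasing saturation, ties broken by degree.
    Returns a list mapping each node to a color index (0, 1, 2, ...)."""
    neighbours = [set() for _ in range(n)]
    for i, j in edges:
        neighbours[i].add(j)
        neighbours[j].add(i)

    colors = [None] * n
    def saturation(v):                                  # distinct colors among v's neighbours
        return len({colors[u] for u in neighbours[v] if colors[u] is not None})

    for _ in range(n):
        v = max((u for u in range(n) if colors[u] is None),
                key=lambda u: (saturation(u), len(neighbours[u])))
        used = {colors[u] for u in neighbours[v]}
        colors[v] = next(c for c in range(n + 1) if c not in used)
    return colors

Like the coloring used for these maps, this greedy version does no backtracking, which is why a fifth color can occasionally be needed.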


Figure: Treemap layers colored using the DSATUR algorithm.

The DSATUR algorithm works well, but does not guarantee a 4-color solution. It performs no backtracking. If you look carefully, one of the rectangles in the 4th layer (top right quadrant in the graph) required a 5th color (shown black).


news + thoughts

Snowflake simulation

Tue 14-11-2017
Symmetric, beautiful and unique.

Just in time for the season, I've simulated a snow-pile of snowflakes based on the Gravner-Griffeath model.

Figure: A few of the beautiful snowflakes generated by the Gravner-Griffeath model.

Gravner, J. & Griffeath, D. (2007) Modeling Snow Crystal Growth II: A mesoscopic lattice map with plausible dynamics.

Genes that make us sick

Thu 02-11-2017
Where disease hides in the genome.

My illustration of the location of genes in the human genome that are implicated in disease appears in The Objects that Power the Global Economy, a book by Quartz.

Figure: The location of genes implicated in disease in the human genome, shown here as a spiral.

Ensemble methods: Bagging and random forests

Mon 16-10-2017
Many heads are better than one.

We introduce two common ensemble methods: bagging and random forests. Both methods repeat a statistical analysis on many bootstrap samples to improve the accuracy of the predictor. Our column shows these methods as applied to Classification and Regression Trees.

Figure: Nature Methods Points of Significance column: Ensemble methods: Bagging and random forests.

For example, we can sample the space of values more finely when using bagging with regression trees because each sample has potentially different boundaries at which the tree splits.

Random forests generate a large number of trees by not only generating bootstrap samples but also randomly choosing which predictor variables are considered at each split in the tree.
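As a rough illustration (not the column's code, and with arbitrary data and settings), both approaches are available in scikit-learn:

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Bagging: each tree is fit to a bootstrap sample; all predictors are candidates at every split.
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=100, random_state=0)

# Random forest: bootstrap samples plus a random subset of predictors at each split.
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)

for name, model in [("bagging", bagging), ("random forest", forest)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())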

Krzywinski, M. & Altman, N. (2017) Points of Significance: Ensemble methods: bagging and random forests. Nature Methods 14:933–934.

Background reading

Krzywinski, M. & Altman, N. (2017) Points of Significance: Classification and regression trees. Nature Methods 14:757–758.

...more about the Points of Significance column

Classification and regression trees

Mon 16-10-2017
Decision trees are a powerful but simple prediction method.

Decision trees classify data by splitting it along the predictor axes into partitions with homogeneous values of the dependent variable. Unlike logistic or linear regression, CART does not develop a prediction equation. Instead, data are predicted by a series of binary decisions based on the boundaries of the splits. Decision trees are very effective and the resulting rules are readily interpreted.

Trees can be built using different metrics that measure how well the splits divide up the data classes: Gini index, entropy or misclassification error.

Figure: Nature Methods Points of Significance column: Classification and regression trees.

When the dependent variable is quantitative rather than categorical, regression trees are used. Here, the data are still split along the predictor axes, but now the predicted value is the average of the dependent variable within the split boundaries. Tree growth can be controlled using the complexity parameter, a measure of the relative improvement of each new split.
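As a rough illustration (not the column's code, with arbitrary data and settings), a classification tree and a cost-complexity-pruned regression tree in scikit-learn:

from sklearn.datasets import make_classification, make_regression
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

Xc, yc = make_classification(n_samples=300, n_features=5, random_state=1)
clf = DecisionTreeClassifier(criterion="gini", max_depth=3).fit(Xc, yc)   # or criterion="entropy"

Xr, yr = make_regression(n_samples=300, n_features=5, noise=10, random_state=1)
# ccp_alpha acts as a complexity parameter: larger values prune away weaker splits.
reg = DecisionTreeRegressor(ccp_alpha=5.0).fit(Xr, yr)

print(clf.get_depth(), reg.get_n_leaves())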

Individual trees can be very sensitive to minor changes in the data and even better prediction can be achieved by exploiting this variability. Using ensemble methods, we can grow multiple trees from the same data.

Krzywinski, M. & Altman, N. (2017) Points of Significance: Classification and regression trees. Nature Methods 14:757–758.

Background reading

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. Nature Methods 13:541–542.

Altman, N. & Krzywinski, M. (2015) Points of Significance: Multiple linear regression. Nature Methods 12:1103–1104.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Classifier evaluation. Nature Methods 13:603–604.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Model selection and overfitting. Nature Methods 13:703–704.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Regularization. Nature Methods 13:803–804.

...more about the Points of Significance column

Personalized Oncogenomics Program 5 Year Anniversary Art

Wed 26-07-2017

The artwork was created in collaboration with my colleagues at the Genome Sciences Center to celebrate the 5 year anniversary of the Personalized Oncogenomics Program (POG).

Figure: 5 Years of the Personalized Oncogenomics Program at Canada's Michael Smith Genome Sciences Centre. The poster shows 545 cancer cases. (left) Cases ordered chronologically by case number. (right) Cases grouped by diagnosis (tissue type) and then by similarity within each group.

The Personalized Oncogenomics Program (POG) is a collaborative research study that brings together many BC Cancer Agency oncologists, pathologists and other clinicians with Canada's Michael Smith Genome Sciences Centre, with support from the BC Cancer Foundation.

The aim of the program is to sequence, analyze and compare the genome of each patient's cancer—the entire DNA and RNA inside tumor cells—in order to understand what is enabling it to grow and to identify less toxic and more effective treatment options.

Principal component analysis

Thu 06-07-2017
PCA helps you interpret your data, but it will not always find the important patterns.

Principal component analysis (PCA) simplifies the complexity in high-dimensional data by reducing its number of dimensions.

Figure: Nature Methods Points of Significance column: Principal component analysis.

To retain trends and patterns in the reduced representation, PCA finds linear combinations of the original dimensions that maximize the variance of the projection of the data.

PCA is helpful in visualizing high-dimensional data, and scatter plots based on the first two principal components can reveal clusters.
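As a minimal illustration (not the column's code), projecting a familiar dataset onto its first two principal components with scikit-learn:

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data                        # 150 samples in 4 dimensions
pca = PCA(n_components=2).fit(X)
X2 = pca.transform(X)                       # 150 x 2 coordinates, ready for a scatter plot
print(pca.explained_variance_ratio_)        # share of variance captured by each component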

Altman, N. & Krzywinski, M. (2017) Points of Significance: Principal component analysis. Nature Methods 14:641–642.

Background reading

Altman, N. & Krzywinski, M. (2017) Points of Significance: Clustering. Nature Methods 14:545–546.

...more about the Points of Significance column