
Hilbert Curve Art, Hilbertonians and Monkeys

I collaborated with Scientific American to create a data graphic for the September 2014 issue. The graphic compared the genomes of the Denisovan, bonobo, chimp and gorilla, showing how our own genomes are almost identical to the Denisovan's and closer to those of the bonobo and chimp than to that of the gorilla.


Here you'll find Hilbert curve art, an introduction to Hilbertonians (the creatures that live on the curve), an explanation of the Scientific American graphic and downloadable SVG/EPS Hilbert curve files.

Hilbert curve

There are wheels within wheels in this village and fires within fires!
— Arthur Miller (The Crucible)

The Hilbert curve is one of many space-filling curves. It is a mapping between one dimension (e.g. a line) and multiple dimensions (e.g. a square, a cube, etc). It's useful because it preserves locality—points that are nearby on the line are usually mapped onto nearby points on the curve.
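
To make the locality property concrete, here is a minimal Python sketch of one standard index-to-coordinate routine (an iterative bit-manipulation algorithm; the function name d2xy and the order-6 demo are illustrative choices, not code used to produce the figures on this page). Indices that are close on the line land on cells that are close in the square.

```python
def d2xy(order, d):
    """Map index d (0 .. 4**order - 1) along the curve to an (x, y) cell
    on the 2**order x 2**order grid."""
    x = y = 0
    t = d
    s = 1
    while s < 2**order:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                    # rotate/flip the quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# nearby indices land on nearby cells
print([d2xy(6, d) for d in (1000, 1001, 1002)])
```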

The Hilbert curve is a line that gives itself a hug.

It's a pretty strange mapping, to be sure. Although a point on a line maps uniquely onto the curve, this is not the case in reverse: at infinite order the curve intersects itself infinitely many times! This shouldn't be a surprise if you consider that the unit square has the same number of points as the unit line. Now that's the real surprise! So surprising, in fact, that it apparently destabilized the mind of Cantor, who made the initial discovery.

Brian Hayes has a great introduction (Crinkly Curves) to the Hilbert curve at American Scientist.

If manipulated so that its ends are adjacent, the Hilbert curve becomes the Moore curve.

constructing the hilbert curve

The order 1 curve is generated by dividing a square into quadrants and connecting the centers of the quadrants with three lines. Which three connections are made is arbitrary—different choices result in rotations of the curve.
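
Higher orders repeat this construction recursively: each quadrant is subdivided and traversed by a suitably rotated copy of the previous order. One compact way to sketch the construction is the standard Hilbert curve L-system (axiom A; rules A -> -BF+AFA+FB-, B -> +AF-BFB-FA+) driven by a simple turtle. This is an illustrative Python sketch, not the script used to draw the figures here.

```python
def hilbert_points(order):
    """Cell centers of the order-n Hilbert curve, generated from the
    classic L-system and a turtle that steps one cell per 'F'."""
    rules = {"A": "-BF+AFA+FB-", "B": "+AF-BFB-FA+"}
    s = "A"
    for _ in range(order):
        s = "".join(rules.get(c, c) for c in s)

    x, y = 0, 0
    dx, dy = 1, 0                      # initial heading: +x
    points = [(x, y)]
    for c in s:
        if c == "F":                   # draw forward one cell
            x, y = x + dx, y + dy
            points.append((x, y))
        elif c == "+":                 # turn left 90 degrees
            dx, dy = -dy, dx
        elif c == "-":                 # turn right 90 degrees
            dx, dy = dy, -dx
    return points

print(hilbert_points(1))       # the 4 points of the order 1 curve
print(len(hilbert_points(6)))  # 4096 points, matching 4^6 in the table below
```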

First 8 orders of the space-filling Hilbert curve. Each square is 144 x 144 pixels. (zoom)

The order 6 curve is the highest order whose structure can be discerned at this figure resolution, though just barely. The length of this curve is about 64 times the width of the square, or about 9,216 pixels. That's tight packing.

By order 7 the structure cannot be discerned in the 620 pixel wide image (each square is 144 px wide). By order 8 the curve has 65,536 points, which exceeds the number of pixels in its 144 x 144 square in the figure. A square of 256 x 256 pixels would be required to show all the points without downsampling.

Two order 10 curves have 1,048,576 points each; together they would cover roughly all the pixels of an average monitor (1920 x 1200 pixels).

A curve of order 33 has `7.38 * 10^19` points and, if drawn as a square with sides of average body height, would have points that are an atom's distance (`10^{-10}` m) from one another.
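
These back-of-the-envelope figures are easy to check; the snippet below takes 1.7 m as a stand-in for average body height (an assumption on my part).

```python
# quick checks of the numbers quoted above (144 px squares)
order6_length_px = (2**6 - 2**-6) * 144   # ~9,214 px, i.e. about 64 square widths
order8_points    = 4**8                   # 65,536 = 256 x 256
order10_points   = 4**10                  # 1,048,576 per curve
order33_points   = 4**33                  # ~7.38e19
atom_spacing_m   = 1.7 / 2**33            # ~2e-10 m between neighboring points
print(order6_length_px, order8_points, order10_points, order33_points, atom_spacing_m)
```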

mapping the line onto the square

By mapping the familiar rainbow onto the curve you can see how higher order curves "crinkle" (to borrow Brian's term) around the square.
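
The rainbow mapping itself takes only a few lines of matplotlib, reusing the hilbert_points() sketch above and coloring each segment by its position along the line (an illustrative sketch, not the code that produced the figure).

```python
import matplotlib.pyplot as plt
from matplotlib import cm

pts = hilbert_points(6)                  # from the construction sketch above
n = len(pts) - 1                         # number of segments
for i in range(n):
    (x0, y0), (x1, y1) = pts[i], pts[i + 1]
    plt.plot([x0, x1], [y0, y1], color=cm.rainbow(i / n))
plt.gca().set_aspect("equal")
plt.axis("off")
plt.savefig("hilbert-rainbow.png", dpi=150)
```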

First 8 orders of the space-filling Hilbert curve, with the rainbow mapped along the line. Each square is 144 x 144 pixels. (zoom)

properties of the first 24 orders of the Hilbert curve

order points segments length
`n` `4^n` `4^n-1` `2^n-2^{-n}`
1 4 3 1.5
2 16 15 3.75
3 64 63 7.875
4 256 255 15.9375
5 1,024 1,023 31.96875
6 4,096 4,095 63.984375
7 16,384 16,383 127.9921875
8 65,536 65,535 255.99609375
9 262,144 262,143 511.998046875
10 1,048,576 1,048,575 1023.9990234375
11 4,194,304 4,194,303 2047.99951171875
12 16,777,216 16,777,215 4095.99975585938
13 67,108,864 67,108,863 8191.99987792969
14 268,435,456 268,435,455 16383.9999389648
15 1,073,741,824 1,073,741,823 32767.9999694824
16 4,294,967,296 4,294,967,295 65535.9999847412
17 17,179,869,184 17,179,869,183 131071.999992371
18 68,719,476,736 68,719,476,735 262143.999996185
19 274,877,906,944 274,877,906,943 524287.999998093
20 1,099,511,627,776 1,099,511,627,775 1048575.99999905
21 4,398,046,511,104 4,398,046,511,103 2097151.99999952
22 17,592,186,044,416 17,592,186,044,415 4194303.99999976
23 70,368,744,177,664 70,368,744,177,663 8388607.99999988
24 281,474,976,710,656 281,474,976,710,655 16777215.9999999
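
The rows follow directly from the closed forms in the header, as a one-loop check confirms:

```python
# points = 4^n, segments = 4^n - 1, length = 2^n - 2^(-n) (side of square = 1)
for n in range(1, 25):
    print(n, 4**n, 4**n - 1, 2**n - 2**-n)
```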

You can download the basic curve shapes for orders 1 to 10 and experiment yourself. Both square and circular forms are available.


news + thoughts

Machine learning: supervised methods (SVM & kNN)

Thu 18-01-2018
Supervised learning algorithms extract general principles from observed examples guided by a specific prediction objective.

We examine two very common supervised machine learning methods: linear support vector machines (SVM) and k-nearest neighbors (kNN).

SVM is often less computationally demanding than kNN and is easier to interpret, but it can identify only a limited set of patterns. On the other hand, kNN can find very complex patterns, but its output is more challenging to interpret.

Nature Methods Points of Significance column: Machine learning: supervised methods (SVM & kNN). (read)

We illustrate SVM using a data set in which points fall into two categories, which are separated in SVM by a straight line "margin". SVM can be tuned using a parameter that influences the width and location of the margin, permitting points to fall within the margin or on the wrong side of the margin. We then show how kNN relaxes explicit boundary definitions, such as the straight line in SVM, and how kNN too can be tuned to create more robust classification.
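
For readers who want to try the two methods, here is a minimal scikit-learn sketch on a synthetic two-class data set; this is not the column's code, and the parameter values (C for the SVM margin, n_neighbors for kNN) are only examples of the tuning described above.

```python
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# toy two-class data set
X, y = make_blobs(n_samples=200, centers=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

svm = SVC(kernel="linear", C=1.0).fit(X_train, y_train)          # margin tuned by C
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)  # complexity tuned by k

print("linear SVM accuracy:", svm.score(X_test, y_test))
print("kNN accuracy:       ", knn.score(X_test, y_test))
```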

Bzdok, D., Krzywinski, M. & Altman, N. (2018) Points of Significance: Machine learning: supervised methods. Nature Methods 15:5–6.

Background reading

Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. Nature Methods 14:1119–1120.

...more about the Points of Significance column

Human Versus Machine

Tue 16-01-2018
Balancing subjective design with objective optimization.

In a Nature graphics blog article, I present my process behind designing the stark black-and-white Nature 10 cover.

Nature 10, 18 December 2017

Machine learning: a primer

Thu 18-01-2018
Machine learning extracts patterns from data without explicit instructions.

In this primer, we focus on essential ML principles: a modeling strategy that lets the data speak for themselves, to the extent possible.

The benefits of ML arise from its use of a large number of tuning parameters or weights, which control the algorithm’s complexity and are estimated from the data using numerical optimization. Often ML algorithms are motivated by heuristics such as models of interacting neurons or natural evolution—even if the underlying mechanism of the biological system being studied is substantially different. The utility of ML algorithms is typically assessed empirically by how well extracted patterns generalize to new observations.

Nature Methods Points of Significance column: Machine learning: a primer. (read)

We present a data scenario in which we fit a model with 5 predictors using polynomials and show what to expect from ML when noise and sample size vary. We also demonstrate the consequences of excluding an important predictor or including a spurious one.
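
The flavor of that experiment can be reproduced in a few lines; the sketch below is deliberately simplified to a single predictor with polynomial terms and synthetic noise (my own toy setup, not the column's five-predictor scenario).

```python
import numpy as np

rng = np.random.default_rng(0)
n, noise = 50, 0.3                                  # sample size and noise level to vary
x = rng.uniform(-1, 1, 2 * n)
y = np.sin(3 * x) + rng.normal(0, noise, 2 * n)     # assumed true signal plus noise
x_train, y_train, x_test, y_test = x[:n], y[:n], x[n:], y[n:]

for degree in (1, 3, 9, 15):                        # model complexity (tuning parameter)
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}  test MSE {mse:.3f}")
```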

Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. Nature Methods 14:1119–1120.

...more about the Points of Significance column

Snowflake simulation

Tue 16-01-2018
Symmetric, beautiful and unique.

Just in time for the season, I've simulated a snow-pile of snowflakes based on the Gravner-Griffeath model.

A few of the beautiful snowflakes generated by the Gravner-Griffeath model. (explore)

The work is described in In Silico Flurries: Computing a world of snow, a wintertime tale co-authored with Jake Lever on the Scientific American SA Blog.

Gravner, J. & Griffeath, D. (2007) Modeling Snow Crystal Growth II: A mesoscopic lattice map with plausible dynamics.

Genes that make us sick

Wed 22-11-2017
Where disease hides in the genome.

My illustration of the location of genes in the human genome that are implicated in disease appears in The Objects that Power the Global Economy, a book by Quartz.

The location of genes implicated in disease in the human genome, shown here as a spiral. (more...)

Ensemble methods: Bagging and random forests

Wed 22-11-2017
Many heads are better than one.

We introduce two common ensemble methods: bagging and random forests. Both of these methods repeat a statistical analysis on bootstrap samples to improve the accuracy of the predictor. Our column shows these methods as applied to Classification and Regression Trees.

Nature Methods Points of Significance column: Ensemble methods: Bagging and random forests. (read)

For example, we can sample the space of values more finely when using bagging with regression trees because each sample has potentially different boundaries at which the tree splits.

Random forests generate a large number of trees by not only generating bootstrap samples but also randomly choosing which predictor variables are considered at each split in the tree.
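
As a hedged illustration (not the column's code), here is a scikit-learn sketch contrasting a single regression tree, bagged trees and a random forest on synthetic data.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=300, n_features=10, noise=10, random_state=0)

models = {
    "single tree":   DecisionTreeRegressor(random_state=0),
    "bagged trees":  BaggingRegressor(DecisionTreeRegressor(), n_estimators=100,
                                      random_state=0),
    "random forest": RandomForestRegressor(n_estimators=100, max_features="sqrt",
                                           random_state=0),
}
for name, model in models.items():
    # 5-fold cross-validated R^2; ensembles of bootstrapped trees should fare better
    print(f"{name:14s} mean R^2 = {cross_val_score(model, X, y, cv=5).mean():.2f}")
```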

Krzywinski, M. & Altman, N. (2017) Points of Significance: Ensemble methods: bagging and random forests. Nature Methods 14:933–934.

Background reading

Krzywinski, M. & Altman, N. (2017) Points of Significance: Classification and regression trees. Nature Methods 14:757–758.

...more about the Points of Significance column