Martin Krzywinski / Genome Sciences Center / mkweb.bcgsc.ca


Like paths? Got your lines twisted in a bunch?
Take a look at my 2014 Pi Day art that folds Pi.

Hilbert Curve Art, Hilbertonians and Monkeys

I collaborated with Scientific American to create a data graphic for the September 2014 issue. The graphic compared the genomes of the Denisovan, bonobo, chimp and gorilla, showing how our own genomes are almost identical to the Denisovan and closer to that of the bonobo and chimp than the gorilla.


Here you'll find Hilbert curve art, an introduction to Hilbertonians, the creatures that live on the curve, an explanation of the Scientific American graphic and downloadable SVG/EPS Hilbert curve files.

Hilbert curve

There are wheels within wheels in this village and fires within fires!
— Arthur Miller (The Crucible)

The Hilbert curve is one of many space-filling curves. It is a mapping between one dimension (e.g. a line) and multiple dimensions (e.g. a square, a cube, etc). It's useful because it preserves locality—points that are nearby on the line are usually mapped onto nearby points on the curve.

The Hilbert curve is a line that gives itself a hug.

It's a pretty strange mapping, to be sure. Although a point on the line maps uniquely onto the curve, the reverse is not true: at infinite order the curve intersects itself infinitely many times. This shouldn't be a surprise if you consider that the unit square has the same number of points as the unit line. That is the real surprise, so surprising, in fact, that the discovery is said to have destabilized the mind of Cantor, who first made it.

Brian Hayes has a great introduction (Crinkly Curves) to the Hilbert curve at American Scientist.

If manipulated so that its ends are adjacent, the Hilbert curve becomes the Moore curve.

constructing the hilbert curve

The order 1 curve is generated by dividing a square into quadrants and connecting the centers of the quadrants with three lines. Which three connections are made is arbitrary—different choices result in rotations of the curve.
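The recursion behind higher orders can be sketched in a few lines of Python. This is a minimal sketch, not the code used to draw the figures here: each order-`n` curve is assembled from four order-`n-1` copies, with the lower-left copy transposed and the lower-right copy anti-transposed so that consecutive quadrant endpoints stay adjacent.

```python
def hilbert_points(order):
    """Vertices of the order-n Hilbert curve on a 2^n x 2^n grid.

    Order n is built from four order n-1 copies placed in the four
    quadrants; the lower-left copy is transposed and the lower-right
    copy anti-transposed so the quadrant endpoints remain adjacent.
    """
    if order == 0:
        return [(0, 0)]
    prev = hilbert_points(order - 1)
    s = 2 ** (order - 1)  # side length of each quadrant
    curve = []
    curve += [(y, x) for x, y in prev]                      # lower-left (transposed)
    curve += [(x, y + s) for x, y in prev]                  # upper-left
    curve += [(x + s, y + s) for x, y in prev]              # upper-right
    curve += [(2 * s - 1 - y, s - 1 - x) for x, y in prev]  # lower-right (anti-transposed)
    return curve

# Order 1 gives the four quadrant centers joined by three segments:
# hilbert_points(1) == [(0, 0), (0, 1), (1, 1), (1, 0)]
```

Picking a different pair of copies to reflect gives one of the rotated variants mentioned above.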

First 8 orders of the space-filling Hilbert curve. Each square is 144 x 144 pixels. (zoom)

The order 6 curve is the highest order whose structure can be discerned at this figure resolution. Though just barely. The length of this curve is about 64 times the width of the square, so about 9,216 pixels! That's tight packing.

By order 7 the structure cannot be discerned in the 620-pixel-wide image (each square is 144 px wide). By order 8 the curve has 65,536 points, which exceeds the number of pixels in its 144 × 144 square in the figure. A 256 × 256 square would be required to show all the points without downsampling.

Two order 10 curves have 1,048,576 points each and would approximately map onto all the pixels on an average monitor (1920 x 1200 pixels).

A curve of order 33 has `7.38 * 10^19` points and if drawn as a square of average body height would have points that are an atom's distance from one another (`10^{-10}` m).

mapping the line onto the square

By mapping the familiar rainbow onto the curve you can see how higher order curves "crinkle" (to borrow Bryan's term) around the square.
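The line-to-square mapping can be computed directly. Below is the standard iterative distance-to-coordinate conversion (adapted from the widely circulated public pseudocode, not code from this site): given a distance `d` along the order-`n` curve it returns the `(x, y)` cell on the `2^n × 2^n` grid, and consecutive distances always land on adjacent cells, which is the locality property described earlier.

```python
def d2xy(order, d):
    """Map distance d along the order-n Hilbert curve to (x, y)
    on the 2^order x 2^order grid (standard iterative conversion)."""
    x = y = 0
    s = 1
    while s < 2 ** order:
        rx = 1 & (d // 2)        # which half of the current 2x2 block
        ry = 1 & (d ^ rx)
        if ry == 0:              # rotate/reflect the lower quadrants
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx              # shift into the correct quadrant
        y += s * ry
        d //= 4
        s *= 2
    return x, y
```

Coloring the point at `d2xy(n, d)` by `d` along a rainbow palette reproduces the figure below.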

First 8 orders of the space-filling Hilbert curve. Each square is 144 x 144 pixels. (zoom)

properties of the first 24 orders of the Hilbert curve

order  points  segments  length
`n`  `4^n`  `4^n-1`  `2^n-2^{-n}`
1 4 3 1.5
2 16 15 3.75
3 64 63 7.875
4 256 255 15.9375
5 1,024 1,023 31.96875
6 4,096 4,095 63.984375
7 16,384 16,383 127.9921875
8 65,536 65,535 255.99609375
9 262,144 262,143 511.998046875
10 1,048,576 1,048,575 1023.9990234375
11 4,194,304 4,194,303 2047.99951171875
12 16,777,216 16,777,215 4095.99975585938
13 67,108,864 67,108,863 8191.99987792969
14 268,435,456 268,435,455 16383.9999389648
15 1,073,741,824 1,073,741,823 32767.9999694824
16 4,294,967,296 4,294,967,295 65535.9999847412
17 17,179,869,184 17,179,869,183 131071.999992371
18 68,719,476,736 68,719,476,735 262143.999996185
19 274,877,906,944 274,877,906,943 524287.999998093
20 1,099,511,627,776 1,099,511,627,775 1048575.99999905
21 4,398,046,511,104 4,398,046,511,103 2097151.99999952
22 17,592,186,044,416 17,592,186,044,415 4194303.99999976
23 70,368,744,177,664 70,368,744,177,663 8388607.99999988
24 281,474,976,710,656 281,474,976,710,655 16777215.9999999
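The closed forms in the table header can be checked directly. A quick sketch (the length column assumes a unit square):

```python
def hilbert_properties(n):
    """Points, segments and length of the order-n curve on a unit square."""
    points = 4 ** n            # the curve visits every cell of the 2^n x 2^n grid
    segments = points - 1      # one segment between each pair of consecutive points
    length = 2 ** n - 2 ** -n  # (4^n - 1) segments, each of length 1/2^n
    return points, segments, length

for n in range(1, 25):
    print(n, *hilbert_properties(n))
```

Note how the length simplifies: `(4^n - 1)/2^n = 2^n - 2^{-n}`, which is why the order-6 curve is about `2^6 = 64` square-widths long.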

You can download the basic curve shapes for orders 1 to 10 and experiment yourself. Both square and circular forms are available.


news + thoughts

Markov Chains

Tue 30-07-2019

You can look back there to explain things,
but the explanation disappears.
You'll never find it there.
Things are not explained by the past.
They're explained by what happens now.
—Alan Watts

A Markov chain is a probabilistic model that is used to model how a system changes over time as a series of transitions between states. Each transition is assigned a probability that defines the chance of the system changing from one state to another.

Nature Methods Points of Significance column: Markov Chains. (read)

Together with the states, these transition probabilities define a stochastic model with the Markov property: transition probabilities only depend on the current state—the future is independent of the past if the present is known.

Once the transition probabilities are defined in matrix form, it is easy to predict the distribution of future states of the system. We cover concepts of aperiodicity, irreducibility, limiting and stationary distributions and absorption.
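As a minimal sketch of that idea, here is a hypothetical two-state weather chain (not an example from the column): repeatedly multiplying the current distribution by the transition matrix gives the distribution of future states, and for this chain it converges to the stationary distribution.

```python
# Hypothetical two-state chain: states are (sunny, rainy).
# Each row holds the transition probabilities out of one state and sums to 1.
P = [[0.9, 0.1],   # sunny -> sunny, sunny -> rainy
     [0.5, 0.5]]   # rainy -> sunny, rainy -> rainy

def step(p, P):
    """One transition: p' = p P (row distribution times transition matrix)."""
    return [sum(p[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

p = [1.0, 0.0]        # start certain that it is sunny
for _ in range(10):   # distribution ten steps into the future
    p = step(p, P)
# p is now close to the stationary distribution (5/6, 1/6),
# the distribution pi that satisfies pi = pi P.
```

This chain is aperiodic and irreducible, so iteration from any starting distribution converges to the same limit.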

This column is the first part of a series and pairs particularly well with Alan Watts and Blond:ish.

Grewal, J., Krzywinski, M. & Altman, N. (2019) Points of significance: Markov Chains. Nature Methods 16:663–664.

1-bit zoomable gigapixel maps of Moon, Solar System and Sky

Mon 22-07-2019

Places to go and nobody to see.

Exquisitely detailed maps of places on the Moon, comets and asteroids in the Solar System and stars, deep-sky objects and exoplanets in the northern and southern sky. All maps are zoomable.

3.6 gigapixel map of the near side of the Moon, annotated with 6,733. (details)
100 megapixel and 10 gigapixel map of the Solar System on 20 July 2019, annotated with 758k asteroids, 1.3k comets and all planets and satellites. (details)
100 megapixel and 10 gigapixel map of the Northern Celestial Hemisphere, annotated with 44 million stars, 74,000 deep-sky objects and 3,000 exoplanets. (details)
100 megapixel and 10 gigapixel map of the Southern Celestial Hemisphere, annotated with 69 million stars, 88,000 deep-sky objects and 1,000 exoplanets. (details)

Quantile regression

Sat 01-06-2019
Quantile regression robustly estimates the typical and extreme values of a response.

Quantile regression explores the effect of one or more predictors on quantiles of the response. It can answer questions such as "What is the weight of 90% of individuals of a given height?"

Nature Methods Points of Significance column: Quantile regression. (read)

Unlike in traditional mean regression methods, no assumptions about the distribution of the response are required, which makes it practical, robust and amenable to skewed distributions.

Quantile regression is also very useful when extremes are interesting or when the response variance varies with the predictors.

Das, K., Krzywinski, M. & Altman, N. (2019) Points of significance: Quantile regression. Nature Methods 16:451–452.

Background reading

Altman, N. & Krzywinski, M. (2015) Points of significance: Simple linear regression. Nature Methods 12:999–1000.

Analyzing outliers: Robust methods to the rescue

Sat 30-03-2019
Robust regression generates more reliable estimates by detecting and downweighting outliers.

Outliers can degrade the fit of linear regression models when the estimation is performed using ordinary least squares. The impact of outliers can be mitigated with methods that provide robust inference and greater reliability in the presence of anomalous values.

Nature Methods Points of Significance column: Analyzing outliers: Robust methods to the rescue. (read)

We discuss MM-estimation and show how it can be used to keep your fitting sane and reliable.

Greco, L., Luta, G., Krzywinski, M. & Altman, N. (2019) Points of significance: Analyzing outliers: Robust methods to the rescue. Nature Methods 16:275–276.

Background reading

Altman, N. & Krzywinski, M. (2016) Points of significance: Analyzing outliers: Influential or nuisance. Nature Methods 13:281–282.

Two-level factorial experiments

Fri 22-03-2019
To find which experimental factors have an effect, simultaneously examine the difference between the high and low levels of each.

Two-level factorial experiments, in which all combinations of multiple factor levels are used, efficiently estimate factor effects and detect interactions—desirable statistical qualities that can provide deep insight into a system.

They offer two benefits over the widely used one-factor-at-a-time (OFAT) experiments: efficiency and ability to detect interactions.

Nature Methods Points of Significance column: Two-level factorial experiments. (read)

Since the number of factor combinations can quickly increase, one approach is to model only some of the factorial effects using empirically-validated assumptions of effect sparsity and effect hierarchy. Effect sparsity tells us that in factorial experiments most of the factorial terms are likely to be unimportant. Effect hierarchy tells us that low-order terms (e.g. main effects) tend to be larger than higher-order terms (e.g. two-factor or three-factor interactions).

Smucker, B., Krzywinski, M. & Altman, N. (2019) Points of significance: Two-level factorial experiments. Nature Methods 16:211–212.

Background reading

Krzywinski, M. & Altman, N. (2014) Points of significance: Designing comparative experiments. Nature Methods 11:597–598.

Happy 2019 `\pi` Day—
Digits, internationally

Tue 12-03-2019

Celebrate `\pi` Day (March 14th) and set out to explore accents unknown (to you)!

This year is purely typographical, with something for everyone. Hundreds of digits and hundreds of languages.

A special kids' edition merges math with color and fat fonts.

116 digits in 64 languages. (details)
223 digits in 102 languages. (details)

Check out art from previous years: 2013 `\pi` Day, 2014 `\pi` Day, 2015 `\pi` Day, 2016 `\pi` Day, 2017 `\pi` Day and 2018 `\pi` Day.