And whatever I do will become forever what I've done. —Wislawa Szymborska




Like paths? Got your lines twisted in a bunch?
Take a look at my 2014 Pi Day art that folds Pi.

Hilbert Curve Art, Hilbertonians and Monkeys

I collaborated with Scientific American to create a data graphic for the September 2014 issue. The graphic compared the genomes of the Denisovan, bonobo, chimp and gorilla, showing how our own genomes are almost identical to the Denisovan and closer to that of the bonobo and chimp than the gorilla.


Here you'll find Hilbert curve art, an introduction to the Hilbertonians (the creatures that live on the curve), an explanation of the Scientific American graphic and downloadable SVG/EPS Hilbert curve files.

The graphic won a Bronze medal at Malofiej 23. Art direction by Jen Christiansen. Text by Kate Wong. Spot illustrations by Portia Sloan Rollings.
Scientific American | Tiny genetic differences between humans and other primates pervade the genome. Art direction by Jen Christiansen. Text by Kate Wong. Spot illustrations by Portia Sloan Rollings.

monkey genomes

This page accompanies my blog post at Scientific American, which itself accompanies the figure in the magazine.

In the blog post I argue that the genome is not a blueprint—a common metaphor that doesn't leave room for appreciating the complexity of the genome—and talk about the process of creating the figure.

the graphic


The graphic shows the differences between the genome sequence of human and each of Denisovan, chimp, bonobo and gorilla. Differences are measured by the fraction of bases in the gene regions of human sequence that do not align to the other genome.

The approximately 1 Gb of gene-region sequence (most introns are included) is divided into 2,047 bins, which are mapped onto the Hilbert curve as circles.

The color of the circle, which represents about 500 kb of sequence, encodes the fraction of unaligned bases.
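For readers who want to lay out such a binning themselves, the standard iterative d2xy algorithm converts a distance along the curve into grid coordinates. Below is a minimal Python sketch (the published graphic was built with my own tools; the order-6 grid here is just an illustrative choice):

```python
def d2xy(order, d):
    """Convert distance d along a Hilbert curve of the given order
    into (x, y) coordinates on a 2**order x 2**order grid."""
    x = y = 0
    t = d
    s = 1
    n = 2 ** order
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:
            if rx == 1:             # rotate the quadrant
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x             # then reflect it
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

# place 2,047 bins of ~500 kb each onto a 64 x 64 (order-6) grid
cells = [d2xy(6, i) for i in range(2047)]
```

Consecutive bins always land in adjacent grid cells, which is what makes the curve attractive for genomic data: nearby sequence stays nearby on the page.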

The original color scheme submitted for production was derived from the yellow-orange-red Brewer palette.

measuring differences

There's more than one way to do it.

The approach taken by the graphic is one of the simplest—this is why it was chosen. It's easy to understand and easy to explain. On the other hand, the answer depends on the state of the sequence resources for each species (especially bonobo, whose sequence assembly is in version 1) and completely overlooks the functional implications of these differences.

The real goal of identifying differences, a relatively superficial problem, is to find the subset of differences that make a difference, which is a deep problem.

Spitting images are identical within spitting error.

For example, if someone told you that Vancouver, Canada and Sydney, Australia were 85% similar, you would likely assume that (a) this metric isn't that useful unless it aligns with your priorities in how city similarity should be judged, (b) other metrics would give different answers, and (c) some parts of Sydney are nothing like Vancouver while others might be identical. The same goes for genomes, except that cities are easier to figure out, since we built them ourselves.

The differences will be scattered throughout the genome and will take many forms: single base changes, small insertions or deletions, inversions, copy number changes, and so on. In parts critical to basic cell function we expect no differences (e.g. insulin gene exons) while in genes that are rapidly evolving we expect to see some differences.

A comparison of protein coding genes reveals approximately 500 genes showing accelerated evolution on each of the gorilla, human and chimpanzee lineages, and evidence for parallel acceleration, particularly of genes involved in hearing.
Insights into hominid evolution from the gorilla genome sequence by Scally et al.

Parts of the genome that don't impact function are going to accumulate differences at a background rate of mutation.

uncertainty in life sciences

Any single-number statistic that compares two genomes is necessarily going to be a gross approximation. Such numerical measures should be taken as a starting point and at best as some kind of average that hides all of the texture in the data.

Statements like "the 1% difference" are incomplete because they do not incorporate an uncertainty. If you see four separate reports claiming a 1%, 2%, 5% and 7% difference, this does not necessarily mean that we cannot agree. It means that the error in our measurement is large. You might venture a guess that the answer is somewhere in the range 1–7% (at the very least).

While confidence intervals and error bars are a sine qua non in the physical sciences, assessing uncertainty in the life sciences is a lot more difficult. To assess the extent of biological variation, which adds to the uncertainty in our result, we need to collect data from independent biological samples. Often this is too expensive or not practical.
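When only one sample is available, resampling it is one inexpensive way to attach an interval to a single-number estimate. A minimal percentile-bootstrap sketch in Python (the per-bin divergence values are hypothetical, for illustration only):

```python
import random

def bootstrap_ci(values, stat, n_boot=10000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic."""
    rng = random.Random(seed)
    boots = sorted(
        stat([rng.choice(values) for _ in values])  # resample with replacement
        for _ in range(n_boot)
    )
    return boots[int(n_boot * alpha / 2)], boots[int(n_boot * (1 - alpha / 2))]

def mean(v):
    return sum(v) / len(v)

# hypothetical per-bin divergence fractions
divergence = [0.011, 0.014, 0.009, 0.013, 0.012, 0.016, 0.010, 0.015]
lo, hi = bootstrap_ci(divergence, mean)
# report the estimate with its interval, not as a bare number
print(f"divergence: {mean(divergence):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```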

To provide a sober and practical guide to statistics for the busy biologist, Naomi Altman and I write the Points of Significance column in Nature Methods. These kinds of resources are needed as long as errors persist in the translation between statistical analysis and conclusions (e.g. `5 sigma` and P values).

Two compatible estimates can easily and wrongly be interpreted as incompatible facts.

We don't yet have a full handle on individual levels of genomic variation, especially for non-human primates for which we have a single and incomplete genome. Even for humans, although we have resources like dbSNP, which catalogue individual variation, it is common to use the canonical human reference sequence for analysis. This reference sequence is only a single instance of a human genome (in fact, parts of it are derived from different individuals).

As a result, many of the reported values (and certainly almost all that make it to popular media) are without any confidence limits and thus are likely to be interpreted as fact rather than as an estimate. This causes all sorts of problems—two compatible estimates can easily (but wrongly) be interpreted as incompatible facts.

As an example, look at the phylogenetic trees in the figure below. By omitting uncertainty, the top tree presents a fixed and deceptively certain picture of what we know.

Without incorporating uncertainty into results and data graphics we cannot tell how precise our observations and calculations are. (zoom)

Recent work has shed some light on the uncertainty in determining population split times. The two trees in the figure above are generated from the data in the table below, from Langergraber KE, Prufer K, Rowney C et al. 2012 Generation times in wild chimpanzees and gorillas suggest earlier divergence times in great ape and human evolution Proc Natl Acad Sci U S A 109 (39) 15716-15721.

Original and recalibrated population split times from several recent studies. (zoom)

Notice that the human/chimp/gorilla split time uncertainty overlaps the human/chimp split.

The addition of uncertainty is the inevitable consequence of making multiple measurements and upgraded analytical models. It is a blessing not a curse.

when we measure, we estimate

That our genome is "similar" to that of the chimp, bonobo and gorilla is not in dispute. How to classify and quantify the differences is an active field of research, a process that often looks like a dispute.

We do science so that our minds are changed.

We have been sequencing quickly and cheaply for less than 10 years, and it's amazing how much we've been able to understand in such a short period of time. Genome sequencing (or some kind of genotyping) is now routinely done in the treatment of cancer. It will not be long before a medical diagnosis includes an assessment of the full genome sequence.

As we sequence more and reflect more, we expect to change our minds. In fact, this is why we do science: so that our minds are changed.

Scientists engage the public in the process of scientific inquiry, testing and observation by way of reports in popular science media and newspapers. Understanding these reports requires that one hold the process of science and its outcomes as a core value. Groups with different agendas and a fundamentally different epistemology hijack observations such as "In 30% of the genome, gorilla is closer to human or chimpanzee than the latter are to each other." (from the gorilla sequence paper) in an attempt to argue that our evolutionary models are sinfully wrong. They don't understand the implications of the uncertainty in our measurements (e.g. the phylogenetic tree figure above) and have world outlooks that are impervious to the impact of observation.

It is certain that these genomes hold more surprises for us, but not in the way these groups claim.

Is our science incomplete? Absolutely. How do we address this? We do more science.

genome is not a blueprint

The genome is not a blueprint. It's also absolutely not a recipe, a metaphor promulgated by people who agree that it is not a blueprint. I explain my view of this here, and why I think these analogies have disastrous effects on the public understanding of how their genomes (i.e. their bodies) work.

Sometimes metaphors are wonderful and they help expand our minds.

I, a universe of atoms, an atom in the universe.
—Richard Feynman

Other times they are like jailors, keeping us from having productive thoughts.

Genomics: the big blueprint

You might argue that "blueprint" is one of the closest words in meaning, so its use is justified. The trouble is that it's actually very far in meaning.

Consider the following figure.

The genome is not a blueprint and you should never say that it is. You shouldn't even say that it's like one, because it's nothing like one.

A blueprint shows you "what". A genome doesn’t encode "what". It doesn’t even encode “how”. Nor does it encode "from what". It encodes "with what", which is several degrees removed from "what". I promise that this will make sense shortly.

The reason why the blueprint analogy is pernicious is that it makes it sound like once the genome sequence is known, the rest easily follows. The reality is that these days the genome sequence is easily determined and the rest follows with great effort (or never) (see The $1,000 genome, the $100,000 analysis? by Elaine Mardis).

I'm going to try to motivate you that the analogy is false by an example. Suppose that you wanted to build a house but instead of getting blueprints from the architect, you received this strange drawing.

If your home had a genome, it might look like this. (zoom)

You’d be right to be confused—welcome to genome science. This house’s genome looks a lot like a set of tools and bears no resemblance to the house itself. The genome tells you nothing about (a) what the function of each tool is, (b) the relationship of each tool's form to its function (e.g. the many ways in which a hammer can diverge from its original shape before it ceases to be useful), (c) what the tools act on (this is why above I said "with what" rather than "from what"), (d) how the tools act together, and importantly (e) what the tools are used to build.

This is closer to the way a genome works. It encodes the protein enzymes that make biochemical reactions possible at room temperature. In the house example, the tools encoded by the genome (e.g. saw, hammer) can be thought of as automatically doing their job when they’re in the presence of the correct material (wood, nail). This is analogous to enzymes, which mediate reactions when in physical proximity to their chemical substrates.

The genome is not a code of life. It is a code of tools.

Neither wood nor nails—both essential materials for construction—appear anywhere in your home's genome. This directly translates into a biochemical example. We use sugar as a source of energy but the genome hints nothing at this—it only encodes the enzymes that act on sugar. Things are made more complex by the fact that the function of an enzyme is essentially impossible to predict without additional information, such as knowledge of functions of enzymes with similar characteristics.

You can probably imagine that the effect of changes in the home's genome is extraordinarily difficult to predict. The figure below extends our example to your neighbour's home, which was recently observed to have collapsed. I’ll leave you to work out the mystery yourself.

Why exactly did your neighbour's home fall down? You suspect the root cause lies in its genome.

So next time someone says that the genome is a blueprint, or that it is the "code of life", point out that it is merely the "code of tools" for life, which is the emergent property of a set of chemicals confined within a physical space.

hilbert curve in genomics

The use of the Hilbert curve in genomics is not new. It appeared on the cover of Science in 2009 in connection to the 3-dimensional packing of the genome. It is an order 5 curve and just a flip of the curve I use in the Scientific American graphic. Here the corners of the curve have been smoothed out to give it a more organic and gooey feel.

(left) Hilbert curve on cover of Science (Oct 2009) (right) Portion of Figure 2 from accompanying article Comprehensive Mapping of Long-Range Interactions Reveals Folding Principles of the Human Genome.

At least one tool exists (HilbertVis) that allows you to wrap genomic data onto the curve.

Anders S 2009 Visualization of genomic data with the Hilbert curve Bioinformatics 25 (10) 1231-1235.

I've used the Hilbert curve before to show the organization of genes in the genome. This figure shows the chromosome at a much higher resolution than would be possible if an ordinary line was used.

The size and position of genes on human chromosome 1. Genes implicated in cancer and generally in disease are highlighted. (zoom)

Because the Hilbert curve stretches the line into a square, it increases our ability to see details in data at higher resolution. In the figure below you can see distinct clumpiness in the organization of genes on the chromosome that is not representative of a purely random sampling.

Gene distribution is not random. This can be effectively demonstrated at high resolution using a Hilbert curve. (zoom)

data sources

Except for the Denisovan, the net alignments (e.g. human vs chimp net) from the UCSC Genome Browser were used for the analysis.

Gaps were intersected with human gene regions. For each gene, the region between the start of the first coding region and end of the last coding region was used.

human (Homo sapiens sapiens)

Gene annotations were taken from the RefSeq track of the UCSC Table Browser; the union of all 51,010 RefSeq gene records was used.

The gene region was taken as the extent of the gene's coding sequence (CDS), not just the exons within it.

For example, for the BRCA2 gene, the RefSeq entry is

                        tx                cds               
BRCA2 NM_000059 chr13 + 32889616-32973809 32890597-32972907 

                        exons exonstart            exonend
                        27    32889616,32890558... 32889804,32890664... 

This record's contribution was the cds region 32890597-32972907, shown above.

The total size of the union of tx regions is 1.28 Gb (20,722 coverage elements), of cds regions as defined above is 0.99 Gb (24,931 coverage elements) and of exons is 74.5 Mb (225,404 coverage elements).
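The coverage figures above follow from merging overlapping records into a union of intervals. A sketch of the merge, assuming half-open start-end coordinates:

```python
def merge_intervals(intervals):
    """Collapse overlapping or adjacent (start, end) intervals
    into the coverage elements of their union."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # overlaps (or abuts) the previous element: extend it
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def total_coverage(intervals):
    return sum(end - start for start, end in merge_intervals(intervals))

# two overlapping cds-style records collapse into one coverage element
cds = [(32890597, 32972907), (32900000, 32975000)]
print(merge_intervals(cds), total_coverage(cds))
```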

Assembly version: Feb 2009 (GRCh37/hg19)

International Human Genome Sequencing Consortium 2004 Finishing the euchromatic sequence of the human genome Nature 431 (7011) 931-945.


denisovan

30x sequence was aligned to the human genome at Max Planck (data portal).

Meyer M, Kircher M, Gansauge MT et al. 2012 A high-coverage genome sequence from an archaic Denisovan individual Science 338 (6104) 222-226.

chimp (Pan troglodytes)

Assembly version: Feb 2011 (panTro4)

Chimpanzee Sequencing and Analysis Consortium 2005 Initial sequence of the chimpanzee genome and comparison with the human genome Nature 437 (7055) 69-87.

bonobo (Pan paniscus)

Assembly version: May 2012 (panPan1).

Prufer K, Munch K, Hellmann I et al. 2012 The bonobo genome compared with the chimpanzee and human genomes Nature 486 (7404) 527-531.

At the moment this genome is available only on the test version of the browser.

Assembly version: Feb 2009 (GRCh37/hg19)

gorilla (Gorilla gorilla gorilla)

Assembly version: May 2011 (gorGor3.1/gorGor3)

Scally A, Dutheil JY, Hillier LW et al. 2012 Insights into hominid evolution from the gorilla genome sequence Nature 483 (7388) 169-175.

news + thoughts

Essentials of Data Visualization—8-part video series

Mon 16-01-2017

In collaboration with Phil Poronnik and Kim Bell-Anderson at the University of Sydney, I'm delighted to share with you our 8-part video series about thinking about drawing data and communicating science.

Essentials of Data Visualization: Thinking about drawing data and communicating science.

We've created 8 videos, each focusing on a different essential idea in data visualization: encoding, shapes, color, uncertainty, design, drawing missing or unobserved data, labels and process.

The videos were designed as teaching materials. Each video comes with a slide deck and exercises.

P values and the search for significance

Mon 16-01-2017
Little P value
What are you trying to say
Of significance?
—Steve Ziliak

We've written about P values before and warned readers about common misconceptions about them, which are so rife that the American Statistical Association itself has a long statement about them.

This month is our first of a two-part article about P values. Here we look at 'P value hacking' and 'data dredging', which are questionable practices that invalidate the correct interpretation of P values.

Nature Methods Points of Significance column: P values and the search for significance. (read)

We also illustrate how P values can lead us astray by asking "What is the smallest P value we can expect if the null hypothesis is true but we have done many tests, either explicitly or implicitly?"
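The flavour of the answer is easy to simulate. Under the null hypothesis a P value is uniform on (0, 1), so the smallest of `n` P values has expectation `1/(n+1)`. A quick sketch:

```python
import random

def smallest_p(n_tests, n_sim=20000, seed=1):
    """Average smallest P value over n_sim simulated experiments,
    each performing n_tests tests with a true null (P ~ uniform)."""
    rng = random.Random(seed)
    return sum(
        min(rng.random() for _ in range(n_tests))
        for _ in range(n_sim)
    ) / n_sim

# with 100 null tests the smallest P hovers near 1/101, well below 0.05
print(round(smallest_p(100), 4))
```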

Incidentally, this is our first column in which the standfirst is a haiku.

Altman, N. & Krzywinski, M. (2017) Points of Significance: P values and the search for significance. Nature Methods 14:3–4.

Background reading

Krzywinski, M. & Altman, N. (2013) Points of significance: Significance, P values and t–tests. Nature Methods 10:1041–1042.

...more about the Points of Significance column

Intuitive Design

Thu 03-11-2016

Appeal to intuition when designing with value judgments in mind.

Figure clarity and concision are improved when the selection of shapes and colors is grounded in the Gestalt principles, which describe how we visually perceive and organize information.

One of the Gestalt principles tells us that the magenta and green shapes will be perceived as two groups, overriding the fact that the shapes within a group might be different. What the principle does not tell us is how the reader is likely to value each group. (read)

The Gestalt principles are value free. For example, they tell us how we group objects but do not speak to any meaning that we might intuitively infer from visual characteristics.

Nature Methods Points of View column: Intuitive Design. (read)

This month, we discuss how appealing to such intuitions—related to shapes, colors and spatial orientation—can help us add information to a figure as well as anticipate and encourage useful interpretations.

Krzywinski, M. (2016) Points of View: Intuitive Design. Nature Methods 13:895.

...more about the Points of View column


Regularization

Fri 04-11-2016

Constraining the magnitude of parameters of a model can control its complexity.

This month we continue our discussion about model selection and evaluation and address how to choose a model that avoids both overfitting and underfitting.

Ideally, we want to avoid having either an underfitted model, which is usually a poor fit to the training data, or an overfitted model, which is a good fit to the training data but not to new data.

Nature Methods Points of Significance column: Regularization (read)

Regularization is a process that penalizes the magnitude of model parameters. Instead of minimizing only the SSE, `\mathrm{SSE} = \sum_i (y_i - \hat{y}_i)^2`, as is done in an ordinary fit, we minimize this quantity plus a penalty proportional to the sum of the model's squared parameters, `\mathrm{SSE} + \lambda \sum_i \hat{\beta}^2_i`.
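A one-parameter sketch shows the shrinkage effect (this toy example is mine, not the column's): for a no-intercept model `y = beta x`, minimizing `SSE + lambda beta^2` has the closed form `beta = sum(x y) / (sum(x^2) + lambda)`, so the estimate shrinks toward zero as `lambda` grows.

```python
def ridge_slope(x, y, lam):
    """Ridge estimate for a no-intercept model y = beta * x:
    minimize sum((y - beta*x)^2) + lam * beta^2."""
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    return sxy / (sxx + lam)

x = [1.0, 2.0, 3.0, 4.0]
y = [1.1, 2.3, 2.8, 4.2]
for lam in (0.0, 1.0, 10.0):
    # larger lam shrinks the slope estimate toward zero
    print(lam, round(ridge_slope(x, y, lam), 3))
```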

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Regularization. Nature Methods 13:803-804.

Background reading

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Model Selection and Overfitting. Nature Methods 13:703-704.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Classifier evaluation. Nature Methods 13:603-604.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. Nature Methods 13:541-542.

...more about the Points of Significance column

Model Selection and Overfitting

Fri 04-11-2016

With four parameters I can fit an elephant and with five I can make him wiggle his trunk. —John von Neumann.

By increasing the complexity of a model, it is easy to make it fit to data perfectly. Does this mean that the model is perfectly suitable? No.

When a model has a relatively large number of parameters, it is likely to be influenced by the noise in the data, which varies across observations, as much as any underlying trend, which remains the same. Such a model is overfitted—it matches training data well but does not generalize to new observations.

Nature Methods Points of Significance column: Model Selection and Overfitting (read)

We discuss the use of training, validation and testing data sets and how they can be used, with methods such as cross-validation, to avoid overfitting.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Model Selection and Overfitting. Nature Methods 13:703-704.

Background reading

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Classifier evaluation. Nature Methods 13:603-604.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. Nature Methods 13:541-542.

...more about the Points of Significance column

Classifier Evaluation

Tue 13-09-2016

It is important to understand both what a classification metric expresses and what it hides.

We examine various metrics used to assess the performance of a classifier. We show that a single metric is insufficient to capture performance—for any metric, a variety of scenarios yield the same value.
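A small sketch of the idea (the confusion matrices are hypothetical): two classifiers with identical accuracy can behave completely differently on the minority class.

```python
def metrics(tp, fp, fn, tn):
    """Basic performance metrics from a confusion matrix."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
    }

# same accuracy (0.9) ...
a = metrics(tp=45, fp=5, fn=5, tn=45)   # balanced classes, decent classifier
b = metrics(tp=0, fp=0, fn=10, tn=90)   # imbalanced classes, never says "positive"
print(a)
print(b)
# ... but classifier b never finds a single positive case
```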

Nature Methods Points of Significance column: Classifier Evaluation (read)

We also discuss ROC curves and the AUC, and how their interpretation changes based on class balance.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Classifier evaluation. Nature Methods 13:603-604.

Background reading

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. Nature Methods 13:541-542.

...more about the Points of Significance column

Happy 2016 `\pi` Approximation Day, roughly speaking

Sun 24-07-2016

Today is the day and it's hardly an approximation. In fact, `22/7` is a 20% more accurate representation of `\pi` than `3.14`!

Time to celebrate, graphically. This year I do so with perfect packing of circles that embody the approximation.

By warping the circle by 8% along one axis, we can create a shape whose ratio of circumference to diameter, taken as twice the average radius, is 22/7.
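The warp can be checked numerically; a sketch using Ramanujan's perimeter approximation (essentially exact at this low eccentricity), with the diameter taken as twice the average radius, as in the text:

```python
import math

def ellipse_perimeter(a, b):
    """Ramanujan's second approximation to the perimeter of an
    ellipse with semi-axes a and b."""
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

a, b = 1.0, 0.92          # unit circle warped by 8% along one axis
mean_diameter = a + b     # twice the average radius (a + b) / 2
ratio = ellipse_perimeter(a, b) / mean_diameter
print(ratio, 22 / 7)      # the two agree to about four decimal places
```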

If you prefer something more accurate, check out art from previous `\pi` days: 2013, 2014, 2015 and 2016.

Logistic Regression

Tue 13-09-2016

Regression can be used on categorical responses to estimate probabilities and to classify.

The next column in our series on regression deals with how to classify categorical data.

We show how linear regression can be used for classification and demonstrate that it can be unreliable in the presence of outliers. Using a logistic regression, which fits a linear model to the log odds ratio, improves robustness.

Nature Methods Points of Significance column: Logistic regression. (read)

Logistic regression is solved numerically and in most cases, the maximum-likelihood estimates are unique and optimal. However, when the classes are perfectly separable, the numerical approach fails because there is an infinite number of solutions.
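The separable case is easy to see in a toy sketch (hypothetical one-dimensional data): the log-likelihood keeps improving as the coefficient grows, so no finite maximum-likelihood estimate exists.

```python
import math

def log_likelihood(beta, data):
    """Log-likelihood of a one-dimensional logistic model
    P(y=1|x) = 1 / (1 + exp(-beta * x))."""
    ll = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-beta * x))
        ll += math.log(p if y == 1 else 1.0 - p)
    return ll

# perfectly separable: all negatives below zero, all positives above
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
for beta in (1.0, 10.0, 100.0):
    print(beta, log_likelihood(beta, data))
# the likelihood keeps increasing with beta: no finite optimum
```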

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. Nature Methods 13:541-542.

Background reading

Altman, N. & Krzywinski, M. (2016) Points of Significance: Regression diagnostics. Nature Methods 13:385-386.

Altman, N. & Krzywinski, M. (2015) Points of Significance: Multiple Linear Regression Nature Methods 12:1103-1104.

Altman, N. & Krzywinski, M. (2015) Points of significance: Simple Linear Regression Nature Methods 12:999-1000.

...more about the Points of Significance column

Visualizing Clonal Evolution in Cancer

Thu 02-06-2016

Genomic instability is one of the defining characteristics of cancer and within a tumor, which is an ever-evolving population of cells, there are many genomes. Mutations accumulate and propagate to create subpopulations and these groups of cells, called clones, may respond differently to treatment.

It is now possible to sequence individual cells within a tumor to create a profile of genomes. This profile changes with time, both in the kinds of mutation that are found and in their proportion in the overall population.

Ways to present temporal and phylogenetic evolution of clones in cancer. M Krzywinski (2016) Molecular Cell 62:652-656. (read)

Clone evolution diagrams visualize these data. These diagrams can be qualitative, showing only trends, or quantitative, showing temporal and population changes to scale. In this Molecular Cell forum article I provide guidelines for drawing these diagrams, focusing on how to use color and navigational elements, such as grids, to clarify the relationships between clones.

How to draw clone evolution diagrams better. M Krzywinski (2016) Molecular Cell xxx:xxx-xxx. (read)

I'd like to thank Maia Smith and Cydney Nielsen for assistance in preparing some of the figures in the paper.

Krzywinski, M. (2016) Visualizing Clonal Evolution in Cancer. Mol Cell 62:652-656.

Binning High-Resolution Data

Wed 01-06-2016

Limitations in print resolution and visual acuity impose limits on data density and detail.

Your printer can print at 1,200 or 2,400 dots per inch. At reading distance, your reader can resolve about 200–300 lines per inch. This large gap—how finely we can print and how well we can see—can create problems when we don't take visual acuity into account.
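The arithmetic is worth doing once. Under hypothetical but typical figure dimensions, an 8-inch panel at roughly 300 discernible elements per inch resolves about 2,400 elements, so a 3-Gb genome drawn across it should be binned at about 1.25 Mb:

```python
genome_bp = 3_000_000_000   # human genome, roughly
panel_inches = 8            # hypothetical figure width
acuity_per_inch = 300       # elements a reader can resolve per inch

elements = panel_inches * acuity_per_inch
bin_size = genome_bp / elements
print(f"{elements} discernible elements -> {bin_size / 1e6:.2f} Mb per bin")
```

Anything smaller than a bin, such as a typical gene, cannot be shown at its true scale and must be aggregated.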

Nature Methods Points of View column: Binning high-resolution data. (read)

The column provides some guidelines—particularly relevant when showing whole-genome data, where the scale of elements of interest such as genes is below the visual acuity limit—for binning data so that they are represented by elements that can be comfortably discerned.

Krzywinski, M. (2016) Points of view: Binning high-resolution data. Nature Methods 13:463.

...more about the Points of View column