
science + genomics

What if we were to print what we sequence?

Expressing the amount of sequence in the human genome in terms of the number of printed pages has been done before. At the Broad Institute, all of the human reference genome is printed in bound volumes.

At our sequencing facility, we sequence about 1 terabase per day. This is equivalent to 167 diploid human genomes (167 × 6 gigabases). The sequencing is done using a pool of 13 Illumina HiSeq 2500 sequencers, of which about 50% are sequencing at any given time.

A single letter-size page (8.5" × 11") of 6pt Courier using 0.25 inch margins accommodates 18,126 bases on 114 lines. Shown here is a portion of sequence from human chromosome 1. (PDF)

This sequencing is extremely fast.

To understand just how fast this is, consider printing this amount of sequence using a modern office laser printer. Let's pick the HP P3015n, a cheap and fast network printer that costs about $400 and prints at about 40 pages per minute.

If we print the sequence at 6pt Courier using 0.25" margins, each 8.5" × 11" page will accommodate 18,126 bases. I chose this font size because it's reasonably legible. To print 1 terabase we need `10^12 / 18126 = 55.2` million pages.

If a single printer prints continuously at 40 pages per minute, it needs `10^12 / (18126*40*1440) = 957.8` days to print one day's worth of sequence.

If we had 958 printers working around the clock, we could print everything we sequence and not fall behind. This does not account for time required to replenish toner or paper.
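Here is a quick back-of-envelope check of these numbers in Python. The inputs are just the figures quoted above; nothing else is assumed.

```python
# Back-of-envelope check of the printing figures quoted above.
import math

BASES_PER_DAY  = 1e12     # ~1 terabase sequenced per day
BASES_PER_PAGE = 18_126   # 6pt Courier, 0.25" margins, letter paper
PAGES_PER_MIN  = 40       # HP P3015n rated speed
MIN_PER_DAY    = 1440

genomes_per_day  = BASES_PER_DAY / 6e9                    # ~167 diploid human genomes
pages            = BASES_PER_DAY / BASES_PER_PAGE         # ~55.2 million pages
days_one_printer = pages / (PAGES_PER_MIN * MIN_PER_DAY)  # ~957.8 printer-days per day
printers_needed  = math.ceil(days_one_printer)            # printers running around the clock

print(f"{genomes_per_day:.0f} diploid genomes per day")
print(f"{pages / 1e6:.1f} million pages")
print(f"{days_one_printer:.1f} days on a single printer")
print(f"{printers_needed} printers to keep up")
```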

what's cheaper, sequencing or printing?

It costs us about $12,000 in reagents to sequence a terabase. On a cost-recovery basis, which includes labor and storage, it is about twice that. Let's say $25,000 per terabase.

Coincidentally, this is about $150 per 1× coverage of a diploid human genome. The cost of sequencing a single genome would be significantly higher because of overhead. To overcome gaps in coverage and to be sensitive to alleles in heterogeneous samples, sequencing should be done to 30× or more. For example, we sequence cancer genomes at over 100×. For theory and review see Aspects of coverage in medical DNA sequencing by Wendl et al. and Sequencing depth and coverage: key considerations in genomic analyses by Sims et al. (Thanks to Nicolas Robine for pointing out that redundant coverage should be mentioned here.)
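To make the need for redundant coverage concrete, here is a small aside (my sketch, not from the post): under the standard Poisson approximation used in the coverage literature cited above, the expected fraction of bases left uncovered at an average depth of c× is about e^(-c).

```python
# Aside (not from the post): Poisson approximation for the fraction of a
# genome left uncovered at an average sequencing depth of c reads per base.
import math

for c in (1, 5, 10, 30, 100):
    uncovered = math.exp(-c)   # expected fraction of bases with zero reads
    print(f"{c:>3}x coverage: ~{uncovered:.1e} of bases uncovered")
```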

Printing is 44× more expensive than sequencing, per base: 1.1 μ$ vs 25 n$.

I should mention that the cost of analyzing the sequenced genome should be considered—this step is always the much more expensive one. In The $1,000 genome, the $100,000 analysis? Mardis asks "If our efforts to improve the human reference sequence quality, variation, and annotation are successful, how do we avoid the pitfall of having cheap human genome resequencing but complex and expensive manual analysis to make clinical sense out of the data?"

The cost of a single printed page (toner, power, etc.) is about $0.02–0.05, depending on the printer. Let's be generous and say it's $0.02. To print 55.2 million pages would cost us $1.1M. This is about 44 times as expensive as sequencing.

To print what we sequence we would require 958 office laser printers (shown here as the HP P3015n) at a cost of $1,100,000 per day. (PDF)

Think about this. It's 44× more expensive to merely print a letter on a page than it is to determine it from the DNA of a cell. In other words, to go from the physical molecule to a bit state on a disk is much cheaper than from a bit state on a disk to a representation of the letter on a page.

Per base, our sequencing costs `$25000/10^12 = $25*10^-9`, or 25 nanodollars. At $0.02 and 18,126 bp per page, printing costs `0.02/18126 = $1.1*10^-6` or 1.1 microdollars.
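These unit costs are easy to verify. A quick sketch using only the dollar figures given above:

```python
# Cost per base: sequencing vs printing, using the figures quoted above.
SEQ_COST_PER_TERABASE = 25_000   # dollars, cost-recovery basis
PRINT_COST_PER_PAGE   = 0.02     # dollars per page (toner, power, ...)
BASES_PER_PAGE        = 18_126

seq_per_base   = SEQ_COST_PER_TERABASE / 1e12           # 2.5e-8  -> 25 nanodollars
print_per_base = PRINT_COST_PER_PAGE / BASES_PER_PAGE   # 1.1e-6  -> 1.1 microdollars

print(f"sequencing: {seq_per_base * 1e9:.0f} nanodollars per base")
print(f"printing:   {print_per_base * 1e6:.1f} microdollars per base")
print(f"printing is {print_per_base / seq_per_base:.0f}x more expensive per base")
```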

If at this point you're thinking that printing isn't practical, the fact that the pages would weigh 248,000 kg and stack to 5.5 km should clinch the argument.
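Those two figures can be roughly reproduced from typical letter stock. The paper specs below (75 g/m² grammage, about 0.1 mm per sheet) are assumptions of mine, and they land close to the quoted numbers:

```python
# Rough check of the paper mass and stack height.
# Assumes typical 75 g/m^2 letter stock and ~0.1 mm per sheet (my assumptions).
PAGES              = 55.2e6
SHEET_AREA_M2      = 0.2159 * 0.2794   # 8.5" x 11" in metres
GRAMMAGE_G_PER_M2  = 75
SHEET_THICKNESS_MM = 0.1

mass_kg   = PAGES * SHEET_AREA_M2 * GRAMMAGE_G_PER_M2 / 1000
height_km = PAGES * SHEET_THICKNESS_MM / 1e6

print(f"~{mass_kg:,.0f} kg of paper, stacked ~{height_km:.1f} km high")
```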

The capital cost of sequencing is, of course, much higher. The printers themselves would cost about $400,000 to purchase. The 6 sequencers active at any given time, on the other hand, cost about $3,600,000.

sequencing is as fast as downloading

We sequence at a rate close to the average internet bandwidth available to the public.

At 3.86 MB/s, we could download a terabase of compressed sequence in a day, assuming the sequence can be compressed by a factor of 3. This level of compression is reasonable—the current human assembly is 938 MB zipped.

In other words, you would have to be downloading essentially continuously to keep up with our sequencing.
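The bandwidth figure works out as follows. The sketch assumes one byte per base before compression (my assumption) and the factor-of-3 compression mentioned above:

```python
# Bandwidth needed to keep up with a terabase of sequence per day.
# Assumes 1 byte per base before compression (my assumption) and 3x compression.
BASES_PER_DAY   = 1e12
COMPRESSION     = 3
SECONDS_PER_DAY = 86_400

bytes_per_day = BASES_PER_DAY / COMPRESSION
mb_per_s      = bytes_per_day / SECONDS_PER_DAY / 1e6

print(f"~{mb_per_s:.2f} MB/s, sustained around the clock")
```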


news + thoughts

Happy 2018 `\pi` Day—Boonies, burbs and boutiques of `\pi`

Wed 14-03-2018

Celebrate `\pi` Day (March 14th) and go to brand new places. Together with Jake Lever, this year we shrink the world and play with road maps.

Streets from across the world are seamlessly joined. Finally, a halva shop on the same block!

A great 10 km run loop between Istanbul, Copenhagen, San Francisco and Dublin. Stop off for halva, smørrebrød, espresso and a Guinness on the way. (details)

Intriguing and personal patterns of urban development for each city appear in the Boonies, Burbs and Boutiques series.

In the Boonies, Burbs and Boutiques of `\pi` we draw progressively denser patches using the digit sequence 159 to inform density. (details)

No color—just lines. Lines from Marrakesh, Prague, Istanbul, Nice and other destinations for the mind and the heart.

Roads from cities rearranged according to the digits of `\pi`. (details)

The art is featured in the Pi City post on the Scientific American SA Visual blog.

Check out art from previous years: 2013 `\pi` Day, 2014 `\pi` Day, 2015 `\pi` Day, 2016 `\pi` Day and 2017 `\pi` Day.

Machine learning: supervised methods (SVM & kNN)

Thu 18-01-2018
Supervised learning algorithms extract general principles from observed examples guided by a specific prediction objective.

We examine two very common supervised machine learning methods: linear support vector machines (SVM) and k-nearest neighbors (kNN).

SVM is often less computationally demanding than kNN and is easier to interpret, but it can identify only a limited set of patterns. On the other hand, kNN can find very complex patterns, but its output is more challenging to interpret.

Nature Methods Points of Significance column: Machine learning: supervised methods (SVM & kNN). (read)

We illustrate SVM using a data set in which points fall into two categories, which are separated in SVM by a straight line "margin". SVM can be tuned using a parameter that influences the width and location of the margin, permitting points to fall within the margin or on the wrong side of the margin. We then show how kNN relaxes explicit boundary definitions, such as the straight line in SVM, and how kNN too can be tuned to create more robust classification.
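As a hedged illustration (my sketch in scikit-learn, not code from the column), the two classifiers and the tuning parameters described above look like this:

```python
# Minimal sketch (not from the column): linear SVM vs kNN on a
# two-category data set, with their respective tuning parameters.
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

X, y = make_blobs(n_samples=200, centers=2, cluster_std=2.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# C controls how many points may fall inside or on the wrong side of the margin.
svm = SVC(kernel="linear", C=1.0).fit(X_train, y_train)

# k (n_neighbors) controls how local, and hence how complex, the boundary is.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

print("SVM accuracy:", svm.score(X_test, y_test))
print("kNN accuracy:", knn.score(X_test, y_test))
```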

Bzdok, D., Krzywinski, M. & Altman, N. (2018) Points of Significance: Machine learning: supervised methods. Nature Methods 15:5–6.

Background reading

Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. Nature Methods 14:1119–1120.

...more about the Points of Significance column

Human Versus Machine

Tue 16-01-2018
Balancing subjective design with objective optimization.

In a Nature graphics blog article, I present my process behind designing the stark black-and-white Nature 10 cover.

Nature 10, 18 December 2017

Machine learning: a primer

Thu 18-01-2018
Machine learning extracts patterns from data without explicit instructions.

In this primer, we focus on essential ML principles—a modeling strategy to let the data speak for themselves, to the extent possible.

The benefits of ML arise from its use of a large number of tuning parameters or weights, which control the algorithm’s complexity and are estimated from the data using numerical optimization. Often ML algorithms are motivated by heuristics such as models of interacting neurons or natural evolution—even if the underlying mechanism of the biological system being studied is substantially different. The utility of ML algorithms is typically assessed empirically by how well extracted patterns generalize to new observations.

Nature Methods Points of Significance column: Machine learning: a primer. (read)

We present a data scenario in which we fit a model with 5 predictors using polynomials and show what to expect from ML when noise and sample size vary. We also demonstrate the consequences of excluding an important predictor or including a spurious one.
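A toy version of that idea (my sketch, not the column's 5-predictor scenario): fit polynomials of increasing degree to noisy data and compare training and test error to see how well the fitted patterns generalize.

```python
# Toy sketch (not the column's scenario): fit polynomials of increasing
# degree to noisy data and watch test error diverge from training error.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, noise):
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + rng.normal(0, noise, n)   # hypothetical true signal
    return x, y

x_train, y_train = make_data(50, noise=0.3)
x_test,  y_test  = make_data(200, noise=0.3)

for degree in (1, 3, 10):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err  = np.mean((np.polyval(coeffs, x_test)  - y_test) ** 2)
    print(f"degree {degree:>2}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```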

Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. Nature Methods 14:1119–1120.

...more about the Points of Significance column

Snowflake simulation

Tue 16-01-2018
Symmetric, beautiful and unique.

Just in time for the season, I've simulated a snow-pile of snowflakes based on the Gravner-Griffeath model.

A few of the beautiful snowflakes generated by the Gravner-Griffeath model. (explore)

The work is described as a wintertime tale in In Silico Flurries: Computing a world of snow, co-authored with Jake Lever, on the Scientific American SA Blog.

Gravner, J. & Griffeath, D. (2007) Modeling Snow Crystal Growth II: A mesoscopic lattice map with plausible dynamics.