data visualization + art

The BC Cancer Agency’s Personalized Oncogenomics Program (POG) is a clinical research initiative applying genomic sequencing to the diagnosis and treatment of patients with incurable cancers.

Art of the Personalized Oncogenomics Program

Nature uses only the longest threads to weave her patterns, so that each small piece of her fabric reveals the organization of the entire tapestry.
— Richard Feynman

Art is Science in Love
— E.F. Weisslitz

what do the circles mean?

The legend can be printed at 4" × 6". The bitmap resolution is 600 dpi.


Quick legend. 5 Years of Personalized Oncogenomics Project at Canada's Michael Smith Genome Sciences Centre. The poster shows 545 cancer cases. (zoom)

a case for a visual case summary

For every case, we sequence the DNA to study the genome structure and the RNA to discover which genes are expressed and to what extent. The analysis is quite complex and brings together many steps: sequence alignment, structural variation detection, expression profiling, pathway analysis and so on. Every case is "summarized" by a lengthy report, such as the one below, which can run to over 40 pages.

A report for a typical POG case is about 40–50 pages.

One of the goals of the 5-year anniversary art was to represent the cases in a way that clearly shows their number, their classification and their diversity. Of the many metrics that could be used, I chose each case's correlation to other cancer types.

correlation to TCGA cancer database

For every POG case, the expression of 1,744 key genes is compared to that of thousands of cases in the TCGA database of cancer samples. For a given cancer type in the TCGA database (e.g. BRCA), we visualize the correlations using box plots. The box plot is ideal for showing the distribution of values in a sample.
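
As a rough sketch of how such a comparison could be computed (this is not the POG pipeline; the expression arrays, sample count and cancer-type labels below are made-up placeholders), the per-sample Spearman correlations and per-type box plots might look like this in Python:

import numpy as np
from scipy.stats import spearmanr
import matplotlib.pyplot as plt

# Placeholder inputs: expression of 1,744 key genes for one POG case and
# for each TCGA sample, plus each TCGA sample's cancer-type label.
rng = np.random.default_rng(0)
case_expr = rng.lognormal(size=1744)                   # one POG case
tcga_expr = rng.lognormal(size=(3000, 1744))           # thousands of TCGA samples
tcga_type = rng.choice(["BRCA", "LUAD", "PRAD", "OV"], size=3000)

# Spearman correlation of the case against every TCGA sample
corr = np.array([spearmanr(case_expr, sample)[0] for sample in tcga_expr])

# One box plot of correlations per cancer type, as in the figure below
types = sorted(set(tcga_type))
plt.boxplot([corr[tcga_type == t] for t in types], labels=types)
plt.ylabel("Spearman correlation with POG case")
plt.savefig("correlation_boxplots.png", dpi=300)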

Every case is compared to a database of thousands of cases. Shown here are box plots for the Spearman correlation coefficient between the gene expression of the POG case and cancers of a specific type (e.g. BRCA, LUAD, etc.). (zoom)

The 10 largest Spearman correlation coefficients for the case shown above are

case     corr    type      tissue
------------------------------------------------
POG661   0.436   BRCA      Breast
POG661   0.371   PRAD      Urologic
POG661   0.295   OV        Gynecologic
POG661   0.257   UCEC      Gynecologic
POG661   0.244   LUAD      Thoracic
POG661   0.235   CESC_CAD  Gynecologic
POG661   0.225   MB_Adult  Central Nervous System
POG661   0.222   KICH      Urologic
POG661   0.219   THCA      Endocrine
POG661   0.208   UCS       Gynecologic

In the figure below I show how the final encoding of the correlations is done. First, the top three correlations are taken—using more generates a busy look and diminishes visual impact. The correlations are encoded as concentric rings.

Because the differences among the top 3 correlations are usually relatively small, they are emphasized by scaling the encoding non-linearly (each correlation `r` is first cubed, `r^3`).
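
A minimal sketch of this ring encoding, using the top-3 values from the POG661 table above and the tissue colors listed further down; the ring order, the `scale` factor and the drawing details are my assumptions, not the production code:

import matplotlib.pyplot as plt

# Hypothetical top-3 correlations for one case (values from the POG661 table above)
top3 = [("BRCA", 0.436), ("PRAD", 0.371), ("OV", 0.295)]
colors = {"BRCA": "#ed4b33", "PRAD": "#72c531", "OV": "#fdbc3d"}  # Breast, Urologic, Gynecologic

power = 3        # emphasize small differences between correlations (r -> r^3)
scale = 2.5      # assumed factor converting r^3 into a ring width in axis units

fig, ax = plt.subplots(figsize=(3, 3))
radius = 1.0
for ctype, r in top3:                       # outermost ring = strongest correlation (assumed order)
    ax.add_patch(plt.Circle((0, 0), radius, color=colors[ctype]))
    radius -= scale * r ** power            # ring width proportional to the scaled correlation
ax.add_patch(plt.Circle((0, 0), max(radius, 0), color="white"))   # hollow centre
ax.set_xlim(-1.1, 1.1); ax.set_ylim(-1.1, 1.1)
ax.set_aspect("equal"); ax.axis("off")
fig.savefig("rings.png", dpi=600)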

Case POG661. Median gene expression correlations with different cancer types from TCGA database. (A) Top 10 correlations shown as a bar plot. Color coding is by source tissue associated with the cancer type. (B) Top 10 correlations encoded as concentric rings. The width of the ring is proportional to the correlation. (C) Top 3 correlations. (D) Top 3 correlations scaled with a power to emphasize differences. (zoom)

The typeface is Proxima Nova. The colors for each tissue source are

         Gastrointestinal  234,62,144
                   Breast  237,75,51
                 Thoracic  242,130,56
              Gynecologic  253,188,61
              Soft tissue  244,217,59
                     Skin  193,216,51
                 Urologic  114,197,49
              Hematologic  29,166,68
            Head and neck  43,168,224
                Endocrine  71,82,178
   Central nervous system  127,65,146
                    Other  150,150,150
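
For reuse in plotting scripts, the same palette can be captured as a lookup table; this is just a convenience sketch, and the dictionary name and helper function are hypothetical:

# Tissue-source palette from the legend above, as RGB triplets (0-255).
POG_TISSUE_COLORS = {
    "Gastrointestinal":       (234, 62, 144),
    "Breast":                 (237, 75, 51),
    "Thoracic":               (242, 130, 56),
    "Gynecologic":            (253, 188, 61),
    "Soft tissue":            (244, 217, 59),
    "Skin":                   (193, 216, 51),
    "Urologic":               (114, 197, 49),
    "Hematologic":            (29, 166, 68),
    "Head and neck":          (43, 168, 224),
    "Endocrine":              (71, 82, 178),
    "Central nervous system": (127, 65, 146),
    "Other":                  (150, 150, 150),
}

def to_hex(rgb):
    """Convert an (r, g, b) triplet to a #rrggbb hex string for plotting."""
    return "#{:02x}{:02x}{:02x}".format(*rgb)

print(to_hex(POG_TISSUE_COLORS["Breast"]))  # '#ed4b33'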

news + thoughts

Curse(s) of dimensionality

Tue 05-06-2018
There is such a thing as too much of a good thing.

We discuss the many ways in which analysis can be confounded when data has a large number of dimensions (variables). Collectively, these are called the "curses of dimensionality".

Nature Methods Points of Significance column: Curse(s) of dimensionality. (read)

Some of these are unintuitive, such as the fact that the volume of the unit-radius hypersphere increases with dimension and then shrinks beyond about five dimensions, while the volume of the enclosing hypercube always increases. This means that high-dimensional space is "mostly corners" and the distance between points increases greatly with dimension. This has consequences for correlation and classification.
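
The hypersphere behaviour is easy to verify numerically: the volume of the unit-radius ball in `n` dimensions is `\pi^{n/2} / \Gamma(n/2 + 1)`, which this short sketch evaluates:

from math import pi, gamma

def unit_ball_volume(n):
    """Volume of the n-dimensional ball of radius 1."""
    return pi ** (n / 2) / gamma(n / 2 + 1)

for n in range(1, 11):
    print(f"{n:2d}  {unit_ball_volume(n):.3f}")
# volume rises to about 5.264 at n = 5, then shrinks with every added dimension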

Altman, N. & Krzywinski, M. (2018) Points of Significance: Curse(s) of dimensionality. Nature Methods 15:399–400.

Statistics vs Machine Learning

Tue 03-04-2018
We conclude our series on Machine Learning with a comparison of two approaches: classical statistical inference and machine learning. The boundary between them is subject to debate, but important generalizations can be made.

Inference creates a mathematical model of the data-generation process to formalize understanding or test a hypothesis about how the system behaves. Prediction aims at forecasting unobserved outcomes or future behavior. Typically we want to do both: know how biological processes work and what will happen next. Inference and ML are complementary in pointing us to biologically meaningful conclusions.

Nature Methods Points of Significance column: Statistics vs machine learning. (read)

Statistics asks us to choose a model that incorporates our knowledge of the system, and ML requires us to choose a predictive algorithm by relying on its empirical capabilities. Justification for an inference model typically rests on whether we feel it adequately captures the essence of the system. The choice of pattern-learning algorithms often depends on measures of past performance in similar scenarios.

Bzdok, D., Krzywinski, M. & Altman, N. (2018) Points of Significance: Statistics vs machine learning. Nature Methods 15:233–234.

Background reading

Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. Nature Methods 14:1119–1120.

Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: supervised methods. Nature Methods 15:5–6.

...more about the Points of Significance column

Happy 2018 `\pi` Day—Boonies, burbs and boutiques of `\pi`

Wed 14-03-2018

Celebrate `\pi` Day (March 14th) and go to brand new places. Together with Jake Lever, this year we shrink the world and play with road maps.

Streets from across the world are seamlessly joined together. Finally, a halva shop on the same block!

A great 10 km run loop between Istanbul, Copenhagen, San Francisco and Dublin. Stop off for halva, smørrebrød, espresso and a Guinness on the way. (details)

Intriguing and personal patterns of urban development for each city appear in the Boonies, Burbs and Boutiques series.

In the Boonies, Burbs and Boutiques of `\pi` we draw progressively denser patches using the digit sequence 159 to inform density. (details)

No color—just lines. Lines from Marrakesh, Prague, Istanbul, Nice and other destinations for the mind and the heart.

Roads from cities rearranged according to the digits of `\pi`. (details)

The art is featured in the Pi City post on the Scientific American SA Visual blog.

Check out art from previous years: 2013 `\pi` Day and 2014 `\pi` Day, 2015 `\pi` Day, 2016 `\pi` Day and 2017 `\pi` Day.

Machine learning: supervised methods (SVM & kNN)

Thu 18-01-2018
Supervised learning algorithms extract general principles from observed examples guided by a specific prediction objective.

We examine two very common supervised machine learning methods: linear support vector machines (SVM) and k-nearest neighbors (kNN).

SVM is often less computationally demanding than kNN and is easier to interpret, but it can identify only a limited set of patterns. On the other hand, kNN can find very complex patterns, but its output is more challenging to interpret.

Nature Methods Points of Significance column: Machine learning: supervised methods (SVM & kNN). (read)

We illustrate SVM using a data set in which points fall into two categories, which are separated in SVM by a straight line "margin". SVM can be tuned using a parameter that influences the width and location of the margin, permitting points to fall within the margin or on the wrong side of the margin. We then show how kNN relaxes explicit boundary definitions, such as the straight line in SVM, and how kNN too can be tuned to create more robust classification.
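
The column's worked examples aren't reproduced here, but a toy comparison along the same lines might look like this (synthetic two-class data; the parameter values are arbitrary):

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Two overlapping classes in 2D (synthetic data, not the column's example)
X, y = make_blobs(n_samples=400, centers=2, cluster_std=2.0, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Linear SVM: C controls how strongly points inside or on the wrong side
# of the margin are penalized (small C = wide, permissive margin)
svm = SVC(kernel="linear", C=1.0).fit(X_train, y_train)

# kNN: no explicit boundary; larger k gives a smoother, more robust classifier
knn = KNeighborsClassifier(n_neighbors=15).fit(X_train, y_train)

print("SVM accuracy:", svm.score(X_test, y_test))
print("kNN accuracy:", knn.score(X_test, y_test))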

Bzdok, D., Krzywinski, M. & Altman, N. (2018) Points of Significance: Machine learning: supervised methods. Nature Methods 15:5–6.

Background reading

Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. Nature Methods 14:1119–1120.

...more about the Points of Significance column

Human Versus Machine

Tue 16-01-2018
Balancing subjective design with objective optimization.

In a Nature graphics blog article, I present my process behind designing the stark black-and-white Nature 10 cover.

Nature 10, 18 December 2017

Machine learning: a primer

Thu 18-01-2018
Machine learning extracts patterns from data without explicit instructions.

In this primer, we focus on essential ML principles: a modeling strategy that lets the data speak for themselves, to the extent possible.

The benefits of ML arise from its use of a large number of tuning parameters or weights, which control the algorithm’s complexity and are estimated from the data using numerical optimization. Often ML algorithms are motivated by heuristics such as models of interacting neurons or natural evolution—even if the underlying mechanism of the biological system being studied is substantially different. The utility of ML algorithms is typically assessed empirically by how well extracted patterns generalize to new observations.

Nature Methods Points of Significance column: Machine learning: a primer. (read)

We present a data scenario in which we fit a model with 5 predictors using polynomials and show what to expect from ML when noise and sample size vary. We also demonstrate the consequences of excluding an important predictor or including a spurious one.
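
As a loose illustration of that idea (a single-predictor stand-in, not the column's actual five-predictor scenario), the effect of sample size and model complexity on generalization error can be simulated in a few lines:

import numpy as np

rng = np.random.default_rng(2)

def simulate(n, noise_sd, degree):
    """Fit a polynomial of the given degree to noisy data and return test MSE."""
    x = rng.uniform(-1, 1, size=n)
    y = np.sin(3 * x) + rng.normal(scale=noise_sd, size=n)   # hypothetical true signal
    x_test = rng.uniform(-1, 1, size=1000)
    y_test = np.sin(3 * x_test) + rng.normal(scale=noise_sd, size=1000)
    coeffs = np.polyfit(x, y, degree)
    return np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

# Underfitting (degree 1), a reasonable fit (degree 3) and overfitting (degree 10)
for degree in (1, 3, 10):
    for n in (20, 200):
        print(f"degree={degree:2d} n={n:3d} test MSE={simulate(n, 0.3, degree):.3f}")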

Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. Nature Methods 14:1119–1120.

...more about the Points of Significance column