
# piet mondrian: exciting

EMBO Practical Course: Bioinformatics and Genome Analysis, 5–17 June 2017.

# visualization + design

The 2016 Pi Day art imagines the digits of Pi as physical masses collapsing under gravity and is featured in the articles The Gravity of Pi and The Boundless Beauty of Pi at the Scientific American SA Visual blog.

# $\pi$ Day 2015 Art Posters

2016 $\pi$ approximation day
2016 $\pi$ day
2015 $\pi$ day
2014 $\pi$ approx day
2014 $\pi$ day
2014 $\pi$ day
2013 $\pi$ day
Circular $\pi$ art

On March 14th celebrate Pi Day. Hug $\pi$—find a way to do it.

Those who favour $\tau=2\pi$ will have to postpone celebrations until July 26th. That's what you get for thinking that $\pi$ is wrong.

If you're not into details, you may opt to party on July 22nd, which is $\pi$ approximation day ($\pi$ ≈ 22/7). It's 20% more accurate than the official Pi Day!
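The 20% claim is easy to check: compare the relative error of the date-style approximation 3.14 with that of 22/7. A quick sketch:

```python
import math

# Relative error of each date-based approximation to pi.
pi_day = 3.14        # March 14, written 3.14
approx_day = 22 / 7  # July 22, written 22/7

err_pi_day = abs(pi_day - math.pi) / math.pi
err_approx = abs(approx_day - math.pi) / math.pi

# 22/7 overshoots pi by less than 3.14 undershoots it;
# its error is roughly 20% smaller.
print(f"3.14 error:  {err_pi_day:.6%}")
print(f"22/7 error:  {err_approx:.6%}")
print(f"improvement: {1 - err_approx / err_pi_day:.1%}")
```

The improvement works out to about 20.6%, consistent with the claim above.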

Finally, if you believe that $\pi = 3$, you should read why $\pi$ is not equal to 3.

All art posters are available for purchase.
I take custom requests.

Not a circle in sight in the 2015 $\pi$ day art. Try to figure out how up to 612,330 digits are encoded before reading about the method. $\pi$'s transcendental friends $\phi$ and $e$ are there too—golden and natural. Get it?

This year's $\pi$ day is particularly special. The first ten digits of $\pi$ spell out a full date and time in the North American month-day-year convention: 3-14-15 9:26:53.

The art has been featured in Ana Swanson's Wonkblog article at the Washington Post—10 Stunning Images Show The Beauty Hidden in $\pi$.

This year's art has a modern Bauhaus style. Sharp edges, lines and solid colors. Potato farms from space. CPUs from up close. If you think the pieces look like the art of Piet Mondrian, you'd be right.

3,628 digits of $\pi$ in a 6-level treemap. Uniform line thickness. Bauhaus prime colors. (posters, BUY ARTWORK)

The digits of $\pi$ are encoded in something that looks like a treemap. I explain the method in the methods section, but before reading it, see if you can figure it out yourself.
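If you want to play along, here is one purely hypothetical way digits could drive a nested-rectangle layout. This is an illustration, not the method actually used in the art: each digit $d$ splits the current rectangle into $d+1$ strips, alternating cut direction at each level.

```python
# A hypothetical digit-to-treemap encoding (NOT the actual method used
# in the art): each digit d splits the current rectangle into d + 1
# strips, alternating between vertical and horizontal cuts per level.
def build_treemap(digits, x, y, w, h, level, max_level, out):
    if level == max_level or not digits:
        return digits
    n = digits[0] + 1              # number of child strips for this digit
    digits = digits[1:]
    for i in range(n):
        if level % 2 == 0:         # vertical cuts at even levels
            child = (x + i * w / n, y, w / n, h)
        else:                      # horizontal cuts at odd levels
            child = (x, y + i * h / n, w, h / n)
        out.append((level + 1, child))
        digits = build_treemap(digits, *child, level + 1, max_level, out)
    return digits

rects = []
digits = [3, 1, 4, 1, 5, 9, 2, 6]  # first digits of pi
build_treemap(digits, 0.0, 0.0, 1.0, 1.0, 0, 3, rects)
print(len(rects), "rectangles")
```

Deeper levels consume digits faster, which is why adding a level to the treemap multiplies the number of digits it can hold.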

2,258 digits of $\phi$, 3,855 digits of $e$ and 3,628 digits of $\pi$ in 6-level treemaps. Uniform line thickness. Brewer palette sequential greys. (posters, BUY ARTWORK)

I briefly experimented with the 4-color theorem when applying color to the treemap, but the result turned out to lack interesting structure. Well, at least some graphs were made.
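For anyone curious about the mechanics behind such experiments, a standard starting point (not necessarily what was used here) is greedy coloring of the rectangle adjacency graph. The four-color theorem guarantees a 4-coloring exists for any planar map, though the greedy heuristic may use more colors:

```python
# Greedy coloring of an adjacency graph: neighbors never share a color.
# The four-color theorem guarantees 4 colors suffice for a planar map,
# but greedy coloring is a heuristic and may exceed that.
def greedy_color(adjacency):
    color = {}
    for node in adjacency:
        used = {color[n] for n in adjacency[node] if n in color}
        c = 0
        while c in used:  # pick the smallest color not used by a neighbor
            c += 1
        color[node] = c
    return color

# hypothetical adjacency for five rectangles in a treemap
adj = {
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b", "d", "e"],
    "d": ["b", "c", "e"],
    "e": ["c", "d"],
}
colors = greedy_color(adj)
print(colors)
```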

I experimented with different treemap resolutions. For treemaps that use an outline around each rectangle, I decided to stop at 8 levels, at which point 111,469 digits of $\pi$ can be encoded.

3,628 digits of $\pi$ in a 6-level treemap. Uniform line thickness. Bauhaus prime colors. (posters, BUY ARTWORK)
20,244 digits of $\pi$ in a 7-level treemap. Uniform line thickness. Bauhaus prime colors. (posters, BUY ARTWORK)
111,469 digits of $\pi$ in an 8-level treemap. Uniform line thickness, slightly thinner than for the 7-level map. Bauhaus prime colors. (posters, BUY ARTWORK)

I also made a 9-level treemap without the outlines, which encodes 612,330 digits. When rendered at 20,833 × 20,833 pixels (I needed the image in bitmap form to provide the posters for sale), some regions are essentially a pixel in size, as seen in the 1:1 crop below.

612,330 digits of $\pi$ in a 9-level treemap. Bauhaus prime colors. (posters, BUY ARTWORK)
1:1 crop of 612,330 digits of $\pi$ in a 9-level treemap. Bauhaus prime colors. (posters)


# Essentials of Data Visualization—8-part video series

Mon 16-01-2017

In collaboration with Phil Poronnik and Kim Bell-Anderson at the University of Sydney, I'm delighted to share our 8-part video series about thinking about drawing data and communicating science.

Essentials of Data Visualization: Thinking about drawing data and communicating science.

We've created 8 videos, each focusing on a different essential idea in data visualization: encoding, shapes, color, uncertainty, design, drawing missing or unobserved data, labels and process.

The videos were designed as teaching materials. Each video comes with a slide deck and exercises.

# P values and the search for significance

Mon 16-01-2017
Little P value
What are you trying to say
Of significance?
—Steve Ziliak

We've written about P values before and warned readers about common misconceptions about them, which are so rife that the American Statistical Association itself has a long statement about them.

This month we publish the first of a two-part article about P values. Here we look at 'P value hacking' and 'data dredging', questionable practices that invalidate the correct interpretation of P values.

Nature Methods Points of Significance column: P values and the search for significance. (read)

We also illustrate how P values can lead us astray by asking "What is the smallest P value we can expect if the null hypothesis is true but we have done many tests, either explicitly or implicitly?"
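The intuition behind that question can be seen with a short simulation. Under a true null hypothesis, P values are uniform on (0, 1), so across $n$ independent tests the smallest P value has expected value $1/(n+1)$ — impressively small P values arise by chance alone. A sketch:

```python
import random

# Under a true null hypothesis, P values are uniform on (0, 1).
# Across n independent tests, the smallest P value has expected
# value 1/(n+1): small P values appear by chance alone.
random.seed(1)
n_tests, n_trials = 100, 2000
smallest = [min(random.random() for _ in range(n_tests))
            for _ in range(n_trials)]
mean_min = sum(smallest) / n_trials
print(f"mean smallest P over {n_tests} tests: {mean_min:.4f}")
print(f"theory: 1/(n+1) = {1 / (n_tests + 1):.4f}")
```

With 100 tests, the smallest P value hovers near 0.01 even when nothing is going on — exactly the trap that data dredging falls into.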

Incidentally, this is our first column in which the standfirst is a haiku.

Altman, N. & Krzywinski, M. (2017) Points of Significance: P values and the search for significance. Nature Methods 14:3–4.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, P values and t-tests. Nature Methods 10:1041–1042.

# Intuitive Design

Thu 03-11-2016

Appeal to intuition when designing with value judgments in mind.

Figure clarity and concision are improved when the selection of shapes and colors is grounded in the Gestalt principles, which describe how we visually perceive and organize information.

One of the Gestalt principles tells us that the magenta and green shapes will be perceived as two groups, overriding the fact that the shapes within each group might be different. What the principle does not tell us is how the reader is likely to value each group. (read)

The Gestalt principles are value free. For example, they tell us how we group objects but do not speak to any meaning that we might intuitively infer from visual characteristics.

Nature Methods Points of View column: Intuitive Design. (read)

This month, we discuss how appealing to such intuitions—related to shapes, colors and spatial orientation—can help us add information to a figure as well as anticipate and encourage useful interpretations.

Krzywinski, M. (2016) Points of View: Intuitive Design. Nature Methods 13:895.

# Regularization

Fri 04-11-2016

Constraining the magnitude of parameters of a model can control its complexity.

This month we continue our discussion about model selection and evaluation and address how to choose a model that avoids both overfitting and underfitting.

Ideally, we want to avoid having either an underfitted model, which is usually a poor fit to the training data, or an overfitted model, which is a good fit to the training data but not to new data.

Nature Methods Points of Significance column: Regularization (read)

Regularization is a process that penalizes the magnitude of model parameters. This is done by not only minimizing the SSE, $\mathrm{SSE} = \sum_i (y_i - \hat{y}_i)^2$, as in an ordinary fit, but also adding to the minimized quantity the sum of the model's squared parameters, giving $\mathrm{SSE} + \lambda \sum_i \hat{\beta}^2_i$.
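This penalized objective has a closed-form solution, $\hat{\beta} = (X^\top X + \lambda I)^{-1} X^\top y$, known as ridge regression. A minimal sketch with simulated data (coefficient values are illustrative) shows the characteristic shrinkage as $\lambda$ grows:

```python
import numpy as np

# Ridge regularization: minimize SSE + lambda * sum(beta^2).
# Closed form: beta = (X'X + lambda I)^-1 X'y.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(size=50)

def ridge(X, y, lam):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# As lambda grows, coefficients shrink toward zero.
for lam in (0.0, 1.0, 100.0):
    beta = ridge(X, y, lam)
    print(lam, np.round(beta, 3), "norm:", round(float(beta @ beta), 3))
```

At $\lambda = 0$ this reduces to the ordinary least-squares fit; large $\lambda$ trades a little bias for a large reduction in variance.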

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Regularization. Nature Methods 13:803-804.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Model Selection and Overfitting. Nature Methods 13:703-704.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Classifier evaluation. Nature Methods 13:603-604.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. Nature Methods 13:541-542.

# Model Selection and Overfitting

Fri 04-11-2016

With four parameters I can fit an elephant and with five I can make him wiggle his trunk. —John von Neumann.

By increasing the complexity of a model, it is easy to make it fit to data perfectly. Does this mean that the model is perfectly suitable? No.

When a model has a relatively large number of parameters, it is likely to be influenced by the noise in the data, which varies across observations, as much as any underlying trend, which remains the same. Such a model is overfitted—it matches training data well but does not generalize to new observations.

Nature Methods Points of Significance column: Model Selection and Overfitting (read)

We discuss the use of training, validation and testing data sets and how they can be used, with methods such as cross-validation, to avoid overfitting.
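The core mechanic of $k$-fold cross-validation is a partitioning scheme in which each observation is held out for validation exactly once and used for training $k-1$ times. A minimal sketch (function name is my own):

```python
import random

# k-fold cross-validation indices: each observation is held out for
# validation exactly once and used for training k - 1 times.
def kfold_indices(n, k, seed=0):
    idx = list(range(n))
    random.Random(seed).shuffle(idx)        # randomize fold assignment
    folds = [idx[i::k] for i in range(k)]   # k roughly equal folds
    for i in range(k):
        valid = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, valid

for train, valid in kfold_indices(10, 5):
    print(sorted(valid), "held out")
```

Averaging a model's error over the $k$ held-out folds estimates its performance on new data, which is what guards against selecting an overfitted model.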

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Model Selection and Overfitting. Nature Methods 13:703-704.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Classifier evaluation. Nature Methods 13:603-604.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. Nature Methods 13:541-542.

# Classifier Evaluation

Tue 13-09-2016

It is important to understand both what a classification metric expresses and what it hides.

We examine various metrics used to assess the performance of a classifier. We show that a single metric is insufficient to capture performance—for any given metric, a variety of scenarios yield the same value.

Nature Methods Points of Significance column: Classifier Evaluation (read)

We also discuss ROC curves and the AUC, and how their interpretation changes with class balance.
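The AUC has a useful probabilistic reading: it is the chance that a randomly chosen positive example outranks a randomly chosen negative one. That interpretation can be computed directly, without tracing the ROC curve (scores and labels below are made up for illustration):

```python
# AUC via its probabilistic interpretation: the chance that a randomly
# chosen positive outranks a randomly chosen negative (ties count half).
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3]
labels = [1,   1,   0,   1,   0,    1,   0,   0]
print("AUC:", auc(scores, labels))  # 13 of 16 positive-negative pairs ranked correctly
```

Note that this pairwise formulation makes one limitation plain: the AUC says nothing about how many positives and negatives there are, which is why its interpretation shifts with class balance.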

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Classifier evaluation. Nature Methods 13:603-604.