Lately, I've been making a lot of square things round. So when Rhiannon Macrae, the Editor of Trends in Genetics, requested a Circos-like cover image for the human genetics special edition of the journal, I started drawing circles.

The image was published on the cover of Trends in Genetics human genetics special issue (Trends in Genetics October 2012, 28 (10)).

Circos has appeared on covers of journals and books. Some of the images were designed by me and others were drawn from papers published in the issue.

I have a collection of unpublished Circos posters and thought these might be a good starting point. Rhiannon and I narrowed the choice down to the black-and-white design that showed sequenced organisms. We also liked the complex style of a panel of hundreds of Circos images generated with the tableviewer.

The idea was that the foreground would be more artistic and stylized, while the background would be more technical and complex. I have thousands of images available from the tableviewer (e.g. a huge 15,129-image matrix).

Rhiannon also wanted to include the quote by Henry David Thoreau, *"Nature and human life are as various as our several constitutions. Who shall say what prospect life offers to another?"* This reminded me of a similar but more tragic line from Shakespeare's Julius Caesar, *"How many ages hence shall this our lofty scene be acted over in states unborn and accents yet unknown!"*

In the early comps we played around with the idea of using non-genomics elements in the image, such as coins. We thought that we could use the variety of color and shape of the coins to communicate the idea of genetic diversity. However, after wrestling with how to do this effectively the concept was scrapped — the idea of using coins felt both arcane and arbitrary.

I decided to go with a warm brown color scheme. It's not a color I use a lot of, which makes me think that I *should* try to do more with it.

Deep brown provides great contrast for saturated colors, though I had to be careful not to make the image look too kitschy with an excess of colour variation. In some of the early comps shown above, two or more different color palettes were used (e.g. grey/red/blue and false color), and this lowered the overall visual cohesion of the image.

It's always a good idea to add variety to design. After all, without *any variety* we'd be left with a blank page. Ok, so variety *is* good, but too much variety is very bad, and can make you wish for that blank page again. Think about this: one kind of variety already provides variety! A variety of variety (I run the risk of recursing myself *ad infinitum*) can not only compete for attention but resonate destructively (that's design-speak for "turn into visual mush").

Everyone liked the combination of bright colors and dark background. This is an approach I favour too, which has worked well on other covers.

Briefly I experimented with various brush and pencil filters to give the image a more hand-drawn and organic look. Most of the illustrations I generate are very digital — blocks of solid color and high-contrast shapes — and I thought a departure from this look could work here. However, as with the coins, this path didn't lead anywhere productive.

Celebrate Pi Approximation Day (July 22nd) with the art of arm waving. This year I take the most accurate approximations (m/n, m = 1..10,000) and look at their accuracy.

I turned to the spiral again after applying it to stacked ring plots of frequency distributions in Pi for the 2014 Pi Day.
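A minimal sketch of that scan (my reconstruction, not the original script): for each numerator m, the best denominator is n = round(m/π), and we can track the record-setting approximations:

```python
import math

# Reconstruction of the approximation scan (illustrative, not the
# original script): for each numerator m, the best integer denominator
# is n = round(m / pi); keep only record-setting accuracies.
records = []
best_err = float("inf")
for m in range(1, 10001):
    n = round(m / math.pi)
    if n == 0:
        continue  # m = 1 rounds to a zero denominator
    err = abs(m / n - math.pi) / math.pi  # relative error of m/n
    if err < best_err:
        best_err = err
        records.append((m, n, err))
```

Both 22/7 (Pi Approximation Day itself) and the famous 355/113 show up among the record-setters.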

Our 10th Points of Significance column! Continuing with our previous discussion about comparative experiments, we introduce ANOVA and blocking. Although this column appears to introduce two new concepts (ANOVA and blocking), you've seen both before, though under a different guise.

If you know the *t*-test you've already applied analysis of variance (ANOVA), though you probably didn't realize it. In ANOVA we ask whether the variation within our samples is compatible with the variation between our samples (sample means). If the samples don't all have the same mean then we expect the latter to be larger. The ANOVA test statistic (*F*) assigns significance to the ratio of these two quantities. When we only have two samples and apply the *t*-test, *t*^{2} = *F*.
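The *t*^{2} = *F* identity is easy to verify numerically. A minimal sketch in plain Python, with made-up data (the identity is algebraic and holds for any two samples):

```python
import math

def mean(x):
    return sum(x) / len(x)

def var(x):  # unbiased sample variance
    m = mean(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)

# two made-up samples (any data works; the identity is algebraic)
a = [4.1, 3.9, 4.5, 4.4, 4.0]
b = [3.2, 3.6, 3.1, 3.4, 3.3]
n1, n2 = len(a), len(b)

# pooled two-sample t statistic
sp2 = ((n1 - 1) * var(a) + (n2 - 1) * var(b)) / (n1 + n2 - 2)
t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# one-way ANOVA F statistic for the same two groups
grand = mean(a + b)
ssb = n1 * (mean(a) - grand) ** 2 + n2 * (mean(b) - grand) ** 2  # between
ssw = (n1 - 1) * var(a) + (n2 - 1) * var(b)                      # within
F = (ssb / 1) / (ssw / (n1 + n2 - 2))  # df_between = k - 1 = 1

assert abs(t ** 2 - F) < 1e-9  # t^2 = F, exactly
```

With more than two groups there is no single *t* to square, and the *F* ratio takes over.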

ANOVA naturally incorporates and partitions sources of variation—the effects of variables on the system are determined based on the amount of variation they contribute to the total variation in the data. If this contribution is large, we say that the variation can be "explained" by the variable and infer an effect.

We discuss how data collection can be organized using a randomized complete block design to account for sources of uncertainty in the experiment. This process is called blocking because we are blocking the variation from a known source of uncertainty from interfering with our measurements. You've already seen blocking in the paired *t*-test example, in which the subject (or experimental unit) was the block.

We've worked hard to bring you 20 pages of statistics primers (though it feels more like 200!). The column is taking a month off in August, as we shrink our error bars.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Analysis of Variance (ANOVA) and Blocking *Nature Methods* **11**:699-700.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments *Nature Methods* **11**:597-598.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.

This month, Points of Significance begins a series of articles about experimental design. We start by returning to the two-sample and paired *t*-tests for a discussion of biological and experimental variability.

We introduce the concept of blocking using the paired *t*-test as an example and show how biological and experimental variability can be related using the correlation coefficient, *ρ*, and how its value impacts the relative performance of the paired and two-sample *t*-tests.
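The role of *ρ* can be seen directly in the variance of the paired differences, which is what the paired *t*-test operates on: var(*d*) = var(*x*) + var(*y*) − 2*ρ*·sd(*x*)·sd(*y*). A small plain-Python check, with made-up before/after measurements (numbers are illustrative only):

```python
def mean(x):
    return sum(x) / len(x)

def var(x):  # unbiased sample variance
    m = mean(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)

def cov(x, y):  # unbiased sample covariance
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)

# made-up before/after measurements on the same six subjects
before = [10.2, 11.1, 9.8, 10.5, 11.4, 10.0]
after  = [11.0, 12.3, 10.1, 11.6, 12.0, 10.9]
d = [a - b for a, b in zip(after, before)]

sd_b, sd_a = var(before) ** 0.5, var(after) ** 0.5
r = cov(before, after) / (sd_b * sd_a)  # sample estimate of rho

# variance seen by the paired t-test: shrinks as rho grows
lhs = var(d)
rhs = var(before) + var(after) - 2 * r * sd_b * sd_a
assert abs(lhs - rhs) < 1e-9  # exact identity for sample statistics
```

The larger *ρ*, the smaller var(*d*), and the more the paired test gains over the two-sample test.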

We also emphasize that when reporting data analyzed with the paired *t*-test, differences in sample means (and their associated 95% CI error bars) should be shown—not the original samples—because the correlation in the samples (and its benefits) cannot be gleaned directly from the sample data.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments *Nature Methods* **11**:597-598.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.

Our May Points of Significance Nature Methods column jumps straight into dealing with skewed data using non-parametric tests.

We introduce non-parametric tests and simulate data scenarios to compare their performance to the *t*-test. You might be surprised—the *t*-test is extraordinarily robust to distribution shape, as we've discussed before. When data are highly skewed, non-parametric tests perform better, with higher power. However, if sample sizes are small these tests are limited to a small number of possible *P* values, of which none may be less than 0.05!
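That hard floor on *P* comes from the discreteness of rank statistics. For the exact two-sided Wilcoxon rank-sum (Mann-Whitney) test, the most extreme arrangement of the *n* + *m* ranks has probability 1/C(*n*+*m*, *n*), so the smallest achievable two-sided *P* is twice that. A quick sketch (my own, using Python's `math.comb`):

```python
from math import comb

def min_two_sided_p(n, m):
    # exact Wilcoxon rank-sum / Mann-Whitney: the most extreme ranking
    # of n vs m observations has probability 1 / C(n+m, n); the
    # two-sided P value doubles it
    return 2 / comb(n + m, n)

for n in (2, 3, 4, 5):
    print(n, min_two_sided_p(n, n))
```

With three observations per group the smallest possible two-sided *P* is 0.1, so no outcome can reach 0.05 no matter how separated the samples are; with four per group the floor drops below 0.05.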

Krzywinski, M. & Altman, N. (2014) Points of Significance: Non Parametric Testing *Nature Methods* **11**:467-468.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.

In the April Points of Significance Nature Methods column, we continue our discussion of comparing samples and consider what happens when we run a large number of tests.

Observing statistically rare test outcomes is expected if we run enough tests. These are statistically, not biologically, significant. For example, if we run *N* tests, the smallest *P* value that we have a 50% chance of observing is 1–exp(–ln2/*N*). For *N* = 10^{k} this *P* value is *P*_{k}=10^{–k}ln2 (e.g. for 10^{4}=10,000 tests, *P*_{4}=6.9×10^{–5}).
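The formula follows from the distribution of the minimum of *N* uniform *P* values: solving 1−(1−*p*)^{*N*} = 0.5 gives *p* = 1−2^{−1/*N*} = 1−exp(−ln2/*N*). A quick check (plain Python, my own sketch):

```python
import math

def median_min_p(N):
    # smallest P value we have a 50% chance of observing among
    # N independent tests: solve 1 - (1 - p)**N = 0.5 for p
    return 1 - math.exp(-math.log(2) / N)

# for large N this is very close to ln2 / N
p4 = median_min_p(10 ** 4)
print(p4)  # ~6.9e-05, matching P_4 above
```

For *N* = 10,000 this returns ≈6.9×10^{−5}, matching *P*_{4} above.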

We discuss common correction schemes such as Bonferroni, Holm, Benjamini & Hochberg and Storey's *q* and show how they impact the false positive rate (FPR), false discovery rate (FDR) and power of a batch of tests.
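The column covers these schemes in detail; as a rough illustration (my own minimal implementations, not code from the column), here is how Bonferroni and Benjamini-Hochberg differ on the same batch of *P* values:

```python
def bonferroni_reject(pvals, alpha=0.05):
    # controls the family-wise error rate: reject if p <= alpha / m
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def bh_reject(pvals, alpha=0.05):
    # Benjamini-Hochberg: controls the FDR at alpha; find the largest
    # rank k with p_(k) <= k * alpha / m, reject everything up to it
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    kmax = 0
    for rank, i in enumerate(order, 1):
        if pvals[i] <= rank * alpha / m:
            kmax = rank
    reject = [False] * m
    for rank, i in enumerate(order, 1):
        if rank <= kmax:
            reject[i] = True
    return reject

# a made-up batch of eight P values
p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
print(sum(bonferroni_reject(p)), sum(bh_reject(p)))  # prints: 1 2
```

On this batch Bonferroni rejects only one test, while BH, which tolerates a controlled fraction of false discoveries rather than guarding against any, rejects two.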

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part II — Multiple Testing *Nature Methods* **11**:355-356.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.