
The Points of Significance column was launched in September 2013 as an educational resource for authors, providing practical suggestions about best practices in statistical analysis and reporting.

This month we launch a new column "Points of Significance" devoted to statistics, a topic of profound importance for biological research, but one that often doesn't receive the attention it deserves.

The "aura of exactitude" that often surrounds statistics is one of the main notions that the Points of Significance column will attempt to dispel, while providing useful pointers on using and evaluating statistical measures.

—Dan Evanko, Let's Give Statistics the Attention it Deserves in Biological Research

The column is co-authored with Naomi Altman (Pennsylvania State University). Paul Blainey (Broad) is a contributing co-author.

Each column is written with continuity and consistency in mind. Our goal is to never rely on concepts that we have not previously discussed. We do not assume previous statistical knowledge—only basic math. Concepts are illustrated using practical examples that embody the ideas without extraneous complicated details. All of the figures are designed with the same approach—as simple and self-contained as possible.

I've posted a writeup about the design and redesign process behind the figures in our Nature Methods Points of Significance column.

I have selected several figures from our past columns and show how they evolved from their draft to published versions.

Clarity, concision and space constraints—we have only 3.4" of horizontal space—all have to be balanced for a figure to be effective.

It's nearly impossible to find case studies of scientific articles (or figures) through the editing and review process. Nobody wants to show their drafts. With this writeup I hope to add to this space and encourage others to reveal their process. Students love this. See whether you agree with my decisions!

Past columns have described experimental designs that mitigate the effect of variation: random assignment, blocking and replication.

The goal of these designs is to observe a reproducible effect that can be due only to the treatment, avoiding confounding and bias. At the same time, the design must sample enough variability to estimate how much we expect the effect to differ if the measurements are repeated with similar but not identical samples (replicates).

We need to distinguish sources of variation that are nuisance factors in measuring mean biological effects from those that are required to assess how much effects vary in the population.

Altman, N. & Krzywinski, M. (2015) Points of Significance: Sources of Variation *Nature Methods* **12**:5-6.

1. Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments *Nature Methods* **11**:597-598.

2. Krzywinski, M. & Altman, N. (2014) Points of Significance: Analysis of variance (ANOVA) and blocking *Nature Methods* **11**:699-700.

3. Blainey, P., Krzywinski, M. & Altman, N. (2014) Points of Significance: Replication *Nature Methods* **11**:879-880.

We've previously written about how to analyze the impact of a single variable in our ANOVA column. Complex biological systems are rarely so obliging: multiple experimental factors interact to produce effects.

ANOVA is a natural way to analyze multiple factors. It can incorporate the possibility that the factors interact—the effect of one factor depends on the level of another factor. For example, the potency of a drug may depend on the subject's diet.

We can increase the power of the analysis by allowing for interaction, as well as by blocking.
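As a concrete illustration of interaction, with made-up numbers rather than data from the column, we can compare the drug effect across diets. A nonzero difference of differences is exactly what the interaction term in a two-factor ANOVA captures:

```python
# Hypothetical cell means for a 2x2 drug x diet design (illustrative
# numbers only). Interaction = the drug effect differs between diets.
means = {
    ("control", "normal"):  10.0,
    ("drug",    "normal"):  14.0,   # drug effect on normal diet: +4
    ("control", "highfat"): 11.0,
    ("drug",    "highfat"): 19.0,   # drug effect on high-fat diet: +8
}

drug_effect_normal  = means[("drug", "normal")]  - means[("control", "normal")]
drug_effect_highfat = means[("drug", "highfat")] - means[("control", "highfat")]

# The interaction contrast is nonzero: the factors do not act additively.
interaction = drug_effect_highfat - drug_effect_normal  # 4.0
```

If the two drug effects were equal, the interaction contrast would vanish and an additive two-factor model would suffice.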

Krzywinski, M. & Altman, N. (2014) Points of Significance: Two-Factor Designs *Nature Methods* **11**:1187-1188.

Blainey, P., Krzywinski, M. & Altman, N. (2014) Points of Significance: Replication *Nature Methods* **11**:879-880.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Analysis of variance (ANOVA) and blocking *Nature Methods* **11**:699-700.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments *Nature Methods* **11**:597-598.

Sources of noise in experiments can be mitigated and assessed by nested designs. This kind of experimental design naturally models replication, which was the topic of last month's column.

Nested designs are appropriate when we want to use the data derived from experimental subjects to make general statements about populations. In this case, the subjects are *random* factors in the experiment, in contrast to the *fixed* factors we've seen previously.

In ANOVA, random factors provide information about the amount of noise contributed by each factor. This differs from inferences made about fixed factors, which typically concern a change in mean. Using the *F*-test, we can determine whether each layer of replication (e.g. animal, tissue, cell) contributes additional variation to the overall measurement.
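A minimal sketch of that random-factor *F*-test, using made-up nested data (3 animals, 4 cells measured per animal): the *F* ratio compares between-animal variation to cell-level noise.

```python
import numpy as np
from scipy import stats

# Illustrative nested data (invented values): rows are animals,
# columns are cells measured within each animal.
cells = np.array([[10.0, 11.0,  9.0, 10.0],   # animal A
                  [13.0, 12.0, 14.0, 13.0],   # animal B
                  [16.0, 17.0, 15.0, 16.0]])  # animal C
n_animals, n_cells = cells.shape

grand = cells.mean()
animal_means = cells.mean(axis=1)

# Mean square between animals and mean square within animals (among cells).
ms_between = n_cells * ((animal_means - grand) ** 2).sum() / (n_animals - 1)
ms_within = ((cells - animal_means[:, None]) ** 2).sum() / (n_animals * (n_cells - 1))

# Does the animal layer contribute variation beyond cell-to-cell noise?
F = ms_between / ms_within
p = stats.f.sf(F, n_animals - 1, n_animals * (n_cells - 1))
```

With these numbers the between-animal mean square dwarfs the within-animal one, so the animal layer clearly contributes variation; with real data the same ratio and degrees of freedom apply at each nesting level.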

Krzywinski, M., Altman, N. & Blainey, P. (2014) Points of Significance: Nested designs *Nature Methods* **11**:977-978.

Blainey, P., Krzywinski, M. & Altman, N. (2014) Points of Significance: Replication *Nature Methods* **11**:879-880.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Analysis of variance (ANOVA) and blocking *Nature Methods* **11**:699-700.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments *Nature Methods* **11**:597-598.

It's fitting that the column published just before Labor Day weekend is all about how best to allocate labor.

Replication is used to decrease the impact of variability from parts of the experiment that contribute noise. For example, we might measure data from more than one mouse to attempt to generalize over all mice.

It's important to distinguish technical replicates, which attempt to capture the noise in our measuring apparatus, from biological replicates, which capture biological variation. The former give us no information about biological variation and cannot be used to directly make biological inferences. To do so is to commit *pseudoreplication*. Technical replicates are useful to reduce the noise so that we have a better chance to detect a biologically meaningful signal.
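The distinction can be made quantitative with the standard variance-of-the-mean formula for a nested design (the variance values below are assumed for illustration): technical replicates shrink only the technical term, so the biological term sets a floor that only more biological replicates can lower.

```python
def var_of_mean(var_bio, var_tech, n_bio, n_tech):
    # Variance of the grand mean when each of n_bio biological replicates
    # is measured n_tech times (standard nested-variance formula).
    return var_bio / n_bio + var_tech / (n_bio * n_tech)

# Illustrative variance components (assumed numbers, not from the column).
v_few_tech  = var_of_mean(var_bio=4.0, var_tech=1.0, n_bio=3, n_tech=2)    # 1.5
v_many_tech = var_of_mean(var_bio=4.0, var_tech=1.0, n_bio=3, n_tech=100)  # ~1.34
v_more_bio  = var_of_mean(var_bio=4.0, var_tech=1.0, n_bio=6, n_tech=2)    # 0.75
```

Even 100 technical replicates per animal cannot push the variance below `var_bio / n_bio` (here 4/3), while simply doubling the number of animals does better: this is why averaging technical replicates cannot substitute for biological replication.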

Blainey, P., Krzywinski, M. & Altman, N. (2014) Points of Significance: Replication *Nature Methods* **11**:879-880.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Analysis of variance (ANOVA) and blocking *Nature Methods* **11**:699-700.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments *Nature Methods* **11**:597-598.

I was commissioned by Scientific American to create an information graphic that showed how our genomes are more similar to those of the chimp and bonobo than to the gorilla.

I had about 5 x 5 inches of print space to work with. For 4 genomes? No problem. Bring out the Hilbert curve!

To accompany the piece, I will be posting to the Scientific American blog about the process of creating the figure. And to emphasize that the *genome is not a blueprint*!

As part of this project, I created some Hilbert curve art pieces. And while exploring, I found thousands of Hilbertonians!
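The Hilbert curve works for genome layouts because it keeps nearby sequence positions nearby on the page. The mapping from a position along the curve to grid coordinates is the standard `d2xy` routine; the sketch below is a generic implementation, not the code used for the Scientific American graphic:

```python
def d2xy(order, d):
    """Map position d along a Hilbert curve of the given order to (x, y)
    on a 2**order x 2**order grid (standard iterative algorithm)."""
    x = y = 0
    s, t = 1, d
    while s < 2 ** order:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                        # rotate/reflect the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Consecutive positions along the curve are always grid neighbours,
# which is what preserves locality when laying out a genome.
points = [d2xy(3, d) for d in range(4 ** 3)]
```

Walking `d` from 0 to 4^order − 1 traces the whole curve; any 1D annotation (e.g. sequence similarity) can then be painted at `points[d]`.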

Celebrate Pi Approximation Day (July 22nd) with the art of arm waving. This year I take the first 10,000 most accurate approximations (*m*/*n*, *m*=1..10,000) and look at their accuracy.
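The exact selection procedure behind the piece isn't spelled out here, but the idea of ranking *m*/*n* approximations by accuracy can be sketched by scanning denominators and keeping each record-setting fraction, which recovers the famous 22/7 (Pi Approximation Day itself) and 355/113:

```python
import math

def record_approximations(max_den):
    # For each denominator n, take the nearest numerator m = round(pi * n)
    # and keep (m, n) whenever its error beats every smaller denominator's.
    records, best = [], float("inf")
    for n in range(1, max_den + 1):
        m = round(math.pi * n)
        err = abs(m / n - math.pi)
        if err < best:
            best = err
            records.append((m, n))
    return records

recs = record_approximations(200)  # includes (22, 7) and (355, 113)
```

Extending `max_den` (and ranking all fractions by error instead of keeping only records) gives the kind of top-10,000 accuracy list the piece visualizes.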

I turned to the spiral again after applying it to stacked ring plots of frequency distributions in Pi for the 2014 Pi Day.

Our 10th Points of Significance column! Continuing with our previous discussion about comparative experiments, we introduce ANOVA and blocking. Although this column appears to introduce two new concepts (ANOVA and blocking), you've seen both before, though under a different guise.

If you know the *t*-test you've already applied analysis of variance (ANOVA), though you probably didn't realize it. In ANOVA we ask whether the variation within our samples is compatible with the variation between our samples (sample means). If the samples don't all have the same mean then we expect the latter to be larger. The ANOVA test statistic (*F*) assigns significance to the ratio of these two quantities. When we have only two samples and apply the *t*-test, *t*^{2} = *F*.
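The *t*^{2} = *F* identity is easy to verify numerically with SciPy's equal-variance two-sample *t*-test and one-way ANOVA (the sample values below are made up):

```python
from scipy import stats

# Two small illustrative samples (invented numbers).
a = [4.1, 5.0, 4.6, 4.3, 5.2]
b = [5.8, 6.1, 5.5, 6.4, 5.9]

t, p_t = stats.ttest_ind(a, b)   # equal-variance two-sample t-test
F, p_f = stats.f_oneway(a, b)    # one-way ANOVA on the same two samples

# t squared equals F, and the two tests return the same p-value.
```

So for two groups the two tests are the same procedure; ANOVA's contribution is that it generalizes to three or more groups.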

ANOVA naturally incorporates and partitions sources of variation—the effects of variables on the system are determined based on the amount of variation they contribute to the total variation in the data. If this contribution is large, we say that the variation can be "explained" by the variable and infer an effect.
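The partitioning described above is the sums-of-squares identity SS(total) = SS(between) + SS(within), which a few lines of NumPy confirm on made-up samples:

```python
import numpy as np

# Three illustrative samples (invented values).
groups = [np.array([4.1, 5.0, 4.6]),
          np.array([5.8, 6.1, 5.5]),
          np.array([5.0, 5.3, 5.6])]

allv = np.concatenate(groups)
grand = allv.mean()

ss_total   = ((allv - grand) ** 2).sum()
ss_within  = sum(((g - g.mean()) ** 2).sum() for g in groups)
ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)

# ss_between / ss_total is the fraction of variation "explained"
# by group membership.
```

A large `ss_between` relative to `ss_within` (after dividing each by its degrees of freedom) is what drives the *F* statistic.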

We discuss how data collection can be organized using a randomized complete block design to account for sources of uncertainty in the experiment. This process is called blocking because we block the variation from a known source of uncertainty, preventing it from interfering with our measurements. You've already seen blocking in the paired *t*-test example, in which the subject (or experimental unit) was the block.
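The pairing-as-blocking point can be checked numerically: a paired *t*-test is identical to a one-sample *t*-test on the per-subject differences, so subject-to-subject variation is removed from the comparison (the before/after values below are invented for illustration):

```python
from scipy import stats

# Illustrative before/after measurements on the same five subjects.
before = [140.0, 132.0, 155.0, 119.0, 128.0]
after_ = [135.0, 130.0, 148.0, 117.0, 126.0]

t_paired, p_paired = stats.ttest_rel(before, after_)

# Blocking by subject: analyze the within-subject differences directly.
diffs = [b - a for b, a in zip(before, after_)]
t_onesample, p_onesample = stats.ttest_1samp(diffs, 0.0)
```

The two statistics coincide exactly, which is why pairing gains power: the large spread among subjects never enters the test, only the within-subject changes do.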

We've worked hard to bring you 20 pages of statistics primers (though it feels more like 200!). The column is taking a month off in August, as we shrink our error bars.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Analysis of Variance (ANOVA) and Blocking *Nature Methods* **11**:699-700.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments *Nature Methods* **11**:597-598.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.