
Distractions and amusements, with a sandwich and coffee.

Poetry is just the evidence of life. If your life is burning well, poetry is just the ash

The Genome Research cover design takes a fun and illustrative approach to visualization. It's both art and science — in a 4:1 ratio.

The cover image accompanies the article by Cydney Nielsen from our visualization group, describing her Spark tool for visualizing epigenetics data.

Nielsen CB, Younesy H, O'Geen H, Xu X, Jackson AR, et al. (2012) Spark: A navigational paradigm for genomic data exploration. *Genome Res* **22**: 2262-2269.

Instead of a literal depiction of output from Spark, the final design presents what appears to be necklaces of the kind of tiles that Spark uses for its visual presentation. I took a chance that Genome Research had a sense of humor. Luckily, they did and accepted the design for the cover.

*Colored tiles are playfully suspended on vertical strings to illustrate how Spark, presented in this issue, uses clustering to group genomic regions (tiles) with similar data patterns (colored heatmaps) and facilitates genome-wide data exploration.* — Genome Research 22 (11)

The image was published on the cover of the November 2012 issue of Genome Research.

I had two other covers published this year: the PNAS cover accompanied our manuscript about mouse vasculature development and the Trends in Genetics cover was commissioned.

Thinking about design ideas for the cover, I looked to the kind of visual motifs that Spark used for inspiration. Immediately the colorful tiles, which represent clustered data tracks, stood out.

Spark's output is very stylized, colorful and high contrast. It was important to preserve this aesthetic in the design. I also wanted to incorporate the idea of clustering in the design, as well as the concept that the clusters represented data from different parts of the genome.

While it was not important to illustrate explicitly how Spark organizes and analyzes data — in fact, I wanted these aspects to be subtle — it *was* important that the cover illustration had connections to Spark at several levels.

Spark was created by Cydney Nielsen, who works with me at the Genome Sciences Center. It is designed to mitigate the difficulties arising from the fact that genome-wide data is typically scattered across thousands of points of interest.

Genome browsers integrate diverse data sets by plotting them as vertically stacked tracks across a common genomic *x*-axis. Genome browsers are designed for viewing local regions of interest (e.g. an individual gene) and are frequently used during the initial data inspection and exploration phases.

Most genome browsers support zooming out along the genome coordinate to provide an overview. This type of overview is not always useful because it produces a summary across a continuous genomic range (e.g. chromosome 1) and not across the subset of regions that are of interest (e.g. genes on chromosome 1). Spark addresses this shortcoming and provides a way to help answer questions like: *What are the common data patterns across gene start sites in my data set?*

Spark's visualization is driven by clustering data tracks (e.g. ChIP-seq coverage) from across equivalent regions (e.g. gene start sites). The clustered tracks are displayed as heatmaps, with each row being a data track and each column a windowed region of the genome.
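The clustering step can be sketched with a toy k-means over windowed coverage vectors. This is an illustrative sketch only, with invented "tracks" and hand-picked starting centroids; Spark's actual clustering implementation and parameters differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic coverage profiles across 20 windows for 60 regions of interest
# (e.g. gene start sites). Half the regions peak early, half late.
n_regions, n_windows, k = 60, 20, 2
peaks = np.where(np.arange(n_regions) < 30, 5, 14)
x = np.exp(-0.5 * ((np.arange(n_windows) - peaks[:, None]) / 2.0) ** 2)
x += rng.normal(0, 0.05, x.shape)

# Minimal k-means: assign each region to its nearest centroid, then update.
# Seeding with the first and last regions guarantees non-empty clusters here.
centroids = x[[0, -1]]
for _ in range(25):
    labels = np.argmin(((x[:, None] - centroids) ** 2).sum(2), axis=1)
    centroids = np.array([x[labels == c].mean(0) for c in range(k)])

# Each cluster of regions becomes one Spark-style heatmap tile.
print(labels)
```

Each centroid summarizes a group of regions with a shared data pattern, which is exactly the kind of summary a tile's heatmap rows depict.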

With fond memories of Monte Carlo simulations from my physics days, I set out to simulate some realistic-looking, but entirely synthetic, Spark cluster tiles.

My first idea was a design which would show these tiles falling, perhaps accumulating on a pile on the ground. Quick prototypes of this idea were disappointing. The tiles appeared flimsy and too complex, while the image was largely empty. I spent several hours messing around with the rotation and pseudo-3D layout, but could not find anything that was satisfying.

I thought that doing this right would require a proper simulation within a 3D system.

To address the fact that the tiles felt flimsy and overly complicated and the design lacked depth, I simplified the tile simulation to generate 5x5 tiles. These simpler representations still embodied how Spark displayed data, but did so minimally.
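A synthetic tile of this kind can be generated in a few lines. This is a hypothetical sketch of my own, not the simulation actually used for the cover: each row is a data track whose values drift smoothly across windows, giving the heatmap its streaky look:

```python
import random

random.seed(42)

def synthetic_tile(rows=5, cols=5):
    """Simulate one Spark-style cluster tile: each row is a data track whose
    values random-walk across genomic windows (columns), clamped to [0, 1]."""
    tile = []
    for _ in range(rows):
        v = random.random()
        row = []
        for _ in range(cols):
            v = min(1.0, max(0.0, v + random.uniform(-0.25, 0.25)))
            row.append(round(v, 2))
        tile.append(row)
    return tile

tile = synthetic_tile()
for row in tile:
    print(row)
```

Mapping each value to a color ramp then yields a plausible-looking, but entirely synthetic, 5x5 tile.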

To keep with the idea that the clusters come from different regions of the genome, I thought of arranging them along line segments. Unlike the falling-tile design, this constrained the layout significantly and allowed me to play with the design to make it look like the clusters were draped over the strings. By casting a light shadow behind each string of tiles, a subtle 3D effect could be achieved while still keeping the design within a plane.

There are 11 orientations of tiles created by rotating a thin square around the vertical axis with a slight forward tilt. There are 5 rotations to the left and right at angles 10, 26, 46, 66 and 80 degrees. The rotation was achieved using Illustrator's Extrude and Bevel 3D filter.

The layout and rotation of the tiles was inspired by Flight and Fall by Rachel Nottingham, a mobile of paper birds.

I wanted to keep the layout of the Spark tiles pleasant, without being too organized. I find this to be a difficult balance to achieve — natural randomness is deceptively difficult to create by hand.

Four different versions of the design were submitted to Genome Research. I was happiest with the treatment in which the tiles maintained their color and the Spark clusters were projected as tones of white. This design felt more solid and punchy — I feel like you can reach out and touch one of those strings.

Our 10th Points of Significance column! Continuing with our previous discussion about comparative experiments, we introduce ANOVA and blocking. Although this column appears to introduce two new concepts (ANOVA and blocking), you've seen both before, though under a different guise.

If you know the *t*-test you've already applied analysis of variance (ANOVA), though you probably didn't realize it. In ANOVA we ask whether the variation within our samples is compatible with the variation between our samples (sample means). If the samples don't all have the same mean then we expect the latter to be larger. The ANOVA test statistic (*F*) assigns significance to the ratio of these two quantities. When we have only two samples and apply the *t*-test, *t*^{2} = *F*.
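The *t*^{2} = *F* identity is easy to verify numerically by computing both statistics from their textbook definitions (a quick check on made-up numbers, not material from the column):

```python
import math

# Two small made-up samples.
a = [4.9, 5.6, 6.1, 5.2, 5.8]
b = [6.4, 7.1, 6.8, 7.5, 6.9]
na, nb = len(a), len(b)
ma, mb = sum(a) / na, sum(b) / nb

# Pooled two-sample t statistic.
ssa = sum((x - ma) ** 2 for x in a)
ssb = sum((x - mb) ** 2 for x in b)
sp2 = (ssa + ssb) / (na + nb - 2)              # pooled variance
t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

# One-way ANOVA F statistic: between-group vs within-group mean squares.
grand = (sum(a) + sum(b)) / (na + nb)
msb = (na * (ma - grand) ** 2 + nb * (mb - grand) ** 2) / (2 - 1)
msw = (ssa + ssb) / (na + nb - 2)
F = msb / msw

print(t * t, F)   # identical up to floating-point error
```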

ANOVA naturally incorporates and partitions sources of variation—the effects of variables on the system are determined based on the amount of variation they contribute to the total variation in the data. If this contribution is large, we say that the variation can be "explained" by the variable and infer an effect.

We discuss how data collection can be organized using a randomized complete block design to account for sources of uncertainty in the experiment. This process is called blocking because we are blocking the variation from a known source of uncertainty from interfering with our measurements. You've already seen blocking in the paired *t*-test example, in which the subject (or experimental unit) was the block.
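A randomized complete block design is simple to generate: every treatment appears exactly once within each block, in an independently randomized order. The treatment and block names below are hypothetical placeholders:

```python
import random

random.seed(1)

treatments = ["control", "drug_A", "drug_B"]           # hypothetical treatments
blocks = ["mouse_1", "mouse_2", "mouse_3", "mouse_4"]  # hypothetical blocks

# Randomized complete block design: each block receives every treatment,
# with the within-block order randomized independently.
design = {}
for block in blocks:
    order = treatments[:]
    random.shuffle(order)
    design[block] = order

for block, order in design.items():
    print(block, order)
```

Because each block contains all treatments, block-to-block variation cancels out of the treatment comparisons.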

We've worked hard to bring you 20 pages of statistics primers (though it feels more like 200!). The column is taking a month off in August, as we shrink our error bars.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Analysis of Variance (ANOVA) and Blocking *Nature Methods* **11**:699-700.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments *Nature Methods* **11**:597-598.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.

This month, Points of Significance begins a series of articles about experimental design. We start by returning to the two-sample and paired *t*-tests for a discussion of biological and experimental variability.

We introduce the concept of blocking using the paired *t*-test as an example and show how biological and experimental variability can be related using the correlation coefficient, *ρ*, and how its value impacts the relative performance of the paired and two-sample *t*-tests.
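The benefit of positive correlation can be seen in a toy simulation (my own sketch, not the column's): give each subject a shared biological offset so its two measurements are correlated, and the spread of the within-subject differences shrinks well below what an unpaired comparison faces.

```python
import math
import random

random.seed(7)

# Toy model of paired measurements: each subject contributes a shared offset,
# so its two measurements are positively correlated.
n = 2000
xs, ys, diffs = [], [], []
for _ in range(n):
    subject = random.gauss(0, 1.0)              # between-subject variability
    x = subject + random.gauss(0, 0.5)          # measurement 1
    y = subject + 0.3 + random.gauss(0, 0.5)    # measurement 2; true effect 0.3
    xs.append(x)
    ys.append(y)
    diffs.append(y - x)

def sd(v):
    m = sum(v) / len(v)
    return math.sqrt(sum((u - m) ** 2 for u in v) / (len(v) - 1))

# Pairing subtracts out the shared subject variation: the sd of differences is
# much smaller than the combined spread an unpaired comparison contends with.
print(sd(diffs), math.sqrt(sd(xs) ** 2 + sd(ys) ** 2))
```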

We also emphasize that when reporting data analyzed with the paired *t*-test, differences in sample means (and their associated 95% CI error bars) should be shown—not the original samples—because the correlation in the samples (and its benefits) cannot be gleaned directly from the sample data.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments *Nature Methods* **11**:597-598.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.

Our May Points of Significance Nature Methods column jumps straight into dealing with skewed data using non-parametric tests.

We introduce non-parametric tests and simulate data scenarios to compare their performance to the *t*-test. You might be surprised—the *t*-test is extraordinarily robust to distribution shape, as we've discussed before. When data is highly skewed, non-parametric tests perform better and with higher power. However, if sample sizes are small they are limited to a small number of possible *P* values, of which none may be less than 0.05!
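The small-sample granularity is easy to see by enumerating the exact null distribution of a rank-sum test. A sketch for the Wilcoxon rank-sum with n = 3 per group (my own enumeration, not the column's simulation):

```python
from itertools import combinations

# Wilcoxon rank-sum with n = 3 per group: under the null, every assignment of
# ranks 1..6 to the first group is equally likely (C(6,3) = 20 outcomes).
ranks = range(1, 7)
sums = [sum(c) for c in combinations(ranks, 3)]

def p_two_sided(w):
    # Two-sided exact P: probability of a rank sum at least as extreme as w.
    lo = sum(1 for s in sums if s <= w) / len(sums)
    hi = sum(1 for s in sums if s >= w) / len(sums)
    return min(1.0, 2 * min(lo, hi))

# The most extreme outcome possible (ranks 1, 2, 3 all in one group):
print(p_two_sided(6))
```

With only 20 equally likely outcomes, the smallest achievable two-sided *P* is 2/20 = 0.1, so no result can reach *P* < 0.05.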

Krzywinski, M. & Altman, N. (2014) Points of Significance: Non Parametric Testing *Nature Methods* **11**:467-468.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.

In the April Points of Significance Nature Methods column, we continue our discussion of comparing samples and consider what happens when we run a large number of tests.

Observing statistically rare test outcomes is expected if we run enough tests. These are statistically, not biologically, significant. For example, if we run *N* tests, the smallest *P* value that we have a 50% chance of observing is 1–exp(–ln2/*N*). For *N* = 10^{k} this *P* value is *P*_{k}=10^{–k}ln2 (e.g. for 10^{4}=10,000 tests, *P*_{4}=6.9×10^{–5}).
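The formula follows from requiring a 50% chance that the minimum of *N* uniform *P* values falls below *p*: solving 1 − (1 − *p*)^{*N*} = 0.5 gives *p* = 1 − exp(−ln2/*N*) ≈ ln2/*N*. A quick numerical check:

```python
import math

# Smallest P value we have a 50% chance of observing among N independent
# tests: solve 1 - (1 - p)^N = 0.5 for p.
def median_min_p(n_tests):
    return 1 - math.exp(-math.log(2) / n_tests)

for k in (1, 2, 3, 4):
    n = 10 ** k
    # For large N this approaches the approximation ln(2)/N = 10^-k * ln(2).
    print(n, median_min_p(n), math.log(2) / n)
```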

We discuss common correction schemes such as Bonferroni, Holm, Benjamini & Hochberg and Storey's *q* and show how they impact the false positive rate (FPR), false discovery rate (FDR) and power of a batch of tests.
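Of these, Benjamini & Hochberg's step-up procedure is compact enough to hand-roll. This is a minimal sketch with made-up *P* values (for real analyses use a vetted implementation, such as `multipletests` in statsmodels):

```python
# Hand-rolled Benjamini-Hochberg FDR control at level alpha.
def benjamini_hochberg(pvalues, alpha=0.05):
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha,
    # then reject the k hypotheses with the smallest P values.
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * alpha:
            k = rank
    rejected = set(order[:k])
    return [i in rejected for i in range(m)]

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals))
```

On this example BH rejects the two smallest *P* values, whereas Bonferroni (threshold 0.05/8 = 0.00625) would reject only the first, illustrating BH's greater power at the cost of controlling FDR rather than FPR.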

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part II — Multiple Testing *Nature Methods* **11**:355-356.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.

Celebrate Pi Day (March 14th) with the art of folding numbers. This year I take the number up to the Feynman Point and apply a protein folding algorithm to render it as a path.

For those of you who liked the minimalist and colorful digit grid, I've expanded on the concept to show stacked ring plots of frequency distributions.

And if spirals are your thing...