Let me tell you about something.

Distractions and amusements, with a sandwich and coffee.

Trance opera.
• Spente le Stelle

numbers.tgz

1,000,000 digits of π, φ, e and ASN.

All the artwork can be purchased from Fine Art America.

The accidental similarity number is a kind of overlap between numbers. I came up with this concept after creating typographical art about the 4ness of π.

To construct this number for π, φ and e we first write the numbers on top of each other and then identify positions for which the numbers have the same digit.

π = 3.1415926535897932 … 21170679821 … 10270193852 …
φ = 1.6180339887498948 … 93911374847 … 08659593958 …
e = 2.7182818284590452 … 51664274274 … 32862794349 …

These digits are then used to create the accidental similarity number. In this case,

0.979 …

By definition, the decimal point is kept in its original position.
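If you'd like to play with the construction yourself, here is a minimal Python sketch of the procedure described above. The digit strings are truncated for illustration, so only the first matching digit (a 9) falls inside them; feed it the full 1,000,000-digit files from numbers.tgz to reproduce the poster's digits.

```python
# Sketch of the ASN construction: align the digit strings of the constants
# and keep the positions at which all of them have the same digit.
PI  = "3.14159265358979323846264338327950288"
PHI = "1.61803398874989484820458683436563811"
E   = "2.71828182845904523536028747135266249"

def asn(*numbers):
    """Return the accidental similarity number as a string.

    Assumes the integer parts of the inputs differ (true for pi, phi, e),
    so every match lies after the decimal point and the result is '0.' +
    the matching digits, with the decimal point kept in place.
    """
    digits = []
    for chars in zip(*numbers):
        if len(set(chars)) == 1:      # all numbers agree at this position
            digits.append(chars[0])
    return "0." + "".join(d for d in digits if d.isdigit())

print(asn(PI, PHI, E))   # 0.9 — only the first match lies within these 35 digits
```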

The poster shows the accidental similarity number for π, φ and e created from the first 1,000,000 digits of each number. There are 9,997 positions at which these numbers share the same digit, but only 9,996 are shown: each digit is colored by the distance to the next matching position, and the final match has no successor within the 1,000,000-digit input files.

The matches form an approximately Poisson process, so the distances between successive matching positions are approximately exponentially distributed with a mean of 100; about 1 − 1/e (roughly 63%) of the distances are smaller than 100.
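As a sanity check on that claim, here is a small simulation assuming independent, uniformly random digits (a reasonable model for these constants). Three digit streams agree at a given position with probability 10 × (1/10)³ = 1/100, so the gaps between matches should average 100 and roughly 1 − 1/e of them should be shorter than 100.

```python
import random

random.seed(1)
N = 1_000_000   # length of each simulated digit stream

# Positions where three independent random digit streams agree;
# each position matches with probability 1/100.
matches = [i for i in range(N)
           if random.randrange(10) == random.randrange(10) == random.randrange(10)]

gaps = [b - a for a, b in zip(matches, matches[1:])]
mean_gap = sum(gaps) / len(gaps)
frac_below_100 = sum(g < 100 for g in gaps) / len(gaps)

print(f"matches: {len(matches)}")                        # ~10,000 expected
print(f"mean gap: {mean_gap:.1f}")                       # ~100
print(f"fraction of gaps < 100: {frac_below_100:.2f}")   # ~0.63 = 1 - 1/e
```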

The font is Neutraface Slab Display Medium.

Any properties of the ASN are accidental, but curiously ASN(π, φ, e) ≈ 1.

If you find other curiously accidental properties, let me know.

Download the first 9,997 digits of the accidental similarity number. This file provides the ASN digit index, the digit and the position from which it is sampled.

I came up with the Accidental Similarity Number immediately after creating this poster showing the overlap between π, φ and e.

This thought stream started with the 4ness of π.

Our 10th Points of Significance column! Continuing our previous discussion of comparative experiments, we introduce ANOVA and blocking. Although this column appears to introduce two new concepts, you've seen both before, though under a different guise.

If you know the *t*-test you've already applied analysis of variance (ANOVA), though you probably didn't realize it. In ANOVA we ask whether the variation within our samples is compatible with the variation between our sample means. If the samples don't all have the same mean then we expect the latter to be larger. The ANOVA test statistic (*F*) assigns significance to the ratio of these two quantities. When we have only two samples and apply the *t*-test, *t*^{2} = *F*.
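A quick way to convince yourself of the *t*^{2} = *F* identity is to run both tests on the same two samples. The sketch below uses SciPy's ttest_ind and f_oneway on made-up data; the means, spreads and sample sizes are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(10.0, 1.0, size=12)   # sample 1
b = rng.normal(10.5, 1.0, size=12)   # sample 2

t, p_t = stats.ttest_ind(a, b)       # two-sample t-test (pooled variance)
F, p_F = stats.f_oneway(a, b)        # one-way ANOVA with two groups

print(f"t^2 = {t**2:.4f}, F = {F:.4f}")                 # identical up to rounding
print(f"P(t-test) = {p_t:.4f}, P(ANOVA) = {p_F:.4f}")   # identical
```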

ANOVA naturally incorporates and partitions sources of variation—the effects of variables on the system are determined based on the amount of variation they contribute to the total variation in the data. If this contribution is large, we say that the variation can be "explained" by the variable and infer an effect.

We discuss how data collection can be organized using a randomized complete block design to account for sources of uncertainty in the experiment. This process is called blocking because we are blocking the variation from a known source of uncertainty from interfering with our measurements. You've already seen blocking in the paired *t*-test example, in which the subject (or experimental unit) was the block.
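As an illustration of a randomized complete block design, the sketch below simulates made-up data in which every treatment is measured once in every block and fits a two-way ANOVA with the block as a factor (using statsmodels). The block's contribution to the variation is partitioned out rather than inflating the error term; the effect sizes and noise levels are arbitrary, not taken from the column.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
treatments, blocks = 3, 8

# Randomized complete block design: each treatment appears once per block;
# the block (e.g. subject or batch) contributes its own offset.
block_effect = rng.normal(0, 2, size=blocks)
treat_effect = np.array([0.0, 1.0, 2.0])
rows = []
for b in range(blocks):
    for t in range(treatments):
        y = 10 + treat_effect[t] + block_effect[b] + rng.normal(0, 1)
        rows.append({"y": y, "treatment": t, "block": b})
df = pd.DataFrame(rows)

# Two-way ANOVA with the block as a factor partitions out its variation.
fit = smf.ols("y ~ C(treatment) + C(block)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))
```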

We've worked hard to bring you 20 pages of statistics primers (though it feels more like 200!). The column is taking a month off in August, as we shrink our error bars.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Analysis of Variance (ANOVA) and Blocking *Nature Methods* **11**:699-700.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments *Nature Methods* **11**:597-598.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.

This month, Points of Significance begins a series of articles about experimental design. We start by returning to the two-sample and paired *t*-tests for a discussion of biological and experimental variability.

We introduce the concept of blocking using the paired *t*-test as an example and show how biological and experimental variability can be related using the correlation coefficient, *ρ*, and how its value impacts the relative performance of the paired and two-sample *t*-tests.

We also emphasize that when reporting data analyzed with the paired *t*-test, the mean difference (and its associated 95% CI error bar) should be shown, not the original samples, because the correlation in the samples (and its benefits) cannot be gleaned directly from the sample data.
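The sketch below simulates this situation with made-up numbers: a shared per-subject baseline induces a correlation *ρ* between the paired measurements, which the paired *t*-test exploits (by blocking on the subject) but the two-sample *t*-test does not. The effect size and variances are arbitrary choices for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 10

# Each subject contributes a baseline (biological variability shared by both
# measurements), so the paired observations are positively correlated.
subject = rng.normal(0.0, 2.0, size=n)          # between-subject variation
before  = 10.0 + subject + rng.normal(0, 1, n)  # measurement noise
after   = 11.0 + subject + rng.normal(0, 1, n)  # true treatment effect of 1

rho = np.corrcoef(before, after)[0, 1]
t_unpaired, p_unpaired = stats.ttest_ind(before, after)
t_paired,   p_paired   = stats.ttest_rel(before, after)

print(f"rho = {rho:.2f}")
print(f"two-sample t-test P = {p_unpaired:.3f}")  # subject variation inflates the error
print(f"paired t-test     P = {p_paired:.3f}")    # blocking on the subject removes it
```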

Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments *Nature Methods* **11**:597-598.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.

Our May Points of Significance Nature Methods column jumps straight into dealing with skewed data using non-parametric tests.

We introduce non-parametric tests and simulate data scenarios to compare their performance to the *t*-test. You might be surprised—the *t*-test is extraordinarily robust to distribution shape, as we've discussed before. When data are highly skewed, non-parametric tests perform better, with higher power. However, if sample sizes are small, non-parametric tests are limited to a small number of possible *P* values, of which none may be less than 0.05!
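Here is a rough simulation of that comparison, using arbitrary skewed distributions and sample sizes rather than the scenarios from the column: both tests are applied to exponentially distributed samples whose scales differ, and the fraction of significant results estimates each test's power.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, trials, alpha = 10, 2000, 0.05
power_t = power_u = 0

for _ in range(trials):
    # highly skewed samples with different locations
    a = rng.exponential(scale=1.0, size=n)
    b = rng.exponential(scale=2.0, size=n)
    power_t += stats.ttest_ind(a, b).pvalue < alpha       # t-test
    power_u += stats.mannwhitneyu(a, b).pvalue < alpha    # non-parametric test

print(f"t-test power:       {power_t / trials:.2f}")
print(f"Mann-Whitney power: {power_u / trials:.2f}")
```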

Krzywinski, M. & Altman, N. (2014) Points of Significance: Nonparametric Tests *Nature Methods* **11**:467-468.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.

In the April Points of Significance Nature Methods column, we continue our discussion of comparing samples and consider what happens when we run a large number of tests.

Observing statistically rare test outcomes is expected if we run enough tests. These are statistically, not biologically, significant. For example, if we run *N* tests, the smallest *P* value that we have a 50% chance of observing is 1–exp(–ln2/*N*). For *N* = 10^{k} this *P* value is *P*_{k}=10^{–k}ln2 (e.g. for 10^{4}=10,000 tests, *P*_{4}=6.9×10^{–5}).
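The arithmetic is easy to check: solving 1 − (1 − *P*)^{*N*} = 0.5 for *P* gives the formula above, which the short script below evaluates for *N* = 10^{1} … 10^{6}.

```python
import math

# Smallest P value we have a 50% chance of observing among N independent
# tests of true null hypotheses: solve 1 - (1 - p)^N = 0.5 for p.
for k in range(1, 7):
    N = 10 ** k
    p = 1 - math.exp(-math.log(2) / N)   # exact: 1 - 0.5 ** (1 / N)
    print(f"N = 10^{k}: P_{k} = {p:.3g}  (~ ln2 / N = {math.log(2) / N:.3g})")
```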

We discuss common correction schemes such as Bonferroni, Holm, Benjamini & Hochberg and Storey's *q* and show how they impact the false positive rate (FPR), false discovery rate (FDR) and power of a batch of tests.
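To make the mechanics concrete, here is a small sketch of two of those schemes, Bonferroni and Benjamini & Hochberg, applied to a made-up batch of *P* values (900 true nulls plus 100 true effects). It illustrates the procedures only; it is not the column's analysis.

```python
import numpy as np

def bonferroni(p, alpha=0.05):
    """Reject where p < alpha / m; controls the family-wise error rate (FPR)."""
    return p < alpha / len(p)

def benjamini_hochberg(p, alpha=0.05):
    """Reject the k smallest P values, where k is the largest rank with
    p_(k) <= (k / m) * alpha; controls the false discovery rate (FDR)."""
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()    # last rank passing its threshold
        reject[order[: k + 1]] = True
    return reject

# Toy batch: 900 P values from true nulls (uniform) and 100 from true effects.
rng = np.random.default_rng(4)
p = np.concatenate([rng.uniform(size=900), rng.beta(0.1, 5.0, size=100)])

print("Bonferroni rejections:", bonferroni(p).sum())
print("BH rejections:        ", benjamini_hochberg(p).sum())
```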

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part II — Multiple Testing *Nature Methods* **11**:355-356.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.