
Distractions and amusements, with a sandwich and coffee.

music + dance + projected visuals
• marvel at perfect timing

Typography geek? If you like the geometry and mathematics of these posters, you may enjoy something more lettered: visions of type in the Type Peep Show: The Private Curves of Letters posters.

numbers.tgz

1,000,000 digits of π, φ, e and the Accidental Similarity Number (ASN).

Watch the video at Numberphile about my art.

Explore Pi Day art for 2014.

All the artwork can be purchased from Fine Art America.

Proclus got it right when he said (as quoted by M. Kline in *Mathematical Thought from Ancient to Modern Times*)

Wherever there is number, there is beauty.

So let's explore what Pi looks like with something whimsical and pretty and colourful. Rational art of the highly irrational, a regime where beauty runs with her hair down and lets her "*ribbons to flow confusedly.*" Robert Herrick says it well in *Delight in Disorder*,

I see a wild civility:

Do more bewitch me, than when art

Is too precise in every part.

The posters explore the relationship between adjacent digits in Pi, which are encoded by color using the scheme shown above. The design appears to shimmer due to the luminance effect. In some versions of the poster, adjacent identical (or similar) digits are connected by lines.

Want more math + art? Discover the Accidental Similarity Number and other number art. Find humor in my poster of the first 2,000 4s of Pi.

The recipe for each poster is included and gives the color of the *i*th outer/inner circle. π[i] is used to represent the *i*th digit of π. For example, the recipe

π[i] / π[i+1]

generates a poster whose outer circle color encodes the *i*th digit and the inner circle color encodes the next digit (*i*+1). In this scheme, inner and outer circles of adjacent positions have the same color.
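A sketch of how such a recipe could be evaluated. The digit string is truncated and the palette is a hypothetical stand-in, not the actual poster colors:

```python
# First digits of pi, truncated for illustration.
PI_DIGITS = "3141592653589793"

# Hypothetical 10-color palette; the posters use their own scheme.
PALETTE = ["#9e0142", "#d53e4f", "#f46d43", "#fdae61", "#fee08b",
           "#e6f598", "#abdda4", "#66c2a5", "#3288bd", "#5e4fa2"]

def recipe(i):
    """Recipe pi[i] / pi[i+1]: the outer circle encodes the ith digit,
    the inner circle the (i+1)th."""
    outer = PALETTE[int(PI_DIGITS[i])]
    inner = PALETTE[int(PI_DIGITS[i + 1])]
    return outer, inner

pairs = [recipe(i) for i in range(len(PI_DIGITS) - 1)]
# the inner circle at position i matches the outer circle at position i+1
```

The chained property described above falls out directly: each position's inner color is the next position's outer color.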

The posters were generated automatically by a Perl script that wrote SVG files; post-processing and layout were done in Illustrator. If you are interested in depicting your favourite number this way, let me know.
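The original Perl script isn't reproduced here, but the SVG-writing step might look like this minimal Python sketch (circle sizes and layout are made up for illustration):

```python
def circles_svg(pairs, r_outer=10, r_inner=5, step=24):
    """Write one outer/inner circle pair per digit position."""
    parts = ['<svg xmlns="http://www.w3.org/2000/svg">']
    for i, (outer, inner) in enumerate(pairs):
        cx = step // 2 + i * step
        parts.append(f'<circle cx="{cx}" cy="{step // 2}" r="{r_outer}" fill="{outer}"/>')
        parts.append(f'<circle cx="{cx}" cy="{step // 2}" r="{r_inner}" fill="{inner}"/>')
    parts.append('</svg>')
    return "\n".join(parts)

svg = circles_svg([("#d53e4f", "#3288bd"), ("#3288bd", "#66c2a5")])
```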

The design was inspired by the beautiful AIDS posters by Elena Miska.

I calculated Pi to 13,099,586 digits and then I found love.

It's fun to look for words in Pi. I wanted to know the first time that *love* appears in Pi. When encoded using the scheme a=0, b=1, ..., z=25, *love* is the digit string 1114214. This string first appears at position 13,099,586 (...8921991631**1114214**8187311392...). And, of course, infinitely many times after that.

If you use the scheme a=1, b=2, ..., z=26, then *love* becomes 1215225, first seen at position 6,317,696 (...6103119129**1215225**6606850141...).

Because the digits of Pi never repeat (Pi is irrational) and appear to be uniformly distributed (Pi is believed, but not proven, to be a normal number), if you look long enough you should find any word in Pi, infinitely many times.
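A sketch of the word search. The encoding helper is mine; searching the full 13-million-digit expansion requires a precomputed digit file, represented here by a short stand-in string:

```python
def encode(word, a=0):
    """Encode a word as a digit string: 'a' -> a, 'b' -> a + 1, ..."""
    return "".join(str(ord(c) - ord("a") + a) for c in word.lower())

def find_word(digits, word, a=0):
    """Return the 0-based position of the word's digit string, or -1."""
    return digits.find(encode(word, a))

# stand-in for a file of pi's digits (context around position 13,099,586)
digits = "8921991631" + encode("love") + "8187311392"
```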

π[i] / grey, 80% opacity

π[i] / π[i+1], 80% opacity

π[i] / grey, 80% opacity (equal neighbours connected)

π[i] / π[i+1], 80% opacity (equal neighbours connected)

— / π[i+1] (equal neighbours connected, unconnected digits not shown)

π[i] / π[i+1] (similar neighbours connected, with line width proportional to the difference between neighbouring digits, *d*∈{0,1,2}; unconnected digits not shown)

π[i] / π[i+1] (similar neighbours connected, with line width proportional to the difference between neighbouring digits, *d*∈{0..5}; unconnected digits not shown)

Pi (π): — / red (equal neighbours connected, unconnected digits not shown)

Phi (φ): — / white (equal neighbours connected, unconnected digits not shown)

e: — / grey (equal neighbours connected, unconnected digits not shown)

π[i] / grey, 80% opacity (equal neighbours connected)

π[i] / π[i+1], 80% opacity (equal neighbours connected)

π[i] / π[i+1]

π[i] / grey, 80% opacity (equal neighbours connected, unconnected digits not shown)

π[i] / π[i+1], 80% opacity (equal neighbours connected, unconnected digits not shown)
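The connection rule in these captions, where neighbouring digits differing by at most *d* are linked with a width that grows with the difference, might be sketched like this (the width mapping of 1 plus the difference is my assumption, not the posters' exact rule):

```python
def connection_width(d1, d2, dmax=2):
    """Connect neighbouring digits differing by at most dmax.
    Returns a line width (assumed: 1 + difference), or None if unconnected."""
    d = abs(d1 - d2)
    return 1 + d if d <= dmax else None
```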

I was commissioned by Scientific American to create an information graphic showing how our genomes are more similar to those of the chimp and bonobo than to that of the gorilla.

I had about 5 x 5 inches of print space to work with. For 4 genomes? No problem. Bring out the Hilbert curve!

To accompany the piece, I will be posting to the Scientific American blog about the process of creating the figure, and to emphasize that the *genome is not a blueprint*!

As part of this project, I created some Hilbert curve art pieces. And while exploring, I found thousands of Hilbertonians!
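The Hilbert curve works in so little space because it keeps sequence neighbours close on the page. The classic iterative index-to-coordinate routine (a standard algorithm, not the script used for the figure) looks like this:

```python
def d2xy(n, d):
    """Map index d (0 .. n*n - 1) to (x, y) on an n-by-n Hilbert curve.
    n must be a power of 2."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x                   # reflect across the diagonal
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

points = [d2xy(8, d) for d in range(64)]
```

Consecutive indices always land on adjacent cells, which is exactly the locality that makes a genome track legible in 5 × 5 inches.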

Celebrate Pi Approximation Day (July 22nd) with the art of arm waving. This year I take the best rational approximations *m*/*n* for *m* = 1..10,000 and look at their accuracy.
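For each numerator *m* the best denominator can be found directly, since *n* ≈ *m*/π. A sketch (helper names are mine):

```python
import math

def best_denominator(m):
    """Best n so that m/n approximates pi: round m/pi to the nearest integer."""
    return max(1, round(m / math.pi))

def rel_error(m):
    """Relative accuracy of the best approximation m/n for this m."""
    n = best_denominator(m)
    return abs(m / n - math.pi) / math.pi

# 22/7 is the classic Pi Approximation Day fraction; 355/113 is far better
```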

I turned to the spiral again, after applying it to stacked ring plots of frequency distributions in Pi for the 2014 Pi Day.

Our 10th Points of Significance column! Continuing with our previous discussion about comparative experiments, we introduce ANOVA and blocking. Although this column appears to introduce two new concepts (ANOVA and blocking), you've seen both before, though under a different guise.

If you know the *t*-test you've already applied analysis of variance (ANOVA), though you probably didn't realize it. In ANOVA we ask whether the variation within our samples is compatible with the variation between our samples (sample means). If the samples don't all have the same mean, then we expect the latter to be larger. The ANOVA test statistic (*F*) assigns significance to the ratio of these two quantities. When we have only two samples and apply the *t*-test, *t*^{2} = *F*.
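The *t*² = *F* identity is easy to verify numerically. A self-contained sketch with made-up data (no statistics library, just the textbook formulas):

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic."""
    na, nb = len(a), len(b)
    ssa = sum((x - mean(a)) ** 2 for x in a)
    ssb = sum((x - mean(b)) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)           # pooled variance
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

def anova_f(groups):
    """One-way ANOVA: F = between-group MS / within-group MS."""
    all_x = [x for g in groups for x in g]
    gm = mean(all_x)
    ssb = sum(len(g) * (mean(g) - gm) ** 2 for g in groups)
    ssw = sum(sum((x - mean(g)) ** 2 for x in g) for g in groups)
    msb = ssb / (len(groups) - 1)
    msw = ssw / (len(all_x) - len(groups))
    return msb / msw

a = [10, 12, 14, 16]   # made-up sample 1
b = [11, 14, 15, 18]   # made-up sample 2
t = two_sample_t(a, b)
F = anova_f([a, b])
# for two groups, t squared equals F
```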

ANOVA naturally incorporates and partitions sources of variation—the effects of variables on the system are determined based on the amount of variation they contribute to the total variation in the data. If this contribution is large, we say that the variation can be "explained" by the variable and infer an effect.

We discuss how data collection can be organized using a randomized complete block design to account for sources of uncertainty in the experiment. This process is called blocking because we are blocking the variation from a known source of uncertainty from interfering with our measurements. You've already seen blocking in the paired *t*-test example, in which the subject (or experimental unit) was the block.

We've worked hard to bring you 20 pages of statistics primers (though it feels more like 200!). The column is taking a month off in August, as we shrink our error bars.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Analysis of Variance (ANOVA) and Blocking *Nature Methods* **11**:699-700.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments *Nature Methods* **11**:597-598.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.

This month, Points of Significance begins a series of articles about experimental design. We start by returning to the two-sample and paired *t*-tests for a discussion of biological and experimental variability.

We introduce the concept of blocking using the paired *t*-test as an example, and show how biological and experimental variability can be related using the correlation coefficient, *ρ*, and how its value impacts the relative performance of the paired and two-sample *t*-tests.

We also emphasize that when reporting data analyzed with the paired *t*-test, differences in sample means (and their associated 95% CI error bars) should be shown, not the original samples, because the correlation in the samples (and its benefits) cannot be gleaned directly from the sample data.
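A small worked example (made-up numbers) of why pairing helps when *ρ* is high, using only the textbook formulas:

```python
import math

before = [20, 23, 26, 29]   # made-up paired measurements
after = [22, 24, 28, 30]
n = len(before)

def mean(xs):
    return sum(xs) / len(xs)

# paired t: test the per-subject differences against zero
d = [b - a for a, b in zip(before, after)]
md = mean(d)
sd = math.sqrt(sum((x - md) ** 2 for x in d) / (n - 1))
paired_t = md / (sd / math.sqrt(n))

# two-sample t (pooled variance) on the same data ignores the pairing
ssa = sum((x - mean(before)) ** 2 for x in before)
ssb = sum((x - mean(after)) ** 2 for x in after)
sp2 = (ssa + ssb) / (2 * n - 2)
two_sample_t = (mean(after) - mean(before)) / math.sqrt(sp2 * 2 / n)

# correlation between the paired measurements
num = sum((a - mean(before)) * (b - mean(after)) for a, b in zip(before, after))
rho = num / math.sqrt(ssa * ssb)
```

With these numbers *ρ* ≈ 0.99, and the paired statistic is roughly nine times larger than the two-sample one: the subject-to-subject variation has been blocked out.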

Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments *Nature Methods* **11**:597-598.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.

Our May Points of Significance column in Nature Methods jumps straight into dealing with skewed data: non-parametric tests.

We introduce non-parametric tests and simulate data scenarios to compare their performance to the *t*-test. You might be surprised—the *t*-test is extraordinarily robust to distribution shape, as we've discussed before. When data is highly skewed, non-parametric tests perform better and with higher power. However, if sample sizes are small they are limited to a small number of possible *P* values, of which none may be less than 0.05!

Krzywinski, M. & Altman, N. (2014) Points of Significance: Non Parametric Testing *Nature Methods* **11**:467-468.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.

In the April Points of Significance Nature Methods column, we continue our discussion of comparing samples and consider what happens when we run a large number of tests.

Observing statistically rare test outcomes is expected if we run enough tests. These are statistically, not biologically, significant. For example, if we run *N* tests, the smallest *P* value that we have a 50% chance of observing is 1–exp(–ln2/*N*). For *N* = 10^{k} this *P* value is *P*_{k}=10^{–k}ln2 (e.g. for 10^{4}=10,000 tests, *P*_{4}=6.9×10^{–5}).
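This follows from the distribution of the minimum of *N* uniform *P* values; a quick numerical check:

```python
import math

def smallest_p_with_half_chance(N):
    """P value p such that the minimum of N uniform P values falls
    below p with probability 0.5: solve 1 - (1 - p)^N = 0.5."""
    return 1 - 0.5 ** (1 / N)

p4 = smallest_p_with_half_chance(10 ** 4)
# close to the ln(2)/N approximation, about 6.9e-5 for N = 10,000
```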

We discuss common correction schemes such as Bonferroni, Holm, Benjamini & Hochberg and Storey's *q* and show how they impact the false positive rate (FPR), false discovery rate (FDR) and power of a batch of tests.
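Two of these schemes are simple enough to sketch directly. Bonferroni controls the family-wise error rate; Benjamini–Hochberg controls the FDR (a minimal sketch, not the column's own code):

```python
def bonferroni(pvals, alpha=0.05):
    """Reject p if p <= alpha / (number of tests)."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def benjamini_hochberg(pvals, alpha=0.05):
    """Reject the k smallest P values, where k is the largest rank
    with p_(k) <= (k/m) * alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    kmax = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            kmax = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= kmax:
            reject[i] = True
    return reject

pvals = [0.01, 0.02, 0.03, 0.5]   # made-up P values
```

On this toy batch Bonferroni rejects only the smallest *P* value, while Benjamini–Hochberg rejects three, illustrating its greater power at the cost of a controlled fraction of false discoveries.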

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part II — Multiple Testing *Nature Methods* **11**:355-356.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.