
Circos at British Library Beautiful Science exhibit—Feb 20–May 26

# visualization + design

Typography geek? If you like the geometry and mathematics of these posters, you may enjoy something more lettered: the Type Peep Show: The Private Curves of Letters posters.

numbers.tgz
1,000,000 digits of π, φ, e and the ASN.

The source code is freely available. Read how you can compute your own π path!

## watch video

Watch the Numberphile video about my art.

Numberphile video — Pi is Beautiful. (watch)

## 2013 Pi Day art

Explore Pi Day art for 2013.

Pi Day art for 2013. (explore)

All the artwork can be purchased from Fine Art America.

# The art of Pi (π), Phi (φ) and e

### the art

Numbers are a lot of fun. They can start conversations—the interesting number paradox is a party favourite. Of course, in the wrong company they can just as easily end conversations.

The art here represents my attempt at transforming famous numbers in mathematics into pretty visual forms. This work is 99% art and 1% data visualization. Because the digits in the numbers are essentially random (as far as we know), the essence of the art is based on randomness.

In a few cases, the art reveals an interesting and unexpected observation. For example, the sequence 999999 appears in π at decimal digit 762, significantly earlier than expected by chance. And if you calculate π to 13,099,586 digits you will find love, encoded as 1114214 in the scheme a=0, b=1, c=2, and so on.

Keep in mind that if the digits are truly random (π is believed, but not proven, to be normal), then every finite sequence of digits eventually appears within them. You will never find all of π within π, but you will eventually find any finite prefix of it.
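These digit-hunting claims are easy to check yourself. Below is a minimal Python sketch (not the author's perl code) that generates decimal digits of π with Machin's formula and integer arithmetic, then locates the early 999999 run:

```python
def pi_decimals(n):
    """Return the first n decimal digits of pi (after the "3.") as a string.

    Uses Machin's formula pi/4 = 4*arctan(1/5) - arctan(1/239) with
    fixed-point integer arithmetic and a few guard digits.
    """
    guard = 10
    one = 10 ** (n + guard)

    def arctan_inv(x):
        # arctan(1/x) in fixed point: sum of (-1)^k / ((2k+1) x^(2k+1))
        total = term = one // x
        k, sign = 3, -1
        while term:
            term //= x * x
            total += sign * (term // k)
            sign, k = -sign, k + 2
        return total

    pi = 4 * (4 * arctan_inv(5) - arctan_inv(239))
    return str(pi)[1 : n + 1]  # drop the leading "3", keep n decimals

digits = pi_decimals(800)
print(digits[:10])                # 1415926535
print(digits.find("999999") + 1)  # 762 -- the six 9s arrive surprisingly early
```

With more digits (and more patience) the same search finds 1114214 at position 13,099,586.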

### the numbers

Of these three transcendental numbers, π is the most well known. It is the ratio of a circle's circumference to its diameter (c = πd).

The Golden Ratio (φ) is the attractive proportion of values a and b (a > b) that satisfy (a+b)/a = a/b, which solves to a/b = (1+√5)/2.

The numbers π, φ and e nearly form a right-angled triangle.

The last of the three numbers, e, is Euler's number, also known as the base of the natural logarithm. It, too, can be defined geometrically: it is the unique real number e for which the function f(x) = eˣ has a tangent of slope 1 at x = 0. Like π, e appears throughout mathematics. For example, e is central in the expression for the normal distribution as well as the definition of entropy. And if you've ever heard someone talking about log plots ... well, there's e again!

π = 3.141592653589793238462643...
φ = 1.618033988749894848204586...
e = 2.718281828459045235360287...


### did you see something special?

These three numbers have the curious property that they are almost Pythagorean. In other words, if they are made into sides of a triangle, the triangle is nearly a right-angled triangle (89.1°).
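The near-right angle is a one-line check with the law of cosines; a quick sketch:

```python
import math

# Sides of the triangle: phi, e, and pi (pi is the longest side).
phi = (1 + math.sqrt(5)) / 2
a, b, c = phi, math.e, math.pi

# Law of cosines gives the angle opposite the longest side, pi.
angle = math.degrees(math.acos((a * a + b * b - c * c) / (2 * a * b)))
print(round(angle, 1))  # 89.1 -- almost, but not quite, right-angled
```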

Did you notice how in the 12th decimal place all three numbers have the same digit, 9? This accidental similarity generates its own number: the Accidental Similarity Number (ASN).

### methods

perl, SVG, Illustrator

## Happy Pi Day!

Hug π on March 14th and celebrate Pi Day. Those who favour τ will have to postpone celebrations until June 28th (τ = 2π). If you're not into details, you may opt to party on July 22nd, which is Pi Approximation Day (π ≈ 22/7).

2013 Pi Day posters. Celebrate with this post-modern poster. (BUY ARTWORK)
2014 Pi Day posters. Celebrate with this modern poster. Pi is folded on a self-avoiding path to maximize the number of neighbouring prime digits. (BUY ARTWORK)

The 2013 posters were inspired by the beautiful AIDS posters by Elena Miska.

The 4ness of π. Shown here are the first 2,000 4s in π. Each digit is formatted based on its 4ness, which is a measure of how similar its neighbours are to 4. (...more, BUY ARTWORK)

## 4ness of Pi (π)

A concept created for this visualization, the iness of a number measures how close the neighbours of each of its digits are to a given number, i.

For each digit, the iness is the average of the differences between the digit's neighbours and i.

The 4ness of Pi (π) is a specific case of an iness, for i=4.

Thanks to Lance Bailey for suggesting how to measure iness.

### example

In the sequence 3.1415 of Pi (π), the neighbours of the 4 are 3, 1, 1 and 5. Their distances from 4 are -1, -3, -3 and 1. The average, which is the 4ness of this digit (coincidentally itself a 4), is -1.5. The 4ness of each of the other digits is computed identically.
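This calculation can be sketched in a few lines. The window of two neighbours on each side matches the worked example above; the exact window used in the posters may differ:

```python
def iness(digits, pos, i, window=2):
    """Average difference between the neighbours of digits[pos] and i."""
    lo, hi = max(0, pos - window), min(len(digits), pos + window + 1)
    neighbours = [digits[j] for j in range(lo, hi) if j != pos]
    return sum(d - i for d in neighbours) / len(neighbours)

digits = [3, 1, 4, 1, 5]    # the first digits of pi
print(iness(digits, 2, 4))  # -1.5 -- the 4ness of the central 4
```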

In the iness posters, the 4ness is mapped onto a color and the standard deviation of the differences onto a size.

The accidental similarity number for π, φ and e created from the first 1,000,000 digits of each number. (...more, BUY ARTWORK)

## accidental similarity number

The accidental similarity number is a kind of overlap between numbers. I came up with this concept after creating typographical art about the 4ness of Pi (π).

## example

To construct this number for Pi (π), Phi (φ) and e we first write the numbers on top of each other and then identify positions for which the numbers have the same digit.

3.141 … 3589793 … 7067982 … 7019385 …
1.618 … 8749894 … 1137484 … 5959395 …
2.718 … 8459045 … 6427427 … 6279434 …


These digits are then used to create the accidental similarity number. In this case,

asn(π,φ,e) = 0.979 …
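A sketch of the construction, using only the 24 decimals quoted earlier. Over such a short window only the shared 9 in the 12th place survives; the 0.979… above comes from scanning the full 1,000,000 digits:

```python
def asn(*numbers):
    """Digits shared by all numbers, matched position by position."""
    return "".join(a for a, *rest in zip(*numbers) if all(b == a for b in rest))

pi_  = "141592653589793238462643"
phi_ = "618033988749894848204586"
e_   = "718281828459045235360287"
print(asn(pi_, phi_, e_))  # 9 -- only the 12th decimal matches in this window
```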


Circos art depicting π, φ and e. (...more, BUY ARTWORK)

## Circos numerical art

Numerology is bogus, but art based on numbers is pretty, in a random non-metaphysical way.

These depictions were generated using my Circos software by Cristian Ilies Vasile and myself.

# Monkeys on a Hilbert Curve—Scientific American Graphic

Tue 19-08-2014

I was commissioned by Scientific American to create an information graphic that showed how our genomes are more similar to those of the chimp and bonobo than to the gorilla.

I had about 5 x 5 inches of print space to work with. For 4 genomes? No problem. Bring out the Hilbert curve!

Our genomes are much more similar to the chimp and bonobo than to the gorilla. And, we're practically still Denisovans. (details)

To accompany the piece, I will be posting to the Scientific American blog about the process of creating the figure, and to emphasize that the genome is not a blueprint!

As part of this project, I created some Hilbert curve art pieces. And while exploring, I found thousands of Hilbertonians!
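The appeal of the Hilbert curve here is that positions close along the genome stay close on the page. A standard index-to-coordinate conversion (a generic sketch, not the code behind the graphic) looks like this:

```python
def d2xy(n, d):
    """Map index d (0..n*n-1) to (x, y) on an n-by-n Hilbert curve (n a power of 2)."""
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

points = [d2xy(4, d) for d in range(16)]
print(points[:4])  # [(0, 0), (1, 0), (1, 1), (0, 1)]
```

Consecutive indices always map to adjacent cells, which is exactly the locality that makes the curve useful for laying out a genome.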

# Happy Pi Approximation Day— π, roughly speaking 10,000 times

Wed 13-08-2014

Celebrate Pi Approximation Day (July 22nd) with the art of arm waving. This year I take the first 10,000 most accurate approximations (m/n, m=1..10,000) and look at their accuracy.

Accuracy of the first 10,000 m/n approximations of Pi. (details)
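A sketch of the underlying computation: for each denominator take the fraction nearest to π and keep track of the records. The author's exact enumeration over m may differ, but the famous approximations fall out either way:

```python
import math

# For each denominator n, the best numerator is round(pi * n).
# A "record" is an approximation more accurate than any smaller denominator's.
records, best = [], float("inf")
for n in range(1, 10_001):
    m = round(math.pi * n)
    err = abs(math.pi - m / n)
    if err < best:
        records.append((m, n))
        best = err

print((22, 7) in records, (355, 113) in records)  # True True
```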

I turned to the spiral again after applying it to stacked ring plots of frequency distributions in Pi for the 2014 Pi Day.

Frequency distribution of digits of Pi in groups of 4 up to digit 4,988. (details)

# Analysis of Variance (ANOVA) and Blocking—Accounting for Variability in Multi-factor Experiments

Mon 07-07-2014

Our 10th Points of Significance column! Continuing with our previous discussion about comparative experiments, we introduce ANOVA and blocking. Although this column appears to introduce two new concepts (ANOVA and blocking), you've seen both before, though under a different guise.

Nature Methods Points of Significance column: Analysis of variance (ANOVA) and blocking. (read)

If you know the t-test you've already applied analysis of variance (ANOVA), though you probably didn't realize it. In ANOVA we ask whether the variation within our samples is compatible with the variation between our samples (sample means). If the samples don't all have the same mean then we expect the latter to be larger. The ANOVA test statistic (F) assigns significance to the ratio of these two quantities. When we have only two samples and apply the t-test, t² = F.
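The t² = F identity is easy to verify numerically. A minimal sketch with made-up samples (an illustration, not the column's code):

```python
import math
from statistics import mean

def anova_f(a, b):
    """One-way ANOVA F statistic for two groups."""
    g = mean(a + b)
    ssb = len(a) * (mean(a) - g) ** 2 + len(b) * (mean(b) - g) ** 2
    ssw = sum((x - mean(a)) ** 2 for x in a) + sum((x - mean(b)) ** 2 for x in b)
    return ssb / (ssw / (len(a) + len(b) - 2))  # between-group df = 1 for two groups

def t_stat(a, b):
    """Two-sample t statistic with pooled variance."""
    ssw = sum((x - mean(a)) ** 2 for x in a) + sum((x - mean(b)) ** 2 for x in b)
    sp2 = ssw / (len(a) + len(b) - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / len(a) + 1 / len(b)))

a = [4.9, 5.1, 5.6, 4.7, 5.3]
b = [5.8, 6.1, 5.7, 6.4, 5.9]
print(round(t_stat(a, b) ** 2, 6) == round(anova_f(a, b), 6))  # True
```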

ANOVA naturally incorporates and partitions sources of variation—the effects of variables on the system are determined based on the amount of variation they contribute to the total variation in the data. If this contribution is large, we say that the variation can be "explained" by the variable and infer an effect.

We discuss how data collection can be organized using a randomized complete block design to account for sources of uncertainty in the experiment. This process is called blocking because we are blocking the variation from a known source of uncertainty from interfering with our measurements. You've already seen blocking in the paired t-test example, in which the subject (or experimental unit) was the block.

We've worked hard to bring you 20 pages of statistics primers (though it feels more like 200!). The column is taking a month off in August, as we shrink our error bars.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Analysis of Variance (ANOVA) and Blocking Nature Methods 11:699-700.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments Nature Methods 11:597-598.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — t-tests Nature Methods 11:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, P values and t-tests Nature Methods 10:1041-1042.

# Designing Experiments—Coping with Biological and Experimental Variation

Thu 29-05-2014

This month, Points of Significance begins a series of articles about experimental design. We start by returning to the two-sample and paired t-tests for a discussion of biological and experimental variability.

Nature Methods Points of Significance column: Designing Comparative Experiments. (read)

We introduce the concept of blocking using the paired t-test as an example and show how biological and experimental variability can be related using the correlation coefficient, ρ, and how its value impacts the relative performance of the paired and two-sample t-tests.

We also emphasize that when reporting data analyzed with the paired t-test, differences in sample means (and their associated 95% CI error bars) should be shown—not the original samples—because the correlation in the samples (and its benefits) cannot be gleaned directly from the sample data.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments Nature Methods 11:597-598.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — t-tests Nature Methods 11:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, P values and t-tests Nature Methods 10:1041-1042.

# Have skew, will test

Wed 28-05-2014

Our May Points of Significance Nature Methods column jumps straight into dealing with skewed data with Non Parametric Tests.

Nature Methods Points of Significance column: Non Parametric Testing. (read)

We introduce non-parametric tests and simulate data scenarios to compare their performance to the t-test. You might be surprised—the t-test is extraordinarily robust to distribution shape, as we've discussed before. When data is highly skewed, non-parametric tests perform better and with higher power. However, if sample sizes are small they are limited to a small number of possible P values, of which none may be less than 0.05!

Krzywinski, M. & Altman, N. (2014) Points of Significance: Non Parametric Testing Nature Methods 11:467-468.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — t-tests Nature Methods 11:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, P values and t-tests Nature Methods 10:1041-1042.

# Mind your p's and q's

Sat 29-03-2014

In the April Points of Significance Nature Methods column, we continue our discussion of comparing samples and consider what happens when we run a large number of tests.

Nature Methods Points of Significance column: Comparing Samples — Part II — Multiple Testing. (read)

Observing statistically rare test outcomes is expected if we run enough tests. These are statistically, not biologically, significant. For example, if we run N tests, the smallest P value that we have a 50% chance of observing is 1 − exp(−ln 2/N) ≈ ln 2/N. For N = 10^k tests this P value is P_k ≈ 10^−k ln 2 (e.g. for 10^4 = 10,000 tests, P_4 = 6.9 × 10^−5).

We discuss common correction schemes such as Bonferroni, Holm, Benjamini & Hochberg and Storey's q and show how they impact the false positive rate (FPR), false discovery rate (FDR) and power of a batch of tests.
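Two of the correction schemes mentioned can be sketched in a few lines, along with the smallest-P formula from above (a simplified illustration, not the column's code):

```python
import math

def bonferroni(pvals, alpha=0.05):
    """Reject p_i if p_i <= alpha / N (controls the family-wise error rate)."""
    return [p <= alpha / len(pvals) for p in pvals]

def benjamini_hochberg(pvals, alpha=0.05):
    """BH step-up: reject the k smallest P values, where k is the largest
    rank i with p_(i) <= (i/N) * alpha (controls the FDR)."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / len(pvals) * alpha:
            k = rank
    reject = [False] * len(pvals)
    for i in order[:k]:
        reject[i] = True
    return reject

# Median of the smallest P value across N tests: 1 - exp(-ln2/N) ~ ln2/N.
N = 10_000
print(f"{1 - math.exp(-math.log(2) / N):.2g}")  # 6.9e-05

p = [0.001, 0.008, 0.015, 0.041, 0.6]
print(sum(bonferroni(p)), sum(benjamini_hochberg(p)))  # 2 3
```

Note how the less conservative BH procedure rejects one more hypothesis than Bonferroni on the same P values.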

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part II — Multiple Testing Nature Methods 11:355-356.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — t-tests Nature Methods 11:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, P values and t-tests Nature Methods 10:1041-1042.