
Bioinformatics and Genome Analysis Course. Izmir International Biomedicine and Genome Institute, Izmir, Turkey. May 2–14, 2016

Typography geek? If you like the geometry and mathematics of these posters, you may enjoy something more lettered: the Type Peep Show: The Private Curves of Letters posters.

Numbers are a lot of fun. They can start conversations—the interesting number paradox is a party favourite: every number must be interesting because the first number that wasn't would be very interesting! Of course, in the wrong company they can just as easily end conversations.

The art here is my attempt at transforming famous numbers in mathematics into pretty visual forms, to start some of these conversations and to awaken emotions about mathematics other than dislike and confusion.

Numerology is bogus, but art based on numbers can be beautiful. Proclus got it right when he said (as quoted by M. Kline in *Mathematical Thought from Ancient to Modern Times*):

Wherever there is number, there is beauty.

The consequence of the interesting number paradox is that all numbers are interesting. But some are more interesting than others—how Orwellian!

All animals are equal, but some animals are more equal than others.

—George Orwell (Animal Farm)

Numbers such as `pi` (or `tau` if you're a revolutionary), `phi`, `e`, `i = \sqrt{-1}`, and `0` have captivated imagination. Chances are at least one of them appears in the next physics equation you come across.

π = 3.14159 26535 89793 23846 26433 83279 50288 41971 69399 ...
φ = 1.61803 39887 49894 84820 45868 34365 63811 77203 09179 ...
e = 2.71828 18284 59045 23536 02874 71352 66249 77572 47093 ...

Of these three transcendental numbers, `\pi` (3.14159265...) is the most well known. It is the ratio of a circle's circumference to its diameter (`c = \pi d`) and appears in the formula for the area of the circle (`A = \pi r^2`).

The Golden Ratio (`\phi`, 1.61803398...) is the attractive proportion of values `a > b` that satisfy `(a+b)/a = a/b`, which solves to `a/b = (1 + \sqrt{5})/2`.
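The defining proportion is easy to check numerically; a quick sketch:

```python
import math

# The golden ratio phi = (1 + sqrt(5))/2 satisfies (a + b)/a = a/b
# when a/b = phi; equivalently, phi^2 = phi + 1.
phi = (1 + math.sqrt(5)) / 2
a, b = phi, 1.0
print(abs((a + b) / a - a / b))  # effectively 0: the two ratios agree
```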

The last of the three numbers, `e` (2.71828182...) is Euler's number and also known as the base of the natural logarithm. It, too, can be defined geometrically—it is the unique real number, `e`, for which the function `f(x) = e^x` has a tangent of slope 1 at `x=0`. Like `\pi`, `e` appears throughout mathematics. For example, `e` is central in the expression for the normal distribution as well as the definition of entropy. And if you've ever heard of someone talking about log plots ... well, there's `e` again!
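The tangent-slope definition is simple to verify numerically; a minimal sketch using a symmetric difference quotient:

```python
import math

# Estimate the slope of f(x) = e^x at x = 0 with a symmetric
# difference quotient; for base e (and only base e) this slope is 1.
h = 1e-6
slope = (math.exp(h) - math.exp(-h)) / (2 * h)
print(slope)  # very close to 1.0
```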

Two of these numbers can be seen together in mathematics' most beautiful equation, the Euler identity: `e^{i \pi} = -1`. The tau-oists would argue that this is even prettier: `e^{i \tau} = 1`.

These three numbers have the curious property that they are almost Pythagorean. In other words, if they are used as the sides of a triangle, the triangle is very nearly right-angled (its largest angle is 89.1°).
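The near-right angle can be checked with the law of cosines; a small sketch:

```python
import math

# Treat e, phi and pi as triangle side lengths; the law of cosines
# gives the angle opposite the longest side (pi).
a, b, c = math.e, (1 + math.sqrt(5)) / 2, math.pi
cos_C = (a * a + b * b - c * c) / (2 * a * b)
angle = math.degrees(math.acos(cos_C))
print(angle)  # ~89.1 degrees
```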

Did you notice that the 13th digit of all three numbers (counting the leading integer digit as the first) is the same (9)? This accidental similarity generates its own number—the Accidental Similarity Number (ASN).
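The coincidence is easy to confirm, counting the leading integer digit as digit 1 and skipping the decimal point:

```python
# 13th digit of pi, phi and e, counting the leading integer digit
# as digit 1 and skipping the decimal point.
constants = {
    "pi":  "3.14159265358979",
    "phi": "1.61803398874989",
    "e":   "2.71828182845904",
}
for name, s in constants.items():
    print(name, s.replace(".", "")[12])  # all three print 9
```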

It is not yet known whether the digits of π are normal—determining this is an important open problem in mathematics. In other words, is the distribution of digit frequencies in π uniform? Does each of the digits 0–9 appear 1/10th of the time in the limit, does every two-digit string appear 1/100th of the time, and so on for every finite-length string^{1}?
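Counting digit frequencies even in a short prefix illustrates the question (50 digits is far too few to say anything about normality, but it shows the idea):

```python
from collections import Counter

# Digit frequencies in the first 50 decimals of pi. For a normal
# number, each digit's frequency would approach 1/10 as more
# digits are included.
PI_50 = "14159265358979323846264338327950288419716939937510"
freq = Counter(PI_50)
for d in sorted(freq):
    print(d, freq[d] / len(PI_50))
```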

^{1} One interesting finite-length string is the 6-digit Feynman Point (...999999...), which appears at digit 762 in π. The Feynman Point was the subject of 2014 `pi` Day art.

This question can be posed for different representations of π, that is, in different bases. The digit frequencies of 1/10, 1/100, and so on above refer to the representation of π in base 10, the way we're used to seeing numbers. However, if π is written in binary (base 2), would the digits of 11.00100100001111... be uniformly distributed in the same sense? The table below shows the first several digits of π in each base from 2 to 16, as well as in the base of the natural logarithm, `e`.

| base, `b` | `pi_b` | base, `b` | `pi_b` |
|-----------|--------|-----------|--------|
| 2 | 11.00100100001111 | 10 | 3.14159265358979 |
| 3 | 10.01021101222201 | 11 | 3.16150702865A48 |
| 4 | 3.02100333122220 | 12 | 3.184809493B9186 |
| 5 | 3.03232214303343 | 13 | 3.1AC1049052A2C7 |
| 6 | 3.05033005141512 | 14 | 3.1DA75CDA813752 |
| 7 | 3.06636514320361 | 15 | 3.21CD1DC46C2B7A |
| 8 | 3.11037552421026 | 16 | 3.243F6A8885A300 |
| `e` | 10.10100202000211 | | |

source: virtuescience.com
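Entries like these can be generated by repeatedly multiplying the fractional part by the base; a minimal sketch (digits beyond double precision are not meaningful, which is why the base-16 row above ends in 00):

```python
import math

DIGITS = "0123456789ABCDEF"

def to_base(x, b, n=14):
    """Expand x > 0 in base b with n fractional digits (truncated)."""
    ip = int(x)
    int_digits = ""
    while ip:
        int_digits = DIGITS[ip % b] + int_digits
        ip //= b
    # fractional part: repeatedly multiply by the base and peel off
    # the integer digit that appears
    frac, out = x - int(x), ""
    for _ in range(n):
        frac *= b
        out += DIGITS[int(frac)]
        frac -= int(frac)
    return (int_digits or "0") + "." + out

print(to_base(math.pi, 2))   # 11.00100100001111
print(to_base(math.pi, 16))  # 3.243F6A8885A300
```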

Because the digits in the numbers are essentially random (this is a conjecture), the essence of the art is based on randomness.

A vexing consequence of π being normal is that it would contain *all* finite patterns. Any word you might think of, encoded into numbers in any way, would appear infinitely many times. The entire works of Shakespeare, too, as well as all his plays in which each sentence is reversed, or has one spelling mistake, or two! In fact, you would eventually find any finite stretch of π within π, but only if you have infinite patience.

This is why any attempt to use the digits of `pi` to infer meaning about anything is ridiculous. The exact opposite of whatever you find is also in `pi`.

A number can be normal in one base but not in another. For example, Stoneham's constant,

`\alpha_{2,3} = \sum_{k \ge 0} 1/(2^{3^k} 3^k) = 1/2 + 1/(2^{3^1} 3^1) + 1/(2^{3^2} 3^2) + 1/(2^{3^3} 3^3) + ...`

is 0.54188368083150298507... in base 10 and 0.100010101011100011100011100... in base 2.

Stoneham's constant is provably normal in base 2. In some other bases, such as base 6, it is provably not normal.
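The partial sums of the series converge extremely quickly; a quick check:

```python
# Partial sums of Stoneham's constant: sum over k >= 0 of
# 1/(2^(3^k) * 3^k). The k = 0 term is 1/2, and later terms shrink
# so fast that a handful already exhaust double precision.
alpha = sum(1 / (2 ** (3 ** k) * 3 ** k) for k in range(6))
print(alpha)  # 0.54188368083150...
```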

Some of the numerical art reveals interesting and unexpected observations. For example, the sequence 999999 appears in π at digit 762, a position called the Feynman Point. Or that if you calculate π to 13,099,586 digits you will find love.

The Points of Significance column is on vacation this month.

Meanwhile, we're showing you how to manage small multiple plots in the Points of View column Unentangling Complex Plots.

Data in small multiples can vary in range, noise level and trend. Gregor McInerny and I show how you can deal with this by cropping and scaling the multiples to different ranges to emphasize relative changes, while preserving the context of the full data range to show absolute changes.

McInerny, G. & Krzywinski, M. (2015) Points of View: Unentangling complex plots. *Nature Methods* **12**:591.

The Jurassic World Creation Lab webpage shows you how one might create a dinosaur from a sample of DNA. First extract, sequence and assemble the DNA, fill in the gaps, then incubate it in an egg and wait.

With enough time, you'll grow your own brand new dinosaur. Or a stalk of corn ... with more teeth.

What went wrong? Let me explain.

You've seen bound volumes of printouts of the human reference genome. But what if at the Genome Sciences Center we wanted to print everything we sequence today?

I was commissioned by Scientific American to create an information graphic based on Figure 9 in the landmark Nature paper *Integrative analysis of 111 reference human epigenomes*.

The original figure details the relationships between more than 100 sequenced epigenomes and genetic traits, including diseases like Crohn's and Alzheimer's. These relationships were shown as a heatmap in which each epigenome-trait cell depicted the *P* value associated with tissue-specific H3K4me1 epigenetic modification in regions of the genome associated with the trait.

As much as I distrust network diagrams, in this case a network was the right way to show the data. It was meticulously laid out by hand to draw attention to the layered groups of diseases and traits.

This was my second information graphic for the Graphic Science page. Last year, I illustrated the extent of differences in the gene sequence of humans, Denisovans, chimps and gorillas.

The bootstrap is a computational method that simulates new samples from observed data. These simulated samples can be used to determine how estimates from replicate experiments might be distributed and to answer questions about precision and bias.

We discuss both the parametric and the non-parametric bootstrap. In the former, observed data are fit to a model and new samples are drawn using the model. In the latter, no model assumption is made and simulated samples are drawn with replacement from the observed data.
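As a sketch of the non-parametric version (the data values here are made up for illustration):

```python
import random
import statistics

random.seed(1)

# Non-parametric bootstrap of the sample mean: resample the observed
# data with replacement many times and examine the spread of the
# resampled means to estimate the standard error of the mean.
data = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.2, 4.4]
boot_means = []
for _ in range(10000):
    resample = random.choices(data, k=len(data))
    boot_means.append(statistics.mean(resample))

boot_se = statistics.stdev(boot_means)
print(boot_se)  # close to the analytic SE, stdev(data)/sqrt(n)
```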

Kulesa, A., Krzywinski, M., Blainey, P. & Altman, N. (2015) Points of Significance: Sampling distributions and the bootstrap. *Nature Methods* **12**:477-478.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Importance of being uncertain. *Nature Methods* **10**:809-810.

Building on last month's column about Bayes' Theorem, we introduce Bayesian inference and contrast it to frequentist inference.

Given a hypothesis and a model, the frequentist calculates the probability of different data generated by the model, *P*(data|model). When the probability of obtaining the observed data under the model is small (e.g. less than `alpha` = 0.05), the frequentist rejects the hypothesis.

In contrast, the Bayesian makes direct probability statements about the model by calculating *P*(model|data): the probability, given the observed data, that the model is correct. With this approach it is possible to compare the probabilities of different models and identify the one most compatible with the data.

The Bayesian approach is actually more intuitive. From the frequentist point of view, the probability used to assess the veracity of a hypothesis, *P*(data|model), commonly referred to as the *P* value, does not tell us the probability that the model is correct. In fact, the *P* value is commonly misinterpreted as the probability that the hypothesis is right. This is the so-called "prosecutor's fallacy", which confuses the two conditional probabilities, mistaking *P*(data|model) for *P*(model|data). It is the latter quantity that is more directly useful, and it is what the Bayesian calculates.
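To make the contrast concrete, here is a toy calculation of *P*(model|data), with made-up numbers describing a rare condition and an imperfect test:

```python
# Bayes' theorem with illustrative, made-up numbers.
p_model = 0.01             # prior P(model): the hypothesis is rare
p_data_given_model = 0.95  # likelihood P(data|model)
p_data_given_not = 0.05    # false-positive rate P(data|not model)

# Total probability of the data, then invert with Bayes' theorem.
p_data = (p_data_given_model * p_model
          + p_data_given_not * (1 - p_model))
p_model_given_data = p_data_given_model * p_model / p_data
print(p_model_given_data)  # ~0.16: small despite P(data|model) = 0.95
```

Even with a strong likelihood, the low prior keeps *P*(model|data) modest, which is exactly the distinction the *P* value misinterpretation glosses over.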

Puga, J.L., Krzywinski, M. & Altman, N. (2015) Points of Significance: Bayes' Theorem. *Nature Methods* **12**:277-278.
