Typography geek? If you like the geometry and mathematics of these posters, you may enjoy something more lettered. Visions of type: Type Peep Show: The Private Curves of Letters posters.

This section contains various artwork based on `\pi`, `\phi` and `e` that I created over the years. `\pi` day and `\pi` approximation day artwork is kept separate.

The accidental similarity number (ASN) is a kind of overlap between numbers. I came up with this concept after creating typographical art about the `i`-ness of `\pi`.

To construct the accidental similarity number for the three numbers `\pi`, `\phi` and `e`, we first align the numbers and then identify the positions at which all three have the same digit.

π = 3.1415926535897932 … 21170679821 … 10270193852 …
φ = 1.6180339887498948 … 93911374847 … 08659593958 …
e = 2.7182818284590452 … 51664274274 … 32862794349 …

These digits are then used to create the accidental similarity number. In this case,

ASN(π, φ, e) = 0.97911 48920 55221 …

By definition, the decimal point is kept in place: the ASN is written as `0.` followed by the matching digits.
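The construction can be sketched in a few lines of Python. This is a toy version using only the first 50 digits of each constant (the poster uses 1,000,000); positions here are 0-based over the digit strings with the decimal points removed, which differs from the indexing convention used in the downloadable file.

```python
# Toy ASN construction: 50 digits per constant, 0-based positions.
# (The poster version uses 1,000,000 digits of each number.)
PI  = "31415926535897932384626433832795028841971693993751"
PHI = "16180339887498948482045868343656381177203091798057"
E   = "27182818284590452353602874713526624977572470936999"

def asn(*digit_strings):
    """Collect the digits at positions where all inputs agree."""
    n = min(len(s) for s in digit_strings)
    shared = [(i, digit_strings[0][i]) for i in range(n)
              if all(s[i] == digit_strings[0][i] for s in digit_strings)]
    value = "0." + "".join(d for _, d in shared)
    positions = [i for i, _ in shared]
    return value, positions

value, positions = asn(PI, PHI, E)
# In the first 50 digits the three constants agree only once:
# digit 9 at (0-based) position 12, so value == "0.9".
```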

The posters of `asn(pi,phi,e)` show the accidental similarity number created from the first 1,000,000 digits of each number. The numbers have the same digit at 9,997 positions.

The poster shows 9,996 ASN digits (the last one is omitted) because the color mapping uses the distance between the sampling positions of consecutive ASN digits, and the last digit has no next position.

Three independent random digits agree with probability 10 × (1/10)³ = 1/100, so the distances between consecutive matches are approximately exponentially distributed (the spacings of a Poisson process) with an average of 100, with about 1 − 1/`e` ≈ 63% of the distances being smaller than 100.
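A quick simulation supports this spacing behavior. Note that this uses random digit streams rather than the actual digits of the three constants, so it only illustrates the expected statistics:

```python
# Simulation with random digit streams (not the actual constants):
# positions where three independent digits agree occur with
# probability 10 * (1/10)^3 = 1/100, so gaps should average ~100
# and about 1 - 1/e of them should be smaller than 100.
import random

random.seed(0)
matches = []
for i in range(1_000_000):
    d = random.randrange(10)
    if random.randrange(10) == d and random.randrange(10) == d:
        matches.append(i)

gaps = [b - a for a, b in zip(matches, matches[1:])]
mean_gap = sum(gaps) / len(gaps)                      # close to 100
frac_small = sum(g < 100 for g in gaps) / len(gaps)   # close to 1 - 1/e
```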

The font is Neutraface Slab Display Medium.

Any properties are accidental, but curiously ASN(`\pi`,`\phi`,`e`) ≈ 1.

If you find other curiously accidental properties, let me know.

Download the first 9,997 digits of the accidental similarity number. This file provides the ASN digit index `i`, the digit `ASN_i`, and the position from which it is sampled, `index(ASN_i)`.

i   ASN_i  index(ASN_i)
0   9      13
1   7      100
2   9      170
3   1      396    # e.g. the 4th ASN digit is 1, sampled from digit index 396
4   1      500
5   4      596
6   8      607
7   9      694
8   2      825
9   0      828
10  5      841
11  5      941
12  2      1283
...

Discover Cantor's transfinite numbers through my music video for the Aleph 2 track of Max Cooper's Yearning for the Infinite (album page, event page).

I discuss the math behind the video and the system I built to create the video.

*Everything we see hides another thing, we always want to see what is hidden by what we see.
—René Magritte*

A Hidden Markov Model (HMM) extends a Markov chain with hidden states. These states model aspects of the system that cannot be directly observed; they themselves form a Markov chain, and each state may emit one or more observed values.

Hidden states in HMMs do not have to have meaning—they can be used to account for measurement errors, compress multi-modal observational data, or to detect unobservable events.

In this column, we extend the cell growth model from our Markov Chain column to include two hidden states: normal and sedentary.

We show how to calculate forward probabilities, which can be used to find the most likely path through the HMM given an observed sequence.
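As a concrete sketch, here is the forward algorithm for a two-hidden-state HMM in plain Python. The transition, emission and starting probabilities below are invented for illustration; they are not the cell growth model from the column.

```python
# Forward algorithm sketch for a 2-state HMM (made-up parameters).
A = [[0.7, 0.3],    # A[i][j]: P(hidden state j at t+1 | hidden state i at t)
     [0.2, 0.8]]
B = [[0.9, 0.1],    # B[i][k]: P(observing symbol k | hidden state i)
     [0.3, 0.7]]
start = [0.5, 0.5]  # initial hidden-state distribution

def forward(obs):
    """Total probability of the observed sequence, summed over all hidden paths."""
    alpha = [start[i] * B[i][obs[0]] for i in range(2)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(2)) * B[j][o]
                 for j in range(2)]
    return sum(alpha)

p = forward([0, 0, 1])   # = 0.13131 for these parameters
```

Replacing the sum over previous states with a max (and tracking the argmax) turns this recursion into the Viterbi algorithm, which recovers the most likely hidden path.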

Grewal, J., Krzywinski, M. & Altman, N. (2019) Points of significance: Hidden Markov Models. *Nature Methods* **16**:795–796.

Altman, N. & Krzywinski, M. (2019) Points of significance: Markov Chains. *Nature Methods* **16**:663–664.

My cover design for Hola Mundo by Hannah Fry. Published by Blackie Books.

Curious how the design was created? Read the full details.

*You can look back there to explain things,
but the explanation disappears.
You'll never find it there.
Things are not explained by the past.
They're explained by what happens now.
—Alan Watts*

A Markov chain is a probabilistic model of how a system changes over time through a series of transitions between states. Each transition is assigned a probability that defines the chance of the system moving from one state to another.

Together with the states, these transition probabilities define a stochastic model with the Markov property: transition probabilities depend only on the current state; the future is independent of the past if the present is known.

Once the transition probabilities are defined in matrix form, it is easy to predict the distribution of future states of the system. We cover concepts of aperiodicity, irreducibility, limiting and stationary distributions and absorption.
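As a minimal sketch (with made-up transition probabilities, not the cell growth model from the column), iterating the transition matrix quickly reveals the stationary distribution:

```python
# Two-state Markov chain with invented transition probabilities.
# P[i][j] is the probability of moving from state i to state j.
P = [[0.9, 0.1],
     [0.4, 0.6]]

state = [1.0, 0.0]       # start in state 0 with certainty
for _ in range(100):     # repeatedly apply x <- x P
    state = [sum(state[i] * P[i][j] for i in range(2)) for j in range(2)]

# This chain is aperiodic and irreducible, so the distribution converges
# to the stationary distribution pi satisfying pi = pi P, here [0.8, 0.2].
```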

This column is the first part of a series and pairs particularly well with Alan Watts and Blond:ish.

Grewal, J., Krzywinski, M. & Altman, N. (2019) Points of significance: Markov Chains. *Nature Methods* **16**:663–664.