
# orbits: beautiful

In Silico Flurries: Computing a world of snow. Scientific American. 23 December 2017

# visualization + design

The 2017 Pi Day art imagines the digits of Pi as a star catalogue with constellations of extinct animals and plants. The work is featured in the article Pi in the Sky at the Scientific American SA Visual blog.

# $\pi$ Day 2016 Art Posters

2017 $\pi$ day
2016 $\pi$ approximation day
2016 $\pi$ day
2015 $\pi$ day
2014 $\pi$ approx day
2014 $\pi$ day
2013 $\pi$ day
Circular $\pi$ art

On March 14th celebrate $\pi$ Day. Hug $\pi$—find a way to do it.

Those who favour $\tau=2\pi$ will have to postpone celebrations until June 28th. That's what you get for thinking that $\pi$ is wrong.

If you're not into details, you may opt to party on July 22nd, which is $\pi$ approximation day ($\pi$ ≈ 22/7). It's 20% more accurate than the official $\pi$ day!
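
For the curious, the 20% figure follows from comparing absolute errors:

$$\left|\tfrac{22}{7} - \pi\right| \approx 0.00126 \qquad \left|3.14 - \pi\right| \approx 0.00159$$

so $22/7$ carries roughly $0.00126/0.00159 \approx 0.79$ of the error of $3.14$, about 20% less.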

Finally, if you believe that $\pi = 3$, you should read why $\pi$ is not equal to 3.

All art posters are available for purchase.
I take custom requests.

This year's $\pi$ day art collection celebrates not only the digits of $\pi$ but also one of the fundamental forces in nature: gravity.

In February 2016, the Laser Interferometer Gravitational-Wave Observatory (LIGO) announced the first direct detection of gravitational waves.

The signal in the detector was sonified—a process by which any data can be encoded into sound to provide hints at patterns and structure that we might otherwise miss—and we finally heard what two black holes sound like. A buzz and chirp.
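
As an aside, here is a minimal sketch of the idea behind sonification, not the LIGO pipeline: map a data series onto pitch and write the result as audio. The function name and parameters are mine.

```python
import math
import struct
import wave

def sonify(values, filename="sonified.wav", rate=44100, note_s=0.05,
           f_lo=220.0, f_hi=880.0):
    """Map each value to a pitch between f_lo and f_hi and write short tones."""
    v_min, v_max = min(values), max(values)
    span = (v_max - v_min) or 1.0
    samples = []
    for v in values:
        f = f_lo + (v - v_min) / span * (f_hi - f_lo)
        samples.extend(math.sin(2 * math.pi * f * t / rate)
                       for t in range(int(rate * note_s)))
    with wave.open(filename, "w") as w:
        w.setnchannels(1)     # mono
        w.setsampwidth(2)     # 16-bit samples
        w.setframerate(rate)
        w.writeframes(b"".join(struct.pack("<h", int(32767 * s)) for s in samples))

# A rising series gives a rising pitch, a toy "chirp".
sonify([i * i for i in range(100)])
```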

The art is featured in the Gravity of Pi article on the Scientific American SA Visual blog.

## this year's theme music

All the art was processed while listening to Roses by Coeur de Pirate, a brilliant female French-Canadian songwriter who sounds like a mix of Patricia Kaas and Lhasa. The lyrics of Oublie-moi (Forget me) fit this year's theme of gravity.

Mais laisse-moi tomber, laisse-nous tomber
Laisse la nuit trembler en moi
Laisse-moi tomber, laisse nous tomber
Cette fois

But let me fall, let us fall
Let the night tremble in me
Let me fall, let us fall
This time

## simulating gravity in 2d

The gravitational force between two masses $m_1$ located at $(x_1,y_1)$ and $m_2$ located at $(x_2,y_2)$ is given by

$$F = \frac{G m_1 m_2}{r^2} \tag{1}$$

where $r$ is the distance between the masses given by

$$r = \sqrt{ \Delta x ^2 + \Delta y ^2 } = \sqrt{ (x_2-x_1)^2 + (y_2-y_1)^2 } \tag{2}$$

The force is directed along the line joining the two masses and can be decomposed into $x$ and $y$ components using \begin{align} F_x &= F \frac{\Delta x}{r} = F \frac{x_2-x_1}{r} \tag{3} \\ F_y &= F \frac{\Delta y}{r} = F \frac{y_2-y_1}{r} \tag{4} \end{align}

The acceleration of each mass can be obtained using $F = ma$ and similarly decomposed into $x$ and $y$ components \begin{align} a_{1x} &= \frac { F_{1x} }{ m_1} = \frac{G m_2 \Delta x}{r^3} \tag{5} \\ a_{1y} &= \frac { F_{1y} }{ m_1} = \frac{G m_2 \Delta y}{r^3} \tag{6} \\ a_{2x} &= \frac { F_{2x} }{ m_2} = -\frac{G m_1 \Delta x}{r^3} \tag{7} \\ a_{2y} &= \frac { F_{2y} }{ m_2} = -\frac{G m_1 \Delta y}{r^3} \tag{8} \end{align}

When there are $n$ masses in the system, the acceleration of mass $i$ is the sum of the accelerations due to all other masses \begin{align} a_{ix} &= \sum_{j \ne i} \frac{G m_j \Delta x_{ij}}{r_{ij}^3} \tag{9} \\ a_{iy} &= \sum_{j \ne i} \frac{G m_j \Delta y_{ij}}{r_{ij}^3} \tag{10} \end{align} where $\Delta x_{ij} = x_j - x_i$, $\Delta y_{ij} = y_j - y_i$ and $r_{ij}$ is the distance between masses $i$ and $j$.
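
As a concrete illustration, here is a minimal Python sketch of equations (9) and (10); the function name and the data layout (parallel lists of coordinates and masses) are my own choices, not the code used for the posters.

```python
def accelerations(x, y, m, G=100.0):
    """Return (ax, ay): acceleration components for masses m[i] at (x[i], y[i])."""
    n = len(m)
    ax = [0.0] * n
    ay = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = x[j] - x[i]                  # Δx_ij
            dy = y[j] - y[i]                  # Δy_ij
            r = (dx * dx + dy * dy) ** 0.5    # eq (2)
            ax[i] += G * m[j] * dx / r**3     # eq (9)
            ay[i] += G * m[j] * dy / r**3     # eq (10)
    return ax, ay
```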

The equations of motion for the masses over a period of time $\Delta t$ are

\begin{align} \Delta v_x &= \Delta t a_x \tag{11} \\ \Delta v_y &= \Delta t a_y \tag{12} \\ \Delta x &= \Delta t \left( v_x + a_x \frac{\Delta t}{2} \right) \tag{13} \\ \Delta y &= \Delta t \left( v_y + a_y \frac{\Delta t}{2} \right) \tag{14} \end{align}

## numerical simulation

There are various ways in which the numerical simulation can be performed; the Euler, Verlet and Runge-Kutta methods are perhaps the most common. I use the Verlet approach.

Using the equations of motion above, the Verlet simulation proceeds as follows (a code sketch is shown after the list)

1. calculate acceleration, $a_1$ (eq 9,10)
2. update position (eq 13,14)
3. calculate new acceleration, $a_2$ (eq 9,10)
4. update velocity using $(a_1+a_2)/2$ (eq 11,12)
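
A minimal sketch of one such step, reusing the accelerations() helper sketched earlier (the names are mine):

```python
def verlet_step(x, y, vx, vy, m, dt=0.01, G=100.0):
    """Advance all masses by one velocity Verlet step of length dt (in place)."""
    ax1, ay1 = accelerations(x, y, m, G)    # 1. a_1 (eq 9,10)
    for i in range(len(m)):                 # 2. positions (eq 13,14)
        x[i] += dt * (vx[i] + ax1[i] * dt / 2)
        y[i] += dt * (vy[i] + ay1[i] * dt / 2)
    ax2, ay2 = accelerations(x, y, m, G)    # 3. a_2 (eq 9,10)
    for i in range(len(m)):                 # 4. velocities from (a_1+a_2)/2 (eq 11,12)
        vx[i] += dt * (ax1[i] + ax2[i]) / 2
        vy[i] += dt * (ay1[i] + ay2[i]) / 2
```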

The masses are initially uniformly distributed on a circle and given a zero initial velocity or a normally distributed random velocity.

I ran about 10,000 individual simulations with different values of $n$ (the number of masses) and $k$ (the mass exponent, defined below) and collected the ones that stood out as pretty.

## collisions

The size of a mass is taken to be $s = m^{1/3}$. When two masses $m_1$ and $m_2$ come within a distance of $\left( s_1 + s_2 \right)(1-z)$ of each other, they collide. Here $z$ is a collision margin parameter that I set to either $z=0$ or $z=0.25$.

During the collision, a new body is created with mass $M = m_1 + m_2$ and a velocity that conserves momentum. \begin{align} v_x &= \frac{m_1 v_{1x} + m_2 v_{2x} }{M} \\ v_y &= \frac{m_1 v_{1y} + m_2 v_{2y} }{M} \end{align}
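
A sketch of this rule, under the same data layout as the earlier sketches; placing the merged body at the centre of mass is my assumption, since the text does not specify the merged position:

```python
def merge_collisions(x, y, vx, vy, m, z=0.25):
    """Merge any pair of bodies closer than (s1+s2)(1-z), conserving momentum."""
    i = 0
    while i < len(m):
        j = i + 1
        while j < len(m):
            dx, dy = x[j] - x[i], y[j] - y[i]
            r = (dx * dx + dy * dy) ** 0.5
            s1, s2 = m[i] ** (1 / 3), m[j] ** (1 / 3)      # size s = m^(1/3)
            if r < (s1 + s2) * (1 - z):                    # collision criterion
                M = m[i] + m[j]
                vx[i] = (m[i] * vx[i] + m[j] * vx[j]) / M  # conserve momentum
                vy[i] = (m[i] * vy[i] + m[j] * vy[j]) / M
                x[i] = (m[i] * x[i] + m[j] * x[j]) / M     # centre of mass (assumption)
                y[i] = (m[i] * y[i] + m[j] * y[j]) / M
                m[i] = M
                for lst in (x, y, vx, vy, m):
                    del lst[j]                             # remove the absorbed body
            else:
                j += 1
        i += 1
```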

## values

For my simulation, the following values are used (a sketch of the setup follows the list):

• $G = 100$
• the mass for each digit $d$ is $(1+d)^k$
• masses placed on circle with radius $216$
• when randomized, $(v_x,v_y) \sim N(0,1)$
• $\Delta t = 0.01$
• simulation runs for up to 100,000 steps
• canvas size is $1440 \times 1440$
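
Putting these values together, the setup might look like the following sketch, reusing the helpers sketched above; the variable names and the use of the first digits of $\pi$ as input are my assumptions:

```python
import math
import random

def init_from_digits(digits, k=1.0, radius=216.0, randomize=False):
    """One body per digit d with mass (1+d)^k, spaced uniformly on a circle."""
    n = len(digits)
    m = [(1 + d) ** k for d in digits]
    x = [radius * math.cos(2 * math.pi * i / n) for i in range(n)]
    y = [radius * math.sin(2 * math.pi * i / n) for i in range(n)]
    if randomize:
        vx = [random.gauss(0, 1) for _ in range(n)]   # (vx, vy) ~ N(0,1)
        vy = [random.gauss(0, 1) for _ in range(n)]
    else:
        vx, vy = [0.0] * n, [0.0] * n
    return x, y, vx, vy, m

# e.g. the first 12 digits of π, stepped with dt = 0.01 for up to 100,000 steps
x, y, vx, vy, m = init_from_digits([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8], k=2)
for step in range(100000):
    verlet_step(x, y, vx, vy, m, dt=0.01, G=100.0)
    merge_collisions(x, y, vx, vy, m, z=0.25)
```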

# Machine learning: supervised methods (SVM & kNN)

Thu 18-01-2018
Supervised learning algorithms extract general principles from observed examples guided by a specific prediction objective.

We examine two very common supervised machine learning methods: linear support vector machines (SVM) and k-nearest neighbors (kNN).

SVM is often less computationally demanding than kNN and is easier to interpret, but it can identify only a limited set of patterns. On the other hand, kNN can find very complex patterns, but its output is more challenging to interpret.

Nature Methods Points of Significance column: Machine learning: supervised methods (SVM & kNN). (read)

We illustrate SVM using a data set in which points fall into two categories, which are separated in SVM by a straight line "margin". SVM can be tuned using a parameter that influences the width and location of the margin, permitting points to fall within the margin or on the wrong side of the margin. We then show how kNN relaxes explicit boundary definitions, such as the straight line in SVM, and how kNN too can be tuned to create more robust classification.
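
As an illustration only (this is not the code behind the column's figures), the same ideas can be tried with scikit-learn, where the SVM cost parameter C and the kNN neighbour count k play the tuning roles described above:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# two point clouds, one per class
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# C controls the margin: a small C widens it and tolerates misclassified points
svm = SVC(kernel="linear", C=0.1).fit(X, y)

# k controls kNN smoothness: a larger k gives a more robust, less jagged boundary
knn = KNeighborsClassifier(n_neighbors=15).fit(X, y)

print(svm.score(X, y), knn.score(X, y))
```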

Bzdok, D., Krzywinski, M. & Altman, N. (2018) Points of Significance: Machine learning: supervised methods. Nature Methods 15:5–6.

Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. Nature Methods 14:1119–1120.

# Human Versus Machine

Tue 16-01-2018
Balancing subjective design with objective optimization.

In a Nature graphics blog article, I present my process behind designing the stark black-and-white Nature 10 cover.

Nature 10, 18 December 2017

# Machine learning: a primer

Thu 18-01-2018
Machine learning extracts patterns from data without explicit instructions.

In this primer, we focus on essential ML principles: a modeling strategy that lets the data speak for themselves, to the extent possible.

The benefits of ML arise from its use of a large number of tuning parameters or weights, which control the algorithm’s complexity and are estimated from the data using numerical optimization. Often ML algorithms are motivated by heuristics such as models of interacting neurons or natural evolution—even if the underlying mechanism of the biological system being studied is substantially different. The utility of ML algorithms is typically assessed empirically by how well extracted patterns generalize to new observations.

Nature Methods Points of Significance column: Machine learning: a primer. (read)

We present a data scenario in which we fit a polynomial model with 5 predictors and show what to expect from ML when noise and sample size vary. We also demonstrate the consequences of excluding an important predictor or including a spurious one.
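
A toy, one-predictor analogue of this scenario (not the column's simulation, which uses 5 predictors): fit polynomials of increasing degree to noisy data and compare test error to see under- and over-fitting.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
x = np.sort(rng.uniform(-1, 1, n))
y = 1 + 2 * x - 3 * x**2 + rng.normal(0, 0.5, n)   # quadratic truth plus noise

x_test = np.linspace(-1, 1, 200)
y_test = 1 + 2 * x_test - 3 * x_test**2            # noise-free test target

for degree in (1, 2, 5, 10):
    coef = np.polyfit(x, y, degree)                # least-squares polynomial fit
    mse = np.mean((np.polyval(coef, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: test MSE {mse:.3f}")
```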

Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. Nature Methods 14:1119–1120.

# Snowflake simulation

Tue 16-01-2018
Symmetric, beautiful and unique.

Just in time for the season, I've simulated a snow-pile of snowflakes based on the Gravner-Griffeath model.

A few of the beautiful snowflakes generated by the Gravner-Griffeath model. (explore)

The work is described as a wintertime tale in In Silico Flurries: Computing a world of snow, co-authored with Jake Lever on the Scientific American SA Blog.

Gravner, J. & Griffeath, D. (2007) Modeling Snow Crystal Growth II: A mesoscopic lattice map with plausible dynamics.

# Genes that make us sick

Wed 22-11-2017
Where disease hides in the genome.

My illustration of the location of genes in the human genome that are implicated in disease appears in The Objects that Power the Global Economy, a book by Quartz.

The location of genes implicated in disease in the human genome, shown here as a spiral. (more...)