
# accidents: beautiful

UCD Computational and Molecular Biology Symposium, Dublin, Ireland. 1-2 Dec 2016.

# visualization + design

The 2016 Pi Day art imagines the digits of Pi as physical masses collapsing under gravity and is featured in the articles The Gravity of Pi and The Boundless Beauty of Pi at the Scientific American SA Visual blog.

# The art of Pi ($\pi$), Phi ($\phi$) and $e$

2016 $\pi$ approximation day
2016 $\pi$ day
2015 $\pi$ day
2014 $\pi$ approx day
2014 $\pi$ day
2013 $\pi$ day
Circular $\pi$ art

Numbers are a lot of fun. They can start conversations—the interesting number paradox is a party favourite: every number must be interesting because the first number that wasn't would be very interesting! Of course, in the wrong company they can just as easily end conversations.

The art here is my attempt at transforming famous numbers in mathematics into pretty visual forms, to start some of these conversations, and to awaken emotions about mathematics other than dislike and confusion.

Like music with numbers? Try Angels at My Door (Una), Pt vs Ys (Yoshinori Sunahara), 2wicky (Hooverphonic), One (Aimee Mann), Straight to Number One (Touch and Go), 99 Luftballons (Nena).

Numerology is bogus, but art based on numbers can be beautiful. Proclus got it right when he said (as quoted by M. Kline in Mathematical Thought from Ancient to Modern Times)

Wherever there is number, there is beauty.

2,258 digits of $\phi$, 3,855 digits of $e$ and 3,628 digits of $\pi$ in 6 level treemaps. Uniform line thickness. Bauhaus primary colors in Piet Mondrian style. (2015 $\pi$ day posters, BUY ARTWORK)
All art posters are available for purchase.
I take custom requests.

## the numbers π, φ and e

The consequence of the interesting number paradox is that all numbers are interesting. But some are more interesting than others—how Orwellian!

All animals are equal, but some animals are more equal than others.
—George Orwell (Animal Farm)

Numbers such as $\pi$ (or $\tau$ if you're a revolutionary), $\phi$, $e$, $i = \sqrt{-1}$, and $0$ have long captivated the imagination. Chances are that at least one of them appears in the next physics equation you come across.

$\pi = 3.14159 26535 89793 23846 26433 83279 50288 41971 69399 ...$
$\phi = 1.61803 39887 49894 84820 45868 34365 63811 77203 09179 ...$
$e = 2.71828 18284 59045 23536 02874 71352 66249 77572 47093 ...$

Of these three transcendental numbers, $\pi$ (3.14159265...) is the most well known. It is the ratio of a circle's circumference to its diameter ($c = \pi d$) and appears in the formula for the area of the circle ($A = \pi r^2$).

2,258 digits of $\phi$, 3,855 digits of $e$ and 3,628 digits of $\pi$ in 6 level treemaps. Uniform line thickness. Bauhaus primary colors in Piet Mondrian style. (2016 $\pi$ day posters, BUY ARTWORK)

The Golden Ratio ($\phi$, 1.61803398...) is the attractive proportion of values $a > b > 0$ that satisfy $(a+b)/a = a/b$, which solves to $a/b = (1 + \sqrt{5})/2$.
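The closed form is easy to sanity-check numerically; here is a minimal sketch (variable names are mine), verifying that $\phi$ satisfies the defining proportion:

```python
import math

# Closed-form value of the golden ratio
phi = (1 + math.sqrt(5)) / 2

# Verify the defining proportion (a + b)/a = a/b with a = phi, b = 1
a, b = phi, 1.0
assert abs((a + b) / a - a / b) < 1e-12
print(phi)  # 1.618033988749895
```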

The last of the three numbers, $e$ (2.71828182...) is Euler's number and also known as the base of the natural logarithm. It, too, can be defined geometrically—it is the unique real number, $e$, for which the function $f(x) = e^x$ has a tangent of slope 1 at $x=0$. Like $\pi$, $e$ appears throughout mathematics. For example, $e$ is central in the expression for the normal distribution as well as the definition of entropy. And if you've ever heard of someone talking about log plots ... well, there's $e$ again!
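The tangent-slope definition of $e$ can be checked with a central difference, assuming nothing beyond the standard library:

```python
import math

# Estimate the slope of f(x) = e^x at x = 0 with a central difference;
# by the defining property of e, the slope should be 1.
h = 1e-6
slope = (math.exp(h) - math.exp(-h)) / (2 * h)
print(round(slope, 6))  # 1.0
```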

Two of these numbers can be seen together in mathematics' most beautiful equation, the Euler identity: $e^{i\pi} = -1$. The tau-oists would argue that this is even prettier: $e^{i\tau} = 1$.
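Both versions of the identity can be verified directly with Python's complex arithmetic; a quick sketch:

```python
import cmath
import math

z = cmath.exp(1j * math.pi)   # Euler's identity: should equal -1
w = cmath.exp(1j * math.tau)  # the tau-oist form: should equal 1
print(z, w)
```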

## accidentally similar

Did you notice how the 13th digit of all three numbers (counting the leading digit) is the same (9)? This accidental similarity generates its own number—the Accidental Similarity Number (ASN).
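Such agreements can be found mechanically by scanning the three digit strings in parallel; a small sketch using the digits quoted above (positions are 1-based and count the leading digit):

```python
PI  = "31415926535897932384626433832795"
PHI = "16180339887498948482045868343656"
E   = "27182818284590452353602874713526"

# Positions where all three numbers share the same digit
matches = [(i + 1, p) for i, (p, f, g) in enumerate(zip(PI, PHI, E)) if p == f == g]
print(matches[0])  # (13, '9') -- the first accidental agreement
```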


# Intuitive Design

Thu 03-11-2016

Appeal to intuition when designing with value judgments in mind.

Figure clarity and concision are improved when the selection of shapes and colors is grounded in the Gestalt principles, which describe how we visually perceive and organize information.

One of the Gestalt principles tells us that the magenta and green shapes will be perceived as two groups, overriding the fact that the shapes within a group might differ. What the principle does not tell us is how the reader is likely to value each group. (read)

The Gestalt principles are value free. For example, they tell us how we group objects but do not speak to any meaning that we might intuitively infer from visual characteristics.

Nature Methods Points of View column: Intuitive Design. (read)

This month, we discuss how appealing to such intuitions—related to shapes, colors and spatial orientation—can help us add information to a figure as well as anticipate and encourage useful interpretations.

Krzywinski, M. (2016) Points of View: Intuitive Design. Nature Methods 13:895.

# Regularization

Fri 04-11-2016

Constraining the magnitude of parameters of a model can control its complexity.

This month we continue our discussion about model selection and evaluation and address how to choose a model that avoids both overfitting and underfitting.

Ideally, we want to avoid having either an underfitted model, which is usually a poor fit to the training data, or an overfitted model, which is a good fit to the training data but not to new data.

Nature Methods Points of Significance column: Regularization (read)

Regularization is a process that penalizes the magnitude of model parameters. Instead of minimizing only the SSE, $\mathrm{SSE} = \sum_i (y_i - \hat{y}_i)^2$, as in an ordinary fit, we minimize the SSE plus a penalty on the model's squared parameters, $\mathrm{SSE} + \lambda \sum_i \hat{\beta}^2_i$.
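For a linear model this penalized criterion has a closed-form solution, $\hat{\beta} = (X^\top X + \lambda I)^{-1} X^\top y$. A minimal numpy sketch on simulated data (the intercept-free design is my simplification, not the column's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
true_beta = np.array([1.0, 2.0, 0.0, 0.0, -1.0])
y = X @ true_beta + rng.normal(scale=0.5, size=50)

def ridge(X, y, lam):
    """Minimize SSE + lam * sum(beta_i^2) via the closed form."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Increasing lambda shrinks the coefficient vector toward zero
for lam in (0.0, 10.0, 1000.0):
    print(lam, round(float(np.linalg.norm(ridge(X, y, lam))), 3))
```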

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Regularization. Nature Methods 13:803-804.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Model Selection and Overfitting. Nature Methods 13:703-704.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Classifier evaluation. Nature Methods 13:603-604.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. Nature Methods 13:541-542.

# Model Selection and Overfitting

Fri 04-11-2016

With four parameters I can fit an elephant and with five I can make him wiggle his trunk. —John von Neumann.

By increasing the complexity of a model, it is easy to make it fit the data perfectly. Does this mean that the model is perfectly suitable? No.

When a model has a relatively large number of parameters, it is likely to be influenced by the noise in the data, which varies across observations, as much as any underlying trend, which remains the same. Such a model is overfitted—it matches training data well but does not generalize to new observations.

Nature Methods Points of Significance column: Model Selection and Overfitting (read)

We discuss the use of training, validation and testing data sets and how they can be used, with methods such as cross-validation, to avoid overfitting.
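A minimal sketch of the cross-validation idea, choosing a polynomial degree on simulated data (numpy only; the data and names are mine, not the column's):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 60)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

def cv_mse(degree, k=5):
    """Mean validation MSE of a polynomial fit under k-fold cross-validation."""
    idx = rng.permutation(x.size)
    errs = []
    for val in np.array_split(idx, k):
        train = np.setdiff1d(idx, val)
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coef, x[val]) - y[val]) ** 2))
    return float(np.mean(errs))

# The underfit linear model does markedly worse on held-out data
for d in (1, 3, 8):
    print(d, round(cv_mse(d), 3))
```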

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Model Selection and Overfitting. Nature Methods 13:703-704.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Classifier evaluation. Nature Methods 13:603-604.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. Nature Methods 13:541-542.

# Classifier Evaluation

Tue 13-09-2016

It is important to understand both what a classification metric expresses and what it hides.

We examine various metrics used to assess the performance of a classifier. We show that a single metric is insufficient to capture performance—for any metric, a variety of scenarios yield the same value.
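As a toy illustration (my numbers, not the column's): two classifiers with identical accuracy can have very different precision and recall.

```python
def metrics(tp, fp, fn, tn):
    """Basic classification metrics from a 2x2 confusion matrix."""
    total = tp + fp + fn + tn
    return {"accuracy": (tp + tn) / total,
            "precision": tp / (tp + fp),
            "recall": tp / (tp + fn)}

balanced = metrics(tp=45, fp=5, fn=5, tn=45)  # errors spread evenly
skewed   = metrics(tp=5,  fp=5, fn=5, tn=85)  # misses most positives

# Same accuracy (0.9), very different precision and recall (0.9 vs 0.5)
print(balanced)
print(skewed)
```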

Nature Methods Points of Significance column: Classifier Evaluation (read)

We also discuss ROC curves and the area under the curve (AUC), and how their interpretation changes based on class balance.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Classifier evaluation. Nature Methods 13:603-604.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. Nature Methods 13:541-542.

# Happy 2016 $\pi$ Approximation Day, roughly speaking

Sun 24-07-2016

Today is the day and it's hardly an approximation. In fact, $22/7$ represents $\pi$ about 20% more accurately than $3.14$ does!
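The claim is easy to check by comparing absolute errors:

```python
import math

err_22_7 = abs(22 / 7 - math.pi)   # error of the approximation day fraction
err_3_14 = abs(3.14 - math.pi)     # error of the pi day decimal
print(round(1 - err_22_7 / err_3_14, 2))  # 0.21 -- about 20% smaller error
```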

Time to celebrate, graphically. This year I do so with perfect packing of circles that embody the approximation.

By warping the circle by 8% along one axis, we can create a shape whose ratio of circumference to diameter, taken as twice the average radius, is 22/7.
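The warp can be checked numerically by computing the arc length of the squeezed circle; a sketch with a simple polygonal approximation (the parameterization is mine):

```python
import math

def warped_ratio(squeeze, n=20000):
    """Circumference over twice-the-average-radius for an ellipse
    with semi-axes a = 1 and b = 1 - squeeze."""
    a, b = 1.0, 1.0 - squeeze
    circumference = 0.0
    for i in range(n):
        t0 = 2 * math.pi * i / n
        t1 = 2 * math.pi * (i + 1) / n
        circumference += math.hypot(a * (math.cos(t1) - math.cos(t0)),
                                    b * (math.sin(t1) - math.sin(t0)))
    return circumference / (a + b)  # diameter taken as twice the average radius

# An 8% squeeze lands within about 1e-4 of 22/7
print(warped_ratio(0.08), 22 / 7)
```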

If you prefer something more accurate, check out art from previous $\pi$ days: 2013 $\pi$ Day and 2014 $\pi$ Day, 2015 $\pi$ Day, and 2016 $\pi$ Day.

# Logistic Regression

Tue 13-09-2016

Regression can be used on categorical responses to estimate probabilities and to classify.

The next column in our series on regression deals with how to classify categorical data.

We show how linear regression can be used for classification and demonstrate that it can be unreliable in the presence of outliers. Using a logistic regression, which fits a linear model to the log odds, improves robustness.
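A minimal sketch of the idea, fitting the linear log-odds model by gradient ascent on the log-likelihood (simulated data, numpy only; in practice one would use a statistics package):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=200)
# True model: log odds of class 1 are -1 + 3x
y = rng.binomial(1, 1 / (1 + np.exp(-(-1 + 3 * x))))

X = np.column_stack([np.ones_like(x), x])  # intercept + slope design
beta = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ beta))        # predicted probabilities
    beta += 0.5 * X.T @ (y - p) / len(y)   # average log-likelihood gradient step
print(np.round(beta, 2))  # estimates should be near the true (-1, 3)
```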

Nature Methods Points of Significance column: Logistic regression (read)

Logistic regression is solved numerically, and in most cases the maximum-likelihood estimates are unique and optimal. However, when the classes are perfectly separable, the numerical approach fails: the likelihood can always be increased by making the coefficients steeper, so no finite solution exists.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. Nature Methods 13:541-542.