
Access all columns for free at Statistics for Biologists Nature Collection.

The Points of Significance column was launched in September 2013 as an educational resource for authors, providing practical suggestions about best practices in statistical analysis and reporting.

This month we launch a new column "Points of Significance" devoted to statistics, a topic of profound importance for biological research, but one that often doesn’t receive the attention it deserves.

The "aura of exactitude" that often surrounds statistics is one of the main notions that the Points of Significance column will attempt to dispel, while providing useful pointers on using and evaluating statistical measures.

—Dan Evanko, Let's Give Statistics the Attention it Deserves in Biological Research

The column is co-authored with Naomi Altman (Pennsylvania State University). Paul Blainey (Broad) is a contributing co-author.

In February 2015, Nature Methods announced that the entire Points of Significance collection would be made free to access.

When Nature Methods launched the Points of Significance column over a year ago we were hopeful that those biologists with a limited background in statistics, or who just needed a refresher, would find it accessible and useful for helping them improve the statistical rigor of their research. We have since received comments from researchers and educators in fields ranging from biology to meteorology who say they read the column regularly and use it in their courses. Hearing that the column has had a wider impact than we anticipated has been very encouraging and we hope the column continues for quite some time.

—Dan Evanko, Points of Significance now free access

Also, in a recent post on the ofschemesandmemes blog, a new statistics collection for biologists was announced.

The pieces range from comments, to advice on very specific experimental approaches, to the entire collection of the Points of Significance columns that address basic concepts in statistics in an experimental biology context. These columns, originally published in Nature Methods thanks to Martin Krzywinski and guest editor Naomi Altman, have already proven very popular with readers and teachers. Finally, the collection presents a web tool to create box plots among other resources.

—Veronique Kiermer, Statistics for biologists—A free Nature Collection

Each column is written with continuity and consistency in mind. Our goal is to never rely on concepts that we have not previously discussed. We do not assume previous statistical knowledge—only basic math. Concepts are illustrated using practical examples that embody the ideas without extraneous complicated details. All of the figures are designed with the same approach—as simple and self-contained as possible.

In collaboration with Phil Poronnik and Kim Bell-Anderson at the University of Sydney, I'm delighted to share our 8-part video series about how to think about drawing data and communicating science.

We've created 8 videos, each focusing on a different essential idea in data visualization: encoding, shapes, color, uncertainty, design, drawing missing or unobserved data, labels and process.

The videos were designed as teaching materials. Each video comes with a slide deck and exercises.

What are you trying to say

Of significance?

—Steve Ziliak

We've written about P values before and warned readers about common misconceptions, which are so rife that the American Statistical Association itself has issued a lengthy statement about them.

This month we present the first of a two-part article about P values. Here we look at 'P value hacking' and 'data dredging', questionable practices that invalidate the correct interpretation of P values.

We also illustrate how P values can lead us astray by asking "What is the smallest P value we can expect if the null hypothesis is true but we have done many tests, either explicitly or implicitly?"
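
The question can be explored with a quick simulation, a sketch with arbitrary choices of 100 tests and 2,000 trials:

```python
import random

random.seed(7)

m = 100        # number of independent tests, all under the null
trials = 2000  # number of simulated experiments

# Under the null hypothesis a P value is uniform on [0, 1]; the smallest
# of m such values has expectation 1/(m + 1).
mins = [min(random.random() for _ in range(m)) for _ in range(trials)]
mean_min = sum(mins) / trials

print(f"mean smallest P over {m} tests: {mean_min:.4f} (theory: {1/(m+1):.4f})")
# With m = 100, the smallest P is typically near 0.01 even though
# every null hypothesis is true.
```

With enough tests, a "significant" P value is practically guaranteed, which is exactly why multiple testing must be accounted for.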

Incidentally, this is our first column in which the standfirst is a haiku.

Altman, N. & Krzywinski, M. (2017) Points of Significance: P values and the search for significance. *Nature Methods* **14**:3–4.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, P values and t-tests. *Nature Methods* **10**:1041-1042.

*Appeal to intuition when designing with value judgments in mind.*

Figure clarity and concision are improved when the selection of shapes and colors is grounded in the Gestalt principles, which describe how we visually perceive and organize information.

The Gestalt principles are value free. For example, they tell us how we group objects but do not speak to any meaning that we might intuitively infer from visual characteristics.

This month, we discuss how appealing to such intuitions—related to shapes, colors and spatial orientation—can help us add information to a figure as well as anticipate and encourage useful interpretations.

Krzywinski, M. (2016) Points of View: Intuitive design. *Nature Methods* **13**:895.

*Constraining the magnitude of parameters of a model can control its complexity.*

This month we continue our discussion about model selection and evaluation and address how to choose a model that avoids both overfitting and underfitting.

Ideally, we want to avoid having either an underfitted model, which is usually a poor fit to the training data, or an overfitted model, which is a good fit to the training data but not to new data.

Regularization is a process that penalizes the magnitude of model parameters. This is done by minimizing not just the SSE, `\mathrm{SSE} = \sum_i (y_i - \hat{y}_i)^2`, as in an ordinary fit, but the SSE plus a penalty proportional to the sum of the model's squared parameters, `\mathrm{SSE} + \lambda \sum_i \hat{\beta}^2_i`.
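
A minimal sketch of this penalty in action, using the closed-form ridge solution on hypothetical toy data (the data and variable names are illustrative, not from the column):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the response depends on the first two of five predictors.
n, p = 30, 5
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, 0.5, 0.0, 0.0, 0.0])
y = X @ beta_true + rng.normal(scale=1.0, size=n)

def ridge(X, y, lam):
    """Minimize SSE + lam * sum(beta^2); closed form (X'X + lam*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

for lam in (0.0, 1.0, 10.0, 100.0):
    b = ridge(X, y, lam)
    print(f"lambda = {lam:6.1f}  ||beta||^2 = {np.sum(b**2):.3f}")
# Increasing lambda shrinks the coefficients toward zero, trading a
# small increase in training SSE for lower model complexity.
```

At `\lambda = 0` this reduces to the ordinary least-squares fit; as `\lambda` grows, the squared norm of the coefficients decreases monotonically.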

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Regularization. *Nature Methods* **13**:803-804.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Model Selection and Overfitting. *Nature Methods* **13**:703-704.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Classifier evaluation. *Nature Methods* **13**:603-604.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. *Nature Methods* **13**:541-542.

*With four parameters I can fit an elephant and with five I can make him wiggle his trunk. —John von Neumann.*

By increasing the complexity of a model, it is easy to make it fit to data perfectly. Does this mean that the model is perfectly suitable? No.

When a model has a relatively large number of parameters, it is likely to be influenced by the noise in the data, which varies across observations, as much as any underlying trend, which remains the same. Such a model is overfitted—it matches training data well but does not generalize to new observations.

We discuss the use of training, validation and testing data sets and how they can be used, with methods such as cross-validation, to avoid overfitting.
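
The idea of holding out data can be sketched as follows, fitting polynomials of increasing degree to hypothetical noisy samples of a smooth trend (the data and the split are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy samples of a smooth underlying trend.
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

# Hold out every third point as a validation set.
val = np.arange(x.size) % 3 == 0
x_tr, y_tr = x[~val], y[~val]
x_va, y_va = x[val], y[val]

def mse(deg):
    """Fit a polynomial of degree `deg` to the training set; return
    mean squared error on training and validation sets."""
    coef = np.polyfit(x_tr, y_tr, deg)
    err = lambda xx, yy: np.mean((np.polyval(coef, xx) - yy) ** 2)
    return err(x_tr, y_tr), err(x_va, y_va)

for deg in (1, 3, 15):
    tr, va = mse(deg)
    print(f"degree {deg:2d}: train MSE {tr:.3f}  validation MSE {va:.3f}")
# Training error always falls as complexity grows; validation error
# typically rises again for the overfitted high-degree model.
```

The training error alone cannot distinguish a good model from an overfitted one; the held-out error can.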

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Model Selection and Overfitting. *Nature Methods* **13**:703-704.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Classifier evaluation. *Nature Methods* **13**:603-604.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. *Nature Methods* **13**:541-542.

*It is important to understand both what a classification metric expresses and what it hides.*

We examine various metrics used to assess the performance of a classifier. We show that a single metric is insufficient to capture performance—for any metric, a variety of scenarios yield the same value.

We also discuss ROC curves and the AUC, and how their interpretation changes with class balance.
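
One way to see that a single metric hides performance differences, using an assumed toy scenario of 100 cases (10 positive, 90 negative):

```python
def metrics(tp, fn, fp, tn):
    """Accuracy, recall and precision from confusion-matrix counts."""
    total = tp + fn + fp + tn
    accuracy = (tp + tn) / total
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return accuracy, recall, precision

# Classifier A finds every positive but raises 10 false alarms.
a = metrics(tp=10, fn=0, fp=10, tn=80)
# Classifier B simply labels everything negative.
b = metrics(tp=0, fn=10, fp=0, tn=90)

for name, (acc, rec, prec) in (("A", a), ("B", b)):
    print(f"{name}: accuracy={acc:.2f} recall={rec:.2f} precision={prec:.2f}")
# Both report 90% accuracy; only recall and precision reveal that
# B never detects a single positive case.
```

Accuracy alone declares the two classifiers equal; the metrics it hides show they are anything but.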

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Classifier evaluation. *Nature Methods* **13**:603-604.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. *Nature Methods* **13**:541-542.

Today is the day and it's hardly an approximation. In fact, `22/7` is a 20% more accurate representation of `\pi` than `3.14`!
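
A quick check of the arithmetic:

```python
import math

# Compare the absolute errors of two common approximations of pi.
err_frac = abs(22 / 7 - math.pi)   # error of 22/7
err_dec = abs(3.14 - math.pi)      # error of 3.14

improvement = 1 - err_frac / err_dec
print(f"error of 22/7: {err_frac:.6f}")
print(f"error of 3.14: {err_dec:.6f}")
print(f"22/7 error is {improvement:.1%} smaller")
# The error of 22/7 is roughly 20% smaller than that of 3.14.
```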

Time to celebrate, graphically. This year I do so with perfect packing of circles that embody the approximation.

By warping the circle by 8% along one axis, we can create a shape whose ratio of circumference to diameter, taken as twice the average radius, is 22/7.

If you prefer something more accurate, check out art from previous `\pi` days: 2013, 2014, 2015 and 2016.

*Regression can be used on categorical responses to estimate probabilities and to classify.*

The next column in our series on regression deals with how to classify categorical data.

We show how linear regression can be used for classification and demonstrate that it can be unreliable in the presence of outliers. Logistic regression, which fits a linear model to the log odds, is more robust.

Logistic regression is solved numerically, and in most cases the maximum-likelihood estimates are unique and optimal. However, when the classes are perfectly separable the numerical approach fails: the likelihood can always be improved by making the coefficients larger, so no finite maximum-likelihood estimate exists.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. *Nature Methods* **13**:541-542.

Altman, N. & Krzywinski, M. (2016) Points of Significance: Regression diagnostics. *Nature Methods* **13**:385-386.

Altman, N. & Krzywinski, M. (2015) Points of Significance: Multiple Linear Regression. *Nature Methods* **12**:1103-1104.

Altman, N. & Krzywinski, M. (2015) Points of Significance: Simple Linear Regression. *Nature Methods* **12**:999-1000.