Working with Jen Christiansen, senior graphics editor at Scientific American, I have designed three Graphic Science visualizations for the magazine.
This collaboration with Barbara Jeanine Hunnicutt explored differences in household dust bacteria based on the gender and pet status of the occupants.
We have also written about the making of the graphic, for those interested in how these things come together.
Graphic by Martin Krzywinski and Barbara Jeanine Hunnicutt.
Catalogue of bacteria shapes by Barbara Jeanine Hunnicutt.
Source: Barberan A et al. (2015) The ecology of microscopic life in household dust. Proc. R. Soc. B 282: 20151139.
Because sometimes only a network hairball will do.
Graphic by Martin Krzywinski.
Source: Integrative analysis of 111 reference human epigenomes (2015) Nature 518:317.
A Scientific American blog entry "A Monkey's Blueprint" accompanies this piece. I also have a more detailed description with links to data sources.
This design won a bronze award at Malofiej 23. For more information about Malofiej, see the SA Visual blog entry "There's No Infographic without Info (and other Lessons from Malofiej)".
Graphic by Martin Krzywinski, illustrations by Portia Sloan Rollings.
In this primer, we focus on essential ML principles: a modeling strategy that lets the data speak for themselves, to the extent possible.
The benefits of ML arise from its use of a large number of tuning parameters or weights, which control the algorithm’s complexity and are estimated from the data using numerical optimization. Often ML algorithms are motivated by heuristics such as models of interacting neurons or natural evolution—even if the underlying mechanism of the biological system being studied is substantially different. The utility of ML algorithms is typically assessed empirically by how well extracted patterns generalize to new observations.
We present a data scenario in which we fit a model with five predictors using polynomials and show what to expect from ML when noise and sample size vary. We also demonstrate the consequences of excluding an important predictor or including a spurious one.
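The idea of tuning parameters controlling complexity can be illustrated with a minimal one-predictor sketch (not the column's actual five-predictor scenario): the degree of a fitted polynomial plays the role of the tuning parameter, and generalization is assessed empirically on held-out observations. The data-generating function and noise level below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical truth: a smooth curve observed with noise.
def truth(x):
    return np.sin(2 * x)

x_train = rng.uniform(-2, 2, 50)
y_train = truth(x_train) + rng.normal(0, 0.3, 50)
x_test = rng.uniform(-2, 2, 200)
y_test = truth(x_test) + rng.normal(0, 0.3, 200)

# The polynomial degree acts as a tuning parameter: higher degree
# means more weights, hence more complexity. The weights themselves
# are estimated from the data by least-squares optimization.
for degree in (1, 3, 9):
    coef = np.polyfit(x_train, y_train, degree)
    mse = np.mean((np.polyval(coef, x_test) - y_test) ** 2)
    print(f"degree {degree}: test MSE {mse:.3f}")
```

A degree-1 fit underfits this curve, so its held-out error stays near the variance of the signal plus noise; increasing the degree lets the data shape the fit, which is the "let the data speak" strategy the primer describes.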
Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. Nature Methods 14:1119–1120.
Just in time for the season, I've simulated a snow-pile of snowflakes based on the Gravner-Griffeath model.
Gravner, J. & Griffeath, D. (2007) Modeling Snow Crystal Growth II: A mesoscopic lattice map with plausible dynamics.
We introduce two common ensemble methods: bagging and random forests. Both methods repeat a statistical analysis on bootstrap samples and combine the results to improve the accuracy of the predictor. Our column shows these methods applied to Classification and Regression Trees.
For example, when bagging regression trees we sample the space of predictor values more finely, because each bootstrap sample potentially yields different boundaries at which the tree splits.
Random forests generate a large number of trees not only by fitting each tree to a bootstrap sample but also by randomly choosing which predictor variables are considered at each split in the tree.
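Bagging's effect on split boundaries can be sketched with a toy one-predictor example (not the column's figures): each bootstrap sample yields a one-split "stump" with its own split point, and averaging the stumps samples the space of boundaries more finely than any single tree. All function names and data here are illustrative; with a single predictor there is no per-split predictor subsampling, which is the extra ingredient random forests would add.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(x, y):
    """Fit a one-split regression tree: choose the split minimizing
    squared error, then predict the mean response on each side."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best = None
    for i in range(1, len(xs)):
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            # split halfway between the two neighboring x values
            best = (sse, xs[i - 1:i + 1].mean(), left.mean(), right.mean())
    _, split, lmean, rmean = best
    return lambda t: np.where(t < split, lmean, rmean)

def bagged_stumps(x, y, n_trees=100):
    """Bagging: fit a stump to each bootstrap sample, average predictions."""
    stumps = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(x), len(x))  # bootstrap: sample with replacement
        stumps.append(fit_stump(x[idx], y[idx]))
    return lambda t: np.mean([s(t) for s in stumps], axis=0)

# Noisy step-like data (illustrative only).
x = rng.uniform(-1, 1, 200)
y = np.sin(np.pi * x) + rng.normal(0, 0.2, 200)

single = fit_stump(x, y)
bagged = bagged_stumps(x, y)
grid = np.linspace(-1, 1, 401)
print("single stump takes", len(np.unique(single(grid))), "distinct values")
print("bagged ensemble takes", len(np.unique(bagged(grid))), "distinct values")
```

A single stump can only ever predict two values, while the bagged average steps through many levels because the bootstrap samples place their split boundaries in different places.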
Krzywinski, M. & Altman, N. (2017) Points of Significance: Ensemble methods: bagging and random forests. Nature Methods 14:933–934.
Krzywinski, M. & Altman, N. (2017) Points of Significance: Classification and regression trees. Nature Methods 14:757–758.