The images shown here were created as part of my ASCII Art project, which extends ASCII art to include proportional fonts, multiple font weights and rotated layers of text.
Applying the code to images of Hitchens was motivated by my own deep love of Hitchens and by a typographic portrait of Christopher Hitchens created out of Gill Sans letters by Miles Chic at Capilano University.
All images are generated using Gotham, with up to 8 weights (Extra Light to Ultra). Each image is annotated with the type size and the characters used. I give the absolute type size, though it is meaningful only relative to the size of the image and to other images drawn with the same method. The text in every layer is the same color (black), but the font weight may vary.
As the font size is reduced, greater detail and contrast can be achieved.
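The weight-selection step behind these renderings can be sketched as a mapping from local pixel darkness to one of the 8 weights. This is a minimal illustration under my own assumptions (a simple linear mapping), not the project's actual code:

```python
# Minimal sketch of weight selection: each pixel's darkness picks one
# of 8 weights (index 0 = Extra Light ... 7 = Ultra).
# The linear mapping is an illustrative assumption.

N_WEIGHTS = 8

def weight_index(intensity):
    """Map a grayscale intensity (0 = black, 255 = white) to a weight index.

    Darker pixels get heavier weights so that ink coverage tracks
    the darkness of the underlying image."""
    darkness = 1.0 - intensity / 255.0
    return min(int(darkness * N_WEIGHTS), N_WEIGHTS - 1)

# A tiny synthetic grayscale strip, from black to white.
strip = [0, 64, 128, 192, 255]
print([weight_index(p) for p in strip])  # → [7, 5, 3, 1, 0]
```

With more weights (or smaller type, as noted above), the mapping quantizes darkness more finely, which is where the extra detail and contrast come from.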
By setting the image with a fixed string, such as a short quote or longer body of text, detail is lost but the ASCII representation takes on more meaning.
Images gain detail when multiple rotated layers of text are used. Each of the images below is composed of more than one layer, starting with a 2-layer image that uses the uppercase alphabet at 0 and 90 degrees.
Meaning can be added to the image by using different text in each layer. In the examples below, I set the same image using the pair "Godisnotgreat" (at 0 degrees) and "religionpoisonseverything" (at 90 degrees). In the second example, I use the unlikely combination of "Jesus" and "Mohammad"—inspired by Jesus and Mo.
When rotated layers contain punctuation, a very high level of detail can be achieved.
The image below is made out of layers that contain only forward (/) and back (\) slashes.
The image below is made using only the period character in three layers rotated at -45, 0 and 45 degrees. Although the image looks like a pixelated version of the original, it is more than that: it is a typeset representation that uses 8 weights of Gotham, with the spacing between periods informed by the font's metrics.
The three images below show the difference between using a variety of punctuation characters and setting an image using a block of text. The first image uses "8 X x" and common punctuation.
I use hitchslap 9 for the first image below, and all of the hitchslaps for the second. When setting an image using a block of text, the character at any position is fixed and only the font weight is allowed to vary. When the text is relatively short (e.g. hitchslap 9 is 544 characters and is repeated 50 times in the image), rivers of space appear in the image.
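The constraint just described can be sketched in a few lines: the character at each grid position is determined by the text alone, and only the weight responds to the image. The sample text and the linear weight mapping are illustrative assumptions:

```python
# Sketch of setting an image with a fixed block of text: the glyph at
# each position comes from the text (wrapping around), and only the
# font weight (an index 0-7, darker -> heavier) tracks pixel intensity.

text = "Godisnotgreat"  # 13 characters, used here as a sample string

def cell(position, intensity):
    """Return (character, weight index) for one cell of the grid."""
    char = text[position % len(text)]                # fixed by the text
    weight = min(int((1 - intensity / 255) * 8), 7)  # set by the image
    return char, weight

print(cell(0, 0))     # ('G', 7): darkest pixel, heaviest weight
print(cell(13, 255))  # ('G', 0): position wraps, lightest weight
```

Because even the lightest weight still puts ink on the page wherever the text demands a glyph, a short repeated string cannot place whitespace freely, which is why rivers of space appear.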
When an image of text is set with the text itself, you have recursive ASCII art. Below is hitchslap 2, set with itself. In the image, the font is Gotham and the text used to asciify the image is also Gotham.
It makes ordinary moral people, compels them, forces them, in some cases orders them to do disgusting wicked unforgivable things. There's no expiation for the generations of misery and suffering that religion has inflicted in this way and continues to inflict. And I still haven't heard enough apology for it. — Christopher Hitchens
The quote is 307 characters long and is repeated 391 times in the image.
In principle, the process of asciifying text with text can be repeated, by using the asciified image as input for asciification with progressively smaller text.
In this primer, we focus on essential ML principles: a modeling strategy that lets the data speak for themselves, to the extent possible.
The benefits of ML arise from its use of a large number of tuning parameters or weights, which control the algorithm’s complexity and are estimated from the data using numerical optimization. Often ML algorithms are motivated by heuristics such as models of interacting neurons or natural evolution—even if the underlying mechanism of the biological system being studied is substantially different. The utility of ML algorithms is typically assessed empirically by how well extracted patterns generalize to new observations.
We present a data scenario in which we fit a model with 5 predictors using polynomials and show what to expect from ML when noise and sample size vary. We also demonstrate the consequences of excluding an important predictor or including a spurious one.
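The flexibility-versus-noise trade-off the column explores can be sketched in a simplified one-predictor setting. The sine "truth", noise level and polynomial degrees below are my own illustrative choices, not the column's scenario:

```python
# Hedged sketch: least-squares polynomial fits of increasing degree to
# noisy data, showing how model flexibility and noise affect
# generalization to new observations.
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    return np.sin(np.pi * x)  # illustrative "truth"

n, sigma = 30, 0.3
x_train = rng.uniform(-1, 1, n)
y_train = true_f(x_train) + rng.normal(0, sigma, n)
x_test = rng.uniform(-1, 1, 200)
y_test = true_f(x_test) + rng.normal(0, sigma, 200)

results = {}
for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)  # tuning parameters
    mse_train = float(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    mse_test = float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    results[degree] = (mse_train, mse_test)
    print(f"degree {degree}: train MSE {mse_train:.3f}, test MSE {mse_test:.3f}")
```

Training error necessarily falls as the degree grows (the larger basis contains the smaller one), but test error eventually rises once the extra flexibility starts fitting noise; that empirical check on new observations is the assessment described above.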
Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. Nature Methods 14:1119–1120.
Just in time for the season, I've simulated a snow-pile of snowflakes based on the Gravner-Griffeath model.
Gravner, J. & Griffeath, D. (2007) Modeling Snow Crystal Growth II: A mesoscopic lattice map with plausible dynamics.
We introduce two common ensemble methods: bagging and random forests. Both of these methods repeat a statistical analysis on bootstrap samples to improve the accuracy of the predictor. Our column shows these methods as applied to Classification and Regression Trees.
For example, we can sample the space of values more finely when using bagging with regression trees because each sample has potentially different boundaries at which the tree splits.
Random forests generate a large number of trees by not only generating bootstrap samples but also randomly choosing which predictor variables are considered at each split in the tree.
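A toy version of the bagging idea can be written out with a hand-rolled depth-1 regression "stump" as the base learner. The data and the stump learner are my own illustrative choices, not the column's examples:

```python
# Hedged sketch of bagging: fit a simple base learner (a single-split
# regression "stump") to many bootstrap samples, then average.
import numpy as np

rng = np.random.default_rng(1)

def fit_stump(x, y):
    """Find the split threshold minimizing squared error.

    Returns (threshold, left_mean, right_mean)."""
    best = None
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lm, rm = best
    return t, lm, rm

def predict_stump(stump, x):
    t, lm, rm = stump
    return np.where(x <= t, lm, rm)

# Illustrative data: a smooth increasing signal plus noise.
x = rng.uniform(0, 1, 100)
y = x ** 2 + rng.normal(0, 0.05, 100)

# Bagging: each stump is fit to a bootstrap sample of the data.
stumps = []
for _ in range(50):
    idx = rng.integers(0, len(x), len(x))
    stumps.append(fit_stump(x[idx], y[idx]))

x_grid = np.linspace(0, 1, 5)
bagged = np.mean([predict_stump(s, x_grid) for s in stumps], axis=0)
print(bagged)
```

Because each bootstrap sample yields a different split threshold, the averaged predictor is smoother than any single stump, which is the finer sampling of the space of values described above; a random forest would additionally restrict each split to a random subset of the predictor variables.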
Krzywinski, M. & Altman, N. (2017) Points of Significance: Ensemble methods: bagging and random forests. Nature Methods 14:933–934.
Krzywinski, M. & Altman, N. (2017) Points of Significance: Classification and regression trees. Nature Methods 14:757–758.