After finding a typographic portrait of Christopher Hitchens, created out of Gill Sans letters by Miles Chic at Capilano University, I thought to resurrect software I wrote a long time ago that converts images into letters, expanding traditional ASCII art by using proportionally spaced fonts.
The representation of images by characters—ASCII art—has a long history. ASCII art extends the emoticon (or smiley) to represent a larger piece of work. Typically, the works use a fixed-space font (e.g. Courier), originally designed for display on a terminal. Despite the sophistication of computer graphics today, ASCII art continues to have a strong following with new work continually added to public online galleries.
Photos and paintings can be ASCIIfied using a tone-based approach and automated methods exist to do this (Paul D. O’Grady and Scott T. Rickard (2008) Automatic ASCII Art Conversion of Binary Images Using Non-Negative Constraints).
Many artists generate new creations, exclusive to the medium. Typically this kind of ASCII art is based on the interpretation of structure rather than tone—this method has also been automated (Xuemiao Xu, Linling Zhang, Tien-Tsin Wong (2010) Structure-based ASCII Art).
I have written code to generate ASCII art from images using proportionally spaced fonts.
Let's see how these methods work on a real image. Many ASCII art versions of the Mona Lisa exist. Below, I render the Mona Lisa in Pragmata, Gotham Book and 8 weights of Gotham.
Two-tone shapes, like the S in the figure above, require selecting characters that match the structure of the image (e.g. "|" matches vertical lines). For a given character and image position there are four distinct match possibilities: a combination of whether the character and the image have a signal at that position. I show this in the figure below.
By maximizing scores derived from matches (s1, s3) and minimizing penalties (s2, s4), a character is identified based on maximal coverage of the image region and minimal coverage of areas that are blank.
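A minimal sketch of this selection, assuming boolean bitmaps for the glyph and the image region (the function names, weights and bitmaps here are illustrative, not from the original code):

```python
import numpy as np

def structure_score(char_bm, patch, w=(1.0, 1.0, 1.0, 1.0)):
    """Score a character bitmap against an image patch of the same shape.
    s1: char on,  image on   -> reward (character covers the image)
    s2: char on,  image off  -> penalty (ink spills into blank areas)
    s3: char off, image off  -> reward (blank areas stay blank)
    s4: char off, image on   -> penalty (image signal left uncovered)
    """
    c, i = char_bm.astype(bool), patch.astype(bool)
    s1 = np.sum(c & i)
    s2 = np.sum(c & ~i)
    s3 = np.sum(~c & ~i)
    s4 = np.sum(~c & i)
    return w[0] * s1 - w[1] * s2 + w[2] * s3 - w[3] * s4

def best_char(patch, charset):
    """charset: dict mapping a character to its boolean bitmap."""
    return max(charset, key=lambda ch: structure_score(charset[ch], patch))
```

For a patch containing a vertical stroke, a "|" glyph scores every pixel as s1 or s3, while a "-" glyph accumulates s2 and s4 penalties, so "|" wins.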
When proportional text is used, edges are better approximated, such as in the Homer Simpson example below which uses Gotham Book.
Images that are not two-tone require that we match both structure and tone. Structure is approximated by the choice of character, while tone is approximated by the choice of font weight. To select the best character based on tone, the character's average tone is compared to that of the image region it will replace.
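In a sketch, tone selection reduces to comparing average coverage (the names below are illustrative, not from the original implementation):

```python
import numpy as np

def tone_distance(char_bm, patch):
    """Difference between a glyph's ink coverage and the region's average tone.
    char_bm is a boolean glyph bitmap; patch is grayscale with values in 0..1."""
    return abs(np.mean(char_bm) - np.mean(patch))

def best_char_by_tone(patch, charset):
    """Pick the character whose average tone is closest to the patch's."""
    return min(charset, key=lambda ch: tone_distance(charset[ch], patch))
```

A sparse glyph like "." is selected for light regions, a dense glyph like "#" for dark ones.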
It is possible to combine both structure and tone metrics in character selection. Below is an example of how an image with both tone and structure is interpreted as the tone and structure score weights are varied. The balance between these two metrics can be very hard to find—it greatly depends on the image. Tone-based mapping works well when font size is small and the image is viewed from larger distance—in this case, characters play the role of individual pixels with varying brightness. Structure-based mapping works with larger type and closer viewing distance.
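One way to blend the two metrics (a sketch, not the original scoring) is a weighted sum, with the weights exposed so the balance can be tuned per image:

```python
import numpy as np

def combined_score(char_bm, patch, w_structure=0.5, w_tone=0.5):
    """char_bm: boolean glyph bitmap; patch: grayscale region in 0..1.
    Higher is better. w_structure=1, w_tone=0 gives pure structure
    matching; w_structure=0, w_tone=1 gives pure tone matching."""
    c = char_bm.astype(bool)
    i = patch > 0.5                            # threshold for structure
    structure = np.mean(c == i)                # fraction of agreeing pixels
    tone = 1.0 - abs(np.mean(char_bm) - np.mean(patch))
    return w_structure * structure + w_tone * tone
```

Sliding the weights between the two extremes reproduces the small-type/large-type trade-off described above.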
Continuous-tone bitmaps are an ideal application of multi-font ASCII art: images no longer need to be thresholded or dithered.
ASCII art is generated by dividing the image into a grid and finding the letter (the choice of characters is often expanded to include punctuation) that best matches each grid section. Typically, for each grid section the entire set of allowable characters is sampled. Instead, we can limit the choice of character by successively sampling from a fixed string.
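A sketch of the idea, assuming tone is then carried by choosing among a number of font weights (the names are illustrative):

```python
def asciify_fixed_string(tones, text="monalisa", n_weights=8):
    """Assign each grid cell the next character from a fixed string,
    cycling through it, and map the cell's tone (0..1) to a font weight."""
    cells = []
    for k, tone in enumerate(tones):
        ch = text[k % len(text)]
        weight = min(int(tone * n_weights), n_weights - 1)
        cells.append((ch, weight))
    return cells
```

Because the character sequence is fixed, only the weight is free to track the image, which is what makes the text readable in the result.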
Below, the Mona Lisa is rendered with the fixed string "monalisa" using 8 weights of Gotham.
Things get even more interesting when the text is angled.
The image can be textured with multiple layers of ASCII art. In the example below, four layers of text are used, each with a different font size.
Instead of varying size, the angle of the text can be changed among layers. This results in a pattern reminiscent of a halftone.
An image can be asciified several times, with the asciified output of each step used as input to the next. At each step, the font size should be reduced from s to √s.
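Under this rule the font size shrinks quickly across iterations; a small sketch:

```python
import math

def iteration_sizes(s0, n):
    """Font sizes for n successive asciification passes, s -> sqrt(s)."""
    sizes = [float(s0)]
    for _ in range(n - 1):
        sizes.append(math.sqrt(sizes[-1]))
    return sizes
```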
Choose your own dust adventure!
Nobody likes dusting but everyone should find dust interesting.
Working with Jeannie Hunnicutt and with Jen Christiansen's art direction, I created this month's Scientific American Graphic Science visualization, based on a recent paper, The ecology of microscopic life in household dust.
Barberán, A. et al. (2015) The ecology of microscopic life in household dust. Proc. R. Soc. B 282:20151139.
A very large list of named colors, generated by combining some of the many lists that already exist (X11, Crayola, Raveling, Resene, Wikipedia, xkcd, etc.).
For each color, coordinates in RGB, HSV, XYZ, Lab and LCH space are given, along with the 5 nearest named neighbours as measured by ΔE.
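The simplest ΔE variant (CIE76) is just Euclidean distance in Lab space; the list may use a more refined formula, but a nearest-neighbour lookup can be sketched as follows (the Lab values below are approximate, for illustration only):

```python
import math

def delta_e_76(lab1, lab2):
    """CIE76 color difference: Euclidean distance between two Lab triples."""
    return math.dist(lab1, lab2)

def nearest_named(lab, named, n=5):
    """named: dict of name -> (L, a, b). Return the n closest names by ΔE."""
    return sorted(named, key=lambda name: delta_e_76(lab, named[name]))[:n]
```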
I also provide a web service. Simply call this URL with an RGB string.
It is possible to predict the values of unsampled data by using linear regression on correlated sample data.
This month, we begin our column with a quote, shown here in its full context from Box's paper Science and Statistics.
In applying mathematics to subjects such as physics or statistics we make tentative assumptions about the real world which we know are false but which we believe may be useful nonetheless. The physicist knows that particles have mass and yet certain results, approximating what really happens, may be derived from the assumption that they do not. Equally, the statistician knows, for example, that in nature there never was a normal distribution, there never was a straight line, yet with normal and linear assumptions, known to be false, he can often derive results which match, to a useful approximation, those found in the real world.
—Box, G. E. P. J. Am. Stat. Assoc. 71, 791–799 (1976).
This column is our first in the series about regression. We show that regression and correlation are related concepts—they both quantify trends—and that the calculations for simple linear regression are essentially the same as for one-way ANOVA.
While correlation provides a measure of a specific kind of association between variables, regression allows us to fit correlated sample data to a model, which can be used to predict the values of unsampled data.
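A minimal sketch with made-up data, using an ordinary least-squares fit to predict a value at an unsampled point:

```python
import numpy as np

# Correlated sample data (made up for illustration, roughly y = 2x).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Fit the model y = slope * x + intercept by least squares.
slope, intercept = np.polyfit(x, y, 1)

# Predict the response at an unsampled point, x = 6.
y_pred = slope * 6.0 + intercept
```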
Altman, N. & Krzywinski, M. (2015) Points of significance: Simple linear regression. Nature Methods 12:999-1000.
Altman, N. & Krzywinski, M. (2015) Points of significance: Association, correlation and causation. Nature Methods 12:899-900.
Krzywinski, M. & Altman, N. (2014) Points of significance: Analysis of variance (ANOVA) and blocking. Nature Methods 11:699-700.
Correlation implies association, but not causation. Conversely, causation implies association, but not correlation.
This month, we distinguish between association, correlation and causation.
Association, also called dependence, is a very general relationship: one variable provides information about the other. Correlation, on the other hand, is a specific kind of association: an increasing or decreasing trend. Not all associations are correlations. Moreover, causality can be connected only to association.
We discuss how correlation can be quantified using correlation coefficients (Pearson, Spearman) and show how spurious correlations can arise in random data as well as in very large independent data sets. For example, per capita cheese consumption is correlated with the number of people who died by becoming tangled in their bedsheets.
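The effect is easy to reproduce: among many pairs of independent random variables, some will show a sizeable correlation purely by chance. A sketch, computing Spearman as Pearson on the ranks:

```python
import numpy as np

rng = np.random.default_rng(0)

def pearson(x, y):
    """Pearson correlation coefficient."""
    return np.corrcoef(x, y)[0, 1]

def spearman(x, y):
    """Spearman correlation: Pearson applied to ranks (no tie handling here)."""
    rank = lambda v: np.argsort(np.argsort(v))
    return pearson(rank(x), rank(y))

# Scan 1000 pairs of independent 10-point samples; record the strongest
# (entirely spurious) Pearson correlation observed.
strongest = max(abs(pearson(rng.normal(size=10), rng.normal(size=10)))
                for _ in range(1000))
```

With small samples and enough comparisons, the strongest observed correlation is large despite every pair being independent.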
Altman, N. & Krzywinski, M. (2015) Points of significance: Association, correlation and causation. Nature Methods 12:899-900.