
# ASCII Art—Proportional Spacing, Tone/Structure Mapping and Fixed Strings

## contents

asciifyimage-0.02.tgz

This is a Perl script and requires Imager. See README in the archive for instructions. I cannot provide installation support, but welcome questions and ideas about the method.

## examples

Part of the Pioneer plaque rendered with the sequence of human chromosome 1, using 8 weights of Gotham.
DNA helix rendered with string 'dna'.

After finding a typographic portrait of Christopher Hitchens, created out of Gill Sans letters by Miles Chic at Capilano University, I thought to resurrect software I wrote a long time ago that converts images into letters, and to expand traditional ASCII art by using

• proportionally spaced fonts
• a variety of font weights in a single image
• both tone and structure of the image to select characters
• fixed strings to render an image in legible text

## ASCII Art

The representation of images by characters—ASCII art—has a long history. ASCII art extends the emoticon (or smiley) to represent a larger piece of work. Typically, the works use a fixed-space font (e.g. Courier), originally designed for display on a terminal. Despite the sophistication of computer graphics today, ASCII art continues to have a strong following with new work continually added to public online galleries.

Community contributions to ASCII Art Museum. ASCII art can vary from simple cartoon-like depictions to photorealistic interpretations.

Photos and paintings can be ASCIIfied using a tone-based approach and automated methods exist to do this (Paul D. O’Grady and Scott T. Rickard (2008) Automatic ASCII Art Conversion of Binary Images Using Non-Negative Constraints).

Many artists generate new creations, exclusive to the medium. Typically this kind of ASCII art is based on the interpretation of structure rather than tone—this method has also been automated (Xuemiao Xu, Linling Zhang, Tien-Tsin Wong (2010) Structure-based ASCII Art).

## Proportionally spaced and multi-font art

I have written code to generate ASCII art from images using proportionally spaced fonts.

Fixed-width fonts (e.g. Pragmata) are popular. ASCII art can be extended to proportionally spaced fonts (e.g. Gotham). More than one weight (or font) can be used to add contrast.

Below is an example of how Pragmata and Gotham can be used to different effect to render an image. When a proportionally spaced font is used, the ASCII shape can more fully fill the image.

Comparison of fixed-width and proportionally spaced fonts in ASCII art. Employing multiple weights adds contrast. The grey background is added to emphasize the original image.

Let's see how these methods work on a real image. Many ASCII art Mona Lisa versions exist. Below, I render the Mona Lisa with Pragmata, Gotham Book and 8 weights of Gotham.

## structural character selection

Two-tone shapes, like the S in the figure above, require selecting characters that match the structure of the image (e.g. "|" matches vertical lines). For a given character and image position there are four distinct match possibilities—a combination of whether the character and the image have a signal at that position. I show this in the figure below.

Finding the best character involves maximizing overlap (s1, s3) and minimizing penalty (s2, s4).

By maximizing scores derived from matches (s1, s3) and minimizing any penalties (s2, s4), a character is identified based on maximal coverage of the image region and minimum coverage of areas that are blank.
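The selection step can be sketched as follows. This is an illustrative Python version, not the original Perl implementation; the weights and glyph bitmaps here are made up for the example.

```python
# Naive structural matching sketch: for each candidate glyph bitmap, count
# the four overlap cases against an image patch and keep the best scorer.
# s1: both on, s2: glyph on/patch off, s3: both off, s4: glyph off/patch on.

def structural_score(glyph, patch, w=(1.0, 1.0, 0.25, 1.0)):
    """glyph, patch: same-sized 2D lists of 0/1. Returns a weighted score
    that rewards overlap (s1, s3) and penalizes mismatch (s2, s4)."""
    s1 = s2 = s3 = s4 = 0
    for grow, prow in zip(glyph, patch):
        for g, p in zip(grow, prow):
            if g and p:
                s1 += 1          # character ink covers image ink
            elif g:
                s2 += 1          # character ink spills onto blank image
            elif p:
                s4 += 1          # image ink left uncovered
            else:
                s3 += 1          # both blank
    w1, w2, w3, w4 = w
    return w1 * s1 + w3 * s3 - w2 * s2 - w4 * s4

def best_char(glyphs, patch):
    """glyphs: dict mapping character -> bitmap; returns the best match."""
    return max(glyphs, key=lambda c: structural_score(glyphs[c], patch))

# Toy 3x3 glyphs: a vertical stroke should beat a horizontal one
# when the image patch is a vertical line.
glyphs = {
    '|': [[0, 1, 0], [0, 1, 0], [0, 1, 0]],
    '-': [[0, 0, 0], [1, 1, 1], [0, 0, 0]],
}
patch = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

With these toy bitmaps, `best_char(glyphs, patch)` selects `'|'`, since `'-'` both misses the vertical stroke (s4) and puts ink on blank regions (s2).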

Ink artwork and thresholded bitmaps, in which there are only two tone values, are approximated using structural matching. Here I compare the method of O'Grady and Rickard with my naive structural matching.

When proportionally spaced text is used, edges are better approximated, as in the Homer Simpson example below, which uses Gotham Book.

For this image, 17pt text matches the detail well.

## tone-based character selection

Images that are not two-tone require that we match both structure and tone. Structure is approximated by the choice of character, while tone by choice of font weight. To select the best character based on tone, the character's average tone is compared to the average tone of the section of the image to which it is being compared.
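A minimal sketch of this tone comparison (again illustrative, with invented glyph bitmaps; the real script measures the rendered glyphs of each font weight):

```python
# Tone matching sketch: pick the (character, weight) pair whose average ink
# coverage is closest to the average darkness of the image region.

def average_tone(bitmap):
    """Fraction of 'on' pixels in a glyph bitmap (0 = blank, 1 = solid ink)."""
    cells = [v for row in bitmap for v in row]
    return sum(cells) / len(cells)

def best_by_tone(glyphs, patch_tone):
    """glyphs: dict of (char, weight) -> bitmap; patch_tone in [0, 1]."""
    return min(glyphs, key=lambda k: abs(average_tone(glyphs[k]) - patch_tone))

# Toy 2x2 glyphs at three coverages (25%, 50%, 75%):
glyphs = {
    ('.', 'light'): [[0, 0], [1, 0]],
    ('o', 'book'):  [[1, 0], [1, 0]],
    ('#', 'black'): [[1, 1], [1, 0]],
}
```

A dark region (tone 0.7) selects the heavy glyph, a light region (tone 0.3) the light one, which is why heavier weights cluster in dark areas of the rendered image.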

Heavier weights are used to match dark areas of the image.

It is possible to combine both structure and tone metrics in character selection. Below is an example of how an image with both tone and structure is interpreted as the tone and structure score weights are varied. The balance between these two metrics can be very hard to find—it greatly depends on the image. Tone-based mapping works well when the font size is small and the image is viewed from a larger distance—in this case, characters play the role of individual pixels with varying brightness. Structure-based mapping works with larger type and a closer viewing distance.
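The combination can be sketched as a weighted sum (a hedged illustration; the exact normalization in the original script may differ):

```python
# Combined character score sketch: a weighted sum of a tone term and a
# structure term. Both inputs are assumed normalized to [0, 1]; the default
# weights give the 1:0.5 tone:structure ratio mentioned for the Star Trek
# logo example.

def combined_score(glyph_tone, patch_tone, structure_overlap,
                   w_tone=1.0, w_structure=0.5):
    """Higher is better. The tone term rewards matching average darkness;
    structure_overlap is the fraction of image ink covered by the glyph."""
    tone_term = 1.0 - abs(glyph_tone - patch_tone)
    return w_tone * tone_term + w_structure * structure_overlap
```

At this ratio a glyph with a perfect tone match but weak structure (tone term 1.0, overlap 0.2, score 1.1) still edges out a structurally perfect but tonally wrong one (tone term 0.5, overlap 1.0, score 1.0); changing the weights tips the balance the other way.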

A tone:structure ratio of 1:0.5 works well for the Star Trek logo.

Continuous tone bitmaps are an ideal application of multi-font ASCII art—images no longer need to be thresholded or dithered.

Applying both tone and structure character selection metrics to a greyscale image.

## fixed string ASCII art

ASCII art is generated by dividing the image into a grid and finding the letter (the choice of characters is often expanded to include punctuation) that best matches the grid section. Typically, for each grid the entire set of allowable characters is sampled. Instead, we can limit the choice of character by successively sampling from a fixed string.
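The fixed-string variant reduces the per-cell search to "take the next character of the string" (a sketch; in the full method the font weight is still chosen freely to match tone):

```python
# Fixed-string selection sketch: step through a string, wrapping around,
# so that consecutive grid cells spell out legible text.

from itertools import cycle

def fixed_string_chars(s, n_cells):
    """Return the character assigned to each of n_cells grid cells,
    drawn in order from s and wrapping when the string runs out."""
    src = cycle(s)
    return [next(src) for _ in range(n_cells)]

# 'ilovegotham' repeated across a 25-cell row:
row = fixed_string_chars('ilovegotham', 25)
```

The same mechanism works whether the string is short ('dna') or an entire corpus such as a chromosome sequence—the source simply never wraps in the latter case.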

Fixed string ASCII art limits the choice of characters available at each grid cell. Characters can be drawn from a short string (e.g. 'ilovegotham') or from a larger corpus (e.g. the Wikipedia entry for the Mona Lisa). The string can be contiguous within the image, or locally within the font.

Fixed string ASCII art rendering of the Mona Lisa, using the fixed string 'monalisa' and 8 weights of Gotham.
DNA helix rendered with string 'dna'.
DNA helix rendered with sequence from human chromosome 1.

## angled text ASCII art

Things get even more interesting when the text is angled.

By applying rotations to the input and output images, the image can be approximated by angled text.

## multi-layer ASCII art

The image can be textured with multiple layers of ASCII art. In the example below, four layers of text are used, each with a different font size.

Part of the Pioneer plaque rendered with the sequence of human chromosome 1, using 4 layers of sizes (17pt, 33pt, 59pt and 93pt) and 8 weights of Gotham.
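The layering itself is a simple loop over per-layer parameters. In this sketch `asciify_layer` is a hypothetical placeholder for the actual rendering step, not a function from the original script:

```python
# Multi-layer rendering sketch: asciify the same image once per font size,
# then stack the resulting text layers. The same loop works if the varied
# parameter is text rotation instead of size.

def asciify_layer(image, size_pt):
    # Placeholder: the real pipeline would return a rendered text layer.
    return f"{size_pt}pt layer of {image}"

def multilayer(image, sizes=(17, 33, 59, 93)):
    """Render one text layer per font size, smallest to largest."""
    return [asciify_layer(image, s) for s in sizes]

layers = multilayer("pioneer.png")
```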

Instead of varying size, the angle of the text can be changed among layers. This results in a pattern reminiscent of a halftone.

Part of the Pioneer plaque rendered with the sequence of human chromosome 1, using 4 layers with different text rotation (-45, -15, 15, 45 degrees) and 8 weights of Gotham.

## recursive ASCII art

An image can be asciified several times, with the asciified output of each step used as input for the next. At each step, the font size should be reduced from s to √s.
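The size schedule can be sketched in a couple of lines (an illustration, not the original script; rounding up reproduces the 105pt to 11pt step in the example below):

```python
# Recursive asciification size schedule: each pass shrinks the font
# from s to sqrt(s), rounded up to a whole point size.

import math

def size_schedule(s0, passes):
    """Return the font size used at each recursive pass, starting at s0."""
    sizes, s = [], float(s0)
    for _ in range(passes):
        sizes.append(math.ceil(s))
        s = math.sqrt(s)
    return sizes
```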

ASCII, set in Gotham Ultra.
The above image asciified using 8 weights of 105pt Gotham and the fixed string 'ASCII'.
The asciified example above, asciified again using 8 weights of 11pt Gotham and the fixed string 'ASCII'.


# Curse(s) of dimensionality

Tue 05-06-2018
There is such a thing as too much of a good thing.

We discuss the many ways in which analysis can be confounded when data has a large number of dimensions (variables). Collectively, these are called the "curses of dimensionality".

Nature Methods Points of Significance column: Curse(s) of dimensionality.

Some of these are unintuitive, such as the fact that the volume of a unit-radius hypersphere grows with dimension only up to about five dimensions and shrinks thereafter, while the volume of the hypercube always increases. This means that high-dimensional space is "mostly corners" and the distance between points increases greatly with dimension. This has consequences for correlation and classification.
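This rise-then-fall behavior is easy to verify numerically from the standard unit-ball volume formula Vₙ = π^(n/2) / Γ(n/2 + 1) (a quick independent check, not code from the column):

```python
# Unit-ball volume by dimension: V_n = pi^(n/2) / Gamma(n/2 + 1).
# The volume rises with dimension, peaks, and then shrinks toward zero,
# while the volume of the enclosing hypercube (side 2) grows as 2^n.

import math

def ball_volume(n, r=1.0):
    """Volume of an n-dimensional ball of radius r."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * r ** n

volumes = [ball_volume(n) for n in range(1, 16)]
```

For radius 1 the maximum occurs at n = 5 (V₅ ≈ 5.26), after which the ball occupies a vanishing fraction of its bounding cube—the "mostly corners" effect.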

Altman, N. & Krzywinski, M. (2018) Points of Significance: Curse(s) of dimensionality. Nature Methods 15:399–400.

# Statistics vs Machine Learning

Tue 03-04-2018
We conclude our series on Machine Learning with a comparison of two approaches: classical statistical inference and machine learning. The boundary between them is subject to debate, but important generalizations can be made.

Inference creates a mathematical model of the data-generation process to formalize understanding or test a hypothesis about how the system behaves. Prediction aims at forecasting unobserved outcomes or future behavior. Typically we want to do both: know how biological processes work and predict what will happen next. Inference and ML are complementary in pointing us to biologically meaningful conclusions.

Nature Methods Points of Significance column: Statistics vs machine learning.

Statistics asks us to choose a model that incorporates our knowledge of the system, and ML requires us to choose a predictive algorithm by relying on its empirical capabilities. Justification for an inference model typically rests on whether we feel it adequately captures the essence of the system. The choice of pattern-learning algorithms often depends on measures of past performance in similar scenarios.

Bzdok, D., Krzywinski, M. & Altman, N. (2018) Points of Significance: Statistics vs machine learning. Nature Methods 15:233–234.

Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. Nature Methods 14:1119–1120.

Bzdok, D., Krzywinski, M. & Altman, N. (2018) Points of Significance: Machine learning: supervised methods. Nature Methods 15:5–6.

# Happy 2018 $\pi$ Day—Boonies, burbs and boutiques of $\pi$

Wed 14-03-2018

Celebrate $\pi$ Day (March 14th) and go to brand new places. Together with Jake Lever, this year we shrink the world and play with road maps.

Streets from across the world are seamlessly joined together. Finally, a halva shop on the same block!

A great 10 km run loop between Istanbul, Copenhagen, San Francisco and Dublin. Stop off for halva, smørrebrød, espresso and a Guinness on the way.

Intriguing and personal patterns of urban development for each city appear in the Boonies, Burbs and Boutiques series.

In the Boonies, Burbs and Boutiques of $\pi$ we draw progressively denser patches using the digit sequence 159 to inform density.

No color—just lines. Lines from Marrakesh, Prague, Istanbul, Nice and other destinations for the mind and the heart.

Roads from cities rearranged according to the digits of $\pi$.

The art is featured in the Pi City post on the Scientific American SA Visual blog.

Check out art from previous years: 2013 $\pi$ Day, 2014 $\pi$ Day, 2015 $\pi$ Day, 2016 $\pi$ Day and 2017 $\pi$ Day.

# Machine learning: supervised methods (SVM & kNN)

Thu 18-01-2018
Supervised learning algorithms extract general principles from observed examples guided by a specific prediction objective.

We examine two very common supervised machine learning methods: linear support vector machines (SVM) and k-nearest neighbors (kNN).

SVM is often less computationally demanding than kNN and is easier to interpret, but it can identify only a limited set of patterns. On the other hand, kNN can find very complex patterns, but its output is more challenging to interpret.

Nature Methods Points of Significance column: Machine learning: supervised methods (SVM & kNN).

We illustrate SVM using a data set in which points fall into two categories, which are separated in SVM by a straight line "margin". SVM can be tuned using a parameter that influences the width and location of the margin, permitting points to fall within the margin or on the wrong side of the margin. We then show how kNN relaxes explicit boundary definitions, such as the straight line in SVM, and how kNN too can be tuned to create more robust classification.
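The kNN idea is compact enough to sketch in a few lines of pure Python (a toy illustration with made-up points, not the column's data or figures):

```python
# Toy k-nearest-neighbors classifier: label a query point by majority vote
# among the k closest training points. Increasing k smooths the decision
# boundary, which is how kNN is tuned for more robust classification.

from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of ((x, y), label) pairs. Returns the majority label
    of the k training points nearest to query (squared Euclidean)."""
    def sq_dist(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    nearest = sorted(train, key=lambda t: sq_dist(t[0], query))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# Two well-separated clusters:
train = [((0, 0), 'a'), ((0, 1), 'a'), ((1, 0), 'a'),
         ((5, 5), 'b'), ((5, 6), 'b'), ((6, 5), 'b')]
```

Unlike the SVM's explicit straight-line margin, no boundary is ever written down here—the classification emerges point by point from the local neighborhood, which is also why its output is harder to interpret.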

Bzdok, D., Krzywinski, M. & Altman, N. (2018) Points of Significance: Machine learning: supervised methods. Nature Methods 15:5–6.