
Telling a story about infinity is tricky,
especially a short one.

# Infinity in Just Over Six Minutes

One, two, three, infinity.
— George Gamow

Max Cooper at the London Barbican Hall performing Yearning for the Infinite. (Alex Kozobolis)

# making of the video

The video was created with a custom-made kinetic typography system that emulates a lo-fi terminal display. The system is controlled by a plain-text configuration file that defines scenes and timings; there is no GUI. There is also no post-processing of any kind (e.g. in After Effects). Everything that you see in the final video was generated programmatically. The creation process took about a month.

## video length and format

The original music score for Aleph 2 is 6 minutes and 34 seconds in length.

The score tempo is 118 bpm (1.967 beats per second, 0.082 beats per frame). There are 194.1 measures in the video (29.5 measures per minute, 0.492 measures per second, 0.020 measures per frame).

The video format is 24 fps, so the track comprises 9,473 frames (1440 frames per minute, 48.814 frames per measure, 12.203 frames per beat, 3.051 frames per 16th note).
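
These per-frame figures follow directly from the tempo and frame rate. A quick sanity check in Python (assuming 4/4 time, consistent with the 4 beats per measure implied by the figures above):

```python
FPS = 24   # video frame rate
BPM = 118  # score tempo

frames_per_minute = FPS * 60               # 1440
frames_per_beat = frames_per_minute / BPM  # 12.203
frames_per_measure = frames_per_beat * 4   # 48.814 (4/4 time)
frames_per_16th = frames_per_beat / 4      # 3.051

print(round(frames_per_measure, 3))  # 48.814
```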

The typography uses the Classic Console font, expanded by me to include set theory characters such as $\aleph$, $\mathbb{N}$, $\mathbb{R}$, $\notin$, $\varnothing$ and so on.

The video is 16:9 and each frame is rendered at 1,920 × 1,080. Text is set on a grid of 192 columns and 83 rows (maximum of 15,936 characters per frame).

## pipeline

The entire video is first initialized as an $(x,y,z)$ matrix of size 192 × 83 × 9,473 (150,961,728 elements). The $z$ dimension is the time dimension and each matrix slice (e.g. $x,y,1$) corresponds to a given frame.
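
The structure can be sketched as a nested, frame-major buffer. A minimal Python illustration with toy dimensions (the real matrix is 192 × 83 × 9,473):

```python
COLS, ROWS, FRAMES = 192, 83, 9_473  # real dimensions
cols, rows, frames = 4, 2, 3         # toy dimensions for this sketch

# buffer[z][y][x] holds a character, or None while the cell is blank
buffer = [[[None] * cols for _ in range(rows)] for _ in range(frames)]

buffer[1][0][2] = "ℵ"  # place a character in frame 1

print(COLS * ROWS * FRAMES)  # 150961728 elements in the real matrix
```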

The video is then built up from a series of scenes. This process is single-threaded and takes about 30 minutes, during which the matrix is populated with text characters. As the scenes are built up, each element in the matrix either stays blank or has a colored character assigned to it. Periodically, effects such as random glitches are added. All elements are synchronized to the tempo of the score, and transitions can be triggered from drum-score MIDI files. For example, in parts, the background of each frame flashes to the kick and snare. I get into the detail of this below.
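
As an illustration of the kind of effect pass involved, a hypothetical glitch pass might overwrite a random fraction of the populated cells in one frame slice. The glyphs and rate below are assumptions made for this sketch, not the system's actual values:

```python
import random

GLITCH_CHARS = "#%&@*"  # hypothetical glitch glyphs (assumed)
GLITCH_RATE = 0.02      # assumed fraction of cells disturbed per pass

def glitch(frame, rng):
    """Randomly corrupt non-blank cells of one 2D frame slice in place."""
    for row in frame:
        for x, ch in enumerate(row):
            if ch is not None and rng.random() < GLITCH_RATE:
                row[x] = rng.choice(GLITCH_CHARS)

rng = random.Random(0)  # seeded so repeated renders are identical
frame = [["0", "1", None, "2"] for _ in range(3)]
glitch(frame, rng)
```

Seeding the generator matters in a pipeline like this: frames must come out identical on every render so that they stay aligned with the score.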

Once the matrix has been fully populated, each frame is output to a PNG file (e.g. 000000.png ... 009472.png). Below you can see 10 frames in the range 2490–2499 from the Bijection 2 section of the video at 1:42.

Frame 2490 of Aleph 2. (zoom)
Frame 2491 of Aleph 2. (zoom)
Frame 2492 of Aleph 2. (zoom)
Frame 2493 of Aleph 2. (zoom)
Frame 2494 of Aleph 2. (zoom)
Frame 2495 of Aleph 2. (zoom)
Frame 2496 of Aleph 2. (zoom)
Frame 2497 of Aleph 2. (zoom)
Frame 2498 of Aleph 2. (zoom)
Frame 2499 of Aleph 2. (zoom)

The frames were then stitched into a PNG movie in 24-bit RGB color space using ffmpeg.

```
$ ffmpeg -thread_queue_size 32 -r 24 -y -i "frames/aleph/%06d.png" $offset$seek -i wav/aleph.wav -pix_fmt rgb24 -avoid_negative_ts make_zero -shortest -r 24 -c:v png -f mov aleph.mov
```

The .mov file is then converted into the DXV format used by Resolume, the VJ software that Max Cooper uses in his shows to control the displays.

If you watch the video on YouTube, please know that the YouTube temporal and chroma compression greatly reduces the quality of the original 24-bit RGB master, which was used for the Barbican Hall performance. The YouTube compression bleaches out the vibrant red, which really pops out in the master version, and blurs frames during fast strobing, which neuters parts of the video that are designed to be overwhelming, such as the drop transition to 24 fps strobing during the natural number power set.

## why a custom system?

Here is Max's original direction for the video's narrative and art style.

I want to show ever growing lists of numbers, then split the lists somehow (left/right of screen) and show how you can pair subsets of Aleph 0 / integers with themselves, then show how Cantor's diagonal argument can be used to pair the fractions with the natural numbers, then show his other diagonal argument for proving the reals are greater than Aleph 0, then show the process (very roughly and with maximum artistic licence if necessary) of taking the power set of infinite sets to create larger infinities to get up to Aleph 1, Aleph 2, and maybe further if the system allows. In the end it should just be complete text/number chaos on screen along with the intense chaos of the audio.

We'd have to be very careful to avoid any Matrix reference visually! ... but that low-fi / command line style would be suitable I think.

In terms of animation, the technical requirements were relatively simple. Everything would be rendered with a fixed-width font with no anti-aliasing and the color palette would be very limited (e.g. black, grey, white and a red for emphasis). Nothing other than characters would be drawn (no lines, circles or other geometrical shapes) and there would be no gradients. Basically, very lo-fi and 8-bit.

Although I had never made any kind of animation before, I thought that these requirements would be relatively easy to achieve. After all, if worse came to worst, I told myself, I could always generate the video frame by frame.

However, it quickly became obvious that traditional keyframe systems could not easily tell our story. Tools like Adobe After Effects rely on interpolation between keyframes, but in our video every frame is essentially a keyframe. This meant that any kind of interpolation between scene points would have to be programmed: while After Effects makes it easy to move things around on the screen, it requires scripting to generate content based on lists of numbers, set elements, and so on. And since I have no experience with After Effects, I thought it would be faster to code my own system than to learn its expression language, only to (possibly) discover later that what I wanted to do was either hard or practically impossible for me to achieve within our time frame of a month.

I knew that I was reasonably good at prototyping and generating custom visualizations, so it felt safer to create something from scratch.

The final version of the system, which is very much a prototype, is about 6,000 lines of Perl. The Aleph 2 video is built from about 2,000 lines of plain-text configuration that defines scenes, timings and effects.

## system architecture

It took about a week to figure out how to design the system. As we built out the video, I alternated between creating the story and creating the system to tell the story. It felt very much like trying to build and fly a plane at the same time.

At times, the entire process would crash down on me because some tiny tweak fundamentally changed how everything worked.

### pesky timing notation

For example, one extremely nagging aspect of the code, which I patched only halfway through the process, had to do with how time was specified. From the start, I used measure:beat:16note notation for scene starts and ends (e.g. 2:2 to 4:1 meant a scene started on measure 2 beat 2 and stopped at measure 4 beat 1).

This notation used 1-indexing (e.g. 1:1:1 is the first 16th note of the score). 0-indexing would have been unintuitive because musically one counts beats as 1, 2, 3, 4 and not 0, 1, 2, 3. Importantly, I wanted the way we referred to timings in conversation (e.g. on the "and of 2" of the 4th measure) to be directly reflected in the code.

All this made sense until I needed a notation to express duration. When I started using 1:1 to indicate a duration of 1 measure and 1 beat, I had to reconcile the difference between 1:1 as a point in time and as a duration—the former specified the beginning of the interval (e.g. frame=0) and the latter the end (frame=61). It also took me forever to decide whether the duration of 5 beats should be expressed as 1:2 (e.g. end is start of beat 2) or 1:1 (e.g. end is end of beat 1).

The fact that the video frame rate is 24 fps made things even more complicated. At this frame rate, there are 3.051 frames per 16th note, which meant that a duration of 1 frame (which occurs during fast strobing) couldn't be expressed in the integer measure:beat:16note notation. I didn't want to have to write 0:0:0.3278, which seemed a tedious way of saying "1 frame". Furthermore, because frames didn't neatly match up to 16th-note boundaries, quite a lot of time was spent checking that timing definitions didn't suffer from rounding issues.
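
A sketch of the 1-indexed conversion from measure:beat:16note to a frame number (assuming 4/4 time; the rounding in the last step is exactly where the boundary-checking effort went):

```python
FPS = 24
BPM = 118
FRAMES_PER_16TH = FPS * 60 / (BPM * 4)  # 3.051 frames per 16th note

def to_frame(measure, beat=1, sixteenth=1):
    """Convert a 1-indexed measure:beat:16note timestamp to a frame number."""
    sixteenths = (measure - 1) * 16 + (beat - 1) * 4 + (sixteenth - 1)
    return round(sixteenths * FRAMES_PER_16TH)

print(to_frame(1, 1, 1))  # 0  (first 16th note of the score)
print(to_frame(2, 2))     # 61 (one measure and one beat in)
```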

In the final video, you'll see the measure timer in the upper right corner. The : between the measure and beat flashes as a red + on the beat (118 bpm). Next to the measure timestamp you'll see a min:sec:frame timestamp and hexadecimal readout of the frame number.

# The Outbreak Poems

Tue 24-03-2020

I'm writing poetry daily to put my feelings into words more often during the COVID-19 outbreak.

Panic can wait for tomorrow.
Regrets live on curves not tangents.
Small chances are never zero.
Month's last day waits for another year.

# Deadly Genomes: Genome Structure and Size of Harmful Bacteria and Viruses

Tue 17-03-2020

A poster full of epidemiological worry and statistics. Now updated with the genome of SARS-CoV-2 and COVID-19 case statistics as of 3 March 2020.

Deadly Genomes: Genome Structure and Size of Harmful Bacteria and Viruses (zoom)

Bacterial and viral genomes of various diseases are drawn as paths with color encoding local GC content and curvature encoding local repeat content. Position of the genome encodes prevalence and mortality rate.

The deadly genomes collection has been updated with a poster of the genomes of SARS-CoV-2, the novel coronavirus that causes COVID-19.

Genomes of 56 SARS-CoV-2 coronaviruses that cause COVID-19.
Ball of 56 SARS-CoV-2 coronaviruses that cause COVID-19.
The first SARS-CoV-2 genome (MT019529) to be sequenced appears first on the poster.

# Using Circos in Galaxy Australia Workshop

Wed 04-03-2020

A workshop in using the Circos Galaxy wrapper by Hiltemann and Rasche. Event organized by Australian Biocommons.

Using Circos in Galaxy Australia workshop. (zoom)

Galaxy wrapper training materials: Hiltemann, S. & Rasche, H. (2020) Visualisation with Circos (Galaxy Training Materials).

# Essence of Data Visualization in Bioinformatics Webinar

Thu 20-02-2020

My webinar on fundamental concepts in data visualization and visual communication of scientific data and concepts. Event organized by Australian Biocommons.

Essence of Data Visualization in Bioinformatics webinar. (zoom)

# Markov models — training and evaluation of hidden Markov models

Thu 20-02-2020

With one eye you are looking at the outside world, while with the other you are looking within yourself.
—Amedeo Modigliani

Following up on our Markov chains column and hidden Markov models column, this month we look at how Markov models are trained, using the example of a biased coin.

We introduce the concepts of forward and backward probabilities and explicitly show how they are calculated in the training process using the Baum-Welch algorithm. We also discuss the value of ensemble models and the use of pseudocounts for cases where rare observations are expected but not necessarily seen.
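
The forward recursion can be illustrated with a toy two-state coin HMM (a fair coin F and a head-biased coin B). All probabilities below are illustrative assumptions for the sketch, not values from the column:

```python
# Toy HMM: hidden states F (fair) and B (biased), emissions H/T.
start = {"F": 0.5, "B": 0.5}
trans = {"F": {"F": 0.9, "B": 0.1}, "B": {"F": 0.1, "B": 0.9}}
emit = {"F": {"H": 0.5, "T": 0.5}, "B": {"H": 0.8, "T": 0.2}}

def forward(obs):
    """Return P(obs) by summing the forward probabilities over final states."""
    # f[s] = P(observations so far, current hidden state = s)
    f = {s: start[s] * emit[s][obs[0]] for s in start}
    for x in obs[1:]:
        f = {s: emit[s][x] * sum(f[r] * trans[r][s] for r in f) for s in f}
    return sum(f.values())

print(round(forward("HH"), 4))  # 0.4405
```

The backward probabilities are computed analogously from the end of the sequence, and Baum-Welch combines the two to re-estimate the transition and emission parameters.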

Nature Methods Points of Significance column: Markov models — training and evaluation of hidden Markov models. (read)

Grewal, J., Krzywinski, M. & Altman, N. (2020) Points of significance: Markov models — training and evaluation of hidden Markov models. Nature Methods 17:121–122.

Altman, N. & Krzywinski, M. (2019) Points of significance: Hidden Markov models. Nature Methods 16:795–796.

Altman, N. & Krzywinski, M. (2019) Points of significance: Markov Chains. Nature Methods 16:663–664.

# Genome Sciences Center 20th Anniversary Clothing, Music, Drinks and Art

Tue 28-01-2020

Science. Timeliness. Respect.

Read about the design of the clothing, music, drinks and art for the Genome Sciences Center 20th Anniversary Celebration, held on 15 November 2019.

Luke and Mayia wearing limited edition volunteer t-shirts. The pattern reproduces the human genome with chromosomes as spirals. (zoom)

As part of the celebration and with the help of our engineering team, we framed 48 flow cells from the lab.

Precisely engineered frame mounts of flow cells used to sequence genomes in our laboratory. (zoom)

Each flow cell was accompanied by an interpretive plaque explaining the technology behind the flow cell and the sample information and sequence content.

The plaque at the back of one of the framed Illumina flow cells. This one has sequence from the lymph node of a patient diagnosed with Burkitt's lymphoma. (zoom)