listen; there's a hell of a good universe next door: let's go.

# a rat: fun

In Silico Flurries: Computing a world of snow. Scientific American. 23 December 2017

# Alex — Internet's Most Popular Rat

## Poster Rat for Rat Genome Sequencing

The rat genome sequencing project at the Baylor College of Medicine Human Genome Sequencing Center is complete. The genome has been analyzed and published.

I'd like to introduce you to one of the faces of the project: Alex, the genomics rat idol.

Arguably, Alex is the most popular rat on the internet. Read on for the justification of this strong statement.

Alex, the rat. Rattus norvegicus on an ABI 3700 genome sequencer.

## Alex's Biography

Alex was born in May 2000. It's well known that a rat's cuteness reaches maximum at about 3-4 weeks. After this critical time, a pet store rat is less likely to be purchased and may be asked to act as snake food. In Alex's case, she was perilously close to her deadline. Luckily for her, we paid a ransom of \$6.99 to the Noah's Ark pet shop in Vancouver. She was on her last cute leg.

Portrait of Alex, the genome rat (Rattus norvegicus). Here, she is seen in a forced portrait position.

From May 2000 Alex spent most of her time hoarding food pellets and riding on shoulders.

Portrait of Alex, the genome rat (Rattus norvegicus). Riding on shoulder.

Alex liked to bite. And rats only bite hard — they don't nibble. Her excuse for this unattractive behaviour was the uncanny similarity between a finger and a pellet of food.

Other than unpredictable bouts of biting (by far the most exciting aspect of her personality), Alex lacked distinguishing characteristics.

Alex died of a seizure in late 2002. She was buried outside of the Museum of Anthropology. A ratty pair of underwear served as a burial shroud.

And I hope you got that last pun.

Photos are for public use. Use, modification and distribution of these photos is unrestricted.

## Alex's Popularity

Despite my best efforts at meaningful work, this web page continues to be the most popular of all my online offerings, making for a somewhat embarrassing achievement.

Alex's images consistently appear first in Google's web search for 'rat' and 'rat image', and in Google's image search for 'rat'.

Alex image is the first for Google's 'rat' search query (retrieved 16 Mar 2013). (rat Google search)
Alex image is the first for Google's 'rat image' search query (retrieved 16 Mar 2013). (rat Google search)

Finally, Alex appears as the first entry in Google images for 'rat'.

Alex image is the first for Google's image search for 'rat' (retrieved 16 Mar 2013). (rat Google search)

## Alex's Public Appearances

For all her modesty, Alex is not without public fame. Her first cover-ratgirl appearance was on the April 2004 issue of Genome Research.

Alex the rat appeared on the cover of Genome Research (April 2004).

More recently, she's appeared on the cover of Ethnologie Francaise (Jan-Mar 2009 issue).

Alex the rat appeared on the cover of Ethnologie Francaise (1/2009).

The topic of the issue was the relationship between animals and humans. It is fitting therefore to recount here the relationship I shared with Alex during her sojourn with us.


# Statistics vs Machine Learning

Tue 03-04-2018
We conclude our series on Machine Learning with a comparison of two approaches: classical statistical inference and machine learning. The boundary between them is subject to debate, but important generalizations can be made.

Inference creates a mathematical model of the data-generation process to formalize understanding or test a hypothesis about how the system behaves. Prediction aims at forecasting unobserved outcomes or future behavior. Typically we want to do both: know how biological processes work and what will happen next. Inference and ML are complementary in pointing us to biologically meaningful conclusions.
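The distinction can be made concrete with a toy example (a minimal sketch of my own, not from the column): on a linear data set, inference estimates an interpretable parameter — the slope of the fitted line — while prediction forecasts the outcome at a new, unobserved input.

```python
import random

random.seed(1)

# Synthetic data from a known data-generation process: y = 2x + 1 + noise
x = [i / 10 for i in range(50)]
y = [2.0 * xi + 1.0 + random.gauss(0, 0.1) for xi in x]

# Inference: closed-form least-squares estimates of slope and intercept,
# which we can compare to our hypothesis about the system (slope = 2)
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx

# Prediction: forecast the outcome at an input we have never observed
x_new = 6.0
y_hat = slope * x_new + intercept
```

Here the two goals share one model; in general, a predictive algorithm need not expose interpretable parameters at all — the point of the column's contrast.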

Nature Methods Points of Significance column: Statistics vs machine learning. (read)

Statistics asks us to choose a model that incorporates our knowledge of the system, and ML requires us to choose a predictive algorithm by relying on its empirical capabilities. Justification for an inference model typically rests on whether we feel it adequately captures the essence of the system. The choice of pattern-learning algorithms often depends on measures of past performance in similar scenarios.

Bzdok, D., Krzywinski, M. & Altman, N. (2018) Points of Significance: Statistics vs machine learning. Nature Methods 15:233–234.

Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. Nature Methods 14:1119–1120.

Bzdok, D., Krzywinski, M. & Altman, N. (2018) Points of Significance: Machine learning: supervised methods. Nature Methods 15:5–6.

# Happy 2018 $\pi$ Day—Boonies, burbs and boutiques of $\pi$

Wed 14-03-2018

Celebrate $\pi$ Day (March 14th) and go to brand new places. Together with Jake Lever, this year we shrink the world and play with road maps.

Streets from across the world are seamlessly joined. Finally, a halva shop on the same block!

A great 10 km run loop between Istanbul, Copenhagen, San Francisco and Dublin. Stop off for halva, smørrebrød, espresso and a Guinness on the way. (details)

Intriguing and personal patterns of urban development for each city appear in the Boonies, Burbs and Boutiques series.

In the Boonies, Burbs and Boutiques of $\pi$ we draw progressively denser patches using the digit sequence 159 to inform density. (details)

No color—just lines. Lines from Marrakesh, Prague, Istanbul, Nice and other destinations for the mind and the heart.

Roads from cities rearranged according to the digits of $\pi$. (details)

The art is featured in the Pi City post on the Scientific American SA Visual blog.

Check out art from previous years: 2013 $\pi$ Day, 2014 $\pi$ Day, 2015 $\pi$ Day, 2016 $\pi$ Day and 2017 $\pi$ Day.

# Machine learning: supervised methods (SVM & kNN)

Thu 18-01-2018
Supervised learning algorithms extract general principles from observed examples guided by a specific prediction objective.

We examine two very common supervised machine learning methods: linear support vector machines (SVM) and k-nearest neighbors (kNN).

SVM is often less computationally demanding than kNN and is easier to interpret, but it can identify only a limited set of patterns. On the other hand, kNN can find very complex patterns, but its output is more challenging to interpret.

Nature Methods Points of Significance column: Machine learning: supervised methods (SVM & kNN). (read)

We illustrate SVM using a data set in which points fall into two categories, which are separated in SVM by a straight line "margin". SVM can be tuned using a parameter that influences the width and location of the margin, permitting points to fall within the margin or on the wrong side of the margin. We then show how kNN relaxes explicit boundary definitions, such as the straight line in SVM, and how kNN too can be tuned to create more robust classification.
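The kNN idea — no explicit boundary, just a majority vote among the k nearest training points — is simple enough to sketch in a few lines. This is a minimal illustration of my own (the column itself uses no code), with a made-up two-cluster data set; the parameter k plays the tuning role described above, with larger k smoothing the decision over more neighbors.

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """train: list of ((x, y), label) pairs; query: an (x, y) point.
    Returns the majority label among the k nearest training points."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical training data: two well-separated categories
train = [((0, 0), 'A'), ((1, 0), 'A'), ((0, 1), 'A'),
         ((5, 5), 'B'), ((6, 5), 'B'), ((5, 6), 'B')]

print(knn_classify(train, (0.5, 0.5), k=3))  # → A
print(knn_classify(train, (5.5, 5.5), k=3))  # → B
```

Note that no line or margin is ever computed — the "boundary" exists only implicitly, wherever the vote flips — which is why kNN can capture patterns a linear SVM cannot, at the cost of interpretability.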

Bzdok, D., Krzywinski, M. & Altman, N. (2018) Points of Significance: Machine learning: supervised methods. Nature Methods 15:5–6.