




language + fiction

Dark Matter of the English Language—the unwords

Words are easy, like the wind;
Faithful friends are hard to find.
—William Shakespeare

unanimals

Critters that definitely don't exist but, perhaps, should.

The backal is probably a feisty biter, while the cakmiran likely has a quizzical look. And I would completely avoid the fangol—he sounds like trouble.

A great exercise for kids and the comedic-at-heart would be to try to draw some of these. What would a gakrin look like? Or a gorderish?

Below are the alphabetically first 4–11 letter single-word unanimals for each letter. In some cases, no names of a given length were generated for a given letter.
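The page doesn't say how these names are generated, but a character-level Markov chain trained on real animal names is a plausible mechanism and makes a nice companion exercise. Below is a minimal sketch under that assumption; the toy training list, the order-2 context and the helper names are all illustrative, not the actual generator.

```python
import random
from collections import defaultdict

def train(names, order=2):
    """Collect, for every `order`-character context, the characters
    that follow it in the training names ('^' pads the start, '$' ends)."""
    model = defaultdict(list)
    for name in names:
        padded = "^" * order + name + "$"
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=2, rng=random.Random(42)):
    """Walk the chain from the start context until the end marker."""
    out = "^" * order
    while not out.endswith("$"):
        out += rng.choice(model[out[-order:]])
    return out[order:-1]

real = ["jackal", "badger", "dolphin", "pangolin", "walrus", "weasel"]  # toy list
model = train(real)
candidates = {generate(model) for _ in range(200)}
unanimals = sorted(candidates - set(real))  # keep only names that don't exist
print(unanimals)
```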

—4—
aytt
bebe
bick
caen
calb
dalh
dlol
fibl
file
galg
gaon
haen
hale
ilpa
jang
kall
laot
laro
mard
mean
naal
neat
orot
oton
pate
peof
qaid
radc
ranl
saol
shal
tial
walr
weil
—5—
acter
alome
boloo
brata
cabal
capir
dacwo
daxol
fimat
fogon
gatey
geass
haore
heisa
ihire
kardo
kouse
lalpy
lante
malbe
morci
nlreg
nriwe
oacda
omita
paric
ponga
radka
ramep
saage
saako
teart
ufuse
wease
weatl
—6—
acukoe
agtalt
backal
banher
caidat
calepe
dearle
dolpin
eyrita
fangol
gafala
gakrin
haamet
hadnel
iykile
jacang
kagcet
kurdot
lalper
largoz
mamket
mander
narnla
oammim
ooceat
palyus
patble
rarman
ravlil
seaise
seikol
tarbaa
tonele
usrenl
valiss
waatir
whagit
—7—
amreron
apunaed
baadber
balsidd
cadtole
calfasf
daldaug
dalfiso
eolgeal
eomrarf
fondard
gallish
gamymly
hankrer
hokloru
itarato
jatfish
jatwoss
kaister
lamushe
leittoy
madarle
malfash
narddco
nhucasf
oncigut
ootfoto
pakline
parcata
qicsoor
raacbor
raipins
sablrod
sabrilr
tenlrit
tonmede
vansoar
vatkifh
waldfil
walslil
—8—
anlonfow
arnbwict
baieslel
barnnkor
caeffuse
cakmiran
disteale
erhadiol
geepbuwl
golshowo
halalale
hocscist
loicpalt
lruzgind
mannforl
marppuse
obberose
oosgerle
pandleie
perphist
raaldope
ragprerd
saelling
saistiet
tolrfish
valcunle
wadmfish
wasshail
—9—
anlfilher
beigartal
cacdockud
cagccride
gardefand
gorderish
ipilfoyor
keosildor
laechinee
lhallaeye
malpandie
maltreuge
okrerblon
pallanmer
penrhapor
shipopish
shorgeone
ugoflifes
waadarall
waamesder
—10—
asdrosquod
cackemorel
canzlitbar
gaotemtirh
gorofoshew
hirkaflarl
honkerfosh
mapobanadl
moalarfesg
nearretlee
qoarrorule
raccistech
sancockese
sealdhicnh
waagelidhe
weendefish
—11—
condlidilin
cotarleweer
galafonllar
geatingtink
rellswobgry
soridioatar
wolfendelad

Here are some lists of unanimals that share common suffixes; a short sketch for extracting such groups follows the lists.

*ish camfish gallish gawlish gohfish gurrish jatfish mipkish polmish wamfish gorderish shipopish slarmish soulfish tolrfish wadmfish weendefish

*ile halile iykile weadnrile cragiile file gile

*ale anmale calilale disteale halalale hale saale

*use bampuse caeffuse marppuse kouse ufuse

*her banher coocher lorsher anlfilher wher

*tar codtar mistar soridioatar wortautar

*ole rorole cadtole wurkole cole

*ise seaise shoise guceyrise
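As promised above, a minimal sketch for grouping a vocabulary by suffix. The `unwords` list here is a stand-in for the full generated set, and the suffixes are the ones shown above.

```python
from collections import defaultdict

# stand-in for the full generated vocabulary
unwords = ["camfish", "gallish", "halile", "kouse", "ufuse", "banher", "cole"]

def group_by_suffix(words, suffixes=("ish", "ile", "ale", "use",
                                     "her", "tar", "ole", "ise")):
    """Bucket each word under every suffix it ends with."""
    groups = defaultdict(list)
    for w in words:
        for s in suffixes:
            if w.endswith(s):
                groups["*" + s].append(w)
    return groups

for suffix, members in sorted(group_by_suffix(unwords).items()):
    print(suffix, " ".join(members))
```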


news + thoughts

Machine learning: a primer

Tue 05-12-2017
Machine learning extracts patterns from data without explicit instructions.

In this primer, we focus on essential ML principles—a modeling strategy to let the data speak for themselves, to the extent possible.

The benefits of ML arise from its use of a large number of tuning parameters or weights, which control the algorithm’s complexity and are estimated from the data using numerical optimization. Often ML algorithms are motivated by heuristics such as models of interacting neurons or natural evolution—even if the underlying mechanism of the biological system being studied is substantially different. The utility of ML algorithms is typically assessed empirically by how well extracted patterns generalize to new observations.

Nature Methods Points of Significance column: Machine learning: a primer. (read)

We present a data scenario in which we fit a model with 5 predictors using polynomials and show what to expect from ML when noise and sample size vary. We also demonstrate the consequences of excluding an important predictor or including a spurious one.
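The column's exact simulation isn't reproduced here, but this one-predictor sketch shows the same effect: as polynomial complexity grows, training error keeps falling while test error eventually rises. The data generator and degrees are illustrative assumptions, not the column's 5-predictor scenario.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n, noise):
    """Illustrative generator: noisy samples of a smooth curve."""
    x = rng.uniform(-1, 1, n)
    return x, np.sin(np.pi * x) + rng.normal(0, noise, n)

x_tr, y_tr = make_data(50, noise=0.3)     # small training set
x_te, y_te = make_data(1000, noise=0.3)   # large test set to assess generalization

for degree in (1, 3, 9, 15):
    coef = np.polyfit(x_tr, y_tr, degree)  # weights estimated by least squares
    mse_tr = np.mean((np.polyval(coef, x_tr) - y_tr) ** 2)
    mse_te = np.mean((np.polyval(coef, x_te) - y_te) ** 2)
    print(f"degree {degree:2d}: train MSE {mse_tr:.3f}, test MSE {mse_te:.3f}")
```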

Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. Nature Methods 14:1119–1120.

...more about the Points of Significance column

Snowflake simulation

Tue 14-11-2017
Symmetric, beautiful and unique.

Just in time for the season, I've simulated a snow-pile of snowflakes based on the Gravner-Griffeath model.

A few of the beautiful snowflakes generated by the Gravner-Griffeath model. (explore)

Gravner, J. & Griffeath, D. (2007) Modeling Snow Crystal Growth II: A mesoscopic lattice map with plausible dynamics.
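The Gravner-Griffeath model itself couples several fields (diffusing vapour, boundary mass, crystal mass) and about a dozen parameters; the sketch below is a far simpler Reiter-style hexagonal cellular automaton that captures the same diffusion-plus-attachment flavor, not the full model. The grid size, `alpha` and `gamma` values are illustrative.

```python
import numpy as np

HEX = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]  # axial hex neighbours

def nbr_sum(a):
    """Sum over the six hexagonal neighbours of every cell (periodic edges)."""
    return sum(np.roll(np.roll(a, dq, axis=0), dr, axis=1) for dq, dr in HEX)

def step(s, alpha=1.0, gamma=0.001):
    ice = s >= 1.0
    receptive = ice | (nbr_sum(ice.astype(int)) > 0)  # ice and its neighbours
    u = np.where(receptive, 0.0, s)  # vapour that still diffuses
    v = np.where(receptive, s, 0.0)  # mass held by the crystal
    u = u + (alpha / 2.0) * (nbr_sum(u) / 6.0 - u)    # relax toward neighbour mean
    v = np.where(receptive, v + gamma, v)             # receptive cells gain mass
    return u + v

n = 201
s = np.full((n, n), 0.35)    # background vapour level
s[n // 2, n // 2] = 1.0      # seed crystal at the centre
for _ in range(500):
    s = step(s)
flake = s >= 1.0             # boolean mask of the grown crystal
```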

Genes that make us sick

Thu 02-11-2017
Where disease hides in the genome.

My illustration of the location of genes in the human genome that are implicated in disease appears in The Objects that Power the Global Economy, a book by Quartz.

The location of genes implicated in disease in the human genome, shown here as a spiral. (more...)

Ensemble methods: Bagging and random forests

Mon 16-10-2017
Many heads are better than one.

We introduce two common ensemble methods: bagging and random forests. Both of these methods repeat a statistical analysis on bootstrap samples of the data to improve the accuracy of the predictor. Our column shows these methods as applied to Classification and Regression Trees.

Nature Methods Points of Significance column: Ensemble methods: Bagging and random forests. (read)

For example, we can sample the space of values more finely when using bagging with regression trees because each sample has potentially different boundaries at which the tree splits.

Random forests generate a large number of trees by not only generating bootstrap samples but also randomly choosing which predictor variables are considered at each split in the tree.
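A minimal sketch of bagging with regression trees, assuming scikit-learn's `DecisionTreeRegressor` and an illustrative synthetic data set. A random forest would additionally restrict each split to a random subset of predictors (scikit-learn exposes this as `max_features`).

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# illustrative synthetic data: a noisy sine curve
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(3 * X[:, 0]) + rng.normal(0, 0.3, size=200)

def bagged_predict(X_train, y_train, X_test, n_trees=100):
    """Grow one tree per bootstrap resample and average their predictions."""
    n = len(y_train)
    preds = np.zeros((n_trees, len(X_test)))
    for b in range(n_trees):
        idx = rng.integers(0, n, size=n)  # bootstrap: sample rows with replacement
        tree = DecisionTreeRegressor().fit(X_train[idx], y_train[idx])
        preds[b] = tree.predict(X_test)
    return preds.mean(axis=0)  # bagging = the average over trees

X_grid = np.linspace(-2, 2, 400).reshape(-1, 1)
y_hat = bagged_predict(X, y, X_grid)
```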

Krzywinski, M. & Altman, N. (2017) Points of Significance: Ensemble methods: bagging and random forests. Nature Methods 14:933–934.

Background reading

Krzywinski, M. & Altman, N. (2017) Points of Significance: Classification and regression trees. Nature Methods 14:757–758.

...more about the Points of Significance column

Classification and regression trees

Mon 16-10-2017
Decision trees are a powerful but simple prediction method.

Decision trees classify data by splitting them along the predictor axes into partitions with homogeneous values of the dependent variable. Unlike logistic or linear regression, CART does not develop a prediction equation. Instead, data are predicted by a series of binary decisions based on the boundaries of the splits. Decision trees are very effective and the resulting rules are readily interpreted.

Trees can be built using different metrics that measure how well the splits divide up the data classes: Gini index, entropy or misclassification error.
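For concreteness, here is a sketch of scoring candidate splits on a single predictor with the Gini index; the helper names and structure are illustrative, not taken from the column.

```python
import numpy as np

def gini(labels):
    """Gini index: 1 minus the sum of squared class proportions."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(x, y):
    """Scan midpoints between sorted values of one predictor and return
    the (weighted impurity, threshold) pair minimizing the Gini index."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best_score, best_t = np.inf, None
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:
            continue
        t = (x[i] + x[i - 1]) / 2
        score = (i * gini(y[:i]) + (len(y) - i) * gini(y[i:])) / len(y)
        if score < best_score:
            best_score, best_t = score, t
    return best_score, best_t
```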

Nature Methods Points of Significance column: Classification and regression trees. (read)

When the outcome variable is quantitative rather than categorical, regression trees are used. Here, the data are still split, but now the outcome is estimated by the average within the split boundaries. Tree growth can be controlled using the complexity parameter, a measure of the relative improvement of each new split.
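The regression analogue replaces class impurity with the squared error around each partition's mean, which is also the value the tree predicts inside that partition. Again a sketch with illustrative names:

```python
import numpy as np

def best_regression_split(x, y):
    """Return the (SSE, threshold) pair minimizing squared error around
    each partition's mean; the mean is also the tree's prediction there."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    best_sse, best_t = np.inf, None
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:
            continue
        sse = ((y[:i] - y[:i].mean()) ** 2).sum() \
            + ((y[i:] - y[i:].mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best_t = sse, (x[i] + x[i - 1]) / 2
    return best_sse, best_t
```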

Individual trees can be very sensitive to minor changes in the data and even better prediction can be achieved by exploiting this variability. Using ensemble methods, we can grow multiple trees from the same data.

Krzywinski, M. & Altman, N. (2017) Points of Significance: Classification and regression trees. Nature Methods 14:757–758.

Background reading

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. Nature Methods 13:541–542.

Altman, N. & Krzywinski, M. (2015) Points of Significance: Multiple linear regression. Nature Methods 12:1103–1104.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Classifier evaluation. Nature Methods 13:603–604.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Model selection and overfitting. Nature Methods 13:703–704.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Regularization. Nature Methods 13:803–804.

...more about the Points of Significance column

Personalized Oncogenomics Program 5 Year Anniversary Art

Wed 26-07-2017

The artwork was created in collaboration with my colleagues at the Genome Sciences Centre to celebrate the 5-year anniversary of the Personalized Oncogenomics Program (POG).

5 Years of Personalized Oncogenomics Program at Canada's Michael Smith Genome Sciences Centre. The poster shows 545 cancer cases. (left) Cases ordered chronologically by case number. (right) Cases grouped by diagnosis (tissue type) and then by similarity within group.

The Personalized Oncogenomics Program (POG) is a collaborative research study that includes many BC Cancer Agency oncologists, pathologists and other clinicians, together with Canada's Michael Smith Genome Sciences Centre and with support from the BC Cancer Foundation.

The aim of the program is to sequence, analyze and compare the genome of each patient's cancer—the entire DNA and RNA inside tumor cells—in order to understand what is driving the cancer and to identify less toxic and more effective treatment options.