
Bioinformatics and Genome Analysis Course. Izmir International Biomedicine and Genome Institute, Izmir, Turkey. May 2–14, 2016


Typography geek? If you like the geometry and mathematics of these posters, you may enjoy something more lettered: the Type Peep Show: The Private Curves of Letters posters.

The art of Pi (pi), Phi (phi) and e

This section contains various artwork based on pi, phi and e that I created over the years. Pi Day and Pi Approximation Day artwork is kept separate.

The accidental similarity number (ASN) is a kind of overlap between numbers. I came up with this concept after creating typographical art about the i-ness of pi.

The poster shows the accidental similarity number for pi, phi and e.

The accidental similarity number for pi, phi and e created from the first 1,000,000 digits of each number. (posters, BUY ARTWORK)

Gene Volume Control

Thu 11-06-2015

I was commissioned by Scientific American to create an information graphic based on Figure 9 in the landmark Nature Integrative analysis of 111 reference human epigenomes paper.

The original figure details the relationships between more than 100 sequenced epigenomes and genetic traits, including diseases like Crohn's and Alzheimer's. These relationships were shown as a heatmap in which each epigenome–trait cell depicted the P value associated with tissue-specific H3K4me1 epigenetic modification in regions of the genome associated with the trait.

Figure 9 from Integrative analysis of 111 reference human epigenomes (Nature (2015) 518 317–330). (details)

As much as I distrust network diagrams, in this case a network was the right way to show the data. The network was meticulously laid out by hand to draw attention to the layered groups of diseases and traits.

Network diagram redesign of the heatmap for a select set of traits. Only relationships with –log P > 3.9 are displayed. Appears on Graphic Science page in June 2015 issue of Scientific American. (details)

This was my second information graphic for the Graphic Science page. Last year, I illustrated the extent of differences in the gene sequence of humans, Denisovans, chimps and gorillas.

Sampling distributions and the bootstrap

Thu 11-06-2015

The bootstrap is a computational method that simulates new samples from observed data. These simulated samples can be used to determine how estimates from replicate experiments might be distributed and to answer questions about precision and bias.

Nature Methods Points of Significance column: Sampling distributions and the bootstrap. (read)

We discuss both the parametric and the non-parametric bootstrap. In the former, observed data are fit to a model and then new samples are drawn using the model. In the latter, no model assumption is made and simulated samples are drawn with replacement from the observed data.
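As a minimal sketch (not the column's code, and with made-up data), the non-parametric bootstrap of a sample mean amounts to resampling with replacement and collecting the statistic from each simulated sample:

```python
import random
import statistics

def bootstrap_means(data, n_boot=1000, seed=0):
    """Non-parametric bootstrap: resample the observed data with
    replacement and record the mean of each simulated sample."""
    rng = random.Random(seed)
    n = len(data)
    return [statistics.mean(rng.choices(data, k=n)) for _ in range(n_boot)]

# Illustrative data, not from the column
data = [4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.2, 4.4]
means = bootstrap_means(data)

# The spread of the bootstrap means estimates the standard error of the mean
se = statistics.stdev(means)
```

The distribution of `means` approximates the sampling distribution of the mean, without assuming any model for the data.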

Kulesa, A., Krzywinski, M., Blainey, P. & Altman, N. (2015) Points of Significance: Sampling distributions and the bootstrap. Nature Methods 12:477–478.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Importance of being uncertain. Nature Methods 10:809-810.

Bayesian statistics

Thu 30-04-2015

Building on last month's column about Bayes' Theorem, we introduce Bayesian inference and contrast it to frequentist inference.

Given a hypothesis and a model, the frequentist calculates the probability of different data generated by the model, P(data|model). When the probability of obtaining the observed data under the model is small (e.g. falls below alpha = 0.05), the frequentist rejects the hypothesis.

Nature Methods Points of Significance column: Bayesian Statistics. (read)

In contrast, the Bayesian makes direct probability statements about the model by calculating P(model|data) — that is, the probability that the model is correct, given the observed data. With this approach it is possible to compare the probabilities of different models and identify the one that is most compatible with the data.

The Bayesian approach is actually more intuitive. From the frequentist point of view, the probability used to assess the veracity of a hypothesis, P(data|model), commonly referred to as the P value, does not tell us the probability that the model is correct. In fact, the P value is commonly misinterpreted as the probability that the hypothesis is right. This is the so-called "prosecutor's fallacy", which confuses the conditional probability P(data|model) with P(model|data). It is the latter quantity that is more directly useful and is what the Bayesian calculates.
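A toy sketch of this contrast (the numbers and models are assumed for illustration, not taken from the column): given 7 heads in 10 coin tosses, the Bayesian can assign a posterior probability P(model|data) to each of two candidate models of the coin, something the P value alone cannot provide.

```python
from math import comb

# Assumed example: k = 7 heads in n = 10 tosses, two candidate models
# with equal prior probability.
n, k = 10, 7
models = {"fair (p=0.5)": 0.5, "biased (p=0.8)": 0.8}
prior = {m: 0.5 for m in models}

def likelihood(p):
    """P(data|model) under a binomial model with head probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# P(model|data) via Bayes' theorem: likelihood x prior, normalized
evidence = sum(likelihood(p) * prior[m] for m, p in models.items())
posterior = {m: likelihood(p) * prior[m] / evidence
             for m, p in models.items()}
```

Here `posterior` is a direct probability statement about each model given the data, which is exactly the quantity the frequentist framework does not supply.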

Puga, J.L., Krzywinski, M. & Altman, N. (2015) Points of Significance: Bayes' Theorem. Nature Methods 12:277–278.


Bayes' Theorem

Wed 22-04-2015

In our first column on Bayesian statistics, we introduce conditional probabilities and Bayes' theorem

P(B|A) = P(A|B) × P(B) / P(A)

This relationship between conditional probabilities P(B|A) and P(A|B) is central in Bayesian statistics. We illustrate how Bayes' theorem can be used to quickly calculate useful probabilities that are more difficult to conceptualize within a frequentist framework.
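One classic illustration of such a calculation is diagnostic testing. The numbers below are assumed for the sketch, not taken from the column: a disease with 1% prevalence and a test with 95% sensitivity and 90% specificity.

```python
# Assumed illustrative numbers
p_disease = 0.01                # prevalence, P(disease)
p_pos_given_disease = 0.95      # sensitivity, P(+|disease)
p_pos_given_healthy = 0.10      # false-positive rate, 1 - specificity

# P(+) by the law of total probability
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' theorem: P(disease|+) = P(+|disease) * P(disease) / P(+)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # about 0.088
```

Despite the positive result, the probability of disease is under 9% — a quantity that is awkward to reach within a frequentist framing but falls out of Bayes' theorem directly.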

Nature Methods Points of Significance column: Bayes' Theorem. (read)

Using Bayes' theorem, we can incorporate our beliefs and prior experience about a system and update them when data are collected.

Puga, J.L., Krzywinski, M. & Altman, N. (2015) Points of Significance: Bayes' Theorem. Nature Methods 12:277–278.