Martin Krzywinski

*Staff Scientist, Bioinformatics*

Genome Sciences Centre

BC Cancer Agency

570 W 7th Avenue

Vancouver BC V5Z 4S6

Canada


With some very smart people, I work on problems in data visualization applied to cancer research and genome analysis. Previously I was involved in fingerprint mapping, system administration, computer security, fashion photography, medical imaging and LHC particle physics. My work is guided by a need to rationalize, make things pretty, combine science with art, mince words, find good questions and help make connections between ideas. All while exercising snark.

# Haploid Genome Coverage Tables

**CUSTOM DEPTH AND PLOIDY** To create a table with a specific ploidy (e.g. 12) and haploid-equivalent depth (see below), e.g. `200 \times`, use the coverage page URL with `aneuploidy` and `depth` parameters.

## sequencing redundancy for a haploid genome

**IMPORTANT** The redundancy is always calculated using the size of the haploid genome. For example, if we collect 600 Gb of reads, our sequencing redundancy is `600 / 3 = 200 \times`. We've used the length of the haploid genome (3 Gb) in the calculation. If we now apply this `200 \times` sequencing to a diploid genome, our average coverage will not be `200 \times` but slightly less than `100 \times`.

Given a location `x` defined in the context of `h` chromosomes, the probability that position `x` is covered at least `\phi` times is `P_{h,\phi}`, given by $$ P_{h,\phi} = \left( 1 - \sum_{k=0}^{\phi-1} \frac{1}{k!} \left( \frac{\rho}{h} \right)^k e^{-\rho/h} \right)^h \tag{1} $$

For more details, see *Wendl, M.C. and R.K. Wilson. 2008. Aspects of coverage in medical DNA sequencing. BMC Bioinformatics 9: 239.*

For a given sequencing redundancy `\rho` of a haploid genome (with `\rho` calculated, as above, using the haploid genome length), the fraction of the genome covered by at least `\phi` reads is `P_{h,\phi}`, the fraction covered by fewer than `\phi` reads is `1-P_{h,\phi}`, and the fraction covered by exactly `\phi` reads is `P_{h,\phi} - P_{h,\phi+1}`. Entries for which the fractional coverage is `< 10^{-5}` are not shown.
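For a quick check, equation (1) can be evaluated directly. The sketch below is a Python translation of the calculation (the page itself distributes Perl scripts; the function names here are my own):

```python
import math

def p_coverage(rho, h, phi):
    """P(h, phi) from equation (1): the per-homolog depth at a position is
    Poisson-distributed with mean rho/h."""
    lam = rho / h
    # P(X < phi) for X ~ Poisson(lam)
    cdf = sum(lam**k * math.exp(-lam) / math.factorial(k) for k in range(phi))
    return (1.0 - cdf) ** h

def p_exactly(rho, h, phi):
    """Fraction covered by exactly phi reads: P(h, phi) - P(h, phi + 1)."""
    return p_coverage(rho, h, phi) - p_coverage(rho, h, phi + 1)

# Reproduces the 1-fold table: P(1, 1) = 0.6321, exactly-2 fraction = 0.1839
print(round(p_coverage(1, 1, 1), 4), round(p_exactly(1, 1, 2), 4))
```

The values agree with the 1-fold table below to the four decimal places shown.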

A rudimentary Monte Carlo simulation of genome coverage is also available, and is a useful supplement to the exact probabilities shown here.
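A rudimentary simulation along those lines might look like the following sketch. The genome length, read length and circular-genome simplification are illustrative assumptions, not the page's script:

```python
import random

def mc_coverage(genome_len=100_000, read_len=100, rho=3, phi=1, seed=1):
    """Fraction of a haploid genome covered by >= phi reads, estimated by
    dropping reads uniformly at random. The genome is treated as circular
    to avoid edge effects; all sizes here are illustrative."""
    random.seed(seed)
    depth = [0] * genome_len
    for _ in range(rho * genome_len // read_len):
        start = random.randrange(genome_len)
        for i in range(read_len):
            depth[(start + i) % genome_len] += 1
    return sum(d >= phi for d in depth) / genome_len

# Should fall close to the analytical value P(1, 1) = 1 - exp(-3) = 0.9502
print(mc_coverage())
```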

For example, `http://mkweb.bcgsc.ca/coverage/?aneuploidy=12&depth=200` produces the table for ploidy 12 at `200 \times` haploid-equivalent depth.

**EXAMPLE 1**

Suppose you carried out 3-fold redundant (`\rho=3`) sequencing of a haploid genome (`h=1`). 95.02% of the genome will be covered by at least one read (`P_{1,1}`) while 22.40% will be covered by exactly 3 reads (`P_{1,3} - P_{1,4}`).

**EXAMPLE 2**

You are sequencing a sample with a tumor content of 25% and are interested in the depth of sequencing required to detect heterozygous mutations in the tumor. This scenario is equivalent to a genome with aneuploidy `h = 8`—the mutant allele is one of 8 copies at its locus. If you sequence at `\rho = 200`, then 95% of the bases will be covered at a depth of at least `\phi = 14` (`P_{8,14} = 0.9494`). If you're satisfied with `\phi = 5`, then you only need `\rho = 100`, since now `P_{8,5} = 0.9580`.
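The numbers in this example can be reproduced from equation (1); the helper below is an assumed re-implementation, not the page's code:

```python
import math

def p_coverage(rho, h, phi):
    # Equation (1): per-homolog depth is Poisson with mean rho/h
    lam = rho / h
    cdf = sum(lam**k * math.exp(-lam) / math.factorial(k) for k in range(phi))
    return (1.0 - cdf) ** h

# 25% tumor content, heterozygous mutation: the variant allele is 1 of h = 8 copies
print(round(p_coverage(200, 8, 14), 4))  # 0.9494
print(round(p_coverage(100, 8, 5), 4))   # 0.9580
```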

**ANALYTICAL vs STOCHASTIC**

View plot that compares analytical vs stochastic results.

**HAPLOID vs DIPLOID**

View plot that compares 100x and 200x coverage of haploid and diploid genomes.

**CODE**

Download Perl scripts for analytical (to produce the tables below for any `\rho`) and stochastic coverage calculations.

The tables below report `P_{h,\phi}` for a haploid genome at sequencing redundancy `\rho` = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 25, 50, 75 and 100.

### sequencing redundancy 1-fold (`\rho / h = 1.0`)

`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.3679 | 0.0000 | 1.0000 |
1 | 0.3679 | 0.3679 | 0.6321 |
2 | 0.1839 | 0.7358 | 0.2642 |
3 | 0.0613 | 0.9197 | 0.0803 |
4 | 0.0153 | 0.9810 | 0.0190 |
5 | 0.0031 | 0.9963 | 0.0037 |

### sequencing redundancy 2-fold (`\rho / h = 2.0`)

`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.1353 | 0.0000 | 1.0000 |
1 | 0.2707 | 0.1353 | 0.8647 |
2 | 0.2707 | 0.4060 | 0.5940 |
3 | 0.1804 | 0.6767 | 0.3233 |
4 | 0.0902 | 0.8571 | 0.1429 |
5 | 0.0361 | 0.9473 | 0.0527 |
6 | 0.0120 | 0.9834 | 0.0166 |
7 | 0.0034 | 0.9955 | 0.0045 |
8 | 0.0009 | 0.9989 | 0.0011 |
9 | 0.0002 | 0.9998 | 0.0002 |
10 | 0.0000 | 1.0000 | 0.0000 |

### sequencing redundancy 3-fold (`\rho / h = 3.0`)

`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.0498 | 0.0000 | 1.0000 |
1 | 0.1494 | 0.0498 | 0.9502 |
2 | 0.2240 | 0.1991 | 0.8009 |
3 | 0.2240 | 0.4232 | 0.5768 |
4 | 0.1680 | 0.6472 | 0.3528 |
5 | 0.1008 | 0.8153 | 0.1847 |
6 | 0.0504 | 0.9161 | 0.0839 |
7 | 0.0216 | 0.9665 | 0.0335 |
8 | 0.0081 | 0.9881 | 0.0119 |
9 | 0.0027 | 0.9962 | 0.0038 |
10 | 0.0008 | 0.9989 | 0.0011 |
11 | 0.0002 | 0.9997 | 0.0003 |
12 | 0.0001 | 0.9999 | 0.0001 |
13 | 0.0000 | 1.0000 | 0.0000 |

### sequencing redundancy 4-fold (`\rho / h = 4.0`)

`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.0183 | 0.0000 | 1.0000 |
1 | 0.0733 | 0.0183 | 0.9817 |
2 | 0.1465 | 0.0916 | 0.9084 |
3 | 0.1954 | 0.2381 | 0.7619 |
4 | 0.1954 | 0.4335 | 0.5665 |
5 | 0.1563 | 0.6288 | 0.3712 |
6 | 0.1042 | 0.7851 | 0.2149 |
7 | 0.0595 | 0.8893 | 0.1107 |
8 | 0.0298 | 0.9489 | 0.0511 |
9 | 0.0132 | 0.9786 | 0.0214 |
10 | 0.0053 | 0.9919 | 0.0081 |
11 | 0.0019 | 0.9972 | 0.0028 |
12 | 0.0006 | 0.9991 | 0.0009 |
13 | 0.0002 | 0.9997 | 0.0003 |
14 | 0.0001 | 0.9999 | 0.0001 |
15 | 0.0000 | 1.0000 | 0.0000 |

### sequencing redundancy 5-fold (`\rho / h = 5.0`)

`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.0067 | 0.0000 | 1.0000 |
1 | 0.0337 | 0.0067 | 0.9933 |
2 | 0.0842 | 0.0404 | 0.9596 |
3 | 0.1404 | 0.1247 | 0.8753 |
4 | 0.1755 | 0.2650 | 0.7350 |
5 | 0.1755 | 0.4405 | 0.5595 |
6 | 0.1462 | 0.6160 | 0.3840 |
7 | 0.1044 | 0.7622 | 0.2378 |
8 | 0.0653 | 0.8666 | 0.1334 |
9 | 0.0363 | 0.9319 | 0.0681 |
10 | 0.0181 | 0.9682 | 0.0318 |
11 | 0.0082 | 0.9863 | 0.0137 |
12 | 0.0034 | 0.9945 | 0.0055 |
13 | 0.0013 | 0.9980 | 0.0020 |
14 | 0.0005 | 0.9993 | 0.0007 |
15 | 0.0002 | 0.9998 | 0.0002 |
16 | 0.0000 | 0.9999 | 0.0001 |
17 | 0.0000 | 1.0000 | 0.0000 |

### sequencing redundancy 6-fold (`\rho / h = 6.0`)

`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.0025 | 0.0000 | 1.0000 |
1 | 0.0149 | 0.0025 | 0.9975 |
2 | 0.0446 | 0.0174 | 0.9826 |
3 | 0.0892 | 0.0620 | 0.9380 |
4 | 0.1339 | 0.1512 | 0.8488 |
5 | 0.1606 | 0.2851 | 0.7149 |
6 | 0.1606 | 0.4457 | 0.5543 |
7 | 0.1377 | 0.6063 | 0.3937 |
8 | 0.1033 | 0.7440 | 0.2560 |
9 | 0.0688 | 0.8472 | 0.1528 |
10 | 0.0413 | 0.9161 | 0.0839 |
11 | 0.0225 | 0.9574 | 0.0426 |
12 | 0.0113 | 0.9799 | 0.0201 |
13 | 0.0052 | 0.9912 | 0.0088 |
14 | 0.0022 | 0.9964 | 0.0036 |
15 | 0.0009 | 0.9986 | 0.0014 |
16 | 0.0003 | 0.9995 | 0.0005 |
17 | 0.0001 | 0.9998 | 0.0002 |
18 | 0.0000 | 0.9999 | 0.0001 |
19 | 0.0000 | 1.0000 | 0.0000 |

### sequencing redundancy 7-fold (`\rho / h = 7.0`)

`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.0009 | 0.0000 | 1.0000 |
1 | 0.0064 | 0.0009 | 0.9991 |
2 | 0.0223 | 0.0073 | 0.9927 |
3 | 0.0521 | 0.0296 | 0.9704 |
4 | 0.0912 | 0.0818 | 0.9182 |
5 | 0.1277 | 0.1730 | 0.8270 |
6 | 0.1490 | 0.3007 | 0.6993 |
7 | 0.1490 | 0.4497 | 0.5503 |
8 | 0.1304 | 0.5987 | 0.4013 |
9 | 0.1014 | 0.7291 | 0.2709 |
10 | 0.0710 | 0.8305 | 0.1695 |
11 | 0.0452 | 0.9015 | 0.0985 |
12 | 0.0263 | 0.9467 | 0.0533 |
13 | 0.0142 | 0.9730 | 0.0270 |
14 | 0.0071 | 0.9872 | 0.0128 |
15 | 0.0033 | 0.9943 | 0.0057 |
16 | 0.0014 | 0.9976 | 0.0024 |
17 | 0.0006 | 0.9990 | 0.0010 |
18 | 0.0002 | 0.9996 | 0.0004 |
19 | 0.0001 | 0.9999 | 0.0001 |
20 | 0.0000 | 1.0000 | 0.0000 |
21 | 0.0000 | 1.0000 | 0.0000 |

### sequencing redundancy 8-fold (`\rho / h = 8.0`)

`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.0003 | 0.0000 | 1.0000 |
1 | 0.0027 | 0.0003 | 0.9997 |
2 | 0.0107 | 0.0030 | 0.9970 |
3 | 0.0286 | 0.0138 | 0.9862 |
4 | 0.0573 | 0.0424 | 0.9576 |
5 | 0.0916 | 0.0996 | 0.9004 |
6 | 0.1221 | 0.1912 | 0.8088 |
7 | 0.1396 | 0.3134 | 0.6866 |
8 | 0.1396 | 0.4530 | 0.5470 |
9 | 0.1241 | 0.5925 | 0.4075 |
10 | 0.0993 | 0.7166 | 0.2834 |
11 | 0.0722 | 0.8159 | 0.1841 |
12 | 0.0481 | 0.8881 | 0.1119 |
13 | 0.0296 | 0.9362 | 0.0638 |
14 | 0.0169 | 0.9658 | 0.0342 |
15 | 0.0090 | 0.9827 | 0.0173 |
16 | 0.0045 | 0.9918 | 0.0082 |
17 | 0.0021 | 0.9963 | 0.0037 |
18 | 0.0009 | 0.9984 | 0.0016 |
19 | 0.0004 | 0.9993 | 0.0007 |
20 | 0.0002 | 0.9997 | 0.0003 |
21 | 0.0001 | 0.9999 | 0.0001 |
22 | 0.0000 | 1.0000 | 0.0000 |
23 | 0.0000 | 1.0000 | 0.0000 |

### sequencing redundancy 9-fold (`\rho / h = 9.0`)

`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.0001 | 0.0000 | 1.0000 |
1 | 0.0011 | 0.0001 | 0.9999 |
2 | 0.0050 | 0.0012 | 0.9988 |
3 | 0.0150 | 0.0062 | 0.9938 |
4 | 0.0337 | 0.0212 | 0.9788 |
5 | 0.0607 | 0.0550 | 0.9450 |
6 | 0.0911 | 0.1157 | 0.8843 |
7 | 0.1171 | 0.2068 | 0.7932 |
8 | 0.1318 | 0.3239 | 0.6761 |
9 | 0.1318 | 0.4557 | 0.5443 |
10 | 0.1186 | 0.5874 | 0.4126 |
11 | 0.0970 | 0.7060 | 0.2940 |
12 | 0.0728 | 0.8030 | 0.1970 |
13 | 0.0504 | 0.8758 | 0.1242 |
14 | 0.0324 | 0.9261 | 0.0739 |
15 | 0.0194 | 0.9585 | 0.0415 |
16 | 0.0109 | 0.9780 | 0.0220 |
17 | 0.0058 | 0.9889 | 0.0111 |
18 | 0.0029 | 0.9947 | 0.0053 |
19 | 0.0014 | 0.9976 | 0.0024 |
20 | 0.0006 | 0.9989 | 0.0011 |
21 | 0.0003 | 0.9996 | 0.0004 |
22 | 0.0001 | 0.9998 | 0.0002 |
23 | 0.0000 | 0.9999 | 0.0001 |
24 | 0.0000 | 1.0000 | 0.0000 |

### sequencing redundancy 10-fold (`\rho / h = 10.0`)

`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
0 | 0.0000 | 0.0000 | 1.0000 |
1 | 0.0005 | 0.0000 | 1.0000 |
2 | 0.0023 | 0.0005 | 0.9995 |
3 | 0.0076 | 0.0028 | 0.9972 |
4 | 0.0189 | 0.0103 | 0.9897 |
5 | 0.0378 | 0.0293 | 0.9707 |
6 | 0.0631 | 0.0671 | 0.9329 |
7 | 0.0901 | 0.1301 | 0.8699 |
8 | 0.1126 | 0.2202 | 0.7798 |
9 | 0.1251 | 0.3328 | 0.6672 |
10 | 0.1251 | 0.4579 | 0.5421 |
11 | 0.1137 | 0.5830 | 0.4170 |
12 | 0.0948 | 0.6968 | 0.3032 |
13 | 0.0729 | 0.7916 | 0.2084 |
14 | 0.0521 | 0.8645 | 0.1355 |
15 | 0.0347 | 0.9165 | 0.0835 |
16 | 0.0217 | 0.9513 | 0.0487 |
17 | 0.0128 | 0.9730 | 0.0270 |
18 | 0.0071 | 0.9857 | 0.0143 |
19 | 0.0037 | 0.9928 | 0.0072 |
20 | 0.0019 | 0.9965 | 0.0035 |
21 | 0.0009 | 0.9984 | 0.0016 |
22 | 0.0004 | 0.9993 | 0.0007 |
23 | 0.0002 | 0.9997 | 0.0003 |
24 | 0.0001 | 0.9999 | 0.0001 |
25 | 0.0000 | 1.0000 | 0.0000 |
26 | 0.0000 | 1.0000 | 0.0000 |

### sequencing redundancy 20-fold (`\rho / h = 20.0`)

`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
4 | 0.0000 | 0.0000 | 1.0000 |
5 | 0.0001 | 0.0000 | 1.0000 |
6 | 0.0002 | 0.0001 | 0.9999 |
7 | 0.0005 | 0.0003 | 0.9997 |
8 | 0.0013 | 0.0008 | 0.9992 |
9 | 0.0029 | 0.0021 | 0.9979 |
10 | 0.0058 | 0.0050 | 0.9950 |
11 | 0.0106 | 0.0108 | 0.9892 |
12 | 0.0176 | 0.0214 | 0.9786 |
13 | 0.0271 | 0.0390 | 0.9610 |
14 | 0.0387 | 0.0661 | 0.9339 |
15 | 0.0516 | 0.1049 | 0.8951 |
16 | 0.0646 | 0.1565 | 0.8435 |
17 | 0.0760 | 0.2211 | 0.7789 |
18 | 0.0844 | 0.2970 | 0.7030 |
19 | 0.0888 | 0.3814 | 0.6186 |
20 | 0.0888 | 0.4703 | 0.5297 |
21 | 0.0846 | 0.5591 | 0.4409 |
22 | 0.0769 | 0.6437 | 0.3563 |
23 | 0.0669 | 0.7206 | 0.2794 |
24 | 0.0557 | 0.7875 | 0.2125 |
25 | 0.0446 | 0.8432 | 0.1568 |
26 | 0.0343 | 0.8878 | 0.1122 |
27 | 0.0254 | 0.9221 | 0.0779 |
28 | 0.0181 | 0.9475 | 0.0525 |
29 | 0.0125 | 0.9657 | 0.0343 |
30 | 0.0083 | 0.9782 | 0.0218 |
31 | 0.0054 | 0.9865 | 0.0135 |
32 | 0.0034 | 0.9919 | 0.0081 |
33 | 0.0020 | 0.9953 | 0.0047 |
34 | 0.0012 | 0.9973 | 0.0027 |
35 | 0.0007 | 0.9985 | 0.0015 |
36 | 0.0004 | 0.9992 | 0.0008 |
37 | 0.0002 | 0.9996 | 0.0004 |
38 | 0.0001 | 0.9998 | 0.0002 |
39 | 0.0001 | 0.9999 | 0.0001 |
40 | 0.0000 | 0.9999 | 0.0001 |
41 | 0.0000 | 1.0000 | 0.0000 |
42 | 0.0000 | 1.0000 | 0.0000 |

### sequencing redundancy 25-fold (`\rho / h = 25.0`)

`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
7 | 0.0000 | 0.0000 | 1.0000 |
8 | 0.0001 | 0.0000 | 1.0000 |
9 | 0.0001 | 0.0001 | 0.9999 |
10 | 0.0004 | 0.0002 | 0.9998 |
11 | 0.0008 | 0.0006 | 0.9994 |
12 | 0.0017 | 0.0014 | 0.9986 |
13 | 0.0033 | 0.0031 | 0.9969 |
14 | 0.0059 | 0.0065 | 0.9935 |
15 | 0.0099 | 0.0124 | 0.9876 |
16 | 0.0155 | 0.0223 | 0.9777 |
17 | 0.0227 | 0.0377 | 0.9623 |
18 | 0.0316 | 0.0605 | 0.9395 |
19 | 0.0415 | 0.0920 | 0.9080 |
20 | 0.0519 | 0.1336 | 0.8664 |
21 | 0.0618 | 0.1855 | 0.8145 |
22 | 0.0702 | 0.2473 | 0.7527 |
23 | 0.0763 | 0.3175 | 0.6825 |
24 | 0.0795 | 0.3939 | 0.6061 |
25 | 0.0795 | 0.4734 | 0.5266 |
26 | 0.0765 | 0.5529 | 0.4471 |
27 | 0.0708 | 0.6294 | 0.3706 |
28 | 0.0632 | 0.7002 | 0.2998 |
29 | 0.0545 | 0.7634 | 0.2366 |
30 | 0.0454 | 0.8179 | 0.1821 |
31 | 0.0366 | 0.8633 | 0.1367 |
32 | 0.0286 | 0.8999 | 0.1001 |
33 | 0.0217 | 0.9285 | 0.0715 |
34 | 0.0159 | 0.9502 | 0.0498 |
35 | 0.0114 | 0.9662 | 0.0338 |
36 | 0.0079 | 0.9775 | 0.0225 |
37 | 0.0053 | 0.9854 | 0.0146 |
38 | 0.0035 | 0.9908 | 0.0092 |
39 | 0.0023 | 0.9943 | 0.0057 |
40 | 0.0014 | 0.9966 | 0.0034 |
41 | 0.0009 | 0.9980 | 0.0020 |
42 | 0.0005 | 0.9988 | 0.0012 |
43 | 0.0003 | 0.9993 | 0.0007 |
44 | 0.0002 | 0.9996 | 0.0004 |
45 | 0.0001 | 0.9998 | 0.0002 |
46 | 0.0001 | 0.9999 | 0.0001 |
47 | 0.0000 | 0.9999 | 0.0001 |
48 | 0.0000 | 1.0000 | 0.0000 |
49 | 0.0000 | 1.0000 | 0.0000 |

### sequencing redundancy 50-fold (`\rho / h = 50.0`)

`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
24 | 0.0000 | 0.0000 | 1.0000 |
25 | 0.0000 | 0.0000 | 1.0000 |
26 | 0.0001 | 0.0001 | 0.9999 |
27 | 0.0001 | 0.0001 | 0.9999 |
28 | 0.0002 | 0.0003 | 0.9997 |
29 | 0.0004 | 0.0005 | 0.9995 |
30 | 0.0007 | 0.0009 | 0.9991 |
31 | 0.0011 | 0.0016 | 0.9984 |
32 | 0.0017 | 0.0027 | 0.9973 |
33 | 0.0026 | 0.0044 | 0.9956 |
34 | 0.0038 | 0.0070 | 0.9930 |
35 | 0.0054 | 0.0108 | 0.9892 |
36 | 0.0075 | 0.0162 | 0.9838 |
37 | 0.0102 | 0.0238 | 0.9762 |
38 | 0.0134 | 0.0340 | 0.9660 |
39 | 0.0172 | 0.0474 | 0.9526 |
40 | 0.0215 | 0.0646 | 0.9354 |
41 | 0.0262 | 0.0861 | 0.9139 |
42 | 0.0312 | 0.1123 | 0.8877 |
43 | 0.0363 | 0.1435 | 0.8565 |
44 | 0.0412 | 0.1798 | 0.8202 |
45 | 0.0458 | 0.2210 | 0.7790 |
46 | 0.0498 | 0.2669 | 0.7331 |
47 | 0.0530 | 0.3167 | 0.6833 |
48 | 0.0552 | 0.3697 | 0.6303 |
49 | 0.0563 | 0.4249 | 0.5751 |
50 | 0.0563 | 0.4812 | 0.5188 |
51 | 0.0552 | 0.5375 | 0.4625 |
52 | 0.0531 | 0.5927 | 0.4073 |
53 | 0.0501 | 0.6458 | 0.3542 |
54 | 0.0464 | 0.6959 | 0.3041 |
55 | 0.0422 | 0.7423 | 0.2577 |
56 | 0.0376 | 0.7845 | 0.2155 |
57 | 0.0330 | 0.8221 | 0.1779 |
58 | 0.0285 | 0.8551 | 0.1449 |
59 | 0.0241 | 0.8836 | 0.1164 |
60 | 0.0201 | 0.9077 | 0.0923 |
61 | 0.0165 | 0.9278 | 0.0722 |
62 | 0.0133 | 0.9443 | 0.0557 |
63 | 0.0105 | 0.9576 | 0.0424 |
64 | 0.0082 | 0.9682 | 0.0318 |
65 | 0.0063 | 0.9764 | 0.0236 |
66 | 0.0048 | 0.9827 | 0.0173 |
67 | 0.0036 | 0.9875 | 0.0125 |
68 | 0.0026 | 0.9911 | 0.0089 |
69 | 0.0019 | 0.9938 | 0.0062 |
70 | 0.0014 | 0.9957 | 0.0043 |
71 | 0.0010 | 0.9970 | 0.0030 |
72 | 0.0007 | 0.9980 | 0.0020 |
73 | 0.0005 | 0.9987 | 0.0013 |
74 | 0.0003 | 0.9991 | 0.0009 |
75 | 0.0002 | 0.9994 | 0.0006 |
76 | 0.0001 | 0.9996 | 0.0004 |
77 | 0.0001 | 0.9998 | 0.0002 |
78 | 0.0001 | 0.9999 | 0.0001 |
79 | 0.0000 | 0.9999 | 0.0001 |
80 | 0.0000 | 0.9999 | 0.0001 |
81 | 0.0000 | 1.0000 | 0.0000 |
82 | 0.0000 | 1.0000 | 0.0000 |
83 | 0.0000 | 1.0000 | 0.0000 |

### sequencing redundancy 75-fold (`\rho / h = 75.0`)

`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
42 | 0.0000 | 0.0000 | 1.0000 |
43 | 0.0000 | 0.0000 | 1.0000 |
44 | 0.0000 | 0.0000 | 1.0000 |
45 | 0.0001 | 0.0001 | 0.9999 |
46 | 0.0001 | 0.0001 | 0.9999 |
47 | 0.0001 | 0.0002 | 0.9998 |
48 | 0.0002 | 0.0004 | 0.9996 |
49 | 0.0003 | 0.0006 | 0.9994 |
50 | 0.0005 | 0.0009 | 0.9991 |
51 | 0.0007 | 0.0014 | 0.9986 |
52 | 0.0011 | 0.0021 | 0.9979 |
53 | 0.0015 | 0.0032 | 0.9968 |
54 | 0.0021 | 0.0047 | 0.9953 |
55 | 0.0028 | 0.0068 | 0.9932 |
56 | 0.0038 | 0.0096 | 0.9904 |
57 | 0.0050 | 0.0134 | 0.9866 |
58 | 0.0065 | 0.0184 | 0.9816 |
59 | 0.0082 | 0.0249 | 0.9751 |
60 | 0.0103 | 0.0331 | 0.9669 |
61 | 0.0126 | 0.0433 | 0.9567 |
62 | 0.0153 | 0.0560 | 0.9440 |
63 | 0.0182 | 0.0712 | 0.9288 |
64 | 0.0213 | 0.0894 | 0.9106 |
65 | 0.0246 | 0.1107 | 0.8893 |
66 | 0.0279 | 0.1353 | 0.8647 |
67 | 0.0313 | 0.1632 | 0.8368 |
68 | 0.0345 | 0.1945 | 0.8055 |
69 | 0.0375 | 0.2290 | 0.7710 |
70 | 0.0402 | 0.2665 | 0.7335 |
71 | 0.0424 | 0.3066 | 0.6934 |
72 | 0.0442 | 0.3490 | 0.6510 |
73 | 0.0454 | 0.3932 | 0.6068 |
74 | 0.0460 | 0.4386 | 0.5614 |
75 | 0.0460 | 0.4846 | 0.5154 |
76 | 0.0454 | 0.5307 | 0.4693 |
77 | 0.0442 | 0.5761 | 0.4239 |
78 | 0.0425 | 0.6203 | 0.3797 |
79 | 0.0404 | 0.6628 | 0.3372 |
80 | 0.0379 | 0.7032 | 0.2968 |
81 | 0.0350 | 0.7411 | 0.2589 |
82 | 0.0321 | 0.7761 | 0.2239 |
83 | 0.0290 | 0.8082 | 0.1918 |
84 | 0.0259 | 0.8371 | 0.1629 |
85 | 0.0228 | 0.8630 | 0.1370 |
86 | 0.0199 | 0.8858 | 0.1142 |
87 | 0.0172 | 0.9057 | 0.0943 |
88 | 0.0146 | 0.9229 | 0.0771 |
89 | 0.0123 | 0.9375 | 0.0625 |
90 | 0.0103 | 0.9498 | 0.0502 |
91 | 0.0085 | 0.9601 | 0.0399 |
92 | 0.0069 | 0.9685 | 0.0315 |
93 | 0.0056 | 0.9754 | 0.0246 |
94 | 0.0044 | 0.9810 | 0.0190 |
95 | 0.0035 | 0.9854 | 0.0146 |
96 | 0.0027 | 0.9889 | 0.0111 |
97 | 0.0021 | 0.9917 | 0.0083 |
98 | 0.0016 | 0.9938 | 0.0062 |
99 | 0.0012 | 0.9954 | 0.0046 |
100 | 0.0009 | 0.9966 | 0.0034 |
101 | 0.0007 | 0.9976 | 0.0024 |
102 | 0.0005 | 0.9983 | 0.0017 |
103 | 0.0004 | 0.9988 | 0.0012 |
104 | 0.0003 | 0.9991 | 0.0009 |
105 | 0.0002 | 0.9994 | 0.0006 |
106 | 0.0001 | 0.9996 | 0.0004 |
107 | 0.0001 | 0.9997 | 0.0003 |
108 | 0.0001 | 0.9998 | 0.0002 |
109 | 0.0000 | 0.9999 | 0.0001 |
110 | 0.0000 | 0.9999 | 0.0001 |
111 | 0.0000 | 0.9999 | 0.0001 |
112 | 0.0000 | 1.0000 | 0.0000 |
113 | 0.0000 | 1.0000 | 0.0000 |
114 | 0.0000 | 1.0000 | 0.0000 |
115 | 0.0000 | 1.0000 | 0.0000 |

### sequencing redundancy 100-fold (`\rho / h = 100.0`)

`\phi` | `P_{h,\phi} - P_{h,\phi+1}` | `1-P_{h,\phi}` | `P_{h,\phi}` |
---|---|---|---|
61 | 0.0000 | 0.0000 | 1.0000 |
62 | 0.0000 | 0.0000 | 1.0000 |
63 | 0.0000 | 0.0000 | 1.0000 |
64 | 0.0000 | 0.0000 | 1.0000 |
65 | 0.0000 | 0.0001 | 0.9999 |
66 | 0.0001 | 0.0001 | 0.9999 |
67 | 0.0001 | 0.0002 | 0.9998 |
68 | 0.0002 | 0.0003 | 0.9997 |
69 | 0.0002 | 0.0004 | 0.9996 |
70 | 0.0003 | 0.0007 | 0.9993 |
71 | 0.0004 | 0.0010 | 0.9990 |
72 | 0.0006 | 0.0014 | 0.9986 |
73 | 0.0008 | 0.0020 | 0.9980 |
74 | 0.0011 | 0.0028 | 0.9972 |
75 | 0.0015 | 0.0040 | 0.9960 |
76 | 0.0020 | 0.0055 | 0.9945 |
77 | 0.0026 | 0.0074 | 0.9926 |
78 | 0.0033 | 0.0100 | 0.9900 |
79 | 0.0042 | 0.0133 | 0.9867 |
80 | 0.0052 | 0.0175 | 0.9825 |
81 | 0.0064 | 0.0226 | 0.9774 |
82 | 0.0078 | 0.0291 | 0.9709 |
83 | 0.0094 | 0.0369 | 0.9631 |
84 | 0.0112 | 0.0463 | 0.9537 |
85 | 0.0132 | 0.0575 | 0.9425 |
86 | 0.0154 | 0.0708 | 0.9292 |
87 | 0.0176 | 0.0861 | 0.9139 |
88 | 0.0201 | 0.1038 | 0.8962 |
89 | 0.0225 | 0.1238 | 0.8762 |
90 | 0.0250 | 0.1463 | 0.8537 |
91 | 0.0275 | 0.1714 | 0.8286 |
92 | 0.0299 | 0.1989 | 0.8011 |
93 | 0.0322 | 0.2288 | 0.7712 |
94 | 0.0342 | 0.2610 | 0.7390 |
95 | 0.0360 | 0.2952 | 0.7048 |
96 | 0.0375 | 0.3312 | 0.6688 |
97 | 0.0387 | 0.3687 | 0.6313 |
98 | 0.0395 | 0.4074 | 0.5926 |
99 | 0.0399 | 0.4468 | 0.5532 |
100 | 0.0399 | 0.4867 | 0.5133 |
101 | 0.0395 | 0.5266 | 0.4734 |
102 | 0.0387 | 0.5660 | 0.4340 |
103 | 0.0376 | 0.6047 | 0.3953 |
104 | 0.0361 | 0.6423 | 0.3577 |
105 | 0.0344 | 0.6784 | 0.3216 |
106 | 0.0325 | 0.7128 | 0.2872 |
107 | 0.0303 | 0.7453 | 0.2547 |
108 | 0.0281 | 0.7756 | 0.2244 |
109 | 0.0258 | 0.8037 | 0.1963 |
110 | 0.0234 | 0.8294 | 0.1706 |
111 | 0.0211 | 0.8529 | 0.1471 |
112 | 0.0188 | 0.8740 | 0.1260 |
113 | 0.0167 | 0.8928 | 0.1072 |
114 | 0.0146 | 0.9095 | 0.0905 |
115 | 0.0127 | 0.9241 | 0.0759 |
116 | 0.0110 | 0.9368 | 0.0632 |
117 | 0.0094 | 0.9478 | 0.0522 |
118 | 0.0079 | 0.9572 | 0.0428 |
119 | 0.0067 | 0.9651 | 0.0349 |
120 | 0.0056 | 0.9718 | 0.0282 |
121 | 0.0046 | 0.9773 | 0.0227 |
122 | 0.0038 | 0.9819 | 0.0181 |
123 | 0.0031 | 0.9857 | 0.0143 |
124 | 0.0025 | 0.9888 | 0.0112 |
125 | 0.0020 | 0.9912 | 0.0088 |
126 | 0.0016 | 0.9932 | 0.0068 |
127 | 0.0012 | 0.9948 | 0.0052 |
128 | 0.0010 | 0.9960 | 0.0040 |
129 | 0.0007 | 0.9970 | 0.0030 |
130 | 0.0006 | 0.9977 | 0.0023 |
131 | 0.0004 | 0.9983 | 0.0017 |
132 | 0.0003 | 0.9987 | 0.0013 |
133 | 0.0003 | 0.9991 | 0.0009 |
134 | 0.0002 | 0.9993 | 0.0007 |
135 | 0.0001 | 0.9995 | 0.0005 |
136 | 0.0001 | 0.9996 | 0.0004 |
137 | 0.0001 | 0.9997 | 0.0003 |
138 | 0.0001 | 0.9998 | 0.0002 |
139 | 0.0000 | 0.9999 | 0.0001 |
140 | 0.0000 | 0.9999 | 0.0001 |
141 | 0.0000 | 0.9999 | 0.0001 |
142 | 0.0000 | 1.0000 | 0.0000 |
143 | 0.0000 | 1.0000 | 0.0000 |
144 | 0.0000 | 1.0000 | 0.0000 |
145 | 0.0000 | 1.0000 | 0.0000 |

*With four parameters I can fit an elephant and with five I can make him wiggle his trunk. —John von Neumann.*

By increasing the complexity of a model, it is easy to make it fit the data perfectly. Does this mean that the model is perfectly suitable? No.

When a model has a relatively large number of parameters, it is likely to be influenced by the noise in the data, which varies across observations, as much as any underlying trend, which remains the same. Such a model is overfitted—it matches training data well but does not generalize to new observations.

We discuss training, validation and testing data sets and how, with methods such as cross-validation, they can be used to avoid overfitting.
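The effect is easy to demonstrate numerically. The following sketch uses invented data (the trend, noise level and polynomial degrees are illustrative choices, not from the column):

```python
import numpy as np

# Hypothetical data: noisy samples of a quadratic trend (seeded for reproducibility)
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 30)
y = 1 + 2 * x - x**2 + rng.normal(0, 0.3, x.size)

# Held-out validation data drawn from the same underlying trend
xv = np.linspace(-0.95, 0.95, 30)
yv = 1 + 2 * xv - xv**2 + rng.normal(0, 0.3, xv.size)

def mse(deg):
    """Training and validation mean squared error of a degree-deg polynomial fit."""
    coef = np.polyfit(x, y, deg)
    train = np.mean((np.polyval(coef, x) - y) ** 2)
    valid = np.mean((np.polyval(coef, xv) - yv) ** 2)
    return train, valid

t2, v2 = mse(2)     # complexity matched to the trend
t10, v10 = mse(10)  # over-parameterized model
# The degree-10 fit always achieves lower *training* error (t10 < t2),
# but its validation error tracks the noise rather than the trend.
```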

Altman, N. & Krzywinski, M. (2016) Points of Significance: Model Selection and Overfitting. *Nature Methods* **13**:703-704.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Classifier evaluation. *Nature Methods* **13**:603-604.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. *Nature Methods* **13**:541-542.

*It is important to understand both what a classification metric expresses and what it hides.*

We examine various metrics used to assess the performance of a classifier. We show that a single metric is insufficient to capture performance—for any metric, a variety of scenarios yield the same value.

We also discuss ROC curves and the AUC, and how their interpretation changes with class balance.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Classifier evaluation. *Nature Methods* **13**:603-604.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. *Nature Methods* **13**:541-542.

Today is the day and it's hardly an approximation. In fact, `22/7` is 20% more accurate a representation of `\pi` than `3.14`!

Time to celebrate, graphically. This year I do so with perfect packing of circles that embody the approximation.

By warping the circle by 8% along one axis, we can create a shape whose ratio of circumference to diameter, taken as twice the average radius, is 22/7.
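This claim can be checked numerically, taking "average radius" to mean the mean of the two semi-axes. A sketch (the 8% warp and the polygonal quadrature are the only inputs):

```python
import math

def circ_over_mean_diameter(warp, n=100_000):
    """Perimeter of a circle squeezed by `warp` along one axis (semi-axes
    1 and 1 - warp), divided by twice the mean radius, i.e. by (a + b)."""
    a, b = 1.0, 1.0 - warp
    perimeter = 0.0
    for i in range(n):
        # polygonal approximation of the ellipse arc length
        t0 = 2 * math.pi * i / n
        t1 = 2 * math.pi * (i + 1) / n
        perimeter += math.hypot(a * (math.cos(t1) - math.cos(t0)),
                                b * (math.sin(t1) - math.sin(t0)))
    return perimeter / (a + b)

# A warp of 8% gives a ratio within about 1e-4 of 22/7 = 3.142857...
print(circ_over_mean_diameter(0.08))
```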

If you prefer something more accurate, check out art from previous `\pi` days: 2013 `\pi` Day, 2014 `\pi` Day, 2015 `\pi` Day and 2016 `\pi` Day.

*Regression can be used on categorical responses to estimate probabilities and to classify.*

The next column in our series on regression deals with how to classify categorical data.

We show how linear regression can be used for classification and demonstrate that it can be unreliable in the presence of outliers. Using a logistic regression, which fits a linear model to the log odds ratio, improves robustness.

Logistic regression is solved numerically and in most cases, the maximum-likelihood estimates are unique and optimal. However, when the classes are perfectly separable, the numerical approach fails because there is an infinite number of solutions.
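A toy numerical fit illustrates the idea. This sketch uses gradient ascent on the log-likelihood with invented data; it is not the solver used in the column (which would typically be iteratively reweighted least squares):

```python
import math

def logistic_fit(xs, ys, lr=0.5, steps=5000):
    """Fit P(y=1|x) = 1/(1+exp(-(b0 + b1*x))) by gradient ascent on the
    log-likelihood; a minimal sketch, not a production solver."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p        # gradient w.r.t. the intercept
            g1 += (y - p) * x  # gradient w.r.t. the slope
        b0 += lr * g0 / len(xs)
        b1 += lr * g1 / len(xs)
    return b0, b1

# Hypothetical overlapping classes: the maximum-likelihood estimates are finite
xs = [0.1, 0.5, 0.9, 1.1, 1.6, 2.0, 2.4, 3.0]
ys = [0, 0, 1, 0, 1, 1, 1, 1]
b0, b1 = logistic_fit(xs, ys)
# With perfectly separable classes, b1 would grow without bound instead of
# converging, which is why numerical solvers fail in that case.
```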

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. *Nature Methods* **13**:541-542.

Altman, N. & Krzywinski, M. (2016) Points of Significance: Regression diagnostics. *Nature Methods* **13**:385-386.

Altman, N. & Krzywinski, M. (2015) Points of Significance: Multiple Linear Regression. *Nature Methods* **12**:1103-1104.

Altman, N. & Krzywinski, M. (2015) Points of Significance: Simple Linear Regression. *Nature Methods* **12**:999-1000.

Genomic instability is one of the defining characteristics of cancer and within a tumor, which is an ever-evolving population of cells, there are many genomes. Mutations accumulate and propagate to create subpopulations and these groups of cells, called clones, may respond differently to treatment.

It is now possible to sequence individual cells within a tumor to create a profile of genomes. This profile changes with time, both in the kinds of mutation that are found and in their proportion in the overall population.

Clone evolution diagrams visualize these data. These diagrams can be qualitative, showing only trends, or quantitative, showing temporal and population changes to scale. In this Molecular Cell forum article I provide guidelines for drawing these diagrams, focusing on how to use color and navigational elements, such as grids, to clarify the relationships between clones.

I'd like to thank Maia Smith and Cydney Nielsen for assistance in preparing some of the figures in the paper.

Krzywinski, M. (2016) Visualizing Clonal Evolution in Cancer. *Molecular Cell* **62**:652-656.

*Limitations in print resolution and visual acuity impose limits on data density and detail.*

Your printer can print at 1,200 or 2,400 dots per inch. At reading distance, your reader can resolve about 200–300 lines per inch. This large gap between how finely we can print and how well we can see can create problems when we don't take visual acuity into account.

The column provides some guidelines—particularly relevant when showing whole-genome data, where the scale of elements of interest such as genes is below the visual acuity limit—for binning data so that they are represented by elements that can be comfortably discerned.

Krzywinski, M. (2016) Points of View: Binning high-resolution data. *Nature Methods* **13**:463.

*Residual plots can be used to validate assumptions about the regression model.*

Continuing with our series on regression, we look at how you can identify issues in your regression model.

The difference between the observed value and the model's predicted value is the residual, `r = y_i - \hat{y}_i`, a very useful quantity to identify the effects of outliers and trends in the data that might suggest your model is inadequate.

We also discuss normal probability plots (or Q-Q plots) and show how these can be used to check that the residuals are normally distributed, which is one of the assumptions of regression (constant variance being another).
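Residuals are simple to compute from any fit. The sketch below uses invented data with one outlier (the values and the plain least-squares helper are illustrative, not from the column):

```python
import statistics

# Hypothetical data with a linear trend plus one outlier at x = 8
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [1.1, 2.0, 2.9, 4.2, 5.0, 5.8, 7.1, 11.0]

# Least-squares fit y_hat = a + b * x
mx = statistics.mean(x)
my = statistics.mean(y)
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)
a = my - b * mx

# Residuals r_i = y_i - y_hat_i; the largest residual flags the outlier
residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]
print([round(r, 2) for r in residuals])
```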

Altman, N. & Krzywinski, M. (2016) Points of Significance: Analyzing outliers: Influential or nuisance? *Nature Methods* **13**:281-282.

Altman, N. & Krzywinski, M. (2015) Points of Significance: Multiple Linear Regression *Nature Methods* **12**:1103-1104.

Altman, N. & Krzywinski, M. (2015) Points of significance: Simple Linear Regression *Nature Methods* **12**:999-1000.

*Some outliers influence the regression fit more than others.*

This month our column addresses the effect that outliers have on linear regression.

You may be surprised, but not all outliers have the same influence on the fit (e.g. regression slope) or inference (e.g. confidence or prediction intervals). Outliers with large leverage—points that are far from the sample average—can have a very large effect. On the other hand, if the outlier is close to the sample average, it may not influence the regression slope at all.

Quantities such as Cook's distance and the so-called hat matrix, which defines leverage, are useful in assessing the effect of outliers.
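Leverage is cheap to compute from the design matrix. A sketch with NumPy and invented predictor values (the data are illustrative, not from the column):

```python
import numpy as np

# Hypothetical predictor values; the last point lies far from the sample mean
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 10.0])
X = np.column_stack([np.ones_like(x), x])  # design matrix with intercept

# Hat matrix H = X (X'X)^{-1} X'; fitted values are y_hat = H y,
# and the diagonal of H gives each observation's leverage
H = X @ np.linalg.inv(X.T @ X) @ X.T
leverage = np.diag(H)

print(leverage.round(3))
# The outlying x = 10 has the largest leverage, and the leverages
# always sum to the number of fitted parameters (here 2).
```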

Altman, N. & Krzywinski, M. (2015) Points of Significance: Multiple Linear Regression *Nature Methods* **12**:1103-1104.

Altman, N. & Krzywinski, M. (2015) Points of significance: Simple Linear Regression *Nature Methods* **12**:999-1000.

Chirp, chirp, chirp but much better looking.

If you like these, check out my other typographical art posters.

Celebrate `\pi` Day (March 14th) with colliding digits in space. This year, I celebrate the detection of gravitational waves at the LIGO lab and simulate the effect of gravity on masses created from the digits of `\pi`.

Some strange things can happen.

The art is featured in the Gravity of Pi article on the Scientific American SA Visual blog.

Check out art from previous years: 2013 `\pi` Day, 2014 `\pi` Day and 2015 `\pi` Day.

*Use alignment and consistency to untangle complex circuit diagrams.*

This month we apply the ideas presented in our column about drawing pathways to neural circuit diagrams. Neural circuits are networks of cells or regions, typically with a large number of variables, such as cell and neurotransmitter type.

We discuss how to effectively route arrows, how to avoid the pitfalls of redundant encoding, and suggest ways to incorporate emphasis into the layout.

Hunnicutt, B.J. & Krzywinski, M. (2016) Points of View: Neural circuit diagrams. *Nature Methods* **13**:189.

Hunnicutt, B.J. & Krzywinski, M. (2016) Points of View: Pathways. *Nature Methods* **13**:5.

Wong, B. (2010) Points of View: Gestalt principles (part 1). *Nature Methods* **7**:863.

Wong, B. (2010) Points of View: Gestalt principles (part 2). *Nature Methods* **7**:941.

*Apply visual grouping principles to add clarity to information flow in pathway diagrams.*

We draw on the Gestalt principles of connection, grouping and enclosure to construct practical guidelines for drawing pathways with a clear layout that maintains hierarchy.

We include tips on how to use negative space and align nodes to emphasize groups, and how to effectively draw curved arrows to clearly show paths.

Hunnicutt, B.J. & Krzywinski, M. (2016) Points of View: Pathways. *Nature Methods* **13**:5.

Wong, B. (2010) Points of View: Gestalt principles (part 1). *Nature Methods* **7**:863.

Wong, B. (2010) Points of View: Gestalt principles (part 2). *Nature Methods* **7**:941.

*When multiple variables are associated with a response, the interpretation of a prediction equation is seldom simple.*

This month we continue with the topic of regression and expand the discussion of simple linear regression to include more than one variable. As it turns out, although the analysis and presentation of results builds naturally on the case with a single variable, the interpretation of the results is confounded by the presence of correlation between the variables.

By extending the example of the relationship of weight and height—we now include jump height as a second variable that influences weight—we show that the regression coefficient estimates can be very inaccurate and even have the wrong sign when the predictors are correlated and only one is considered in the model.

Care must be taken! Accurate prediction of the response is not an indication that regression slopes reflect the true relationship between the predictors and the response.

Krzywinski, M. & Altman, N. (2015) Points of Significance: Multiple Linear Regression *Nature Methods* **12**:1103-1104.

Altman, N. & Krzywinski, M. (2015) Points of Significance: Simple Linear Regression *Nature Methods* **12**:999-1000.

Taught how Circos and hive plots can be used to show sequence relationships at Biotalent Functional Annotation of Genome Sequences Workshop at the Institute for Plant Genetics in Poznan, Poland.

Students generated images published in Fast Diploidization in Close Mesopolyploid Relatives of Arabidopsis.

Workshop materials: slides, handout, Circos and hive plot files.

Students also learned how to use hive plots to show synteny.

Mandakova, T. et al. (2010) Fast Diploidization in Close Mesopolyploid Relatives of Arabidopsis. The Plant Cell 22:2277-2290.

*Choose your own dust adventure!*

Nobody likes dusting but everyone should find dust interesting.

Working with Jeannie Hunnicutt and with Jen Christiansen's art direction, I created this month's Scientific American Graphic Science visualization based on a recent paper The Ecology of microscopic life in household dust.

We have also written about the making of the graphic, for those interested in how these things come together.

This was my third information graphic for the Graphic Science page. Unlike the previous ones, it's visually simple and ... interactive. Or, at least, as interactive as a printed page can be.

More of my Scientific American Graphic Science designs

Barberan A et al. (2015) The ecology of microscopic life in household dust. Proc. R. Soc. B 282: 20151139.

A very large list of named colors generated from combining some of the many lists that already exist (X11, Crayola, Raveling, Resene, wikipedia, xkcd, etc).

For each color, coordinates in RGB, HSV, XYZ, Lab and LCH space are given along with the 5 nearest, as measured with ΔE, named neighbours.

I also provide a web service. Simply call this URL with an RGB string.
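
As a sketch of the nearest-neighbour lookup: CIE76 ΔE is simply the Euclidean distance between Lab coordinates. The Lab values below are illustrative placeholders, not entries from the actual list:

```python
import math

# CIE76 colour difference: Euclidean distance between Lab coordinates
def delta_e(lab1, lab2):
    return math.dist(lab1, lab2)

# Toy list of named colours; the (L, a, b) values are illustrative only
named = {
    "black": (0.0, 0.0, 0.0),
    "white": (100.0, 0.0, 0.0),
    "red":   (53.2, 80.1, 67.2),
    "green": (87.7, -86.2, 83.2),
    "blue":  (32.3, 79.2, -107.9),
}

def nearest(lab, k=3):
    """Return the k named colours closest to `lab`, ranked by ΔE."""
    return sorted(named, key=lambda name: delta_e(lab, named[name]))[:k]
```

More perceptually uniform variants (ΔE94, ΔE2000) weight the Lab axes differently, but the lookup structure is the same.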

*It is possible to predict the values of unsampled data by using linear regression on correlated sample data.*

This month, we begin our column with a quote, shown here in its full context from Box's paper Science and Statistics.

In applying mathematics to subjects such as physics or statistics we make tentative assumptions about the real world which we know are false but which we believe may be useful nonetheless. The physicist knows that particles have mass and yet certain results, approximating what really happens, may be derived from the assumption that they do not. Equally, the statistician knows, for example, that in nature there never was a normal distribution, there never was a straight line, yet with normal and linear assumptions, known to be false, he can often derive results which match, to a useful approximation, those found in the real world.

—Box, G. J. Am. Stat. Assoc. 71, 791–799 (1976).

This column is our first in the series about regression. We show that regression and correlation are related concepts—they both quantify trends—and that the calculations for simple linear regression are essentially the same as for one-way ANOVA.

While correlation provides a measure of a specific kind of association between variables, regression allows us to fit correlated sample data to a model, which can be used to predict the values of unsampled data.
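
A quick numpy sketch of the idea (simulated data, not the column's example): fit a line to noisy correlated samples, then predict at an unsampled point:

```python
import numpy as np

# Simulated sample data with a true slope of 3 and intercept of 5 (made up)
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 3.0 * x + 5.0 + rng.normal(scale=1.0, size=x.size)

slope, intercept = np.polyfit(x, y, deg=1)  # least-squares line fit
y_pred = slope * 12.0 + intercept           # predict at an unsampled x = 12
```

Note that predicting outside the sampled range, as at x = 12 here, assumes the linear trend continues; the fit itself only vouches for the sampled interval.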

Altman, N. & Krzywinski, M. (2015) Points of Significance: Simple Linear Regression *Nature Methods* **12**:999-1000.

Altman, N. & Krzywinski, M. (2015) Points of significance: Association, correlation and causation *Nature Methods* **12**:899-900.

Krzywinski, M. & Altman, N. (2014) Points of significance: Analysis of variance (ANOVA) and blocking. Nature Methods 11:699-700.

*Correlation implies association, but not causation. Conversely, causation implies association, but not correlation.*

This month, we distinguish between association, correlation and causation.

Association, also called dependence, is a very general relationship: one variable provides information about the other. Correlation, on the other hand, is a specific kind of association: an increasing or decreasing trend. Not all associations are correlations. Moreover, causality can be connected only to association.

We discuss how correlation can be quantified using correlation coefficients (Pearson, Spearman) and show how spurious correlations can arise in random data as well as in very large independent data sets. For example, per capita cheese consumption is correlated with the number of people who died by becoming tangled in bedsheets.
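
A tiny numpy example of an association that is not a correlation: below, y is completely determined by x, yet the Pearson coefficient is essentially zero because the relationship has no increasing or decreasing trend:

```python
import numpy as np

# y is a deterministic function of x (perfect association), but the
# relationship is symmetric, with no overall increasing or decreasing trend
x = np.linspace(-1, 1, 201)
y = x ** 2

r = np.corrcoef(x, y)[0, 1]  # Pearson correlation coefficient
print(abs(r) < 1e-6)         # True: essentially zero correlation
```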

Altman, N. & Krzywinski, M. (2015) Points of Significance: Association, correlation and causation *Nature Methods* **12**:899-900.

*For making probabilistic inferences, a graph is worth a thousand words.*

This month we continue with the theme of Bayesian statistics and look at Bayesian networks, which combine network analysis with Bayesian statistics.

In a Bayesian network, nodes represent entities, such as genes, and the influence that one gene has over another is represented by an edge and probability table (or function). Bayes' Theorem is used to calculate the probability of a state for any entity.

In our previous columns about Bayesian statistics, we saw how new information (likelihood) can be incorporated into the probability model (prior) to update our belief of the state of the system (posterior). In the context of a Bayesian network, relationships called conditional dependencies can arise between nodes when information is added to the network. Using a small gene regulation network we show how these dependencies may connect nodes along different paths.

Puga, J.L., Krzywinski, M. & Altman, N. (2015) Points of Significance: Bayesian Statistics *Nature Methods* **12**:377-378.

Puga, J.L., Krzywinski, M. & Altman, N. (2015) Points of Significance: Bayes' Theorem *Nature Methods* **12**:277-278.

The Points of Significance column is on vacation this month.

Meanwhile, we're showing you how to manage small multiple plots in the Points of View column Unentangling Complex Plots.

Data in small multiples can vary in range, noise level and trend. Gregor McInerny and I show you how to deal with this by cropping and scaling the multiples to a different range to emphasize relative changes while preserving the context of the full data range to show absolute changes.

McInerny, G. & Krzywinski, M. (2015) Points of View: Unentangling complex plots. *Nature Methods* **12**:591.

The Jurassic World Creation Lab webpage shows you how one might create a dinosaur from a sample of DNA. First extract, sequence, assemble and fill in the gaps in the DNA and then incubate in an egg and wait.

With enough time, you'll grow your own brand new dinosaur. Or a stalk of corn ... with more teeth.

What went wrong? Let me explain.

You've seen bound volumes of printouts of the human reference genome. But what if at the Genome Sciences Center we wanted to print everything we sequence today?

I was commissioned by Scientific American to create an information graphic based on Figure 9 in the landmark Nature Integrative analysis of 111 reference human epigenomes paper.

The original figure details the relationships between more than 100 sequenced epigenomes and genetic traits, including diseases like Crohn's and Alzheimer's. These relationships were shown as a heatmap in which the epigenome-trait cell depicted the *P* value associated with tissue-specific H3K4me1 epigenetic modification in regions of the genome associated with the trait.

As much as I distrust network diagrams, in this case this was the right way to show the data. The network was meticulously laid out by hand to draw attention to the layered groups of diseases and traits.

This was my second information graphic for the Graphic Science page. Last year, I illustrated the extent of differences in the gene sequence of humans, Denisovans, chimps and gorillas.

The bootstrap is a computational method that simulates new samples from observed data. These simulated samples can be used to determine how estimates from replicate experiments might be distributed and answer questions about precision and bias.

We discuss both parametric and non-parametric bootstrap. In the former, observed data are fit to a model and then new samples are drawn using the model. In the latter, no model assumption is made and simulated samples are drawn with replacement from the observed data.
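
A minimal numpy sketch of the non-parametric flavour (simulated data; 2,000 resamples is an arbitrary choice):

```python
import numpy as np

# Observed sample (simulated here from a skewed distribution for illustration)
rng = np.random.default_rng(2)
data = rng.exponential(scale=2.0, size=100)

# Non-parametric bootstrap: resample with replacement, recompute the estimate
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(2000)
])

se = boot_means.std(ddof=1)                      # bootstrap SE of the mean
lo, hi = np.percentile(boot_means, [2.5, 97.5])  # percentile 95% CI
```

The parametric version would replace `rng.choice(data, ...)` with draws from a distribution fitted to `data`.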

Kulesa, A., Krzywinski, M., Blainey, P. & Altman, N (2015) Points of Significance: Sampling distributions and the bootstrap *Nature Methods* **12**:477-478.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Importance of being uncertain. *Nature Methods* **10**:809-810.

Building on last month's column about Bayes' Theorem, we introduce Bayesian inference and contrast it to frequentist inference.

Given a hypothesis and a model, the frequentist calculates the probability of different data generated by the model, *P*(data|model). When the probability of obtaining the observed data from the model is small (e.g. `alpha` = 0.05), the frequentist rejects the hypothesis.

In contrast, the Bayesian makes direct probability statements about the model by calculating *P*(model|data): the probability, given the observed data, that the model is correct. With this approach it is possible to compare the probabilities of different models and identify the one most compatible with the data.

The Bayesian approach is actually more intuitive. From the frequentist point of view, the probability used to assess the veracity of a hypothesis, *P*(data|model), commonly referred to as the *P* value, does not help us determine the probability that the model is correct. In fact, the *P* value is commonly misinterpreted as the probability that the hypothesis is right. This is the so-called "prosecutor's fallacy", which mistakes one conditional probability, *P*(data|model), for the other, *P*(model|data). It is the latter quantity that is more directly useful, and it is what the Bayesian calculates.

Puga, J.L., Krzywinski, M. & Altman, N. (2015) Points of Significance: Bayes' Theorem *Nature Methods* **12**:277-278.

In our first column on Bayesian statistics, we introduce conditional probabilities and Bayes' theorem

*P*(B|A) = *P*(A|B) × *P*(B) / *P*(A)

This relationship between conditional probabilities *P*(B|A) and *P*(A|B) is central in Bayesian statistics. We illustrate how Bayes' theorem can be used to quickly calculate useful probabilities that are more difficult to conceptualize within a frequentist framework.

Using Bayes' theorem, we can incorporate our beliefs and prior experience about a system and update it when data are collected.
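
As a worked example with illustrative numbers (not from the column), the theorem applied to a diagnostic test:

```python
# Bayes' theorem: P(B|A) = P(A|B) * P(B) / P(A)
# Illustrative numbers: disease prevalence 1%, test sensitivity 99%,
# false positive rate 5% (all assumed for the example)
p_B = 0.01             # P(disease)
p_A_given_B = 0.99     # P(positive | disease)
p_A_given_notB = 0.05  # P(positive | no disease)

# Total probability of a positive test, summed over both states
p_A = p_A_given_B * p_B + p_A_given_notB * (1 - p_B)

# Probability of disease given a positive test
p_B_given_A = p_A_given_B * p_B / p_A
print(round(p_B_given_A, 3))  # 0.167
```

Despite the 99% sensitivity, a positive result implies only about a 1-in-6 chance of disease, because the prior *P*(B) is so small.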

Puga, J.L., Krzywinski, M. & Altman, N. (2015) Points of Significance: Bayes' Theorem *Nature Methods* **12**:277-278.

Oldford, R.W. & Cherry, W.H. Picturing probability: the poverty of Venn diagrams, the richness of eikosograms. (University of Waterloo, 2006)

Celebrate `pi` Day (March 14th) with splitting its digit endlessly. This year I use a treemap approach to encode the digits in the style of Piet Mondrian.

The art has been featured in Ana Swanson's Wonkblog article at the Washington Post—10 Stunning Images Show The Beauty Hidden in `pi`.

I also have art from 2013 `pi` Day and 2014 `pi` Day.

The split plot design originated in agriculture, where applying some factors on a small scale is more difficult than others. For example, it's harder to cost-effectively irrigate a small piece of land than a large one. These differences are also present in biological experiments. For example, temperature and housing conditions are easier to vary for groups of animals than for individuals.

The split plot design is an expansion on the concept of blocking—all split plot designs include at least one randomized complete block design. The split plot design is also useful for cases where one wants to increase the sensitivity in one factor (sub-plot) more than another (whole plot).

Altman, N. & Krzywinski, M. (2015) Points of Significance: Split Plot Design *Nature Methods* **12**:165-166.

1. Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments *Nature Methods* **11**:597-598.

2. Krzywinski, M. & Altman, N. (2014) Points of Significance: Analysis of variance (ANOVA) and blocking *Nature Methods* **11**:699-700.

3. Blainey, P., Krzywinski, M. & Altman, N. (2014) Points of Significance: Replication *Nature Methods* **11**:879-880.

In an audience of 8 men and 8 women, chances are 50% that at least one has some degree of color blindness^{1}. When encoding information or designing content, use colors that are color-blind safe.
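
The 50% figure checks out with back-of-the-envelope prevalence rates (roughly 8% of men and 0.5% of women have some colour-vision deficiency; these are approximate values assumed for the calculation):

```python
# Assumed prevalence of some form of colour-vision deficiency:
# roughly 8% of men and 0.5% of women (approximate figures)
p_men, p_women = 0.08, 0.005

# Probability that no one among 8 men and 8 women is affected
p_none = (1 - p_men) ** 8 * (1 - p_women) ** 8
p_at_least_one = 1 - p_none
print(round(p_at_least_one, 2))  # 0.51
```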

Nature Methods has announced the launch of a new statistics collection for biologists.

As part of that collection, the journal announced that the entire Points of Significance collection is now open access.

This is great news for educators—the column can now be freely distributed in classrooms.

I've posted a writeup about the design and redesign process behind the figures in our Nature Methods Points of Significance column.

I have selected several figures from our past columns and show how they evolved from their draft to published versions.

Clarity, concision and space constraints—we have only 3.4" of horizontal space—all have to be balanced for a figure to be effective.

It's nearly impossible to find case studies of scientific articles (or figures) through the editing and review process. Nobody wants to show their drafts. With this writeup I hope to add to this space and encourage others to reveal their process. Students love this. See whether you agree with my decisions!

Past columns have described experimental designs that mitigate the effect of variation: random assignment, blocking and replication.

The goal of these designs is to observe a reproducible effect that can be due only to the treatment, avoiding confounding and bias, while simultaneously sampling enough variability to estimate how much we expect the effect to differ if the measurements are repeated with similar but not identical samples (replicates).

We need to distinguish sources of variation that are nuisance factors in our goal to measure mean biological effects from those that are required to assess how much effects vary in the population.

Altman, N. & Krzywinski, M. (2015) Points of Significance: Sources of Variation *Nature Methods* **12**:5-6.

1. Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments *Nature Methods* **11**:597-598.

2. Krzywinski, M. & Altman, N. (2014) Points of Significance: Analysis of variance (ANOVA) and blocking *Nature Methods* **11**:699-700.

3. Blainey, P., Krzywinski, M. & Altman, N. (2014) Points of Significance: Replication *Nature Methods* **11**:879-880.

We've previously written about how to analyze the impact of one variable in our ANOVA column. Complex biological systems are rarely so obliging—multiple experimental factors interact to produce effects.

ANOVA is a natural way to analyze multiple factors. It can incorporate the possibility that the factors interact—the effect of one factor depends on the level of another factor. For example, the potency of a drug may depend on the subject's diet.

We can increase the power of the analysis by allowing for interaction, as well as by blocking.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Two Factor Designs *Nature Methods* **11**:1187-1188.

Blainey, P., Krzywinski, M. & Altman, N. (2014) Points of Significance: Replication *Nature Methods* **11**:879-880.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Analysis of variance (ANOVA) and blocking *Nature Methods* **11**:699-700.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments *Nature Methods* **11**:597-598.

Sources of noise in experiments can be mitigated and assessed by nested designs. This kind of experimental design naturally models replication, which was the topic of last month's column.

Nested designs are appropriate when we want to use the data derived from experimental subjects to make general statements about populations. In this case, the subjects are *random* factors in the experiment, in contrast to *fixed* factors, such as we've seen previously.

In ANOVA analysis, random factors provide information about the amount of noise contributed by each factor. This is different from inferences made about fixed factors, which typically deal with a change in mean. Using the F-test, we can determine whether each layer of replication (e.g. animal, tissue, cell) contributes additional variation to the overall measurement.

Krzywinski, M., Altman, N. & Blainey, P. (2014) Points of Significance: Nested designs *Nature Methods* **11**:977-978.

Blainey, P., Krzywinski, M. & Altman, N. (2014) Points of Significance: Replication *Nature Methods* **11**:879-880.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Analysis of variance (ANOVA) and blocking *Nature Methods* **11**:699-700.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments *Nature Methods* **11**:597-598.

It's fitting that the column published just before Labor Day weekend is all about how to best allocate labor.

Replication is used to decrease the impact of variability from parts of the experiment that contribute noise. For example, we might measure data from more than one mouse to attempt to generalize over all mice.

It's important to distinguish technical replicates, which attempt to capture the noise in our measuring apparatus, from biological replicates, which capture biological variation. The former give us no information about biological variation and cannot be used to directly make biological inferences. To do so is to commit *pseudoreplication*. Technical replicates are useful to reduce the noise so that we have a better chance to detect a biologically meaningful signal.

Blainey, P., Krzywinski, M. & Altman, N. (2014) Points of Significance: Replication *Nature Methods* **11**:879-880.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Analysis of variance (ANOVA) and blocking *Nature Methods* **11**:699-700.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments *Nature Methods* **11**:597-598.

I was commissioned by Scientific American to create an information graphic that showed how our genomes are more similar to those of the chimp and bonobo than to the gorilla.

I had about 5 x 5 inches of print space to work with. For 4 genomes? No problem. Bring out the Hilbert curve!

To accompany the piece, I will be posting to the Scientific American blog about the process of creating the figure. And to emphasize that the *genome is not a blueprint*!

As part of this project, I created some Hilbert curve art pieces. And while exploring, found thousands of Hilbertonians!

Celebrate Pi Approximation Day (July 22nd) with the art of arm waving. This year I take the first 10,000 most accurate approximations (*m*/*n*, *m*=1..10,000) and look at their accuracy.

I turned to the spiral again after applying it to stacked ring plots of frequency distributions for the 2014 Pi Day.

Our 10th Points of Significance column! Continuing with our previous discussion about comparative experiments, we introduce ANOVA and blocking. Although this column appears to introduce two new concepts (ANOVA and blocking), you've seen both before, though under a different guise.

If you know the *t*-test you've already applied analysis of variance (ANOVA), though you probably didn't realize it. In ANOVA we ask whether the variation within our samples is compatible with the variation between our samples (sample means). If the samples don't all have the same mean then we expect the latter to be larger. The ANOVA test statistic (*F*) assigns significance to the ratio of these two quantities. When we have only two samples and apply the *t*-test, *t*^{2} = *F*.
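
The *t*^{2} = *F* identity is easy to verify numerically; the sketch below computes both statistics from scratch on simulated two-group data:

```python
import numpy as np

# Simulated two-sample data (illustrative, not the column's example)
rng = np.random.default_rng(3)
a = rng.normal(0.0, 1.0, 12)
b = rng.normal(0.5, 1.0, 12)
na, nb = len(a), len(b)

# Two-sample t statistic with pooled (equal) variance
sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
t = (a.mean() - b.mean()) / np.sqrt(sp2 * (1 / na + 1 / nb))

# One-way ANOVA F statistic for the same two groups
grand = np.concatenate([a, b]).mean()
ss_between = na * (a.mean() - grand) ** 2 + nb * (b.mean() - grand) ** 2
ss_within = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
F = (ss_between / 1) / (ss_within / (na + nb - 2))

print(np.isclose(t ** 2, F))  # True: for two groups, t^2 equals F
```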

ANOVA naturally incorporates and partitions sources of variation—the effects of variables on the system are determined based on the amount of variation they contribute to the total variation in the data. If this contribution is large, we say that the variation can be "explained" by the variable and infer an effect.

We discuss how data collection can be organized using a randomized complete block design to account for sources of uncertainty in the experiment. This process is called blocking because we are blocking the variation from a known source of uncertainty from interfering with our measurements. You've already seen blocking in the paired *t*-test example, in which the subject (or experimental unit) was the block.

We've worked hard to bring you 20 pages of statistics primers (though it feels more like 200!). The column is taking a month off in August, as we shrink our error bars.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Analysis of Variance (ANOVA) and Blocking *Nature Methods* **11**:699-700.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments *Nature Methods* **11**:597-598.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.

This month, Points of Significance begins a series of articles about experimental design. We start by returning to the two-sample and paired *t*-tests for a discussion of biological and experimental variability.

We introduce the concept of blocking using the paired *t*-test as an example and show how biological and experimental variability can be related using the correlation coefficient, *ρ*, and how its value impacts the relative performance of the paired and two-sample *t*-tests.

We also emphasize that when reporting data analyzed with the paired t-test, differences in sample means (and their associated 95% CI error bars) should be shown—not the original samples—because the correlation in the samples (and its benefits) cannot be gleaned directly from the sample data.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Designing Comparative Experiments *Nature Methods* **11**:597-598.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.

Our May Points of Significance Nature Methods column jumps straight into dealing with skewed data with Non Parametric Tests.

We introduce non-parametric tests and simulate data scenarios to compare their performance to the *t*-test. You might be surprised—the *t*-test is extraordinarily robust to distribution shape, as we've discussed before. When data are highly skewed, non-parametric tests perform better and have higher power. However, if sample sizes are small they are limited to a small number of possible *P* values, of which none may be less than 0.05!
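
The small-sample limit is easy to see by enumeration. For a Mann-Whitney test on two groups of 3, the most extreme possible outcome still gives a two-sided exact *P* of 0.1:

```python
from itertools import combinations

# Exact Mann-Whitney null distribution for two groups of size 3: enumerate
# every assignment of ranks 1..6 to group 1 and compute the U statistic
n1, n2 = 3, 3
u_values = []
for grp in combinations(range(1, n1 + n2 + 1), n1):
    u = sum(grp) - n1 * (n1 + 1) // 2  # U from the rank sum of group 1
    u_values.append(u)

# The most extreme outcome (all group-1 ranks below group-2's) occurs once
# among C(6,3) = 20 equally likely arrangements, so the smallest two-sided P is
p_min = 2 * u_values.count(0) / len(u_values)
print(p_min)  # 0.1
```

No possible arrangement of the data can reach *P* < 0.05, so the test can never declare significance at that level with these sample sizes.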

Krzywinski, M. & Altman, N. (2014) Points of Significance: Non Parametric Testing *Nature Methods* **11**:467-468.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.

In the April Points of Significance Nature Methods column, we continue our discussion of comparing samples and consider what happens when we run a large number of tests.

Observing statistically rare test outcomes is expected if we run enough tests. These are statistically, not biologically, significant. For example, if we run *N* tests, the smallest *P* value that we have a 50% chance of observing is 1–exp(–ln2/*N*). For *N* = 10^{k} this *P* value is *P*_{k}=10^{–k}ln2 (e.g. for 10^{4}=10,000 tests, *P*_{4}=6.9×10^{–5}).
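
The arithmetic can be checked directly: setting 1−(1−*p*)^{*N*} = 0.5 and solving gives *p* = 1−2^{−1/*N*} = 1−exp(−ln2/*N*), which is approximately ln2/*N* for large *N*:

```python
import math

# P(smallest of N independent P values < p) = 1 - (1 - p)**N; setting this
# probability to 0.5 and solving for p gives p = 1 - 2**(-1/N), which equals
# 1 - exp(-ln 2 / N) and is approximately ln 2 / N for large N
for k in (2, 4, 6):
    N = 10 ** k
    p_exact = 1 - math.exp(-math.log(2) / N)
    p_approx = math.log(2) / N  # the column's 10**(-k) * ln 2
    print(f"N=10^{k}: exact={p_exact:.3e} approx={p_approx:.3e}")
```

For *N* = 10,000 this reproduces the 6.9×10^{−5} quoted above.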

We discuss common correction schemes such as Bonferroni, Holm, Benjamini & Hochberg and Storey's *q* and show how they impact the false positive rate (FPR), false discovery rate (FDR) and power of a batch of tests.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part II — Multiple Testing *Nature Methods* **11**:355-356.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I — *t*-tests *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.

Celebrate Pi Day (March 14th) with the art of folding numbers. This year I take the number up to the Feynman Point and apply a protein folding algorithm to render it as a path.

For those of you who liked the minimalist and colorful digit grid, I've expanded on the concept to show stacked ring plots of frequency distributions.

And if spirals are your thing...

In the March Points of Significance Nature Methods column, we continue our discussion of *t*-tests from November (Significance, *P* values and *t*-tests).

We look at how the uncertainties of two variables combine, how this increases uncertainty when two samples are compared, and highlight the differences between the two-sample and paired *t*-tests.

When performing any statistical test, it's important to understand and satisfy its requirements. The *t*-test is very robust with respect to some of its assumptions, but not others. We explore which.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Comparing Samples — Part I *Nature Methods* **11**:215-216.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.

Beautiful Science explores how our understanding of ourselves and our planet has evolved alongside our ability to represent, graph and map the mass data of the time. The exhibit runs 20 February — 26 May 2014 and is free to the public. There is a good Nature blog writeup about it, a piece in The Guardian, and a great video that explains the exhibit narrated by Johanna Kieniewicz, the curator.

I am privileged to contribute an information graphic to the exhibit in the Tree of Life section. The piece shows how sequence similarity varies across species as a function of evolutionary distance. The installation is a set of six 30 × 30 cm backlit panels. They look terrific.

Quick, name three chart types. Line, bar and scatter come to mind. Perhaps you said pie too—tsk tsk. Nobody ever thinks of the box plot.

Box plots reveal details about data without overloading a figure with a full frequency distribution histogram. They're easy to compare and now easy to make with BoxPlotR (try it). In our fifth Points of Significance column, we take a break from the theory to explain this plot type and—I hope— convince you that they're worth thinking about.

The February issue of Nature Methods kicks the bar chart two more times: Dan Evanko's Kick the Bar Chart Habit editorial and a Points of View: Bar charts and box plots column by Marc Streit and Nils Gehlenborg.

Krzywinski, M. & Altman, N. (2014) Points of Significance: Visualizing samples with box plots *Nature Methods* **11**:119-120.

I recently presented at the Wired Data|Life 2013 conference, sharing my thoughts on The Art and Science of Data Visualization.

For specialists, visualizations should expose detail to allow for exploration and inspiration. For enthusiasts, they should provide context, integrate facts and inform. For the layperson, they should capture the essence of the topic, narrate a story and delight.

Wired's Brandon Keim wrote up a short article about me and some of my work—Circle of Life: The Beautiful New Way to Visualize Biological Data.

Experimental designs that lack power cannot reliably detect real effects. The power of statistical tests is largely unappreciated, and many underpowered studies continue to be published.

This month, Naomi and I explain what power is, how it relates to Type I and Type II errors and sample size. By understanding the relationship between these quantities you can design a study that has both low false positive rate and high power.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Power and Sample Size *Nature Methods* **10**:1139-1140.

20 Tips for Interpreting Scientific Claims is a wonderful comment in Nature warning us about the limits of evidence.

I've made a poster (download hires PDF, PNG) of this list, grouping them into categories that are my own. Thrust this into everyone's hands, including your own.

Sutherland WJ, Spiegelhalter D & Burgman M (2013) Policy: Twenty tips for interpreting scientific claims. Nature 503:335–337.

Have you wondered how statistical tests work? Why does everyone want such a small *P* value?

This month, Naomi and I explain how significance is measured in statistics and remind you that it does not imply biological significance. You'll also learn why the *t*-distribution is so important and why its shape is similar to that of a normal distribution, but not quite.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Significance, *P* values and *t*-tests *Nature Methods* **10**:1041-1042.

Your slides are not your presentation. They are a representation of your presentation.

Effective presentations require that you have a clear narrative—control detail and emphasis to deliver your message. Engage the audience early. Don't dump on them.

Effective slides are visual cues. Show only what you can't easily say. Text should act as emphasis. Don't read.

A workshop I gave on Oct 8th at Science Online Vancouver at Science World.

Error bar overlap does not imply significance. Error bar gap does not imply lack of significance. Chances are you find these statements surprising.

You've seen and used error bars. But do you understand how to interpret them in the context of statistical significance? This month we address the most common (and commonly misunderstood) method of visualizing uncertainty.

We discuss error bars based on standard deviation, standard error of the mean and confidence intervals. It turns out that none of these behave as our intuition would wish.
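
The three interval types are simple to compute (simulated data below; the 1.96 multiplier is the normal approximation to the 95% CI, and a *t*-based multiplier would be slightly larger at small *n*):

```python
import numpy as np

# Simulated sample (illustrative values, not the column's data)
rng = np.random.default_rng(4)
x = rng.normal(10.0, 2.0, 30)
n = x.size

sd = x.std(ddof=1)     # standard deviation: spread of the data
sem = sd / np.sqrt(n)  # standard error of the mean: precision of the mean
ci95 = 1.96 * sem      # half-width of ~95% CI (normal approximation;
                       # a t-based multiplier would be slightly larger)
```

Note that the three bars differ in meaning, not just length: SD describes the sample, while SEM and CI describe the estimate of the mean and shrink with *n*.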

Krzywinski, M. & Altman, N. (2013) Points of Significance: Error Bars *Nature Methods* **10**:921-922.

This month, Nature Methods is launching Points of Significance, a new column to educate, enlighten and, if possible, entertain bench scientists about statistics.

I will be working closely with Naomi Altman from The Pennsylvania State University and Dan Evanko, the Chief Editor at Nature Methods, to make the column engaging and useful.

Our first publication — The Importance of Being Uncertain — acknowledges not only the imperative of being right about how we're wrong, but also our appreciation for Oscar Wilde.

Krzywinski, M. & Altman, N. (2013) Points of Significance: Importance of Being Uncertain *Nature Methods* **10**:809-810.

Interested in data visualization? The Points of View columns are an excellent way to learn practical tips and design principles that help you communicate clearly. All the columns are now available as a collection, and open access during August 2013.

The columns were written by Bang Wong, Martin Krzywinski, Nils Gehlenborg, Cydney Nielsen, Noam Shoresh, Rikke Schmidt Kjærgaard, Erica Savig and Alberto Cairo.

This month, Alberto Cairo and I examine the importance of storytelling in presenting data. A strong narrative captures the reader's attention, informs and inspires.

Instead of "explain, not merely show," seek to "narrate, not merely explain."

The distinction between the specialist and the communicator was made by Alberto Cairo at the 2013 Bloomberg Design Conference. I have used this principle to structure my talk to the UBC Tableau Users Group.

Design is algorithmics for the page. Use its principles to inform how to choose from among the options offered by your software. Recognize the limitations of your tool, as well as those features that are ineffective.

Don't practise visual intuitics—use shapes whose size and proportion can be well judged.

A collaboration of science and art with Joanna Rudnick and Aaron De La Cruz.

The science of cancer genomics will be interpreted by individuals whose lives are affected by genomic mutations using the art style of Aaron De La Cruz.

Beautiful, meaningful and personal.

This month, Erica Savig and I look at the design process for a figure from her paper Multiplexed mass cytometry profiling of cellular states perturbed by small-molecule regulators. The underlying data set has 1.2 billion individual observations, categorized by drug, cell line, protein and stimulation condition.

2012 Multiplexed mass cytometry profiling of cellular states perturbed by small-molecule regulators Nature Biotechnology 30:858-867.

Although spatial encoding is the most perceptually accurate, in this case it's not the best channel to display quantitative information. Instead, the x/y position on the page is used to organize small multiples of the network of affected proteins.

This month, Bang and I consider how to choose effective plotting symbols in the Points of View column Plotting Symbols.

Choose symbols that overlap without ambiguity and communicate relationships in data.

This month I look at how creating effective figures is similar to the process of writing well in the Points of View column Elements of Visual Style.

Using Strunk's Elements of Style as an example of writing guidelines, I look at how these can be translated to creating figures.

When we create figures, we must communicate *and* design. In my talk I discuss some of the rules that turn graphical improvisation into a structured and reproducible process.

The fractal tree was created with OneZoom, which received the best poster award at the conference.

Celebrate Pi Day (March 14th) with funky modern posters. Transcend, don't repeat yourself, and watch the dots shimmer.

The posters were inspired by the beautiful AIDS posters by Elena Miska.

I am always drawn to type and periodically I must do something about it.

If you were a type, what type would you be? Me, Gill Sans on weekdays and Perpetua on the weekend.

I take over from Bang Wong as primary contributor to the Points of View column, a monthly advice and opinion piece about data visualization and information and figure design in molecular biology.

Nature's special issue dedicated to the Encode Project uses the Circos motif on its cover as well as the interactive Encode Explorer, which is available as an app at iTunes.

Together with Alberto Cairo, and then in conversation with Sam Grobart, I presented about science and design at Bloomberg's Businessweek Design Conference in San Francisco.

My ICDM2012 keynote on genomics and data mining: Needles in Stacks of Needles.

Creating strings of genome jewelery. Read about how it was done.

The design accompanies Cydney Nielsen's Spark manuscript, which appeared in Genome Research.

Guidelines for data encoding and visualization in biology, presented at Biovis 2012 (VisWeek 2012).

Building on the method I used to analyze the 2008 debates, I look at the 2012 Debates between Obama and Romney, lexically speaking. Obama speaks to "folks", while Romney fearmongers with "kill" and "hurt".

Making things round, not square. Read about how it was done.

A Circos-based design for the cover of the human genetics special issue of Trends in Genetics (Trends in Genetics October 2012, 28 (10)).

My talk — Visualization: communicating, clearly from the Biological Data Visualization seminar at Schloss Dagstuhl.

And usually, really long and funny ones.

My neologisms were picked up by James Gorman of the New York Times in an article Ome, the sound of the scientific universe expanding.

Biology or astrophysics? Read about how it was done.

The image was published on the cover of PNAS (PNAS 1 May 2012; 109 (18))

Numerology is bogus but art based on numbers has a beautiful random quality. Oh, and none of the metaphysical baggage.

How much 4-ness does π have?

Compare the 4-ness of π to that of the other famous transcendental number, e, and the mysterious but attractive Golden Ratio, φ.

I have found a way to combine my curiosity about space, fear of large sequence assemblies and love of typography in a single illustration. Inspired by typographical portraits, I wanted to automate representing an image with multiple font weights, while sampling characters from a quote or debate transcripts.

If you made widgets, you could be justified in campaigning for a widget of the year. Business acumen suggests it should be one of your widgets. Pantone has done exactly that, naming their color 17-1463 (Tangerine Tango) the color of the year for 2012.

I prefer green—green jive.

I really like the world's most expensive photograph, Rhein II by Andreas Gursky. Cautious use of the word "expensive" should be practised — in this case, merely meaning that only *one* person saw the $4.3 million price tag. Others saw lower prices, or no price tag at all.

Here's my own attempt at such compositions.

I could not find Illustrator swatch files for this awesome color resource, so I created them myself.

If you're interested in color and design and don't know about Brewer palettes, see my presentation.

World-wide Google searches, categorized by one of 21 languages, are visualized with WebGL, available from Chrome Experiments. The data offers some fascinating insights: (a) in what two places in the US are Google searches in Chinese performed? (b) what are the most remote locations from which Google searches were detected? (c) why is Istanbul the 3rd top location for searches, and why is Miami in the top 10?

**Designing effective visualizations in the biological sciences.**

**Circos and Hive Plots: Challenging visualization paradigms in genomics and network analysis.**

In a recent conversation, I was challenged to name as many organisms with the same genus and species as I could. Neither a biologist, and especially not a taxonomist, my responses were limited to organisms with sequenced genomes I had come across in the literature. Immediately to mind sprung *Gallus gallus* (chicken) and ... nothing else. Well, that was embarrassing.

I was suddenly taken up by the urge to find *all* instances of this occurrence. Using resources at the NCBI Taxonomy Browser, I downloaded the NCBI taxonomy table, whose names.dmp file contains 1,097,405 entries (not all of these are unique genus/species combinations).

To my surprise I discovered that my performance in this challenge was beyond dismal. In fact, there are 380 genera which contain organisms that have the same genus and species name. Most of them (317) include a single organism, but some have many. For example, the genus *Salamandra* has 14 organisms with the species *salamandra*, including *Salamandra salamandra*, *Salamandra salamandra crespoi* and *Salamandra salamandra morenica*. The genus *Regulus* has 13 organisms, including *Regulus regulus azoricus*, *Regulus regulus japonensis* and *Regulus regulus regulus* (these are all Goldcrests).

In total, there are 546 unique entries, when organisms with a unique subspecies name are considered distinct. If subspecies is not considered, the number of organisms with the same genus and species (i.e., regardless of subspecies) is 383. Here are some of the organisms with the longest genus/species names.

*Coccothraustes coccothraustes*

*Labiostrongylus labiostrongylus*

*Macrobilharzia macrobilharzia*

*Macropostrongylus macropostrongylus*

*Xanthocephalus xanthocephalus*

The nematode worm *Macropostrongylus macropostrongylus* has the honour of being the longest genus/species duplicate organism. Given this distinction, it is surprising that PubMed returns only 2 papers that refer to it.

Download the full list. The number next to each ENTRY field is the NCBI Taxonomy ID for the organism. In a small number of cases there are ambiguities in parsing the data file (e.g. Troglodytes cf. troglodytes PS-2, Troglodytes sp. troglodytes PS-1). I left these in.
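The filtering step is easy to script. Here is a minimal sketch of my own (not the code actually used), assuming the standard names.dmp layout of fields delimited by `\t|\t`:

```python
def tautonyms(lines):
    """Collect scientific names whose genus and species epithets match.

    Assumes the NCBI names.dmp layout: tax_id | name_txt | unique name |
    name class, with fields delimited by tab-pipe-tab.
    """
    found = set()
    for line in lines:
        fields = [f.strip() for f in line.split("|")]
        # keep only the canonical "scientific name" rows
        if len(fields) < 4 or fields[3] != "scientific name":
            continue
        parts = fields[1].split()
        # genus == species, compared case-insensitively
        if len(parts) >= 2 and parts[0].lower() == parts[1].lower():
            found.add(fields[1])
    return sorted(found)
```

With the real file, pass `open("names.dmp")` as `lines`; subspecies entries fall out naturally since only the first two tokens are compared.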

Visual acuity limits of the human eye restrict the resolution at which we can comfortably visualize data.

In this short guide, I explain why dividing a scale into no more than 500 divisions is a good idea.
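A back-of-envelope version of the argument (my own numbers and assumptions, not necessarily the guide's): normal visual acuity resolves about 1 arcminute, which at a typical reading distance works out to roughly a tenth of a millimetre.

```python
import math

# Back-of-envelope sketch (assumed values): normal visual acuity
# resolves about 1 arcminute of visual angle.
distance_cm = 40                           # typical reading distance
acuity_rad = math.radians(1 / 60)          # 1 arcminute in radians
resolvable_cm = distance_cm * math.tan(acuity_rad)   # ~0.012 cm, ~0.12 mm

# A 6 cm scale therefore supports on the order of 500 distinguishable
# divisions, which is where a limit like "no more than 500" comes from.
divisions = 6 / resolvable_cm              # ~516
```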

For the EMBO Journal 2011 Cover Contest, I prepared two entries, one for the scientific category and one for the non-scientific category.

The 2011 winners have been announced. My non-scientific entry (photo of fiber optics) received honourable mention and was included in the Favourites of the Jury gallery.

Until now, Circos did not have its own domain name, having been served from the lengthy and boring http://mkweb.bcgsc.ca/circos.

Recently, I was surprised to find out that the following domains were available

All these now point to the Circos site.

*ee spammings* are spam edited into a format reminiscent of the poetry of ee cummings. Unwanted solicitations for questionable endeavours and products suddenly turn into heady words of the new literature. Art suddenly freed from the husk of spam.

Literature 2.0 — from unlikely origins.

Here's one example that emphasizes that today is ok.

i got to touch you i like us and know the more. believe recontact me today ok! but matters waiting for happy

I now have over 20 *ee spammings* — enjoy them all.

What do *inconversible*, *mystific*, *postpetizer*, *prenopsis* and *suscitate* have in common?

They are words that don't exist, but should. Learn new words.

Today's Zeitgeist

What are the world's top questions?

Using Google's autocomplete feature, I have tabulated the world's most popular questions. By combining an interrogative term, such as *what*, *who* or *why*, with a term from a related set, such as *do I*, *can I*, and *can't I*, it is possible to sample the space of questions and obtain the most popular for a given start word combination.
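The combination step can be sketched in a few lines (the word lists below are illustrative, not the full sets used for the tables):

```python
from itertools import product

interrogatives = ["what", "who", "why", "how", "where"]
modifiers = ["do i", "can i", "can't i", "should i"]

# Every interrogative/modifier pair yields a query prefix whose
# autocomplete suggestions sample the space of popular questions.
prefixes = [f"{q} {m}" for q, m in product(interrogatives, modifiers)]
# 5 x 4 = 20 prefixes, e.g. "why can't i"
```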

I have tabulated the most popular questions by category.

- general
- limits & desires
- love
- money
- career & education
- health
- sizes & extremes
- religion & faith

What kind of questions about science are people asking? From the Career & Education section,

- Can biology lead to new theorems?
- Can physics explain miracles?
- Can math be fun?
- Can science and religion coexist?
- Can history repeat itself?
- Can psychology be morally neutral?

What are the strangest questions? I'll let you explore, but these have me wondering:

- Has the world gone mad or is it me?
- Why can't I hold all these limes?
- What happens if I make a formal commitment to Satan?
- Why can't I sell my kidney?
- Who is the most powerful Jedi?
- Can Jesus microwave a burrito?
- Where is the hardest part of your head?

Circos can be used to visualize tabular data, such as spreadsheets.

1,000s of tables have already been visualized. Has yours?

Hive plots are excellent at visualizing ratios. They're not just an anti-hairball network visualization agent.

Below, 3 × 8 × 27 = 648 (axes × ribbons × plots) ratios are visualized.

The image above compares the relative ratios of region annotations in human, mouse and dog genomes.

Cáceres is a small city of 100,000 inhabitants in western Spain, where the city government is promoting Cáceres Creativa, a project to collaboratively build a sustainable future for the city by activating the creative capacity of its population.

The project has been published as a book (excerpt), which provides a basis for working with city residents and businesses in this collaborative design.

Circos proved useful in showing the complex relationships established in such an environment: a city that combines flows of energy and resources, physical items and intellectual concepts. The online Circos tableviewer was used to generate the images.

Taking photos of inanimate objects is rewarding. Your subject doesn't complain, nor move, and a coffee break fits naturally into the workflow at any time. In this case, the inanimate object is over 3 PB (3,000 TB) of storage composed of a variety of NetApp appliances.

Using three gelled Hensel Integras (500 Ws monoheads — here I'm using only the modelling light for illumination along with red, blue and green filters) (lighting details), I spent some time getting to know the components up close.

See more photos.

All photos by Martin Krzywinski (Lumondo Photography).

Our new compute cluster has been released to the user community.

This cluster consists of **420 compute nodes**, each with 12 cores and 48 GB RAM, totaling **5,040 cores** and **20 TB RAM**. Each node has 160 GB of local /tmp space, and all nodes are tied together over a **40 Gb/s InfiniBand** network.

The nodes all have access to a dedicated storage system over the InfiniBand network running GPFS, with a total of **700 TB of usable scratch space**. The filesystem is served by 8 IBM x3850 servers. All nodes run **CentOS 5.4** and use the open source **Grid Engine 6.2u5** scheduler.
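The cluster totals follow directly from the per-node numbers:

```python
# Aggregate capacity from per-node specifications
nodes = 420
cores_total = nodes * 12       # 5,040 cores
ram_total_gb = nodes * 48      # 20,160 GB, i.e. about 20 TB
tmp_total_gb = nodes * 160     # 67,200 GB of aggregate local /tmp
```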

Lighting details and more photos.

All photos by Martin Krzywinski (Lumondo Photography).

1 *First the server room was expanded* 2 *It was empty and without racks, and the lights were dim. Sysadmins scurried about and unpacked equipment* 3 *The circuit was closed and there were electrons* 4 *IT staff were pleased and accounts were handed out to users* 5 *Who had work they called "important"* 6 *But which the IT staff merely called "jobs"*.

Periodically, I take my camera, point it at things. Here, I'll share a favourite from my creations.

This image — I will keep the subject a mystery — gives me the same feeling as some of the Hubble images. For this shot, I didn't need to reach orbit.

Other images in this series are available on flickr.

I also like geometry and lines. This shot is a tense composition of the Hancock Building at Copley Square in Boston.

and an assortment of baggage carts at St Pancras station (London), which catches the eye.

I like to collect time in a photo, be it uniformly as in this diptych of street and traffic lights from a moving car

or blended, as in this skyline of Vancouver showing the flow of time from 5.30pm to 9.30pm.

DNA is composed of two strands, which are complementary. Given a sequence, its reverse complement is created by swapping A/T and G/C and writing the remapped sequence backwards (e.g. ATGC is first remapped to TACG and then reversed to GCAT).

Consider the corresponding concept applied to English words (or any language, for that matter). First, construct the complementarity map, which assigns to the *n*th letter of an *N*-letter alphabet the (*N* + 1 − *n*)th letter.

```
abcdefghijklmnopqrstuvwxyz
||||||||||||||||||||||||||
zyxwvutsrqponmlkjihgfedcba
```

For example, *a* becomes *z*, *b* becomes *y*, and so on. To create a reverse complement of a word, apply this mapping and then reverse the new word (e.g. 'dog' is remapped to 'wlt' and then reversed to obtain 'tlw').

So far, that's not very exciting.

But consider the question: *What is the longest English word that is a palindrome under this set of rules (reverse complementarity)*. In other words, it's the same forward and backward after complementing the letters. Clearly "dog" is not such a palindrome since its reverse complement is "tlw".

The answer? **wizard** and **hovels**.

```
wizard
||||||
draziw -> 'wizard' backwards
```

It's an amazingly fitting answer, since a wizard is someone with special powers.

A few interesting 4-letter words that are their own reverse complement palindromes are **bevy**, **grit**, **trig** and **wold**. Common surnames that match are **Ghrist**, **Elizarov** and **Prawdzik**. Female first name **Zola** and male first name **Iver** are also reverse complement palindromes, as are **trolig** (Norwegian for 'likely', as well as an IKEA curtain product) and **aviverez** (2nd person plural future of 'aviver', French for 'brighten').

I've scanned a very large word list (4,138,000 unique English and foreign words) and identified 108 reverse complement palindromes. If you find a new entry longer than 6 letters, let me know.
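The rule is easy to script. Here is a minimal Python sketch of my own (not the code used for the scan):

```python
import string

# Map the nth letter to the (N + 1 - n)th: a<->z, b<->y, ...
COMPLEMENT = str.maketrans(string.ascii_lowercase,
                           string.ascii_lowercase[::-1])

def reverse_complement(word):
    """Complement each letter, then reverse the result."""
    return word.translate(COMPLEMENT)[::-1]

def is_rc_palindrome(word):
    """True for a word that equals its own reverse complement."""
    return word == reverse_complement(word)

# reverse_complement("dog") gives "tlw"; "wizard" and "grit" pass the test
```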

Finding just the right font is hard work. There are so many to choose from. Or are there?

If the typeface is not on this list, don't use it (except Bodoni; I hate Bodoni; don't use it). If you need a shorter list, consult the quintessential 15 serif and 15 sans-serif fonts.

You'll notice a rotating image of typefaces at the top of this page. Here's the full list.

I love Gotham and have used it in visualization projects. It's more rational than Helvetica and still enjoys a freshness that has evaporated from Helvetica after near-ubiquitous use. Don't get me wrong, there is still not enough Helvetica in the world, but more Gotham would be nice.

Anyone who has met me quickly learns that I have a personal and antagonistic relationship with Comic Sans, the typeface that shouldn't have been.

In a recent article in the journal Cognition, Fortune favours the **bold** (*and the italicized*): Effects of disfluency on educational outcomes, Diemand-Yauman *et al.* suggest that rendering educational materials in a hard-to-read font, and thereby recruiting the effects of *disfluency* ("the subjective experience of difficulty associated with cognitive operations"), improves retention of material.

Regardless whether the effect is real, there must be better ways to improve education than through bad design.

In a cosmically improbable confluence of multidisciplinary pursuits, my work on keyboard layouts, which as one of its fruits has produced the TNWMLC keyboard layout — the most difficult for English typing — has been incorporated into the eponymously named Brazilian fashion line by Julia Valle.

Looking into network data sets for the linear layout project, I found pretty hairballs which make a juicy spatter pattern.
