The EMBO Journal non-scientific cover prize is awarded for the most interesting and beautiful image made outside the lab. Contestants may submit, for example, photos or artistic impressions of wildlife, plants or landscapes. Hand- or computer-generated paintings or drawings (or photographs of other works of art) related to a biological or molecular biological topic are also particularly welcome.
The EMBO Journal scientific cover prize is awarded for the most captivating and thought-provoking contribution depicting a piece of molecular biology research. Entries can include light or electron micrographs, 3D reconstructions or models of biological specimens or molecules, spectacular artefacts collected in the lab, original new views of lab equipment (but not of colleagues!), or other research-based images likely to be of interest to molecular biologists.
The 2011 winners have been announced. The scientific image winner was Heiti Paves, who submitted a confocal image of an Arabidopsis thaliana anther filled with pollen grains. The non-scientific winner was Dieter Lampl, with his "Blue Ice" photo — a glacier in Los Glaciares National Park in Patagonia.
My non-scientific entry (photo of fiber optics) received honourable mention and was included in the Favourites of the Jury gallery.
In 2010 EMBO selected my submission of a large Circos figure for its cover (see right). Front page exposure of this sort has made Circos a very popular tool for visualization in genomics, and in particular, in cancer research where there is a need to illustrate differences between genomes.
It was now time to try something else — the hive panel (learn about hive plots and hive panels).
My other entry for the 2011 cover contest was a non-scientific abstract photo of fiber optics.
A large number of layout algorithms already exist for visualizing networks. To create attractive layouts, node and edge positions are optimized to minimize some fitness function, such as overlap or force (if edges are treated as springs). Unfortunately, as a result it is impossible to relate the position of a node (or the distance between any two nodes in the layout) to their connected neighbourhood in the network. This is particularly true for large networks, where node and edge overlap in the layout is unavoidable.
The hive plot is a rational approach to visualizing networks. It is designed to complement (at times, replace) the network hairball.
In a hive plot, network nodes are assigned to and placed on axes using rational rules. These rules typically are a function of local network structure around the node (connectivity, density, centrality, etc). The resulting plot is interpretable.
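The placement rules can be sketched in a few lines. The following is a minimal illustration, not the actual hive plot implementation: it assigns each node to one of three axes by a degree class and sets its position along the axis by normalized degree. The thresholds and the choice of degree as the placement rule are assumptions for the sketch.

```python
# Sketch of hive plot node placement (illustrative rules, not the
# original implementation): axis is chosen by degree class, position
# along the axis by degree normalized to [0, 1].

def hive_layout(adjacency):
    """Map node -> (axis, position) from an adjacency dict."""
    degree = {n: len(nbrs) for n, nbrs in adjacency.items()}
    max_deg = max(degree.values())
    layout = {}
    for node, deg in degree.items():
        if deg <= 1:
            axis = 0          # low-connectivity nodes
        elif deg <= 3:
            axis = 1          # mid-connectivity nodes
        else:
            axis = 2          # hubs
        layout[node] = (axis, deg / max_deg)
    return layout

network = {
    "a": ["b", "c", "d", "e"],   # hub
    "b": ["a", "c"],
    "c": ["a", "b"],
    "d": ["a"],
    "e": ["a"],
}
print(hive_layout(network))
```

Because placement depends only on local network properties, two networks laid out this way can be compared directly, which is what makes the plot interpretable.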
The hive plot can be applied to visualize a large number of ratios between three or more scales.
Instead of network edges, the lines in a hive plot now correspond to an (x,y) data pair, which can be interpreted as a ratio (x/y). This approach is particularly effective when lines are drawn as ribbons, which are then stacked. This is shown in the figure below.
The resulting visualization bears resemblance to a stacked bar plot. The circular layout grants the advantage of being able to instantly see all pair-wise comparisons between the axes (when three axes are used). This layout also gives the image a compact feel and is particularly suitable for tiling.
In the examples below, a 3-axis hive plot is shown with 8 ratios between each axis. The ratios are independent, in the sense that corresponding ribbons (e.g. blue) may have different thickness on either side of an axis. For example, if x:z = 2:3 and x:y = 1:3 then the ribbon on the left of the x axis will be twice as thick as on the right (see black arrow in figure below).
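The stacking arithmetic behind the ribbons can be made concrete with a small sketch (our own illustration, not the cover's code): each ratio contributes a ribbon whose thickness at an axis is proportional to that axis's component, and ribbons are laid end to end along the axis.

```python
# Sketch of ribbon stacking on one side of a hive plot axis. The x
# components of the ratios determine ribbon thickness at the x axis;
# stacking normalizes the full set to span [0, 1] along the axis.

def stack_ribbons(values):
    """Return (start, end) intervals for ribbons stacked along an axis,
    normalized so the full stack spans [0, 1]."""
    total = sum(values)
    intervals, position = [], 0.0
    for v in values:
        intervals.append((position / total, (position + v) / total))
        position += v
    return intervals

# x components of 4 ratios drawn on one side of the x axis
print(stack_ribbons([2, 1, 3, 2]))
```

In the x:z = 2:3 versus x:y = 1:3 example above, the x components are 2 and 1, which is why the left ribbon at the x axis is drawn twice as thick as the right one.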
The axes in a hive plot can be arranged arbitrarily. In the figure above, panels A and B show 24 ratios — 8 each between the x/y, x/z, and y/z axes. In panels C-F each axis is split to create a single 6-axis plot from a dual 3-axis plot. The split axes reveal the transition between ribbons from the left and right sides.
The dual 3-axis plot appears more stylized and mathematical, whereas the single 6-axis plot is softer and more organic. As the axis split distance is increased, the plots begin to look like surface density maps, which to some degree occludes the relationships between the ratio ribbons.
For each of human (hg18), mouse (mm8) and dog (canfam2) genome assemblies, UCSC annotations, available for each genome from the table browser, were used to hierarchically organize each base in the assembly using the following criteria: gene, repeat and gene+repeat. For each of these, bases were further categorized as conserved or not.
By exhaustively intersecting each of the annotation regions, the assembly was divided into disjoint segments, each with its own set of annotation states. For example, below are a few adjacent regions from hg18 chr1 (a = assembly, r = repeat, c-cf = conserved with dog, c-mm = conserved with mouse).
...
hg 1 120,942,663 120,945,658 2,996 a r
hg 1 120,945,659 120,945,665     7 a
hg 1 120,945,666 120,947,239 1,574 a c-cf c-mm
hg 1 120,947,240 120,947,243     4 a c-cf c-mm r
hg 1 120,947,244 120,947,268    25 a c-mm r
hg 1 120,947,269 120,950,367 3,099 a r
hg 1 120,950,368 120,950,386    19 a
...
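The exhaustive-intersection step amounts to cutting the sequence at every annotation boundary, so that each resulting segment carries the full set of annotations covering it. A minimal sketch of this idea (assumed implementation, illustrative coordinates):

```python
# Cut labelled intervals into disjoint segments, each annotated with
# every label that covers it. Coordinates are 1-based inclusive.

def disjoint_segments(intervals):
    """intervals: list of (start, end, label).
    Returns disjoint (start, end, labels) segments."""
    cuts = sorted({i[0] for i in intervals} | {i[1] + 1 for i in intervals})
    segments = []
    for lo, hi in zip(cuts, cuts[1:]):
        # a segment never crosses a boundary, so covering its start
        # implies covering all of it
        labels = sorted({lab for s, e, lab in intervals
                         if s <= lo and hi - 1 <= e})
        if labels:
            segments.append((lo, hi - 1, labels))
    return segments

regions = [
    (100, 500, "a"),      # assembly
    (150, 300, "r"),      # repeat
    (250, 400, "c-mm"),   # conserved with mouse
]
for seg in disjoint_segments(regions):
    print(seg)
```

Applied genome-wide with the real gene, repeat and conservation tracks, this produces segments of the form shown above.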
Next, the total size of regions for each combination of annotation was calculated for each pairwise combination of genomes. The second genome in the pair dictates which conservation is used. For example, for the human-mouse pair, the relative fractions of the human genome that fall into each of the categories are
hg mm a             1,839,255,050  0.643542044483869
hg mm a,c-mm          757,027,260  0.264878365091574
hg mm a,r             206,719,589  0.0723296896425132
hg mm a,c-mm,r         42,358,464  0.0148209203088807
hg mm a,g               8,139,587  0.00284798264342638
hg mm a,c-mm,g          4,435,658  0.0015520046651231
hg mm a,g,r                48,994  1.71426463814481e-05
hg mm a,c-mm,g,r           33,869  1.18505182327074e-05
thus categorizing all the 2.86 Gb of the assembled human genome. The corresponding ratios for the mouse genome are
mm hg a             1,388,193,028  0.544355712823795
mm hg a,c-hg          892,892,218  0.350132128602082
mm hg a,r             196,173,508  0.0769260237089193
mm hg a,c-hg,r         62,305,053  0.0244318411447455
mm hg a,g               6,377,904  0.00250098394691097
mm hg a,c-hg,g          4,076,727  0.00159861747416369
mm hg a,g,r                81,889  3.21113447973805e-05
mm hg a,c-hg,g,r           57,585  2.2580954586784e-05
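The aggregation that produces these tables is a straightforward sum of segment sizes by annotation combination, followed by division by the assembled total. A sketch with made-up segments (not the real genome data):

```python
# Sum disjoint-segment sizes by annotation combination and report each
# combination's fraction of the assembled total. Segment sizes here are
# illustrative, not the published values.

from collections import defaultdict

segments = [
    (2996, ("a", "r")),
    (7,    ("a",)),
    (1574, ("a", "c-mm")),
    (25,   ("a", "c-mm", "r")),
    (3099, ("a", "r")),
]

totals = defaultdict(int)
for size, annotation in segments:
    totals[annotation] += size

assembled = sum(totals.values())
for annotation, size in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(",".join(annotation), size, size / assembled)
```

Running this once per ordered genome pair, with conservation taken relative to the second genome of the pair, yields one table like those above for each pair.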
Using these two lists, all the ratios between the human and mouse axes can be determined. For example, for the conserved/gene/non-repeat regions the ratio of human:mouse is 0.00155:0.00160 (the a,c-mm,g and a,c-hg,g lines above). The corresponding ribbon for this ratio is shown below.
Category assignment into repeat, gene and conserved regions was parametrized into three ranges for each criterion. These values were selected heuristically, to obtain a reasonable sample for each combination.
Given 3 parameters for each of the categories, the full comparison is represented by 27 hive plots. These plots are arranged on the cover as follows
The scale of the axes was logarithmic to maintain visibility of all categories.
The final cover designs for the cluster of 27 hive plots are shown below.
Celebrate `\pi` Day (March 14th) and go to brand new places. Together with Jake Lever, this year we shrink the world and play with road maps.
Streets from across the world are seamlessly stitched together. Finally, a halva shop on the same block!
Intriguing and personal patterns of urban development for each city appear in the Boonies, Burbs and Boutiques series.
No color—just lines. Lines from Marrakesh, Prague, Istanbul, Nice and other destinations for the mind and the heart.
We examine two very common supervised machine learning methods: linear support vector machines (SVM) and k-nearest neighbors (kNN).
SVM is often less computationally demanding than kNN and is easier to interpret, but it can identify only a limited set of patterns. On the other hand, kNN can find very complex patterns, but its output is more challenging to interpret.
We illustrate SVM using a data set in which points fall into two categories, which are separated in SVM by a straight line "margin". SVM can be tuned using a parameter that influences the width and location of the margin, permitting points to fall within the margin or on the wrong side of the margin. We then show how kNN relaxes explicit boundary definitions, such as the straight line in SVM, and how kNN too can be tuned to create more robust classification.
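The kNN idea is compact enough to show in code. The following is our own minimal sketch, not the article's implementation: a point is classified by majority vote among its k closest training points, and k is the tuning parameter (larger k smooths the decision boundary).

```python
# Minimal k-nearest-neighbors classifier (a sketch): classify a point
# by majority vote among its k closest training points.

from collections import Counter
import math

def knn_predict(train, point, k=3):
    """train: list of ((x, y), label); returns majority label of the
    k nearest training points by Euclidean distance."""
    by_distance = sorted(train, key=lambda p: math.dist(p[0], point))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# two well-separated clusters
train = [((0, 0), "A"), ((1, 0), "A"), ((0, 1), "A"),
         ((5, 5), "B"), ((6, 5), "B"), ((5, 6), "B")]
print(knn_predict(train, (0.5, 0.5)))  # near the A cluster
print(knn_predict(train, (5.5, 5.5)))  # near the B cluster
```

Note that no explicit boundary is ever computed: the "boundary" is implied by the training data and k, which is exactly why kNN can capture patterns a straight-line SVM margin cannot, at the cost of interpretability.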
Bzdok, D., Krzywinski, M. & Altman, N. (2018) Points of Significance: Machine learning: supervised methods. Nature Methods 15:5–6.
Bzdok, D., Krzywinski, M. & Altman, N. (2017) Points of Significance: Machine learning: a primer. Nature Methods 14:1119–1120.