One of my goals in life, which I can now say has been accomplished, is to make biology look like astrophysics. Call it my love for the Torino Impact Hazard Scale.
Recently, I was given an opportunity to attend to this (admittedly vague) goal when Linda Chang from Aly Karsan's group approached me with some microscopy photos of mouse veins. I was asked to do "something" with these images for a cover submission to accompany the manuscript.
When people see my covers, sometimes they ask "How did you do that?" Ok, actually they never ask this. But being a scientist, I'm trained to produce answers in anticipation of such questions. So, below, I show you how the image was constructed.
The image was published on the cover of PNAS (1 May 2012; 109(18)).
Below are a few of the images I had the option to work with. These are mouse embryonic blood vessels: a carotid artery is shown in the foreground, with endothelial cells in green, vascular smooth muscle cells in red and nuclei in blue.
Of course, as soon as I saw the images, I realized that there was very little that I needed to do to trigger the viewer's imagination. These photos were great!
Immediately I thought of two episodes of the original Star Trek series, "The Doomsday Machine" and "The Immunity Syndrome", as well as of images from the Hubble Space Telescope.
I thought it would be pretty easy to make the artery images look all-outer-spacey. They already looked it.
And then I saw the image below.
The background was created from the two images shown here. The second image was sampled three times, at different rotations.
The channel mixer was used to remove the green channel and leave only red and blue.
The next layer was composed of what looked like ribbons of blue gas. This was created by sampling the oval shapes from the source images. Here the red channel was a great source for cloud shapes, and this was the only channel that was kept. The hue was shifted to blue and a curve adjustment was applied to increase the contrast.
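For those who want to try these steps outside of Photoshop, here is a rough sketch of the equivalent channel operations in Python with NumPy and Pillow. The file names are hypothetical, and the smoothstep curve merely stands in for the curves adjustment I actually used.

```python
import numpy as np
from PIL import Image

# Load a source photo (hypothetical file name) as floats in [0, 1].
img = np.asarray(Image.open("vessel_source.png").convert("RGB"), dtype=float) / 255

# Channel mixer step: drop green, keep only red and blue.
background = img.copy()
background[..., 1] = 0

# Cloud layer: keep only the red channel as the source of cloud
# shapes, then shift the hue to blue by moving it into the blue channel.
clouds = np.zeros_like(img)
clouds[..., 2] = img[..., 0]

# Contrast boost: a smoothstep curve standing in for Photoshop's
# curves adjustment.
clouds = np.clip(3 * clouds**2 - 2 * clouds**3, 0, 1)

Image.fromarray((clouds * 255).astype(np.uint8)).save("clouds_blue.png")
```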
When the foreground and middle ground elements were combined, the result was already 40 parsecs away.
The foreground was created from the spectacular comet-like image of a mouse artery. Very little had to be done to make this element look good. It already looked good.
I applied a little blur using Alien Skin's Bokeh 2 to narrow the apparent depth of field, masked out elements at the bottom of the image and removed some of the green channel. The entire blue channel was removed altogether (this gave the tail of the comet a mottled, flame-like appearance).
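Bokeh 2 is a commercial Photoshop plugin, so as a stand-in the sketch below approximates the effect with a plain Gaussian blur blended through a gradient mask. The file names, blur radius and channel weights are hypothetical, not the settings I used.

```python
import numpy as np
from PIL import Image, ImageFilter

fg = Image.open("artery_comet.png").convert("RGB")  # hypothetical file name
blurred = fg.filter(ImageFilter.GaussianBlur(radius=6))

# Blend the sharp and blurred copies through a vertical gradient mask,
# so focus falls off toward the top and bottom of the frame.
w, h = fg.size
grad = np.abs(np.linspace(-1, 1, h))[:, None] * np.ones((1, w))
mask = Image.fromarray((grad * 255).astype(np.uint8), mode="L")
narrow_dof = Image.composite(blurred, fg, mask)

# Remove the blue channel entirely and attenuate green, which gives
# the comet tail its mottled, flame-like appearance.
r, g, b = narrow_dof.split()
g = g.point(lambda v: int(v * 0.6))
b = b.point(lambda v: 0)
Image.merge("RGB", (r, g, b)).save("foreground_comet.png")
```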
And here we have the final image.
Decision trees classify data by splitting it along the predictor axes into partitions with homogeneous values of the dependent variable. Unlike logistic or linear regression, CART does not develop a prediction equation. Instead, data are predicted by a series of binary decisions based on the boundaries of the splits. Decision trees are very effective and the resulting rules are readily interpreted.
Trees can be built using different metrics that measure how well the splits divide up the data classes: Gini index, entropy or misclassification error.
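For the curious, all three metrics have simple closed forms. The sketch below computes them from a node's class proportions; it is a generic illustration, not code from the column.

```python
import numpy as np

def gini(p):
    """Gini index: 1 minus the sum of squared class proportions."""
    p = np.asarray(p, dtype=float)
    return 1 - np.sum(p**2)

def entropy(p):
    """Shannon entropy of the class proportions (0 log 0 taken as 0)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def misclassification(p):
    """Misclassification error: 1 minus the majority class proportion."""
    return 1 - np.max(np.asarray(p, dtype=float))

# A perfectly mixed two-class node scores worst on all three measures:
p = [0.5, 0.5]
print(gini(p), entropy(p), misclassification(p))  # 0.5 1.0 0.5
```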
When the predicted variable is quantitative and not categorical, regression trees are used. Here, the data are still split, but now the predicted variable is estimated by the average within the split boundaries. Tree growth can be controlled using the complexity parameter, a measure of the relative improvement of each new split.
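As an illustration, here is a minimal regression tree in Python. I use scikit-learn purely as an example (the column does not prescribe a library); its `ccp_alpha` argument plays a role analogous to the complexity parameter, pruning splits whose relative improvement is too small.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Noisy sine data: a quantitative outcome over one predictor.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=(200, 1))
y = np.sin(x).ravel() + rng.normal(0, 0.2, size=200)

# Larger ccp_alpha prunes more aggressively, yielding fewer splits.
tree = DecisionTreeRegressor(ccp_alpha=0.01).fit(x, y)

# Each prediction is the average of y within its split's boundaries.
print(tree.predict([[2.5]]))
```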
Individual trees can be very sensitive to minor changes in the data and even better prediction can be achieved by exploiting this variability. Using ensemble methods, we can grow multiple trees from the same data.
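A random forest is one common ensemble of this kind, sketched below with scikit-learn (again, an illustrative choice rather than the column's own code). Each tree is grown on a bootstrap resample of the data, and averaging over the trees tames the instability of any single tree.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# Each of the 100 trees sees a bootstrap sample; the forest's vote
# averages away the sensitivity of individual trees to minor changes.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(forest.predict(X[:3]))
```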
Krzywinski, M. & Altman, N. (2017) Points of Significance: Classification and regression trees. Nature Methods 14:757–758.
Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Logistic regression. Nature Methods 13:541–542.
Altman, N. & Krzywinski, M. (2015) Points of Significance: Multiple linear regression. Nature Methods 12:1103–1104.
Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Classifier evaluation. Nature Methods 13:603–604.
Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Model selection and overfitting. Nature Methods 13:703–704.
Lever, J., Krzywinski, M. & Altman, N. (2016) Points of Significance: Regularization. Nature Methods 13:803–804.
The artwork was created in collaboration with my colleagues at the Genome Sciences Centre to celebrate the 5-year anniversary of the Personalized Oncogenomics Program (POG).
The Personalized Oncogenomics Program (POG) is a collaborative research study that includes many BC Cancer Agency oncologists, pathologists and other clinicians, along with Canada's Michael Smith Genome Sciences Centre, with support from the BC Cancer Foundation.
The aim of the program is to sequence, analyze and compare the genome of each patient's cancer (the entire DNA and RNA inside tumor cells) in order to understand what is enabling it to grow and to identify less toxic and more effective treatment options.
Principal component analysis (PCA) simplifies the complexity in high-dimensional data by reducing its number of dimensions.
To retain trends and patterns in the reduced representation, PCA finds linear combinations of canonical dimensions that maximize the variance of the projection of the data.
PCA is helpful in visualizing high-dimensional data, and scatter plots based on 2-dimensional PCA projections can reveal clusters.
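As a minimal illustration (scikit-learn is my choice here, not something the column mandates), the sketch below projects the classic iris data onto its first two principal components, the kind of 2-dimensional view that can reveal clusters.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)

# The two components are the linear combinations that maximize the
# variance of the projected data.
pca = PCA(n_components=2)
projected = pca.fit_transform(X)

print(projected.shape)                # (150, 2)
print(pca.explained_variance_ratio_)  # variance captured per component
```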
Altman, N. & Krzywinski, M. (2017) Points of Significance: Principal component analysis. Nature Methods 14:641–642.
Altman, N. & Krzywinski, M. (2017) Points of Significance: Clustering. Nature Methods 14:545–546.
To achieve a `k` index for a movement, you must perform `k` unbroken reps at `k`% of your 1RM (one-rep maximum).
The expected value for the `k` index is probably somewhere in the range of `k = 26` to `k = 35`, with higher values progressively more difficult to achieve.
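To make the arithmetic concrete, here is a tiny sketch (the numbers are hypothetical, not from my training log):

```python
def k_index_load(one_rep_max, k):
    """Weight for a k-index attempt: k% of the one-rep max,
    to be lifted for k unbroken reps."""
    return one_rep_max * k / 100

# With a hypothetical 100 kg 1RM, k = 30 calls for 30 unbroken reps at 30 kg.
print(k_index_load(100, 30))  # 30.0
```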
In my `k` index introduction article, I provide a detailed explanation, a rep scheme table and a WOD example.