If you are interested in color, explore my other color tools: Brewer palette resources, color blindness palettes and math, and an exhaustive list of 10,000 color names for all those times you couldn't distinguish between tan hide, sea buckthorn, orange peel, west side, sunshade, california and pizzaz.

Designing for Color blindness

Color choices and transformations for deuteranopia and other afflictions

Here, I help you understand color blindness and describe a process by which you can make good color choices when designing for accessibility.

The opposite of color blindness is seeing all the colors, and I can help you find 1,000 (or more) maximally distinct colors.

You can also delve into the mathematics behind the color blindness simulations and learn about copunctal points (the invisible color!) and lines of confusion.

Color blindness R code

R code for converting an RGB color for color blindness. For details see the math tab and the resources section for background reading.
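
In outline, the code below linearizes the sRGB components, maps them to LMS cone space, applies a correction matrix $C$ that rewrites the missing cone response as a combination of the two remaining ones, and then maps the result back to RGB. Writing $M$ for the combined linear RGB-to-LMS matrix, the transformation the code constructs is

$$ \mathbf{c}' = M^{-1} C\, M\, \mathbf{c}_{\mathrm{linear}}, \qquad M = M_{\mathrm{LMS}}\, M_{\mathrm{XYZ}}. $$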

---
title: 'RGB color correction for color blindness: protanopia, deuteranopia, tritanopia'
author: 'Martin Krzywinski'
web: http://mkweb.bcgsc.ca/colorblind
---

```{r}
gamma = 2.4
###############################################
# Linear RGB to XYZ
# https://en.wikipedia.org/wiki/SRGB
XYZ = matrix(c(0.4124564, 0.3575761, 0.1804375,
               0.2126729, 0.7151522, 0.0721750,
               0.0193339, 0.1191920, 0.9503041),
               byrow=TRUE,nrow=3)

SA = matrix(c(0.2126,0.7152,0.0722,
              0.2126,0.7152,0.0722,
              0.2126,0.7152,0.0722),byrow=TRUE,nrow=3)

###############################################
# XYZ to LMS, normalized to D65
# https://en.wikipedia.org/wiki/LMS_color_space
# Hunt, Normalized to D65
LMSD65 = matrix(c( 0.4002, 0.7076, -0.0808,
                   -0.2263, 1.1653,  0.0457,
                    0     , 0     ,  0.9182),
                   byrow=TRUE,nrow=3)
# Hunt, equal-energy illuminants
LMSEQ = matrix(c( 0.38971, 0.68898,-0.07868,
                 -0.22981, 1.18340, 0.04641,
                  0      , 0      , 1      ),
                byrow=TRUE,nrow=3)
# CIECAM97
LMSCAM97 = matrix(c(  0.8951,  0.2664, -0.1614,
                     -0.7502,  1.7135,  0.0367,
                      0.0389, -0.0685,  1.0296),
                  byrow=TRUE,nrow=3)
# CIECAM02
LMSCAM02 = matrix(c( 0.7328, 0.4296, -0.1624,
                    -0.7036, 1.6975,  0.0061,
                     0.0030, 0.0136,  0.9834),
                  byrow=TRUE,nrow=3)

###############################################
# Determine the color blindness correction in LMS space
# under the condition that the correction does not
# alter the appearance of white as well as 
# blue (for protanopia/deuteranopia) or red (for tritanopia).
# For achromatopsia, greyscale conversion is applied
# to the linear RGB values.
getcorrection = function(LMS,type="p",g=gamma) {
  red = matrix(c(255,0,0),nrow=3)
  blue = matrix(c(0,0,255),nrow=3)
  white = matrix(c(255,255,255),nrow=3)
  LMSr = LMS %*% XYZ %*% apply(red,1:2,linearize,g)
  LMSb = LMS %*% XYZ %*% apply(blue,1:2,linearize,g)
  LMSw = LMS %*% XYZ %*% apply(white,1:2,linearize,g)
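  # For each dichromacy below, solve a 2x2 system for (a,b) so that the missing
  # cone response is rewritten as a combination of the other two (e.g. for
  # protanopia, L = a*M + b*S), while the anchor colors (blue and white, or red
  # and white for tritanopia) keep their LMS coordinates.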
  if(type == "p") {
    x = matrix(c(LMSb[2,1],LMSb[3,1],
                  LMSw[2,1],LMSw[3,1]),byrow=T,nrow=2)
    y = matrix(c(LMSb[1,1],LMSw[1,1]),nrow=2)
    ab = solve(x) %*% y
    C = matrix(c(0,ab[1,1],ab[2,1],0,1,0,0,0,1),byrow=T,nrow=3)
  } else if (type == "d") {
    x = matrix(c(LMSb[1,1],LMSb[3,1],
                  LMSw[1,1],LMSw[3,1]),byrow=T,nrow=2)
    y = matrix(c(LMSb[2,1],LMSw[2,1]),nrow=2)
    ab = solve(x) %*% y
    C = matrix(c(1,0,0,ab[1,1],0,ab[2,1],0,0,1),byrow=T,nrow=3)
  } else if (type == "t") {
    x = matrix(c(LMSr[1,1],LMSr[2,1],
                  LMSw[1,1],LMSw[2,1]),byrow=T,nrow=2)
    y = matrix(c(LMSr[3,1],LMSw[3,1]),nrow=2)
    ab = solve(x) %*% y
    C = matrix(c(1,0,0,0,1,0,ab[1,1],ab[2,1],0),byrow=T,nrow=3)
  } else if (type == "a" | type == "g") {
    C = matrix(c(0.2126,0.7152,0.0722,
                 0.2126,0.7152,0.0722,
                 0.2126,0.7152,0.0722),byrow=TRUE,nrow=3)
  }
  return(C)
}

# rgb is a column vector
convertcolor = function(rgb,LMS=LMSD65,type="d",g=gamma) {
  C = getcorrection(LMS,type)
  if(type == "a" | type == "g") {
    T = SA
  } else {
    M = LMS %*% XYZ
    Minv = solve(M)
    T = Minv %*% C %*% M
  }
  print(T)
  rgb_converted = T %*% apply(rgb,1:2,linearize,g)
  return(apply(rgb_converted,1:2,delinearize,g))
}

# This function implements the method by Vienot, Brettel, Mollon 1999.
# The approach is the same, just the values are different.
# http://vision.psychol.cam.ac.uk/jdmollon/papers/colourmaps.pdf
convertcolor2 = function(rgb,type="d",g=2.2) {
  xyz = matrix(c(40.9568, 35.5041, 17.9167,
                 21.3389, 70.6743, 7.98680,
                 1.86297, 11.4620, 91.2367),byrow=T,nrow=3)
  lms = matrix(c(0.15514, 0.54312, -0.03286,
                 -0.15514, 0.45684,0.03286,
                 0,0,0.01608),byrow=T,nrow=3)
  rgb = (rgb/255)**g
  if(type=="p") {
    S = matrix(c(0,2.02344,-2.52581,0,1,0,0,0,1),byrow=T,nrow=3)
    rgb = 0.992052*rgb+0.003974
  } else if(type=="d") {
    S = matrix(c(1,0,0,0.494207,0,1.24827,0,0,1),byrow=T,nrow=3)
    rgb = 0.957237*rgb+0.0213814
  } else {
    stop("Only type p,d defined for this function.")
  }
  M = lms %*% xyz
  T = solve(M) %*% S %*% M
  print(T)
  rgb = T %*% rgb
  rgb = 255*rgb**(1/g)
  return(rgb)
}

###############################################
# RGB to Lab
rgb2lab = function(rgb,g=gamma) {
  rgb = apply(rgb,1:2,linearize,g)
  xyz = XYZ %*% rgb
  delta = 6/29
  xyz = xyz / (c(95.0489,100,108.8840)/100)
  f = function(t) {
    if(t > delta**3) {
      return(t**(1/3))
    } else {
      return (t/(3*delta**2) + 4/29)
    }
  }
  L = 116*f(xyz[2]) - 16
  a = 500*(f(xyz[1]) - f(xyz[2]))
  b = 200*(f(xyz[2]) - f(xyz[3]))
  return(matrix(c(L,a,b),nrow=3))
}

# CIE76 (https://en.wikipedia.org/wiki/Color_difference)
deltaE = function(rgb1,rgb2) {
  lab1 = rgb2lab(rgb1)
  lab2 = rgb2lab(rgb2)
  return(sqrt(sum((lab1-lab2)**2)))
}

clip = function(v) {
  return(max(min(v,1),0))
}

###############################################
# RGB to/from linear RGB
#https://en.wikipedia.org/wiki/SRGB
linearize = function(v,g=gamma) {
  # v is an sRGB component in 0-255; the sRGB cutoff applies to the normalized value
  if(v/255 <= 0.04045) {
    return(v/255/12.92)
  } else {
    return(((v/255 + 0.055)/1.055)**g)
  }
}

delinearize = function(v,g=gamma) {
  if(v <= 0.003130805) {
    return(255*12.92*clip(v))
  } else {
    return(255*clip(1.055*(clip(v)**(1/g))-0.055))
  }
}
pretty = function(x) {
  noquote(formatC(x,digits=10,format="f",width=9))
}

# a dark red
rgb1 = matrix(c(225,0,30),nrow=3)
# dark green
rgb2 = matrix(c(60,135,0),nrow=3)
# simulate deuteranopia
convertcolor(rgb1,type="d")
convertcolor(rgb2,type="d")
# get color distance before and after simulation
deltaE(rgb1,rgb2)
deltaE(convertcolor(rgb1,type="d"),convertcolor(rgb2,type="d"))
# transformation matrices for each color blindness type
M = LMSD65 %*% XYZ
pretty(solve(M) %*% getcorrection(LMSD65,"p") %*% M)
pretty(solve(M) %*% getcorrection(LMSD65,"d") %*% M)
pretty(solve(M) %*% getcorrection(LMSD65,"t") %*% M)
pretty(SA)
# method by Vienot, Brettel, Mollon, 1999
convertcolor2(rgb1,type="d",g=2.2)
convertcolor2(rgb2,type="d",g=2.2)
```

Sample session and output:

```
# a dark red
rgb1 = matrix(c(225,0,30),nrow=3)

# dark green
rgb2 = matrix(c(60,135,0),nrow=3)

# simulate deuteranopia
convertcolor(rgb1,type="d")
         [,1]
[1,] 136.7002
[2,] 136.7002
[3,]   0.0000
convertcolor(rgb2,type="d")
          [,1]
[1,] 116.76071
[2,] 116.76071
[3,]  16.73263
# get color distance before and after simulation
deltaE(rgb1,rgb2)
[1] 116.9496
deltaE(convertcolor(rgb1,type="d"),convertcolor(rgb2,type="d"))
[1] 12.72204
# transformation matrices for each color blindness type
M = LMSD65 %*% XYZ

pretty(solve(M) %*% getcorrection(LMSD65,"p") %*% M)
     [,1]          [,2]         [,3]
[1,] 0.1705569911  0.8294430089 0.0000000000
[2,] 0.1705569911  0.8294430089 -0.0000000000
[3,] -0.0045171442 0.0045171442 1.0000000000

pretty(solve(M) %*% getcorrection(LMSD65,"d") %*% M)
     [,1]          [,2]         [,3]
[1,] 0.3306600735  0.6693399265 -0.0000000000
[2,] 0.3306600735  0.6693399265 0.0000000000
[3,] -0.0278553826 0.0278553826 1.0000000000

pretty(solve(M) %*% getcorrection(LMSD65,"t") %*% M)
     [,1]          [,2]         [,3]
[1,] 1.0000000000  0.1273988634 -0.1273988634
[2,] -0.0000000000 0.8739092990 0.1260907010
[3,] 0.0000000000  0.8739092990 0.1260907010

pretty(SA)
     [,1]         [,2]         [,3]
[1,] 0.2126000000 0.7152000000 0.0722000000
[2,] 0.2126000000 0.7152000000 0.0722000000
[3,] 0.2126000000 0.7152000000 0.0722000000
# method by Vienot, Brettel, Mollon, 1999
convertcolor2(rgb1,type="d",g=2.2)
            [,1]       [,2]          [,3]
[1,]  0.29275003 0.70724967 -2.978356e-08
[2,]  0.29275015 0.70724997  1.232823e-08
[3,] -0.02233659 0.02233658  1.000000e+00
          [,1]
[1,] 131.81223
[2,] 131.81226
[3,]  36.37274
convertcolor2(rgb2,type="d",g=2.2)
            [,1]       [,2]          [,3]
[1,]  0.29275003 0.70724967 -2.978356e-08
[2,]  0.29275015 0.70724997  1.232823e-08
[3,] -0.02233659 0.02233658  1.000000e+00
          [,1]
[1,] 122.71798
[2,] 122.71801
[3,]  48.34316
```

news + thoughts

Convolutional neural networks

Thu 17-08-2023

Nature uses only the longest threads to weave her patterns, so that each small piece of her fabric reveals the organization of the entire tapestry. – Richard Feynman

Following up on our Neural network primer column, this month we explore a different kind of network architecture: a convolutional network.

The convolutional network replaces the hidden layer of a fully connected network (FCN) with one or more filters (a kind of neuron that looks at the input within a narrow window).
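
As a rough sketch (not code from the column), a single 1D convolutional filter is just a short weight vector dotted against every window of its input; the hypothetical conv1d helper below slides a length-3 filter over a numeric sequence and applies a ReLU activation.

```{r}
# Minimal sketch of one 1D convolutional filter (illustrative only).
# x is the input sequence, w the filter weights, b a bias term.
conv1d = function(x, w, b = 0) {
  k = length(w)
  n = length(x) - k + 1
  out = numeric(n)
  for (i in 1:n) {
    # dot product of the filter with one window of the input
    out[i] = sum(w * x[i:(i + k - 1)]) + b
  }
  pmax(out, 0)  # ReLU activation
}

# a filter that responds to local peaks in the input
conv1d(c(0, 1, 3, 1, 0, 0, 2, 5, 2, 0), w = c(-1, 2, -1))
```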

Nature Methods Points of Significance column: Convolutional neural networks. (read)

Even though convolutional networks have far fewer neurons than an FCN, they can perform substantially better for certain kinds of problems, such as sequence motif detection.

Derry, A., Krzywinski, M. & Altman, N. (2023) Points of significance: Convolutional neural networks. Nature Methods 20.

Background reading

Derry, A., Krzywinski, M. & Altman, N. (2023) Points of significance: Neural network primer. Nature Methods 20:165–167.

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of significance: Logistic regression. Nature Methods 13:541–542.

Neural network primer

Tue 10-01-2023

Nature is often hidden, sometimes overcome, seldom extinguished. —Francis Bacon

In the first of a series of columns about neural networks, we introduce them with an intuitive approach that draws from our discussion about logistic regression.

Nature Methods Points of Significance column: Neural network primer. (read)

Simple neural networks are just a chain of linear regressions. And, although neural network models can get very complicated, their essence can be understood in terms of relatively basic principles.

We show how neural network components (neurons) can be arranged in the network and discuss the ideas of hidden layers. Using a simple data set we show how even a 3-neuron neural network can already model relatively complicated data patterns.
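
As a concrete (hypothetical) illustration of this point, the sketch below chains two hidden neurons and one output neuron, each just a logistic regression on its inputs, and already produces a bump-shaped response that no single linear model can fit.

```{r}
# Minimal sketch of a 3-neuron network (2 hidden + 1 output) with hand-picked
# weights, purely for illustration.
sigmoid = function(z) 1 / (1 + exp(-z))

tiny_net = function(x) {
  h1 = sigmoid( 10 * (x - 0.3))  # turns on for x > 0.3
  h2 = sigmoid(-10 * (x - 0.7))  # turns on for x < 0.7
  sigmoid(6 * (h1 + h2) - 9)     # output is high only when both are on
}

x = seq(0, 1, by = 0.1)
round(tiny_net(x), 2)  # low at the ends, high near x = 0.5
```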

Derry, A., Krzywinski, M. & Altman, N. (2023) Points of significance: Neural network primer. Nature Methods 20:165–167.

Background reading

Lever, J., Krzywinski, M. & Altman, N. (2016) Points of significance: Logistic regression. Nature Methods 13:541–542.

Cell Genomics cover

Mon 16-01-2023

Our cover on the 11 January 2023 Cell Genomics issue depicts the process of determining the parent-of-origin using differential methylation of alleles at imprinted regions (iDMRs), imagined as a circuit.

Designed in collaboration with Carlos Urzua.

Our Cell Genomics cover depicts parent-of-origin assignment as a circuit (volume 3, issue 1, 11 January 2023). (more)

Akbari, V. et al. Parent-of-origin detection and chromosome-scale haplotyping using long-read DNA methylation sequencing and Strand-seq (2023) Cell Genomics 3(1).

Browse my gallery of cover designs.

A catalogue of my journal and magazine cover designs. (more)

Science Advances cover

Thu 05-01-2023

My cover design on the 6 January 2023 Science Advances issue depicts DNA sequencing read translation in high-dimensional space. The image shows 672 bases of sequencing barcodes, generated by three different single-cell RNA sequencing platforms, encoded as oriented triangles on the faces of three 7-dimensional cubes.

More details about the design.

My Science Advances cover that encodes sequence onto hypercubes (volume 9, issue 1, 6 January 2023). (more)

Kijima, Y. et al. A universal sequencing read interpreter (2023) Science Advances 9.

Browse my gallery of cover designs.

A catalogue of my journal and magazine cover designs. (more)

Regression modeling of time-to-event data with censoring

Thu 17-08-2023

If you sit on the sofa for your entire life, you’re running a higher risk of getting heart disease and cancer. —Alex Honnold, American rock climber

In a follow-up to our Survival analysis — time-to-event data and censoring article, we look at how regression can be used to account for additional risk factors in survival analysis.

We explore accelerated failure time regression (AFTR) and the Cox Proportional Hazards model (Cox PH).
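
As a quick sketch (not code from the column), both models can be fit with R's survival package; the lung data set below is just the package's built-in example, not data from the article.

```{r}
# Sketch: regression on time-to-event data with censoring.
library(survival)

# Cox proportional hazards: covariates scale the hazard, baseline left unspecified
cox_fit = coxph(Surv(time, status) ~ age + sex, data = lung)

# Accelerated failure time: covariates stretch or shrink survival time
aft_fit = survreg(Surv(time, status) ~ age + sex, data = lung, dist = "weibull")

summary(cox_fit)
summary(aft_fit)
```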

Nature Methods Points of Significance column: Regression modeling of time-to-event data with censoring. (read)

Dey, T., Lipsitz, S.R., Cooper, Z., Trinh, Q., Krzywinski, M. & Altman, N. (2022) Points of significance: Regression modeling of time-to-event data with censoring. Nature Methods 19:1513–1515.

