Martin Krzywinski is a staff scientist at Canada’s Michael Smith Genome Sciences Centre.
Naomi Altman is a Professor of Statistics at The Pennsylvania State University.
Paul Blainey is an Assistant Professor of Biological Engineering at MIT and Core Member of the Broad Institute.
Danilo Bzdok is an Assistant Professor at the Department of Psychiatry, RWTH Aachen University, Germany, and a Visiting Professor at INRIA/Neurospin Saclay in France.
Kiranmoy Das is a faculty member at the Indian Statistical Institute in Kolkata, India.
Luca Greco is an Assistant Professor of Statistics at the University of Sannio in Benevento, Italy.
Jasleen Grewal is a graduate student in the Jones lab at Canada's Michael Smith Genome Sciences Centre.
Anthony Kulesa is a graduate student in the Department of Biological Engineering at MIT.
Jake Lever is a Postdoctoral Research Fellow in Bioengineering at Stanford University in Stanford, California, USA.
George Luta is an Associate Professor of Biostatistics at Georgetown University in Washington, DC, USA.
Jorge López Puga is a Professor of Research Methodology at UCAM Universidad Católica de Murcia.
Byran Smucker is an Associate Professor of Statistics at Miami University in Oxford, OH, USA.
Bernhard Voelkl is a Postdoctoral Research Fellow in the Division of Animal Welfare at the Veterinary Public Health Institute, University of Bern, Bern, Switzerland.
Hanno Würbel is a Professor in the Division of Animal Welfare at the Veterinary Public Health Institute, University of Bern, Bern, Switzerland.
Nature is often hidden, sometimes overcome, seldom extinguished. —Francis Bacon
In the first of a series of columns about neural networks, we introduce them with an intuitive approach that draws from our discussion about logistic regression.
Simple neural networks are essentially a chain of logistic regressions and, although neural network models can become very complicated, their essence can be understood in terms of relatively basic principles.
We show how neural network components (neurons) can be arranged in the network and discuss the idea of hidden layers. Using a simple data set, we show how even a 3-neuron neural network can model relatively complicated data patterns.
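To make this concrete, here is a minimal sketch (not code from the column) of such a 3-neuron network in NumPy. Each neuron is a logistic regression on its inputs; the weights below are hypothetical, hand-picked values that reproduce an XOR-like pattern, which no single regression can fit on its own.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def neuron(x, w, b):
    # one neuron = a logistic regression on its inputs
    return sigmoid(x @ w + b)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# hidden layer: two neurons, each seeing both inputs
h1 = neuron(X, np.array([ 20.0,  20.0]), -10.0)   # behaves like OR
h2 = neuron(X, np.array([-20.0, -20.0]),  30.0)   # behaves like NAND
H = np.column_stack([h1, h2])

# output neuron combines the two hidden activations (behaves like AND)
y_hat = neuron(H, np.array([20.0, 20.0]), -30.0)

print(np.round(y_hat, 3))   # approximately [0, 1, 1, 0] -- the XOR pattern
```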
Derry, A., Krzywinski, M. & Altman, N. (2023) Points of significance: Neural network primer. Nature Methods 20.
Lever, J., Krzywinski, M. & Altman, N. (2016) Points of significance: Logistic regression. Nature Methods 13:541–542.
Our cover for the 11 January 2023 issue of Cell Genomics depicts the process of determining the parent of origin using differential methylation of alleles at imprinted regions (iDMRs), imagined as a circuit.
Designed in collaboration with Carlos Urzua.
Akbari, V. et al. (2023) Parent-of-origin detection and chromosome-scale haplotyping using long-read DNA methylation sequencing and Strand-seq. Cell Genomics 3(1).
Browse my gallery of cover designs.
My cover design for the 6 January 2023 issue of Science Advances depicts DNA sequencing read translation in high-dimensional space. The image shows how 672 bases of sequencing barcodes generated by three different single-cell RNA sequencing platforms are encoded as oriented triangles on the faces of three 7-dimensional cubes.
More details about the design.
Kijima, Y. et al. (2023) A universal sequencing read interpreter. Science Advances 9.
Browse my gallery of cover designs.
If you sit on the sofa for your entire life, you’re running a higher risk of getting heart disease and cancer. —Alex Honnold, American rock climber
In a follow-up to our Survival analysis — time-to-event data and censoring article, we look at how regression can be used to account for additional risk factors in survival analysis.
We explore accelerated failure time regression (AFTR) and the Cox proportional hazards (Cox PH) model.
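As a minimal sketch of what such a regression looks like in practice (not the analysis from the column), the snippet below fits both models with the third-party lifelines package, using its bundled Rossi recidivism data set; 'week' is the time-to-event column, 'arrest' is the event indicator, and the remaining columns act as risk factors.

```python
from lifelines import CoxPHFitter, WeibullAFTFitter
from lifelines.datasets import load_rossi

df = load_rossi()   # time column 'week', event column 'arrest', plus covariates

# Cox proportional hazards: covariates scale the hazard multiplicatively
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
cph.print_summary()

# Weibull accelerated failure time: covariates stretch or shrink survival time
aft = WeibullAFTFitter()
aft.fit(df, duration_col="week", event_col="arrest")
aft.print_summary()
```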
Dey, T., Lipsitz, S.R., Cooper, Z., Trinh, Q., Krzywinski, M. & Altman, N. (2022) Points of significance: Regression modeling of time-to-event data with censoring. Nature Methods 19.