Memory

Memory is a fascinating topic. I recently had a bike accident, and I don’t recall any of the events leading up to it or immediately after it. That entire series of events (~30 minutes) was completely wiped from my brain.

I do recall taking a picture of a rose (a memory formed prior to the accident). The next thing I remember, I’m in an ambulance with a huge bruise on my face, fractured arms, and pain all over my skull.

This started me thinking about what drives memory formation. Alzheimer’s disease is the phenotype we typically think of when we consider memory loss. Another is chronic traumatic encephalopathy, a disorder attributed to repeated concussions.

With my colleague Mark Daly, I came across a condition referred to as Transient Global Amnesia (TGA), a neurological disorder whose defining characteristic is a temporary (~24 hours) but total disruption of short-term memory.

TGA from a clinician’s perspective

For complex traits like memory, we have often found that studying related traits with lower prevalence improves our ability to dissect the molecular and genetic underpinnings; a classic case study is coronary artery disease and familial hypercholesterolemia. Through the analysis of multiple population biobanks, we found that transient global amnesia has a pretty striking heritable component, and the genetics points to very specific cell types, namely pericytes and mural cells.

Pericyte cells are increasingly recognized for their roles in brain function, including memory. Their influence on memory primarily relates to their roles in maintaining the blood-brain barrier (BBB), supporting neurovascular coupling, and contributing to the health of brain vasculature.

Figure: A simplified neurovascular unit (NVU) diagram showing the interactive cellular network at the level of brain capillaries that comprises vascular cells, glial cells, and neurons. Intricate cell–cell communication and signal transduction mechanisms of NVU cell types are highly controlled to regulate numerous functions in the central nervous system. Source: Nature Neuroscience.

Our manuscript, Genetics of transient amnesia highlights a vascular role in memory, is publicly available on medRxiv.

Here are the top nine genetic variant associations:

| Variant ID | rsid | Nearest gene(s) | OR | META p-value | Cell type expression |
| --- | --- | --- | --- | --- | --- |
| 1-201926536-T-C | rs72744832 | LMOD1 | 1.41 | 7.19E-27 | Pericyte cell |
| 14-105488368-C-T | rs55633823 | CRIP1 | 1.19 | 1.93E-11 | Vascular endothelial cells in cerebellum |
| 19-38694171-G-A | rs7251903 | ACTN4 | 0.87 | 1.24E-10 | Vascular endothelial cells in cerebellum |
| 2-203331895-C-T | rs116426890 | many | 1.22 | 4.90E-10 | |
| 6-1366655-G-T | rs192238573 | FOXF2 | 0.79 | 5.52E-10 | Mural cell (brain) |
| 16-15814272-T-G | rs12919510 | MYH11 | 1.14 | 3.00E-09 | Pericyte cell |
| 11-100639014-A-T | rs10894996 | ARHGAP42 | 0.87 | 4.06E-09 | |
| 11-47637583-G-A | rs7118178 | many | 1.16 | 4.87E-09 | |
| 10-58170232-G-A | rs1769016 | IPMK | 0.86 | 1.45E-08 | |

Table 1. FinnGen, All of Us, and UK Biobank meta-analysis associated variants with index SNPs.

The protein-altering variant p.Ala58Val in CRIP1 is associated with transient global amnesia.

Spherical Harmonics

I learned about Spherical Harmonics in a Topics in Nonparametric Statistics course, where they can be used to test for multivariate normality. Spherical Harmonics are a set of orthogonal functions defined on the surface of a sphere; they are used in various fields of science and engineering to represent complex functions over spherical domains. Mathematically, Spherical Harmonics are solutions to the angular part of Laplace’s equation in spherical coordinates.

Laplace’s equation is given by:

$$
\nabla^2 \phi = 0
$$

In Cartesian coordinates, if \(\phi\) is a function of \(x\), \(y\), and \(z\), the Laplacian operator is:

$$
\nabla^2 \phi = \frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2} + \frac{\partial^2 \phi}{\partial z^2}
$$
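
For example, \(\phi(x, y, z) = x^2 - y^2\) satisfies Laplace’s equation: its second partial derivatives are \(2\), \(-2\), and \(0\), which sum to zero.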

Mathematical Definition

The Spherical Harmonics \( Y_{l}^{m}(\theta, \phi) \) are given by: \[ Y_{l}^{m}(\theta, \phi) = \sqrt{\frac{(2l+1)}{4\pi} \frac{(l-m)!}{(l+m)!}} \cdot P_{l}^{m}(\cos \theta) \cdot e^{im\phi} \] where:

– \( l \) is the degree of the harmonic,

– \( m \) is the order of the harmonic,

– \( \theta \) is the polar angle,

– \( \phi \) is the azimuthal angle,

– \( P_{l}^{m} \) are the associated Legendre polynomials.
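
As a quick numerical check of the definition above, here is a minimal Python sketch (assuming NumPy and SciPy are available; `sph_harm` and `lpmv` are SciPy functions, not something from this post) comparing the explicit formula with SciPy’s built-in spherical harmonics for \(l = 3\), \(m = 2\):

```python
import numpy as np
from math import factorial
from scipy.special import sph_harm, lpmv

l, m = 3, 2

# Sample a few polar (theta) and azimuthal (phi) angles
theta = np.linspace(0.1, np.pi - 0.1, 7)   # polar angle
phi = np.linspace(0.0, 2.0 * np.pi, 7)     # azimuthal angle
T, P = np.meshgrid(theta, phi)

# Explicit formula: normalization * associated Legendre polynomial * e^{i m phi}
norm = np.sqrt((2 * l + 1) / (4 * np.pi) * factorial(l - m) / factorial(l + m))
manual = norm * lpmv(m, l, np.cos(T)) * np.exp(1j * m * P)

# SciPy reference; note its argument order: sph_harm(m, l, azimuthal, polar)
reference = sph_harm(m, l, P, T)

print(np.allclose(manual, reference))      # expected output: True
```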

Applications of Spherical Harmonics

Spherical Harmonics are used in various fields, including:

Quantum Mechanics: To describe the angular part of wave functions of particles in spherical potentials (e.g., electrons in atoms).

Geophysics and Astronomy: For modeling gravitational and magnetic fields, and planetary surfaces.

Computer Graphics: In environment mapping and image-based lighting.

Acoustics: For sound field analysis and room acoustics modeling.

We can visualize Spherical Harmonics in a 3D plot. In the plot below, we represent a specific Spherical Harmonic function \(Y_3^2\) with \(l=3\) and \(m=2\).
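
Below is a minimal sketch of how such a plot could be generated (assuming Matplotlib and SciPy; the resolution and colormap are arbitrary choices on my part):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import sph_harm

l, m = 3, 2
polar = np.linspace(0, np.pi, 100)         # polar angle theta
azimuth = np.linspace(0, 2 * np.pi, 100)   # azimuthal angle phi
polar, azimuth = np.meshgrid(polar, azimuth)

# SciPy's argument order is sph_harm(m, l, azimuthal, polar)
Y = sph_harm(m, l, azimuth, polar)
r = np.abs(Y)                              # use |Y| as the radius so the lobes are visible

# Convert spherical coordinates to Cartesian for plotting
x = r * np.sin(polar) * np.cos(azimuth)
y = r * np.sin(polar) * np.sin(azimuth)
z = r * np.cos(polar)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(x, y, z, cmap="viridis")
ax.set_title(r"$|Y_3^2(\theta, \phi)|$")
plt.show()
```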

The James Webb Telescope

I didn’t appreciate how much work was put into it. Launched on December 25, 2021, it really is a remarkable work of human innovation. It wouldn’t surprise me if, in the next decade, we have an announcement of the discovery of extraterrestrial life. Galaxies form stars and planets at a rapid rate, breeding grounds for life.

Surfing the waves of data

This Winter quarter I plan to teach a new class focused on learning from wave-like data. It was motivated by experiences I’ve had with waves that carry information which can be mathematically formulated.

Spherical Harmonics, first introduced to me in a class at MIT by Prof. Richard Dudley (who is described in a previous post), have multiple applications. In Nonparametric Statistics, for example, Spherical Harmonics play a role: a paper by Quiroz in 2001 introduces them as a framework for hypothesis testing. In an exposition on “Tests for Multivariate Normality” that I wrote for a Nonparametric Statistics class taught by Prof. Dudley, I integrated some of the concepts introduced in that paper. (I was fortunate enough to be lectured by Prof. Dudley one-on-one, and for that experience I will forever be grateful.) Spherical harmonics are mathematical functions that arise in the solution of the angular part of Laplace’s equation in spherical coordinates. They are used to describe wave-like properties on the surface of a sphere.

This course is designed to bring students into the world where music, quantum computing, wavelets, and DNA converge through the lens of wave data analytics, exploring the profound connections between these seemingly disparate fields, all united by the fundamental principles of wave theory.

The Broad Institute

I had a formative experience at the Broad Institute, circa 2003-2009. It was a remarkable place to learn about human genetics, and they are continuously pushing the boundaries of genomics.

If you want to learn more, feel free to visit https://broadinstitute.org/. Eric Lander was one of the first faculty I met there, when he taught a group of high school students what genetics was all about. He always had an image of Wilt Chamberlain and a horse jockey to describe variation in traits, which was captivating to me as a student since I was a huge basketball fan.

Mark Daly and David Altshuler were additional co-advisors I had in my early days, along with Prof. John Rioux and Emily Walsh. I continue to publish many papers with Mark.

A bit about me:

I’m an Assistant Professor in the Department of Biomedical Data Science at Stanford University.

I’m originally from Managua, Nicaragua, by way of Bonanza, a beautiful gold-mining town in the northeastern region of the country.

The painting on the website is from my favorite artist, who is also from Bonanza, Nicaragua. His name is Augusto Silva Gomez, and he is extremely talented; I strongly recommend his work to anyone. I call the piece Transformers. It captures the diversity of humans, the many elements of our genetic makeup, and energy emanating from volcanoes. I love the colors he uses and his artistic style. In addition, it captures the Americas, where I was raised. This is the picture of him, with the painting that first drew my attention to his art in the background; it was originally used to design the Legacy Flor de Caña bottle.

I trained in Mathematics at the Massachusetts Institute of Technology, a wonderful place with wonderful faculty.

There, I had the privilege of learning from many brilliant minds. One in particular stood out for me: Prof. Richard M. Dudley, from whom I learned rigorous statistical and mathematical thinking. It was also the perfect place to be humbled. In the first few weeks at MIT everyone shows up confident, but you quickly realize how very little you know once the exams start. The institute knows that, which is the reason the first semester is Pass/Fail. I think the average score on many exams in those first weeks was around 10-12 out of 100.

BTW, if you want to learn more about the importance of Prof. Dudley’s work, watch the presentation for the Abel Prize awarded to Prof. Michel Talagrand.

I spent a great deal of time at the Broad Institute. At the Broad I learned about human genetics, data exploration, and coding. I’ll have more to say about that later.

For my PhD I went to Oxford University. You can read a bit about my time there on the Nuffield Department of Clinical Medicine website.

I am currently a faculty member at Stanford University, a phenomenal campus with fantastic colleagues and an entrepreneurial spirit unlike any other university.

PS. I strongly recommend watching the Abel Prize announcement.

Approximating the product of two large matrices using singular value decomposition

Approximating the product of two large matrices using singular value decomposition may be analogous to a Taylor series expansion, where the first-order terms matter most. The cascade plots from genome-wide association studies, for example, look as if genetic variants are being ranked by effect size.

I asked ChatGPT whether such a phenomenon is possible when multiplying very large matrices. Here is what it says about the complexity of the computations when doing ordinary matrix multiplication versus matrix multiplication approximated with a truncated SVD (TSVD). If you have recommended readings, please e-mail me, as this is new to me.

ChatGPT interaction

If you want to multiply two matrices, can you take the SVD of one matrix and just multiply the first two components to get an approximation? What is the complexity of that compared to multiplying the two matrices fully directly?

Yes, you can approximate the product of two matrices by leveraging the Singular Value Decomposition (SVD) of one of the matrices. Here’s a step-by-step explanation:

SVD and Matrix Multiplication Approximation

  1. SVD of Matrix \(A\):
    Suppose you want to multiply matrices \(A\) and \(B\). First, take the SVD of \(A\):
    $$ A = U \Sigma V^T$$
    where:
  • \(U\) is an \(m \times m\) orthogonal matrix,
  • \(\Sigma\) is an \(m \times n\) diagonal matrix with singular values on the diagonal,
  • \(V\) is an \(n \times n\) orthogonal matrix.
  2. Truncated SVD for Approximation:
    If you only want an approximation, you can use a truncated SVD. Let \(k\) be the number of singular values you retain (with \(k \leq \min(m,n)\)). The truncated SVD of \(A\) can be written as:
    $$A \approx U_k \Sigma_k V_k^T$$
    where \(U_k\) contains the first \(k\) columns of \(U\), \(\Sigma_k\) is the \(k \times k\) leading principal submatrix of \(\Sigma\), and \(V_k\) contains the first \(k\) columns of \(V\).
  3. Approximating the Product \(AB\):
    Now, multiply the truncated components:
    $$ AB \approx (U_k \Sigma_k V_k^T)B = U_k \Sigma_k (V_k^T B)$$
    Here, \(V_k^T B\) involves an \(n \times k\) matrix \(V_k\) and an \(n \times p\) matrix \(B\). The product \(V_k^T B\) results in a \(k \times p\) matrix. Then, you multiply \(U_k \Sigma_k\) (which is \(m \times k\)) by the \(k \times p\) result from the previous step.
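
As an aside from the ChatGPT reply, here is a minimal NumPy sketch of the three steps above (the matrix sizes and the rank \(k\) are arbitrary illustration values, not from the original exchange):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, p, k = 500, 400, 300, 20

# A is built to be approximately rank k so that truncation gives a good approximation
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n)) + 0.01 * rng.standard_normal((m, n))
B = rng.standard_normal((n, p))

# Step 1: SVD of A (full_matrices=False gives the thin/economy factorization)
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Step 2: keep only the first k singular values/vectors
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

# Step 3: approximate AB as U_k Sigma_k (V_k^T B); note the cheap k x p intermediate
approx = (U_k * s_k) @ (Vt_k @ B)

exact = A @ B
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
print(f"relative error of the rank-{k} approximation: {rel_err:.2e}")
```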

Complexity Analysis

Direct Matrix Multiplication

Complexity: \(O(mnp)\) for multiplying \(A\) (\(m \times n\)) with \(B\) (\(n \times p\)).

Approximate Matrix Multiplication Using Truncated SVD

  1. SVD Calculation:
  • Full SVD has a complexity of \(O(mn \min(m,n))\).
  • For truncated SVD to compute only the first \(k\) singular values and vectors, various efficient algorithms can reduce this to \(O(mnk)\).
  2. Matrix Multiplications:
  • \(V_k^T B\): This involves multiplying \(V_k^T\) (\(k \times n\)) with \(B\) (\(n \times p\)), which has a complexity of \(O(knp)\).
  • \(U_k \Sigma_k (V_k^T B)\): This involves multiplying \(U_k \Sigma_k\) (\(m \times k\)) with the \(k \times p\) result, which has a complexity of \(O(mkp)\).

Total Complexity

Truncated SVD: \(O(mnk)\)

Matrix Multiplications: \(O(knp) + O(mkp) = O(mkp + knp)\)

Total: \(O(mnk + mkp + knp)\)

When \(k\) is significantly smaller than \(\min(m,n)\), the total complexity \(O(mnk + mkp + knp)\) is much less than the direct multiplication complexity \(O(mnp)\). This makes the truncated SVD approach advantageous for large matrices where a lower-rank approximation is sufficient.
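
To put rough numbers on this: with \(m = n = p = 10{,}000\) and \(k = 100\), direct multiplication costs on the order of \(mnp = 10^{12}\) operations, while the truncated route costs roughly \(mnk + mkp + knp = 3 \times 10^{10}\), about a thirty-fold reduction (ignoring constant factors and the difference in cost between the SVD step and plain multiplications).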

