
In the fall of 2022, I began a deep dive into mnemonics. Here is the TL;DR...

  • After my master's in math, I got super interested in mnemonics & mental scene construction, and I followed my nose into the human & machine learning literature. This was a bit unconventional, and I describe the rationale in the section below. The immense gap between how humans and machines process information is what initially drew me to explore mnemonics for inspiration, like a 16th-century scientist.

  • As a Ph.D. student, I am interested in information-processing problems where I can draw upon my backgrounds in math, biology, and cognitive science. I am particularly excited about high-impact, cross-disciplinary research in a similar vein to Sam Gershman's 2023 sketch.

  • For the majority of college, I was passionate about synthetic biology, in particular dendritic spine morphogenesis & RNA/DNA editing technologies. After a summer internship in Jeremy Gunawardena's lab, I abruptly switched my focus to math.

And here I go into more depth... first some personal context:

In middle school, I was inspired by Joshua Foer's Moonwalking with Einstein, which is when I first began tinkering with mnemonics. I occasionally used mnemonics to study for exams throughout high school. Over the summer before college, I made palaces containing ideas from my journals and quotations from my favorite books. Upon entering college, I met a research scientist for the first time and immediately became captivated by psychology and then neurobiology. I forgot all about mnemonics for the next 5 years as I was busy taking classes and doing actual science and math research.

 

After the first semester of my master's, I re-read Foer's book and began making palaces containing theorems, definitions, and key steps in the proofs that I was learning. I finished my first year feeling like I could do and learn anything. But what should I do? What is worth doing? I went into a bit of a personal identity crisis and became extremely sad after the first year of my master's. This was 2021, and I think the COVID-19 lockdowns also had a huge effect on my mental health at the time. I could've graduated early because I had already taken all of the requirements, but I didn't know what I would do after graduating, so I stayed enrolled. I entered psychotherapy and began using mnemonics to calm my nerves rather than as a study tool. Then, serendipitously, I was introduced to art research.


During the second half of my junior year of college, after 8 months at the Broad Institute, I abruptly switched my major to pure math. In five semesters, I went from being a biology undergrad terrified of math to having a master's and feeling like I could work my way into branches of pure or applied math research. Although I enjoyed the classes and learning new concepts, I felt like math research was simply not my cup of tea. The brain was (and still is) the most fascinating object that I've ever come across or thought about.

 

What if, instead of abruptly switching fields, I slowly introduce myself to the computational and theoretical sides of neurobiology? What if, given my math and biology background, I take some time to read the literature for breadth rather than commit blindly to someone else's research vision? What if, since the object of investigation is behind my very own eye sockets, I study my own learning & memory as if I were an artist, or as if I were a 16th-century scientist working alone and without the paradigms of modern science? What if, by following my nose into the phenomena associated with the memory palace, I bump into the paradigms of the present and lay my own foundation for later contributing meaningfully to science? And if not, then I'll have read a lot of interesting papers in a variety of distinct research areas and I'll have had a lot of fun! Here is the beginning of my journey so far:

 August 2022

This was my first time reciting 3-4 pages of notes from memory, i.e. without external cues. The video recording is here on the left, but it's unbelievably boring and solely archival. Its purpose is as a signal that, at one point, I had these ideas top of mind enough to be able to recite them from memory in one sitting. It'd be near impossible to recite the ideas from memory without ruminating on them.


During college I learned about Ebbinghaus' self-experimentation with nonsense syllables. Here, rather than cue retrieval or forgetting curves, I was interested in how a finite set of episodes of experience "generates" the worldview compositionally. By closing my eyes and ruminating on a set of ideas, can I expand the span of what I am able to see? I also wanted to suggest that, for elaborative encoding, the ease of handling abstract content depends on one's prior familiarity with the content rather than on the content itself, which somewhat counters Nielsen, Foer, and others in the mnemonics community. I later added some writing to this video, here, which goes into further detail.

September 2022

This was the first paper that I latched onto within computational cognitive neuroscience. It lured me in because there seemed to be an enormous gap between (1) the ideas in this paper and (2) my naïve intuition about how memorizing ideas influences thought. What if I explore the literature for breadth with the memory palace as my guide and orientation? Every so often I will create and recite from a memory palace containing ideas related to the papers that I'm exploring, which will empower my exploration.​​


- "What are the memory systems (episodic, declarative, semantic, etc...) that underlie the caching of computations?"

- "In the low-data regime, agents cannot hope to build an accurate internal model of the world, nor can they hope to accurately estimate cached values by averaging samples, so episodic memories may be the agent's best bet."

October 2022

Positive semi-definite kernels are closed under addition and multiplication, and every positive definite kernel defines a reproducing kernel Hilbert space. At the time, this paper & its citations seemed like a bridge between my thoughts related to mnemonics and my year's worth of independent studies with Prof. John Holmes (now at Ohio State) in functional analysis & distribution theory. This paper models human function learning using a compositional grammar described in Duvenaud et al.'s 2013 paper; a small sketch of that kind of kernel composition follows the quotes below.


- "structural regularities free up memory capacity because they are compressible"

- "compositionality helps memorizing structure by providing naturally occurring chunks"

- "if structure exists that a grammar can express, then one can save an unbounded # of bits by detecting that structure."

Nov. - Dec. 2022

For 8 weeks, I used this 2020 review on Neural Rendering and the subsequent 2022 version to introduce myself to the intersection of graphics, vision, and learning. Instead of continuing to store my mnemonics in the voice memos app, I began to wonder about what it would take to create and store mnemonics in 2D or 3D without having animation skills or a team of artists. Towards the goal of creating machines that learn and think like people, might it be useful to think about how humans make memory palaces about non-trivial content? If not, then I've had a lot of fun & I've ruminated on surveys of what already exists.


- "the goal of neural rendering is to generate photo-realistic imagery in a controllable way"

-  "while there exists some work on generating neural scene representations, there is less progress on designing neural operators that take neural scene representations as input"

- "to 'learn less and know more' by incorporating differentiable physics simulators"

February 2023

This was one of the most exhilarating and validating months of my life. I stumbled across this paper on a late-night rabbit hole. At the time, I was serving tables part-time in order to have the entire week free for thinking about and tinkering with mnemonics, and at times I was filled with self-doubt. Here, it became clear to me that other researchers have spent their 20s & beyond being curious about the phenomena associated with mnemonics. What have they missed? How can I contribute? This month filled me with resolve for meticulously sitting "alone" without a paradigm until I've further explored for breadth.


I explored related papers from the late 2000s by the same authors, Buckner's late-2000s self-projection papers, Schacter's 2012 review, and some of De Brigard's more recent work. I found these ideas inspiring, but I wanted (and still want) to find (and/or help to develop) a more computational &/or mathematical framework for thinking about mnemonics and episodic memory.

Mar. - Apr. 2023

I used this 2019 paper and the 2020 neural radiance fields paper as orientation for about 6 weeks; I dove into the citations and got a feel for what people are, and are not, working on within the emerging neural fields niche. How do humans represent scenes, and how do we build machines that represent scenes in a more human-like manner? The brain does not implement ray tracing. The brain processes and reconstructs scenes compositionally. From one glance at an image or a scene, we can readily imagine novel views and mentally navigate through the imaged scene.
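As a rough illustration of what "neural field" means here (my own minimal sketch, not code from either paper): a small MLP maps a continuous coordinate to a color, so the scene is stored in the network's weights rather than in a pixel grid, and it can be queried at any resolution.

```python
import numpy as np

rng = np.random.default_rng(0)
# Randomly initialized weights of a tiny coordinate MLP; training to fit observed
# pixels is omitted here.
W1, b1 = rng.standard_normal((2, 64)), np.zeros(64)
W2, b2 = rng.standard_normal((64, 3)), np.zeros(3)

def field(xy):
    """Query the field at continuous coordinates xy (shape [N, 2]) -> RGB in [0, 1]."""
    h = np.maximum(xy @ W1 + b1, 0.0)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid RGB output

# Because the input is continuous, the same representation can be sampled on any grid.
coords = np.stack(np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8)), axis=-1).reshape(-1, 2)
print(field(coords).shape)  # (64, 3)
```

A NeRF-style model adds view direction and a density output and renders by integrating along camera rays, but the core object is this kind of coordinate-conditioned network.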


One exciting application of neural fields is in structural biology. My rabbit hole into mnemonics was just beginning to loop back into my undergrad biology interests. More on that later. I left off this month feeling that the neural fields space is incredibly competitive and fast-paced, that I'd just memorized the technical details of a paper that was already antiquated, and that if I want to eventually contribute to this area, it should be by collaboration and via representation theory.

Apr. - May 2023

  • 121 ideas from “Abstraction and Analogy-Making in Artificial Intelligence” (2021) by Melanie Mitchell

For this recitation, to memorize the chunk "continual interplay between bottom-up and top-down processes," I visualized Jerome Robinson pulling upwards on the top of a stop sign with his hands and kicking the bottom of the stop sign while wearing toe shoes. How did that particular analogy come to mind? For 6 weeks, I explored papers and benchmarks related to analogy-making in humans and machines. I finished this 6-week window with the desire to later get involved in program synthesis research.


- "the process of abstraction is driven by analogy, in which one mentally maps the essence of one situation to a different situation."

- "without concepts there can be no thought and without analogies there can be no concepts" -Hofstadter & Sander

- "framing concept learning as the task of generating a program enables many of the advantages of programming in general, including flexible abstraction, reusability, modularity, and interpretability.​​

June - July 2023

For about 8 weeks, I latched onto David Duvenaud's statistical machine learning class materials as well as this review on amortized variational inference. Fortunately, I took two Bayesian statistics classes as electives during my master's. By diving into the details of AVI, I wanted to unlock papers in a variety of applied areas such as those linked here: visual working memory, generative modeling, scene representations, structural biology, and probabilistic reasoning. I am particularly interested in work within compositional learning, as described further in my research statements.


- "Amortized inference uses a stochastic function to estimate the true posterior. The parameters of this stochastic function are fixed and shared across all data points, thereby amortizing the inference."​

- "Generally, the distance between points in the latent space in a variational autoencoder does not reflect the true similarity of corresponding points in the observation space."​​​​​​​​​

Aug. - Sept. 2023

Each memory palace is half serious and half play. I'm intentional about which ideas are worth the effort of memorizing, and then I'm playful with the encoding and recitation process. If the encoding is not surprising, I will not be able to recall the encoded idea. "Wanting" to encode the ideas can inhibit the encoding process! The thrills come from chewing on the ideas and following the fresh curiosities that arise. Where does intrinsic curiosity come from and what are the conditions for it?


Instead of only maintaining a mental representation of the mnemonics that I used for this recitation, I externalized them onto my desktop. The scene took place in my living room, so I first did a 3D capture using NeRFStudio. Next, I used Midjourney to illustrate objects in 2D, which I then expanded into 3D with Common Sense Machines' Cube. After importing all of the generated 3D objects, I rendered the combined scene in Blender. How are humans able to create memory palaces, and how can we create machines that are also able to do so?


- "How do we distinguish ill-posed problems that insufficiently constrain search from those that are rich in structure and therefore potentially tractable?"

- "If we explored only to try to maximize expected information gain, we would miss the chance to gain unexpected information.

- "How do we represent our own progress in thinking such that it can be a source of intrinsic reward?"

- "How do people, and how can machines, expand their hypothesis spaces to generate wholly new ideas, plans, and solutions?"

Oct. - Nov. 2023

  • 95 ideas from The Essential Tension (1977) & The Structure of Scientific Revolutions (1962) by Thomas Kuhn and Reconstructing Scientific Revolutions (1993) by Paul Hoyningen-Huene.

By this point, my deep dive into mnemonics organically looped me back to the research of my previous mentor, Prof. Jeremy Gunawardena.​​ In this paper, learning is defined as "an increase of mutual information between environmental states and system states in which the internal representation of external information can influence subsequent behavior," which struck me as compatible with what I was intending to articulate regarding mnemonics. At the same time, there are vast implications across biology if single cells can learn.
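For reference, the standard definition of mutual information that the quoted definition of learning points at (my own gloss; the symbols and notation are not the paper's):

```latex
% Mutual information between environmental states E and system states S
% (standard definition; symbols are my own gloss, not the paper's notation):
\[
  I(E; S) \;=\; \sum_{e,\,s} p(e, s)\,\log \frac{p(e, s)}{p(e)\,p(s)}
\]
% "Learning" in the quoted sense is then an increase in this quantity over time,
%   I_{t+1}(E; S) > I_t(E; S),
% with the further requirement that the internal states S can influence subsequent behavior.
```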

 

I did these two projects in parallel. Thomas Kuhn frequently mentions analogies and metaphors when describing his views on conceptual change and paradigm shifts within science. How does the human mind manage the risk of novel information? How about the scientific community? How about a single bacterium?

 

- What is the fabric of the world model(s) of a human? Of a bacterium? Of a single eukaryotic cell within a multicellular organism?

- Is there a relationship between intrinsic curiosity and the allostatic regulation of world model(s)?​

This introduced me to Dani Bassett's conformational change theory of curiosity, which months later led me to study this review at the intersection of self-supervised learning & information theory.


- Is there a common language for describing the constraints faced by a bacterium in deciding whether or not to internalize information about a pathogen via its CRISPR system as well as for describing the constraints on working memory in humans due to dopamine levels in the prefrontal cortex?

Jan. - Feb. 2024

  • 70 ideas from “Meta-Learned Models of Cognition” (2023) by Marcel Binz, Ishita Dasgupta, Akshay Jagadish, Matt Botvinick, Jane Wang, and Eric Schulz

I did this as a follow-up to my second recitation (60 ideas from "Memory as a Computational Resource" from September 2022) and because I was interested in Schulz's more recent work on building a unified model of human cognition, which was hinted at and then published last month as "Centaur." I have to admit that I have been seduced by the possibility of a unified theory of cognition: Is there a common set of principles governing the phenomena associated with mnemonics in humans, single-cell learning, and all of the rest within the tree of life? After a rabbit hole into Newell's work, I spent several days exploring Botvinick's early-2000s papers.


- (Newell, 1992): "Unified theories of cognition are the only way to bring this wonderful increasing fund of knowledge under intellectual control."

- "Cognitive control is described as the processes behind the ability to adapt to task-specific demands."​​​​​​​​​​

Mar. - May 2024

For most of this window, I switched between reading about active inference, a counterfactual simulation model, and fleshing out details related to founding a start-up, Live Conceptual Arts Research Company. I learned a lot about entrepreneurship, early-stage investing, emerging generative AI companies, and the education tech industry. The aim would be to create an enormous dataset of human-like analogies and to maintain a workflow for fast content creation for users without 3D graphics &/or animation experience. The first initiative would be to infuse mnemonics into the calculus and statistics curricula for grades 10-14 using 2D and 3D generative tools. 

June - July 2024

This is an accompanying writing project that I made in parallel:

​We do not yet understand the biological basis of memory in single cells nor in humans. I would like to expand upon the sketch proposed in this paper. Serendipitously, I am well-placed to contribute on both the experimental and theoretical sides. 


- "The brain cannot compute with information that it does not represent in memory."

- "The available evidence makes it extremely unlikely that synapses are the site of long-term memory storage for representational content, i.e. memory for 'facts' about quantities like space, time, and number."

- "Learning to think is conceptually distinct from, and complementary to, fact learning."

- "This story testifies to the power of theory, even when implicit, to determine how we interpret experimental data and ultimately what experiments we do."


The accompanying writing project is partially personal and partially future-oriented & speculative. For the last two years, I've worked as little as possible to have as much free time as possible for investing in my own education. I'm exploring a different area of the scientific literature every 4-8 weeks. I have loads of interests across disciplines. Mnemonics has provided me with orientation, and I'm eager to connect the dots. I am on a bridge, sitting in the tension between patiently waiting and assertively creating the through-line. What is there just before the concept is formed?


This project is also deeply inspired by Mike Levin's "Self-Improvising Memory" (2024) and "Bioelectric networks" (2023), Jeremy Gunawardena's "Learning Outside the Brain" (2022), staibdance's Root Theory workshop (2023) and Ararat (2023), and in part by numerous other sources over the last couple of years.​​​​​​​

