Computational Learning and Memory Group Wellcome Trust Investigator Award


Cambridge Memory Meeting 2016

19 April 2016, St John's College, University of Cambridge

[photo: St John's College]

The fifth annual Cambridge Memory Meeting will be hosted by the Computational Learning and Memory Group from the Department of Engineering. The goal of this meeting is to encourage interaction between Cambridge researchers specialising in the neuroscience or psychology of learning and memory at a preclinical or clinical level. This meeting provides principal investigators, postdoctoral researchers and graduate students with the opportunity to share their research in an informal environment. We hope that the Cambridge Memory Meeting will provide useful insight to help your research and foster collaborations within Cambridge.

download the abstract book (pdf)

Important dates

meeting: Tuesday, 19 April 2016
late registration: 18 April 2016
early registration: closed
abstract submission: closed

Registration and abstract submission

For late registration, contact david.barrett@eng.cam.ac.uk directly and complete the form here. Late registration is not guaranteed, but may be possible.

Location

The Old Divinity School, St John's College, Cambridge

see also college map (building number 30)

The meeting takes place in the Central Hall and the Main Lecture Theatre.

Programme

9:30 registration, coffee, tea and biscuits
morning session
9:55 introduction
10:00 keynote: Misha Tsodyks (Weizmann Institute of Science)
Misha Tsodyks received his PhD in Theoretical Physics from the Landau Institute of Theoretical Physics in Moscow. He then held various research positions in Moscow, Rome, Jerusalem and San Diego, before joining the Weizmann Institute of Science in Rehovot, Israel, in 1995, where he became a full professor in 2005. Misha Tsodyks has worked on a wide range of topics in computational neuroscience, such as attractor neural networks, place-related activity in the hippocampus, mathematical models of short- and long-term synaptic plasticity in the neocortex, population activity and functional architecture in the primary visual cortex, and perceptual learning in the human visual system. His research has benefited from close collaborative links with experimental neuroscientists established at different stages of his career, among them Amiram Grinvald, Henry Markram, Bruce McNaughton and Dov Sagi. He has held long-term visiting positions at the Institute of Advanced Studies in Delmenhorst, the Ecole Polytechnique Fédérale de Lausanne, the Frankfurt Institute of Advanced Studies, UC Santa Barbara, CNRS Paris and Columbia University.


Human memory contains vast amounts of information, such as events, names and facts; however, recall of this information is often challenging. A striking example of this discrepancy is the classical free recall paradigm, in which most people are found to be unable to reliably reproduce short lists of randomly assembled words after a brief exposure. We propose that the fundamental reason for the extremely low retrieval capacity in free recall and other related cognitive tasks is the randomness of long-term memory representations by neuronal populations. A simple phenomenological model of recall based on such neuronal representations yields a retrieval capacity that is broadly compatible with long-standing results from the psychological literature. Moreover, deviations from these fundamental limits on recall performance can be attributed to some people using position-dependent strategies when recalling lists of unrelated words. Analysis of recent data shows that a small fraction of people acquire such strategies when practicing free recall, leading to strong gains in performance, whereas the majority of people are unable to substantially improve their recall.
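For illustration, here is a minimal sketch (not the speaker's actual model) of a phenomenological recall process of this kind, assuming that each list item is encoded as a random sparse binary pattern and that recall proceeds as a deterministic walk to the stored item with the largest representational overlap with the one just retrieved; all parameters are invented for the sketch.

    import numpy as np

    rng = np.random.default_rng(0)

    def items_recalled(n_items, n_neurons=1000, sparsity=0.05, n_steps=200):
        # Each list item is encoded by a random sparse binary population pattern.
        patterns = (rng.random((n_items, n_neurons)) < sparsity).astype(float)
        overlap = patterns @ patterns.T           # pairwise representational overlaps
        np.fill_diagonal(overlap, -np.inf)        # disallow self-transitions

        visited, prev, current = {0}, -1, 0       # start recall from item 0
        for _ in range(n_steps):
            ranked = np.argsort(overlap[current])[::-1]
            nxt = ranked[0] if ranked[0] != prev else ranked[1]   # skip the item just recalled
            prev, current = current, int(nxt)
            visited.add(current)                  # the walk eventually cycles among a subset of items
        return len(visited)

    # the number of distinct items retrieved typically covers only part of longer lists
    print([items_recalled(n) for n in (8, 16, 32, 64)])

In this kind of toy model the number of distinct items retrieved grows much more slowly than the list length, which is the sense in which retrieval capacity is limited.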
11:00 Renee M. Visser (MRC Cognition and Brain Sciences Unit), Richard Henson, Emily A. Holmes

Most people will experience a traumatic event during their lives. Some will develop “recurrent, involuntary and intrusive distressing memories of the traumatic event(s)”. In the literature on emotional memory, a distinction is often made between different levels of responding, where ‘involuntary emotional responses’ (e.g. intrusive memories that pop into mind, or physiological fear responses to pictures that were previously paired with a shock) are hypothesized to rely on a memory system that is only partially related to voluntary recall. However, this qualitative distinction is rarely made in the mainstream memory literature. One of our aims is to acquire fundamental insights into the putative dissociation between voluntary and involuntary emotional memory recall. A second aim is to understand the mechanisms underlying procedures that selectively target such involuntary emotional memories. Recent findings from the Holmes lab show that already-consolidated memory for experimental trauma (film footage with aversive content) can be disrupted by a behavioural procedure (memory reactivation + competing cognitive task intervention), presumably via reconsolidation-update mechanisms. While this procedure reduced the frequency of intrusive memories, it left voluntary recall of the event intact. Here we propose to combine behavioural experimentation with functional Magnetic Resonance Imaging (fMRI) to understand the mechanisms underlying the modification of intrusive emotional memory.
11:20 Franziska R. Richter (Psychology, BCNI), Rose A. Cooper, Paul M. Bays, Jon S. Simons

In this experiment we contrasted retrieval precision, vividness, and accuracy, both behaviourally and neurally. In the fMRI scanner, 20 participants encoded objects that varied on three features: colour, orientation, and location. Subsequently, participants' memory for these object features was tested by having them recreate each feature using a continuous dial; continuous vividness reports were also recorded. We defined accurate retrieval as responses falling within a pre-specified interval around the original feature value, and additionally assessed how precisely accurate memories were retrieved by measuring the absolute distance between the response and the original value. Accuracy, precision, and vividness were dissociable both behaviourally and neurally: accurate retrieval was associated with activity in the hippocampus, activity in the angular gyrus scaled with retrieval precision, and activity in the precuneus scaled with vividness judgments.
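As an aside, a minimal sketch of how accuracy and precision could be scored from such continuous-dial reports, assuming a circular feature space, simulated responses, and an arbitrary accuracy window of ±pi/8; the window, noise level and data below are hypothetical, not those of the study.

    import numpy as np

    def circular_error(response, target):
        # signed angular difference wrapped to (-pi, pi]
        return np.angle(np.exp(1j * (response - target)))

    rng = np.random.default_rng(1)
    target = rng.uniform(-np.pi, np.pi, size=200)          # original feature values (e.g. colour angles)
    response = target + rng.vonmises(0.0, 5.0, size=200)   # simulated dial settings

    abs_err = np.abs(circular_error(response, target))
    accurate = abs_err < np.pi / 8                         # accuracy: response within a pre-specified window
    print(f"proportion accurate = {accurate.mean():.2f}")
    # precision of accurate memories: absolute distance between response and original value
    print(f"mean absolute error of accurate trials = {abs_err[accurate].mean():.2f} rad")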
11:40 Elisa Cooper (MRC Cognition and Brain Sciences Unit), Andrea Greve, and Richard N. Henson


A Fast Mapping (FM) incidental learning procedure is hypothesized to allow rapid, hippocampus-independent memory formation in adults, unlike typical, intentional explicit encoding (EE). Sharon et al. (2011) reported successful associative memory via FM in individuals with hippocampal (HC) damage. However, Greve et al. (2014) found no FM>EE advantage in healthy older adults who had some hippocampal volume loss. We present three new FM investigations. Of four individuals with amnesia, only one demonstrated a numerical FM>EE advantage. Across two experiments in young adults (N=48), the FM task produced the poorest memory performance compared to variations with fewer cognitive demands. Finally, we investigated a lexical integration measure of learning (in N=44 young adults), based on Coutanche & Thompson-Schill’s (2014) evidence for an FM>EE advantage on this measure. Again, we failed to replicate any FM benefit. We conclude that, despite its potential therapeutic value, any advantage of FM is difficult to establish.
12:00 poster session and lunch
 
afternoon session
14:00 Reuben Rideaux (Psychology), Mark Edwards, Deborah Apthorp, Emma Baker

Information can be consolidated/encoded into visual working memory in parallel, i.e. multiple items can be consolidated in the same time required to consolidate one. Initially, it was thought that parallel consolidation might be limited to colour and that there was no cost associated with this process compared to serial consolidation. Through a series of experiments we show that motion direction and orientation can also be consolidated in parallel, and that there is a universal cost associated with this process: reduced encoding precision and increased consolidation failure. Furthermore, we present evidence suggesting that a biased-competition model may account for the increased consolidation failure and possibly for the capacity of parallel consolidation.
14:20 Sebastian Schneegans (Psychology), Paul M. Bays

To investigate the mechanism of feature binding in visual working memory we presented subjects with memory displays consisting of six coloured oriented bars. After a delay, they were cued with one feature of an item from the memory display (e.g. colour), and had to report the other features of this target item (e.g. location and orientation). We found strong evidence for the occurrence of swap errors, in which subjects reported features of an item other than the target. Moreover, when subjects made a swap error in their spatial response, they showed a strong tendency to report the non-spatial feature of the item at the reported location.


To account for these findings, we propose a neural model for feature binding based on population codes. Feature combinations are represented in this model by populations of neurons whose activity is determined by a conjunction of two features (e.g. location and colour). Random noise in the neural activity causes errors in estimating the memorized features from the population code in both cue and report dimension. We found that the experimental findings were best explained by a model combining populations for location-colour conjunctions and location-orientation conjunctions, rather than a model with direct colour-orientation binding. The selected model provided quantitative fits of the observed error distributions, demonstrating that it offers a plausible neural mechanism for feature binding in working memory.
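As a toy illustration of the logic (a deliberately simplified abstraction, not the authors' population-code model), the sketch below assumes that the cued feature is read out with noise, that the reported item is whichever stored item lies closest to that noisy read-out, and that the non-spatial report is then generated from the reported item; the noise levels are invented.

    import numpy as np

    rng = np.random.default_rng(2)
    n_trials, n_items = 5000, 6
    loc_noise, ori_noise = 0.5, 0.2                     # assumed read-out noise (rad)
    wrap = lambda a: np.angle(np.exp(1j * a))           # wrap angles to (-pi, pi]

    swaps, dev_target, dev_reported = 0, 0.0, 0.0
    for _ in range(n_trials):
        locs = rng.uniform(-np.pi, np.pi, n_items)      # item locations
        oris = rng.uniform(-np.pi, np.pi, n_items)      # item orientations
        target = 0                                      # item singled out by the colour cue
        loc_est = wrap(locs[target] + loc_noise * rng.standard_normal())
        reported = int(np.argmin(np.abs(wrap(locs - loc_est))))   # nearest item to the noisy estimate
        ori_report = wrap(oris[reported] + ori_noise * rng.standard_normal())
        if reported != target:                          # a spatial swap error
            swaps += 1
            dev_target += abs(wrap(ori_report - oris[target]))
            dev_reported += abs(wrap(ori_report - oris[reported]))

    print(f"swap rate = {swaps / n_trials:.2f}")
    print(f"on swap trials, orientation report deviates {dev_reported / swaps:.2f} rad "
          f"from the reported item vs {dev_target / swaps:.2f} rad from the target")

By construction, the non-spatial report on swap trials clusters around the feature of the item at the reported location, mirroring the pattern observed experimentally.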
14:40 Zuzanna Brzosko (Physiology, Development and Neuroscience), Ole Paulsen, Wolfram Schultz

Spike timing-dependent plasticity (STDP) is a physiologically relevant form of Hebbian learning in which the order and precise timing of presynaptic and postsynaptic spikes determine the sign of synaptic change: pre-before-post spike pairings induce timing-dependent long-term potentiation (t-LTP) whereas post-before-pre pairings induce long-term depression (t-LTD). The quantitative rules of STDP are influenced by neuromodulators, including dopamine (DA). In light of the potential role of DA in reward learning, we examined whether DA modulates hippocampal STDP not only when applied during, but also when applied after the pairing event.


We found that DA applied after the post-before-pre pairing protocol converts t-LTD into t-LTP when acting within a delay time window of 1 min. The effect of DA was activity-dependent, demonstrating that DA is capable of acting specifically on active inputs. The DA-induced conversion of t-LTD into t-LTP was mediated in part through activation of the cAMP/PKA cascade and required synaptic NMDA receptors. 


Together our work demonstrates a retroactive effect of DA on STDP. This supports the concept of a slowly decaying synaptic eligibility trace, which is committed to memory by the occurrence of reward. It therefore provides a possible mechanism for associating specific experiences with behaviorally distant, rewarding outcomes. The implications of our finding for hippocampus-dependent learning during spatial exploration are also discussed.
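A minimal sketch of the eligibility-trace reading of this result, under the illustrative assumption (not the authors' quantitative model) that post-before-pre pairing leaves a synaptic tag decaying with a time constant of about one minute, and that dopamine converts whatever remains of the tag into potentiation while the decayed remainder is expressed as the default t-LTD:

    import numpy as np

    def net_plasticity(dt_dopamine, tau=60.0):
        # tau: assumed ~1 min decay constant of the eligibility trace (s)
        remaining = np.exp(-dt_dopamine / tau)      # fraction of the trace still eligible at DA arrival
        return (+1.0) * remaining + (-1.0) * (1.0 - remaining)   # converted LTP vs default t-LTD

    for dt in (1, 10, 30, 120, 600):
        print(f"dopamine {dt:>3d} s after pairing -> net weight change {net_plasticity(dt):+.2f}")

With these assumed values, dopamine arriving within roughly a minute of the pairing yields net potentiation, while later dopamine leaves the default depression largely intact.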
15:00 coffee, tea and biscuits
15:30 Kelly Diederen (Psychiatry), Hisham Ziauddeen, Martin Vestergaard, Tom Spencer, Paul Fletcher, Wolfram Schultz

Effective decision-making requires us to learn from errors in our predictions. The size of the prediction error (PE) is, however, meaningless without an estimate of its precision, which depends on the fluctuation in action-outcome value. Previous work shows that humans scale PEs relative to reward variability to guide learning. Such adaptation may be facilitated by neurons coding PEs relative to the standard deviation (SD) of rewards as observed in monkey dopamine neurons. Here, we investigated adaptive PE coding in the human brain and the effect of dopaminergic perturbation on adaptive coding. During fMRI acquisition, subjects predicted the size of rewards drawn from distributions with different SDs. After each prediction, subjects received a reward, yielding trial-by-trial PEs. In a second study, we repeated these procedures after subjects received an oral dose of bromocriptine 2.5 mg, sulpiride 600 mg or placebo. PE regression slopes in the midbrain and ventral striatum were steeper for PEs occurring in distributions with smaller SDs, in line with adaptive coding. Adaptive coding was paralleled by behavioural adaptation as reflected in increased learning rates when SD decreased and adaptation was predictive of improved task performance. Dopaminergic perturbation modulated adaptive PE coding in the midbrain and striatum. These results suggest that adaptive PE coding may be crucial for learning and that normal dopaminergic function is likely to be critical for this process.
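To make the adaptive-coding idea concrete, here is a minimal sketch under the simplifying assumption that the neural response encodes the PE divided by the standard deviation of the current reward distribution, so that the regression slope of response against raw PE is steeper for narrower distributions; all values are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)

    def pe_regression_slope(sd, n_trials=500, mean_reward=10.0):
        rewards = rng.normal(mean_reward, sd, n_trials)
        pe = rewards - mean_reward               # prediction error around a learned expectation
        response = pe / sd                       # adaptive coding: PE scaled by the reward SD
        slope, _ = np.polyfit(pe, response, 1)   # regress neural response on raw PE
        return slope

    for sd in (1.0, 2.0, 5.0):
        print(f"reward SD = {sd:.1f} -> PE regression slope = {pe_regression_slope(sd):.2f}")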
15:50 Daniel McNamee (Engineering), Daniel Wolpert, Máté Lengyel

Even in state spaces of modest size, planning is plagued by the “curse of dimensionality”. Hierarchically organized, modular representations have long been suggested to underlie the capacity of the nervous system to plan efficiently and flexibly in complex environments. In such a modular representation, planning can first operate at a global level across modules, acquiring a high-level “rough picture” of the trajectory to the goal, and subsequently locally within each module to “fill in the details”. Having the right modularization can thus greatly aid planning. However, the principles underlying efficient modularization remain obscure, making it difficult to identify the behavioral and neural signatures of modular representations. In particular, previous approaches computed optimal state-space decompositions based on optimal policies and optimal value functions, thus requiring a priori knowledge of the solutions to the environment. Here, we compute a modularization optimized for planning directly from the transition structure of the environment, without assuming any knowledge of optimal behavior. We propose a normative theory of efficient state-space representations which partitions an environment into modules by minimizing the average (information-theoretic) description length of path planning within the environment, thereby optimally trading off the complexity of the global and local planning processes. We show that such optimal representations provide a unifying account for a diverse range of hitherto unrelated phenomena at multiple levels of representation and behavior: the strong correlation between hippocampal activity and state-space “degree centrality” in spatial cognition, the appearance of “start” and “stop” signals in sequential decision-making, “task-bracketing” in goal-directed control, and route compression in spatial navigation.
16:10 closing remarks
16:20 end of session
 
16:30 pub visit

Posters

David G.T. Barrett (Engineering), Máté Lengyel
Various cortical areas, most prominently the prefrontal cortex and the hippocampus, display diverse, stimulus-dependent neural responses during the delay period of tasks with a working memory component (Siegel et al. 2011). The timescales of these responses are much longer than single-neuron timescales. However, the principles that determine how cortical circuits learn to make use of such representations are not fully understood. We assume that the brain learns an internal (generative) model of temporal sequences in its environment, such as the contingency between conditioned (CS) and unconditioned stimuli (US) in trace conditioning. We construct and train a spiking neural network to perform inference and make predictions under such an internal model, with separate populations of neurons representing observed and latent variables. Neurons representing latent variables maintain a memory of a CS by triggering a traveling wave in the latent variable population, with a phase that is determined by the CS identity. The network learns to “surf the wave” and generate a response after the correct delay, by maximizing a lower bound on the likelihood of the model using variational inference. This allows a spiking network to learn long delays, more than a hundred times longer than single-neuron timescales, and longer than in previous unsupervised learning models (Rezende and Gerstner, 2014; Brea et al. 2013).

Alberto Bernacchia (Engineering), Guillaume Hennequin, Máté Lengyel
Dale’s principle states that neurons in the nervous system have an exclusive physiological effect: each neuron either excites or inhibits all of its synaptic targets, and it is usually impossible for a neuron to excite some of its targets while inhibiting others. This principle has been known for more than fifty years, and yet there is no explanation for its function. Supported by theoretical analysis and experimental data, I propose that the function of Dale's principle is to maintain thermodynamic equilibrium in neural circuits. I study a nonlinear dynamical model characterized by a given synaptic matrix and input noise. Using the Fokker-Planck equation, I calculate the synaptic matrix consistent with thermodynamic equilibrium, and show that it must satisfy Dale's principle under quite general assumptions. To test the theoretical predictions, I analyze the activity of neurons in the awake mammalian brain and show that collective neural dynamics displays temporal reversibility, a hallmark of thermodynamic equilibrium. I conclude by speculating on the significance of equilibrium and reversibility for brain function: while out-of-equilibrium dynamics may better support neural computation, equilibrium represents a desirable "idle" state.

Emma Cahill (Psychology), Amy L. Milton, Barry J. Everitt
Fear is a strong emotional experience. A memory of learned fear in rodents can be quantified using a Pavlovian fear-conditioning task. Once reactivated, the fear memory can be expressed; however, under certain conditions the memory can also be destabilised and subsequently restabilised. Fear reminders engage the dopamine system in the basolateral amygdala (Yokoyama et al., 2005), but the contribution of dopamine signalling to the retrieval and destabilisation of fear memory is not fully understood. We explore the idea that dopamine is an essential modulator of fear memory through its ability to regulate synaptic mechanisms.
We performed a combination of behavioural testing and molecular analysis in rodents. The reactivation of a cued-fear memory activated the MAPK pathway and extracellular signal-regulated kinase (ERK) in the basolateral amygdala, but not in other brain regions. We analysed the regulation of this pathway after memory reactivation, downstream of glutamate and dopamine receptors, using pharmacological interventions delivered directly into the amygdala to modulate these receptors.
These results will further our understanding of the molecular mechanisms downstream of dopamine receptor signalling in the retrieval and destabilisation of memories.

Polytimi Frangou (Psychology), Rui Wang, Andrew P. Prescot, Marta Correia, Zoe Kourtzi
Successful interactions with our environment entail extracting targets from cluttered scenes and discriminating highly similar objects. Previous functional MRI (fMRI) studies show differential activation patterns for learning to detect signal in noise vs. learning to discriminate fine feature differences. However, fMRI does not allow us to distinguish excitatory from inhibitory contributions to learning. Recent Magnetic Resonance Spectroscopy (MRS) studies link γ-aminobutyric acid (GABA), the main inhibitory neurotransmitter, to learning processes and brain plasticity. Here we test whether BOLD changes correlate with changes in GABA concentration after training on these learning tasks. We trained observers to discriminate radial from concentric Glass patterns that were either presented in background noise or were highly similar to each other. Our results demonstrate that, for the signal-in-noise task, decreased GABA in occipito-temporal cortex correlates with increased BOLD changes and behavioural improvement after training. In contrast, for the fine discrimination task, increased GABA correlates with fMRI selective pattern activations and task performance. Our findings suggest dissociable visual learning mechanisms: GABAergic inhibition may facilitate learning to discriminate fine differences by enhancing the tuning of feature-selective neurons, whereas learning to see in clutter may be facilitated by suppressing background clutter and enhancing the gain of large neural populations.

Sebastian Schneegans (Psychology), Paul M. Bays
(poster and talk - see abstract above)

Yan Wu (Engineering), Máté Lengyel
Mixed selectivity is characterised by high-dimensional neural representations that are tuned to multiple task-related aspects. Neurons with such properties have been found in multiple regions of the brain and are hypothesised to support complex cognitive tasks. However, it is unclear what kind of computation mixed selectivity is optimal for, and why. Here we demonstrate, through both analysis and simulations, that mixed selectivity is an intrinsic property of neural networks optimised for generative tasks. By extending recently developed theories in deep learning, we interpret a generative neural network as a variational autoencoder, which can be decomposed into a recognition model that maps external input onto internal representations and a generative model that reconstructs data from internal representations. This decomposition allows us to investigate neural representations, and thus mixed selectivity, from a normative perspective. Our analytical results are supported by experiments on recurrent neural networks trained for sequence generation. In conclusion, we prove that mixed selectivity is optimal for robustly generating data from neural networks, and demonstrate that it emerges spontaneously when a neural network is optimised to generate data under noise.
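For reference, the decomposition into recognition and generative models corresponds to the standard variational (ELBO) objective, with q_phi denoting the recognition model and p_theta the generative model:

    \[
    \log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\!\left[ \log p_\theta(x \mid z) \right] \;-\; \mathrm{KL}\!\left( q_\phi(z \mid x) \,\|\, p(z) \right)
    \]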

David Zoltowski (Engineering), Ádám Koblinger, József Fiser, Máté Lengyel
Evidence accumulation and integration are the typical models for the use of time in perceptual decision-making. Here, we formulate two opposing probabilistic models of an orientation-estimation task in which stimulus presentation time is used either for evidence integration or for probabilistic sampling. In evidence integration, time is used to iteratively collect evidence and update a posterior distribution over an environmental variable given the stimulus. Crucially, we assume that while the posterior distribution changes as evidence is integrated, it is represented exactly at each time point. Alternatively, under the sampling hypothesis, time is used to approximately represent the posterior distribution by sequentially drawing samples from a static distribution. We devised a novel orientation-estimation task in which subjects also reported their uncertainty about their estimate, so that on each trial we obtained a subjective measure of uncertainty along with a true measure of error. We developed intuitive as well as formal theoretical predictions of how humans should behave in this task under the two models, and identified the across-trial correlation between error and uncertainty as a measure to distinguish between them. Finally, we collected data from multiple human subjects and obtained initial evidence supporting the sampling model.
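As a purely illustrative toy (one possible operationalisation invented here, not the authors' models), the sketch below contrasts an exact-integration observer, whose reported uncertainty is the fixed posterior width and hence roughly uncorrelated with error, with a sampling observer that bases both its estimate and its uncertainty report on a trial-varying handful of posterior samples, which induces a modest positive across-trial correlation between error and reported uncertainty.

    import numpy as np

    rng = np.random.default_rng(4)
    n_trials, sigma = 5000, 0.3                    # sigma: sensory noise, assumed equal to posterior width

    theta = rng.uniform(-np.pi, np.pi, n_trials)   # true orientations
    meas = theta + sigma * rng.standard_normal(n_trials)

    # Exact-integration observer: estimate = posterior mean, uncertainty = posterior SD
    err_int = np.abs(meas - theta)
    unc_int = sigma * (1 + 0.05 * rng.standard_normal(n_trials))   # small report noise only

    # Sampling observer: a trial-varying number of posterior samples
    n_samp = rng.integers(2, 11, n_trials)
    err_samp = np.empty(n_trials)
    unc_samp = np.empty(n_trials)
    for t in range(n_trials):
        s = meas[t] + sigma * rng.standard_normal(n_samp[t])
        err_samp[t] = np.abs(s.mean() - theta[t])           # error of the sample-based estimate
        unc_samp[t] = s.std(ddof=1) / np.sqrt(n_samp[t])    # uncertainty from the samples' dispersion

    print("integration: corr(error, uncertainty) =", round(np.corrcoef(err_int, unc_int)[0, 1], 2))
    print("sampling:    corr(error, uncertainty) =", round(np.corrcoef(err_samp, unc_samp)[0, 1], 2))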

Additional links

past meetings
history of St John's

Organizers

David Barrett and Máté Lengyel

If you have any questions, please contact david.barrett@eng.cam.ac.uk
 