{"id":51,"date":"2016-06-16T16:37:36","date_gmt":"2016-06-16T16:37:36","guid":{"rendered":"http:\/\/crowdsignals.io\/blog\/?p=51"},"modified":"2016-06-17T00:05:02","modified_gmt":"2016-06-17T00:05:02","slug":"understanding-the-neural-encoding-of-time-and-space-in-real-world-memories","status":"publish","type":"post","link":"http:\/\/crowdsignals.io\/blog\/2016\/06\/16\/understanding-the-neural-encoding-of-time-and-space-in-real-world-memories\/","title":{"rendered":"Understanding the neural encoding of time and space in real world memories"},"content":{"rendered":"<p>We&#8217;re introducing the first in a series of fascinating guest posts from experts who have backed the CrowdSignals.io campaign. \u00a0Today&#8217;s post is from <a href=\"https:\/\/www.newcastle.edu.au\/profile\/simon-dennis\">Simon Dennis<\/a>, <a href=\"http:\/\/www.newcastle.edu.au\/about-uon\/governance-and-leadership\/faculties-and-schools\/faculty-of-science-and-information-technology\/school-of-psychology\">Head of\u00a0the School of Psychology at the\u00a0University of Newcastle<\/a> and\u00a0CEO of <a href=\"https:\/\/www.unforgettable.me\/\">Unforgettable Technologies LLC<\/a>.<br \/>\n&#8212; Evan<\/p>\n<hr \/>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-full wp-image-56\" src=\"http:\/\/crowdsignals.io\/blog\/wp-content\/uploads\/2016\/06\/Simon-uon.png\" alt=\"Simon UoN\" width=\"370\" height=\"186\" srcset=\"http:\/\/crowdsignals.io\/blog\/wp-content\/uploads\/2016\/06\/Simon-uon.png 370w, http:\/\/crowdsignals.io\/blog\/wp-content\/uploads\/2016\/06\/Simon-uon-300x151.png 300w\" sizes=\"auto, (max-width: 370px) 85vw, 370px\" \/>\u00a0 \u00a0We live in exciting times. In the cognitive sciences, the big news for\u00a0the last twenty or thirty years has been the ability to look inside\u00a0the functioning brain in real time. A lot has been learned but, as\u00a0always, science is hard and progress occurs in fits and starts. 
A\u00a0critical piece that has been missing is the ability to characterize\u00a0the environment in which people operate. In the early 1990s, John\u00a0Anderson introduced rational analysis, which uses statistical analyses\u00a0of environmental contingencies to understand the structure of\u00a0cognition. Despite showing early promise, the method was stymied by a\u00a0lack of technologies to collect the environmental data. Now the\u00a0situation has changed. Smartphones, watches and other wearables are\u00a0starting to provide us with access to environmental data at scale. For\u00a0the first time, we can look at cognition from the inside and outside\u00a0at the same time. Efforts such as <a href=\"http:\/\/crowdsignals.io\">CrowdSignals.io<\/a> are going to be key\u00a0to realizing the potential.<\/p>\n<p>As an example of what is possible, I would like to highlight a line of\u00a0research I have been engaged in with Per Sederberg, Vishnu Sreekumar,\u00a0Dylan Nielson and Troy Smith, which was published in the Proceedings\u00a0of the National Academy of Sciences last year.<\/p>\n<p>The story starts with rats. In 2014, the Nobel Prize in Physiology or\u00a0Medicine was awarded to John O\u2019Keefe, May-Britt Moser and Edvard Moser\u00a0for their discovery of place cells in the rat hippocampus and grid cells in the\u00a0neighboring entorhinal cortex. Within the medial temporal lobe are cells that fire when a rat is in a\u00a0given location in a room. The cells are laid out in regular patterns\u00a0creating a coordinate system. For rats, we are talking about spatial\u00a0scales of meters and temporal scales of seconds. We were interested in\u00a0whether the same areas would be involved as people attempted to\u00a0remember experiences over the much longer spatial and temporal scales\u00a0on which we operate.<\/p>\n<p>To test the idea, we had people wear a smartphone in a pouch around\u00a0their necks for 2-4 weeks. 
The phone captured images, accelerometry,\u00a0GPS coordinates and audio (obfuscated for privacy) automatically as\u00a0they engaged in their activities of daily living. Later, we showed\u00a0them their images and asked them to recall their experiences while we\u00a0scanned their brains using functional magnetic resonance imaging.<\/p>\n<p>We knew when and where each image had been taken, so we were able to\u00a0create a matrix of the distances between the images in time and in\u00a0space. Rather than meters and seconds, our distances ranged over\u00a0kilometers and weeks. We then created similar matrices by examining\u00a0the pattern of neural activity in each of many small regions across\u00a0the brain for each image. If the pattern of distances in the neural\u00a0activity predicts the pattern of distances in time and\/or\u00a0space, then one can deduce that that region is coding information\u00a0related to time and\/or space.<\/p>\n<p>We found that areas at the front (anterior) of the hippocampus coded\u00a0for both time and space. As with the work by O\u2019Keefe, Moser and Moser,\u00a0it was the hippocampus and surrounding areas that were implicated.\u00a0What was different, however, was that the regions most\u00a0strongly implicated were at the front of the hippocampus, rather than\u00a0towards the back as is usually the case. More work is necessary, but\u00a0one interesting hypothesis is that the scale of both temporal and\u00a0spatial representations decreases as one moves along the hippocampus\u00a0towards the back of the brain. 
Perhaps as people attempt to isolate\u00a0specific autobiographical events, they start with a broad idea of where\u00a0the event is in time and space and zoom in on the specific event,\u00a0progressively activating representations along the hippocampus.<\/p>\n<p>Beyond this specific hypothesis, this work demonstrates what one can\u00a0achieve if one combines neuroimaging techniques with experience\u00a0sampling technologies like smartphones. No doubt it won\u2019t be long\u00a0before our current efforts are seen as terribly crude. Nonetheless, we\u00a0have reached a milestone &#8211; a place where two powerful techniques\u00a0intersect &#8211; and I think that bodes well for attacking what is in my\u00a0opinion the most fascinating challenge of all &#8211; understanding the\u00a0human mind.<\/p>\n<p>&nbsp;<\/p>\n<p><em> Nielson, D. M., Smith, T. A., Sreekumar, V., Dennis, S., &amp; Sederberg,\u00a0<\/em><em>P. B. (2015). Human hippocampus represents space and time during\u00a0<\/em><em>retrieval of real-world memories. Proceedings of the National Academy\u00a0<\/em><em>of Sciences, 112(35), 11078-11083.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>We&#8217;re introducing the first in a series of fascinating guest posts from experts who have backed the CrowdSignals.io campaign. \u00a0Today&#8217;s post is from Simon Dennis, Head of\u00a0the School of Psychology at the\u00a0University of Newcastle and\u00a0CEO of Unforgettable Technologies LLC. &#8212; Evan \u00a0 \u00a0We live in exciting times. 
In the cognitive sciences, the big news for\u00a0the &hellip; <a href=\"http:\/\/crowdsignals.io\/blog\/2016\/06\/16\/understanding-the-neural-encoding-of-time-and-space-in-real-world-memories\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Understanding the neural encoding of time and space in real world memories&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[9,10],"tags":[5,11,12,13],"class_list":["post-51","post","type-post","status-publish","format-standard","hentry","category-guest-post","category-psychology","tag-data","tag-guest-post","tag-psychology","tag-sensors"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"http:\/\/crowdsignals.io\/blog\/wp-json\/wp\/v2\/posts\/51","targetHints":{"allow":["GET"]}}],"collection":[{"href":"http:\/\/crowdsignals.io\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/crowdsignals.io\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/crowdsignals.io\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/crowdsignals.io\/blog\/wp-json\/wp\/v2\/comments?post=51"}],"version-history":[{"count":6,"href":"http:\/\/crowdsignals.io\/blog\/wp-json\/wp\/v2\/posts\/51\/revisions"}],"predecessor-version":[{"id":62,"href":"http:\/\/crowdsignals.io\/blog\/wp-json\/wp\/v2\/posts\/51\/revisions\/62"}],"wp:attachment":[{"href":"http:\/\/crowdsignals.io\/blog\/wp-json\/wp\/v2\/media?parent=51"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/crowdsignals.io\/blog\/wp-json\/wp\/v2\/categories?post=51"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/crowdsignals.io\/blog\/wp-json\/wp\/v2\/tags?post=51"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{r
el}","templated":true}]}}