Featured Sponsor: Data Quality Research Group, UFCG

Thiago Nóbrega is a student in the Department of Computing Systems of the Federal University of Campina Grande (DSC/UFCG) and a member of the Data Quality Research Group (DQRG). DQRG investigates and develops novel techniques, approaches, processes, and tools for evaluating and improving the quality of data sets, taking into account two critical aspects of contemporary data analysis: reliability and performance.

The group's main focus is to facilitate and improve the effectiveness of key tasks that rely on data quality, such as data integration, schema integration, data mining, and decision making. The group is formed by students, faculty, and staff from the Department of Computing Systems of the Federal University of Campina Grande (DSC/UFCG).

Learn More:
Data Quality Research Group Homepage

Understanding the neural encoding of time and space in real world memories

We’re introducing the first in a series of fascinating guest posts from experts who have backed the CrowdSignals.io campaign.  Today’s post is from Simon Dennis, Head of the School of Psychology at the University of Newcastle and CEO of Unforgettable Technologies LLC.
— Evan

We live in exciting times. In the cognitive sciences, the big news for the last twenty or thirty years has been the ability to look inside the functioning brain in real time. A lot has been learned but, as always, science is hard and progress occurs in fits and starts. A critical piece that has been missing is the ability to characterize the environment in which people operate. In the early 1990s, John Anderson introduced rational analysis, which uses statistical analyses of environmental contingencies in order to understand the structure of cognition. Despite showing early promise, the method was stymied by a lack of technologies to collect the environmental data. Now the situation has changed. Smartphones, watches and other wearables are starting to provide us with access to environmental data at scale. For the first time, we can look at cognition from the inside and outside at the same time. Efforts such as CrowdSignals.io are going to be key to realizing the potential.

As an example of what is possible, I would like to highlight a line of research I have been engaged in with Per Sederberg, Vishnu Sreekumar, Dylan Nielson and Troy Smith, which was published in the Proceedings of the National Academy of Sciences last year.

The story starts with rats. In 2014, the Nobel Prize in Physiology or Medicine was awarded to John O’Keefe, May-Britt Moser and Edvard Moser for their discovery of place cells in the rat hippocampus and grid cells in the entorhinal cortex. Within the medial temporal lobe are cells that fire when a rat is in a given location in a room. The cells are laid out in regular patterns, creating a coordinate system. For rats, we are talking about spatial scales of meters and temporal scales of seconds. We were interested in whether the same areas would be involved as people attempted to remember experiences over the much longer spatial and temporal scales on which we operate.

To test the idea, we had people wear a smartphone in a pouch around their necks for 2-4 weeks. The phone captured images, accelerometry, GPS coordinates and audio (obfuscated for privacy) automatically as they engaged in their activities of daily living. Later, we showed them their images and asked them to recall their experiences while we scanned their brains using functional magnetic resonance imaging.

We knew when and where each image had been taken, so we were able to create a matrix of the distances between the images in time and in space. Rather than meters and seconds, our distances ranged over kilometers and weeks. We then created similar matrices by examining the pattern of neural activity in each of many small regions across the brain for each image. If the pattern of distances in the neural activity is able to predict the pattern of distances in time and/or space, then one can deduce that the region is coding information related to time and/or space.
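The comparison of distance matrices described above is a form of representational similarity analysis. The following is a minimal sketch of the idea using synthetic data; the variable names, sizes, and the choice of Spearman rank correlation are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical data: one row per photographed moment (synthetic, for illustration).
rng = np.random.default_rng(0)
n_images = 50
times = np.sort(rng.uniform(0, 28 * 24 * 3600, n_images))   # timestamps (s) over ~4 weeks
coords = rng.uniform(0, 10_000, (n_images, 2))              # positions (m) over ~10 km
neural = rng.normal(size=(n_images, 200))                   # activity pattern per image

# Pairwise distances between images in time, space, and neural-pattern space.
time_dist = pdist(times[:, None])                  # |t_i - t_j|
space_dist = pdist(coords)                         # Euclidean distance in meters
neural_dist = pdist(neural, metric="correlation")  # 1 - Pearson r between patterns

# If a region codes time and/or space, its pattern distances should track the
# corresponding environmental distances across image pairs.
rho_time, _ = spearmanr(neural_dist, time_dist)
rho_space, _ = spearmanr(neural_dist, space_dist)
print(f"time: rho={rho_time:.3f}  space: rho={rho_space:.3f}")
```

With random data the correlations hover near zero; in the actual study, searchlight regions whose neural distance matrices reliably predicted the temporal or spatial distance matrices were taken to carry time or space information.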

We found that areas at the front (anterior) of the hippocampus coded for both time and space. As with the work by O’Keefe, Moser and Moser, it was the hippocampus and surrounding areas that were implicated. What was different, however, is that the regions most strongly implicated were at the front of the hippocampus, rather than towards the back as is usually the case. More work is necessary, but one interesting hypothesis is that the scale of both temporal and spatial representations decreases as one moves along the hippocampus towards the back of the brain. Perhaps as people attempt to isolate specific autobiographical events, they start with a broad idea of where the event is in time and space and zoom in on the specific event, progressively activating representations along the hippocampus.

Beyond this specific hypothesis, this work demonstrates what one can achieve by combining neuroimaging techniques with experience sampling technologies like smartphones. No doubt it won’t be long before our current efforts are seen as terribly crude. Nonetheless, we have reached a milestone – a place where two powerful techniques intersect – and I think that bodes well for attacking what is in my opinion the most fascinating challenge of all – understanding the human mind.


Nielson, D. M., Smith, T. A., Sreekumar, V., Dennis, S., & Sederberg, P. B. (2015). Human hippocampus represents space and time during retrieval of real-world memories. Proceedings of the National Academy of Sciences, 112(35), 11078–11083.

Boost CrowdSignals.io! 100+GB Mobile-Social-Sensor-System Data Guaranteed


CrowdSignals.io most emailed in NY Times Technology this week:
CrowdSignals Aims to Create a Marketplace for Smartphone Sensor Data
CrowdSignals.io in KDNuggets:
CrowdSignals.io, Building Big Mobile Social Sensor Dataset

Help Us Boost The Dataset!

At our current level of funding we’re guaranteeing 100+GB of data from 30 volunteers for 30 days. This includes sensor, social, system, and interaction data in addition to ground truth on contact relationships, places visited, and 2 additional phenomena to be selected by Backers. But we can do better! With your support we can boost the diversity and density of ground truth labels, making the data more useful for an even broader spectrum of researchers and data scientists!

  • It only costs $2 per academic researcher or $5 per data scientist to contribute
  • Visit the Campaign!

Help us prove the concept and receive Big Data at a tiny fraction of the cost. Support the campaign at any level and share or tweet the news!
Contact us directly with any questions or feedback: organizers@crowdsignals.io

Phase 1: 100+GB of Data for Research and Products

Dear Colleagues,

Today we launch Phase 1 of CrowdSignals.io on Indiegogo!  We’re collecting 100+ GB (over 20K hours!) of rich sensor, social, system, interaction, and ground truth data from smartphones and smartwatches. We’re confident we can create an excellent dataset: the real experiment is in crowdfunding and community.


We’re asking for your help to generate funds that will pay volunteers and administrative staff. In return, we’ll share all of the collected data, sample code, and a direct connection to a community of thousands of researchers and developers.

More about CrowdSignals.io:

  • Donations are just $2 per academic researcher and $5 per data scientist or engineer!
  • 100+ GB of sensor, social, system, and interaction data
  • Precise ground truth labels
  • Executed by AlgoSnap, a bootstrapped, Seattle-based start-up
  • Advised by:
    – Andrew Campbell (Dartmouth)
    – Deborah Estrin (Cornell)
    – Henry Tirri (Aalto U)
    – Jason Hong (CMU)

Please support this crowdfunded dataset and/or forward to any lists or colleagues you think may be interested!