Using Smartphone Sensors to Measure Metro Traffic in New Delhi

Our next guest post is by Vinayak Naik, Associate Professor and Founding Coordinator of the MTech in CSE with specialization in Mobile Computing at IIIT-Delhi, New Delhi, India.  He writes about his group’s research on using smartphone sensors to measure metro traffic in New Delhi.


In recent years, mobile sensing has been used in a myriad of interesting problems that rely on the collection of real-time data from various sources. Crowd-sourcing, or participatory sensing, is being used extensively for data collection to solve day-to-day problems. One such problem of growing importance is analytics for smart cities. As per Wikipedia, a smart city is an urban development vision to integrate multiple information and communication technology (ICT) solutions in a secure fashion to manage a city’s assets, and notable research is being conducted in this direction. One important asset is transportation. With a growing population and shrinking land space, it is important to come up with solutions that reduce pollution caused by vehicles and ease commuting between any two locations within the city.

Today, cities in developed countries, such as New York and London, have millions of bytes of data logged every day by sensors installed at highways and metro stations. This data is typically leveraged for extensive analysis to understand the usage of roads and metros across the city’s layout, and to aid better and more uniform city planning in the long term. However, a common challenge faced in developing countries is the paucity of such detailed data due to the lack of sensors in the infrastructure. In the absence of these statistics, the most credible alternative is to collect data via participatory sensing using low-cost sensors, such as the accelerometer, gyroscope, magnetometer, GPS, and WiFi, that come packaged with modern-day smartphones. These sensors can collect data on behalf of users, which upon analysis can be leveraged in the same way as data made available through infrastructure-based sensors.

Figure 1: (a) and (b) show the relative rush at a given station, which aids us in estimating wait times across metro stations. (c) shows a possible Google Maps utility where an alternate route or transport mode can be suggested depending on the amount of rush.

In the short term, this data can be leveraged to guide commuters about the expected rush at the stations. This is important because, in some extreme cases, the wait time at a station can exceed the travel time itself. Elderly people, people with disabilities, and children could avoid traveling when stations are crowded. We show two snapshots of the platform at Rajiv Chowk metro station in Delhi in Figure 1(a) and Figure 1(b). These figures show the variation in rush that can occur in a real-life scenario. In the long run, information from our solution can also be integrated with route-planning tools (e.g., Google Maps, see Figure 1(c)) to give an estimated waiting time at the stations. This will help those who want to minimize or avoid waiting at the stations.

About a decade ago, even cities in developed countries had little infrastructural support for obtaining sensory data. The author was the lead integrator of the Personal Environmental Impact Report (PEIR) project in 2008, whose objective was to estimate emissions and exposure to pollution for the city of Los Angeles, USA [1]. PEIR used data from smartphones to address the lack of infrastructure data. Today, the same approach is applicable in developing countries, where the reach of smartphones exceeds that of infrastructure-based sensors. At IIIT-Delhi, master’s student Megha Vij, co-advised by Prof. Viswanath Gunturi and the author, wrote her thesis [2] on building a model that predicts metro station activity in the city of New Delhi, India, using accelerometer logs collected from a smartphone app.

Our approach is to detect a commuter’s entry into a metro station and then measure the time spent by the commuter until he/she boards the train. Similarly, we measure the time spent from disembarking the train until exiting the metro station. These two time durations are indicative of the rush at metro stations, i.e., the more the rush, the more time is spent. We leverage geo-fencing APIs on smartphones to detect entry into and exit from the metro stations. These APIs efficiently use GPS to detect whether the user has crossed a boundary, in our case a perimeter around the metro station. For detecting boarding and disembarking, we use the smartphone’s accelerometer to detect whether the commuter is in a train or not. Unlike geo-fencing, the latter is an unsolved problem. We treat it as a two-class classification problem, where the goal is to detect whether a person is traveling in a train or is at the metro station. Furthermore, the “in-metro-station” activity is a collection of many micro-activities, including walking, climbing stairs, and queuing, and therefore needs to be combined into one single category for classification. We map this problem to machine learning and explore an array of classification algorithms to solve it. We use discriminative classification methods, both probabilistic and non-probabilistic, to effectively distinguish such patterns into the “in-train” and “in-metro-station” classes.
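To make the geo-fencing and dwell-time idea concrete, here is a minimal Python sketch. It treats a station as a circular perimeter, checks whether a GPS fix falls inside it, and measures the time between entering the perimeter and boarding. The station coordinates, radius, and function names are hypothetical placeholders for illustration, not the parameters used in our actual system.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# Hypothetical geofence: center and radius of a metro station perimeter.
STATION = {"lat": 28.6328, "lon": 77.2197, "radius_m": 150.0}

def inside_geofence(lat, lon, fence=STATION):
    """True if the GPS fix lies within the station perimeter."""
    return haversine_m(lat, lon, fence["lat"], fence["lon"]) <= fence["radius_m"]

def wait_time_before_boarding(gps_fixes, boarding_time_s):
    """gps_fixes: list of (timestamp_s, lat, lon), sorted by time.
    Returns seconds between the first fix inside the geofence and boarding,
    or None if the commuter never entered the perimeter."""
    for t, lat, lon in gps_fixes:
        if inside_geofence(lat, lon):
            return boarding_time_s - t
    return None
```

In a real deployment, the boarding timestamp would come from the accelerometer-based classifier described below, and the perimeter check would be delegated to the platform’s battery-efficient geo-fencing API rather than raw GPS polling.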

Figure 2: Sample accelerometer data annotated with the two mobility patterns, “in-train” and “in-metro-station”

Figure 2 illustrates the “in-train” and “in-metro-station” classes on sample accelerometer data that we collected. One may observe a stark difference in accelerometer values across the two classes. The values for the “in-train” class have low variance, whereas for the “in-metro-station” class we observe heavy fluctuations (and thus much greater variance). This was expected, since a train typically moves at a uniform speed, whereas in a station we witness several small activities such as walking, waiting, climbing, and sprinting.
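As a rough illustration of how such a classifier can be built, the sketch below (a simplified stand-in for the actual pipeline, with synthetic data in place of real accelerometer logs) slices the accelerometer magnitude into fixed-length windows, computes the mean and variance of each window, and trains an off-the-shelf scikit-learn classifier to separate the two classes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def window_features(magnitude, window=128):
    """Split a 1-D accelerometer-magnitude signal into fixed-size windows
    and compute simple statistics (mean, variance) for each window."""
    n_windows = len(magnitude) // window
    feats = []
    for i in range(n_windows):
        w = magnitude[i * window:(i + 1) * window]
        feats.append([np.mean(w), np.var(w)])
    return np.array(feats)

# Synthetic stand-ins: a low-variance "in-train" stream and a high-variance
# "in-metro-station" stream (labels: 0 = in-train, 1 = in-metro-station).
rng = np.random.default_rng(0)
train_mag = rng.normal(9.8, 0.2, 128 * 50)
station_mag = rng.normal(9.8, 2.0, 128 * 50)
X = np.vstack([window_features(train_mag), window_features(station_mag)])
y = np.array([0] * 50 + [1] * 50)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```

Because the “in-metro-station” class lumps together walking, climbing, queuing, and other micro-activities, per-window variance alone separates the two classes in this toy setting; a real system would add richer time- and frequency-domain features.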

It is important to note that this problem is not limited to analytics for smart cities; for example, our work is also applicable to personal health. Smart bands and watches are used these days to detect users’ activities, such as sitting, walking, and running. At IIIT-Delhi, we are working on these problems. A standardized data collection and sharing app for smartphones would let our research leapfrog ahead. CrowdSignals.io fills this need aptly.

[1] PEIR (Personal Environmental Impact Report)
http://www.cens.ucla.edu/events/2008/07/18/cens_presentation_peir.pdf

[2] Using smartphone-based accelerometer to detect travel by metro train
https://repository.iiitd.edu.in/jspui/handle/123456789/396

 

Connect with Vinayak: Facebook   Twitter   LinkedIn

Wireless Sensor Research at Bogazici University, Istanbul

Our second in a series of expert guest posts comes from Bilgin Kosucu, a doctoral student in the WiSe group at Bogazici University Computer Engineering Department.  He provides a brief history of their unique and diverse group and writes about their work with wireless sensors and wearables in Turkey.
— Evan


WiSe, the Wireless Sensor Research group of the Bogazici University Computer Engineering Department, is one of the largest research groups in its area in Turkey. Currently, the group consists of 4 full-time faculty members, a medical doctor, 12 PhD candidates, and 10 MS students.

WiSe was established by a computer networks researcher who then extended the work to wireless sensors. Eventually, the research on wireless sensor networks expanded to include wearable/ambient sensors and mobile phones. Currently, we are involved in the analysis of daily activities, ambient assisted living, and elderly healthcare, including but not limited to the UBI-HEALTH Project, supported by the EC Marie Curie IRSES Program.

As a group, we are experienced in designing and developing mobile data collection platforms (e.g., on Android phones and Samsung Galaxy Gear S smartwatches), but we believe that CrowdSignals.io will build a unique database, given its dedicated data annotators and the immense difficulty of gathering ground-truth data. This database will serve as a common platform for researchers to test and compare their algorithms, owing its popularity to being a crowd-funded collaboration.

WiSe Research

In addition to the opportunity to apply our previous activity recognition research to a well-defined and standardized dataset, we hope to reach out to the wearable computing community and pave the way for finer research through closer collaboration.

How to reach us:
WiSe group: http://netlab.boun.edu.tr/WiSe/doku.php/home?id=home
Computer Engineering Department: http://www.cmpe.boun.edu.tr/
UBI-HEALTH Project: http://www.ubihealth-project.eu/

Understanding the neural encoding of time and space in real world memories

We’re introducing the first in a series of fascinating guest posts from experts who have backed the CrowdSignals.io campaign.  Today’s post is from Simon Dennis, Head of the School of Psychology at the University of Newcastle and CEO of Unforgettable Technologies LLC.
— Evan


We live in exciting times. In the cognitive sciences, the big news for the last twenty or thirty years has been the ability to look inside the functioning brain in real time. A lot has been learned but, as always, science is hard and progress occurs in fits and starts. A critical piece that has been missing is the ability to characterize the environment in which people operate. In the early 1990s, John Anderson introduced rational analysis, which uses statistical analyses of environmental contingencies in order to understand the structure of cognition. Despite showing early promise, the method was stymied by a lack of technologies to collect the environmental data. Now the situation has changed. Smartphones, watches, and other wearables are starting to provide us with access to environmental data at scale. For the first time, we can look at cognition from the inside and the outside at the same time. Efforts such as CrowdSignals.io are going to be key to realizing that potential.

As an example of what is possible, I would like to highlight a line of research I have been engaged in with Per Sederberg, Vishnu Sreekumar, Dylan Nielson and Troy Smith, which was published in the Proceedings of the National Academy of Sciences last year.

The story starts with rats. In 2014, the Nobel Prize in Physiology or Medicine was awarded to John O’Keefe, May-Britt Moser, and Edvard Moser for their discovery of place and grid cells in the rat hippocampus. Within the medial temporal lobe are cells that fire when a rat is in a given location in a room. The cells are laid out in regular patterns, creating a coordinate system. For rats, we are talking about spatial scales of meters and temporal scales of seconds. We were interested in whether the same areas would be involved as people attempted to remember experiences over the much longer spatial and temporal scales on which we operate.

To test the idea, we had people wear a smartphone in a pouch around their necks for 2-4 weeks. The phone captured images, accelerometry, GPS coordinates and audio (obfuscated for privacy) automatically as they engaged in their activities of daily living. Later, we showed them their images and asked them to recall their experiences while we scanned their brains using functional magnetic resonance imaging.

We knew when and where each image had been taken, so we were able to create a matrix of the distances between the images in time and in space. Rather than meters and seconds, our distances ranged over kilometers and weeks. We then created similar matrices by examining the pattern of neural activity in each of many small regions across the brain for each image. If the pattern of distances in the neural activity is able to predict the pattern of distances in time and/or space, then one can infer that the region encodes information related to time and/or space.
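For readers who want to see the shape of that analysis, here is a simplified Python sketch with random placeholder data and a single region of interest; it is not the actual published pipeline. It builds pairwise distance matrices for time, space, and neural activity patterns, then rank-correlates their entries.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Placeholder data: one row per photographed event.
rng = np.random.default_rng(1)
n_events = 40
times_s = np.sort(rng.uniform(0, 3 * 7 * 24 * 3600, n_events))  # capture times over ~3 weeks
coords_m = rng.uniform(0, 20000, size=(n_events, 2))             # positions spanning ~20 km
neural = rng.normal(size=(n_events, 200))                        # activity pattern per event (200 voxels)

# Condensed pairwise distances between events in time, space, and neural pattern space.
time_d = pdist(times_s.reshape(-1, 1))
space_d = pdist(coords_m)
neural_d = pdist(neural, metric="correlation")

# If neural-pattern distances track temporal or spatial distances, the region
# carries information related to time or space.
rho_time, _ = spearmanr(neural_d, time_d)
rho_space, _ = spearmanr(neural_d, space_d)
print("neural vs. time: ", rho_time)
print("neural vs. space:", rho_space)
```

With the random placeholders above, the correlations hover around zero; in the real analysis, this comparison is repeated over many small regions across the brain and assessed against appropriate null distributions.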

We found that areas at the front (anterior) of the hippocampus coded for both time and space. As with the work by O’Keefe, Moser, and Moser, it was the hippocampus and surrounding areas that were implicated. What was different, however, is that the regions most strongly implicated were at the front of the hippocampus, rather than towards the back, as is usually the case. More work is necessary, but one interesting hypothesis is that the scale of both temporal and spatial representations decreases as one moves along the hippocampus towards the back of the brain. Perhaps, as people attempt to isolate specific autobiographical events, they start with a broad idea of where the event is in time and space and zoom in on the specific event, progressively activating representations along the hippocampus.

Beyond this specific hypothesis, this work demonstrates what one can achieve by combining neuroimaging techniques with experience sampling technologies like smartphones. No doubt it won’t be long before our current efforts are seen as terribly crude. Nonetheless, we have reached a milestone, a place where two powerful techniques intersect, and I think that bodes well for attacking what is, in my opinion, the most fascinating challenge of all: understanding the human mind.

 

Nielson, D. M., Smith, T. A., Sreekumar, V., Dennis, S., & Sederberg, P. B. (2015). Human hippocampus represents space and time during retrieval of real-world memories. Proceedings of the National Academy of Sciences, 112(35), 11078-11083.