Visualising Sensor data based on 3 dimensional spatial representation

This project has indeed been eye-opening. The main thought behind doing it was to better understand the attention economy of maps and spatial representation. Most maps used within newsrooms are 2-dimensional in nature; this project set out to find use cases for three dimensional spatial representation. Overall the project has been a success, but more needs to be done in building toolkits that would make this simple to do in a newsroom. What would the use case be in a newsroom? Journalists often use three dimensional representations of objects to explain certain events. A simple example is this flight disaster simulation. I also see this as a great tool to tell a narrative around a flood simulation. It could also be used for games, where the three dimensional structures are created as scenes within the game, allowing for an experience close to the actual physical space.

https://www.youtube.com/watch?v=AjQAnqG-6Qs

More and more technologies are being used in newsrooms to tell such stories, especially with the advent of data and a society with access to it. This experiment set out to find ways in which such visualisations could be employed.

Mapsense is a simple first draft to test the concept of three dimensional spatial representation. It can be accessed at http://jmwenda.github.io/mapsense. The image below depicts a representation of MIT.

 MIT campus

One can zoom and pan the map in different ways. The data used to achieve this was Boston/Cambridge raster data collected in 2009. The heights of the buildings were retrieved from the City of Cambridge, so the buildings are in scale to one another. Data extracted from OSM was used as the base map.

On clicking elements on the map, one is able to get detailed information about the entity. Another great case, especially in the realm of virtual reality, would be to texture the buildings with their appropriate colour schemes.

The following image depicts attribute data from playing around with the building data. One is able to see the properties of each building, which could also be helpful in other scenarios.

screenshot

Another aspect to note: this virtual world can be modified to provide in-building experiences. I did attempt to have the Media Lab's three dimensional model rendered upon clicking on the building, but this requires quite some processing and what I would call environment caching. To explain this simply: 2-dimensional maps make use of tiling. As one zooms in and out, a tiling service runs in the background doing the hard work of reprojecting what you see. The same needs to be employed in a three dimensional space. This is possible and is an area for further experimentation.
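To make the tiling analogy concrete, here is a minimal sketch (in Python, used here only for illustration) of the standard slippy-map formula that 2-dimensional tile servers rely on. A 3D environment cache would presumably need an analogous indexing scheme, keyed on zoom level and view volume rather than just x/y.

```python
import math

def latlon_to_tile(lat, lon, zoom):
    """Convert a WGS84 coordinate to slippy-map (Web Mercator) tile indices.

    This is the standard OSM-style scheme: at zoom z the world is a
    2^z x 2^z grid, and the tiling service serves the tile covering
    the current view. A 3D equivalent would cache pre-processed models
    per tile instead of rendered images.
    """
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y
```

At zoom 0 the whole world is a single tile (0, 0); each zoom step quadruples the number of tiles, which is why the background service has to do the reprojection work as you navigate.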

Sensor Data

Through this project, it was my hope to integrate data from one of the open sensing projects. The results have been mixed, with challenges around processing and also the structure of such sensing projects. Looking at the data from Safecast, the biggest challenge was that most sensor data is not provided with an elevation tag, making it impossible to map in a three dimensional realm. With sensors, this could help in creating a virtual environment which citizens or the public could experience. Sadly, when 2-dimensional objects are represented within a realm that is three dimensional, they all sit at elevation zero. Hence the attempt to give the sensors a slight elevation bias.
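The elevation bias could be a small preprocessing step along these lines; this is only a sketch, and both the `height` field name and the 5-unit default are my own assumptions rather than anything in the Safecast schema.

```python
def with_elevation(readings, default_bias=5.0):
    """Give 2D sensor readings a small elevation so they render above
    the ground plane in a 3D scene.

    Readings are dicts; any record lacking a "height" value gets the
    default bias. (Field names are hypothetical.)
    """
    out = []
    for r in readings:
        elev = r.get("height")
        out.append({**r, "height": elev if elev is not None else default_bias})
    return out
```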

Another challenge with sensor data is that most of it does not provide quick ways for one to perform arbitrary queries. A case in point: from the Safecast data, I needed to extract all measurements within the US, aggregated by state. Given that the only geographic attributes in use are latitude and longitude, one then has to perform spatial queries against bounding boxes, or multipolygons, representing the different states and average the measurements. This is indeed a space that needs more work.
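A minimal sketch of the kind of query described above: a pure-Python point-in-polygon test used to bin measurements by state and average them. The single-ring polygons and field names are simplifications of mine; real state boundaries are multipolygons, and at scale one would use a spatial index (e.g. PostGIS or an R-tree) rather than this linear scan.

```python
def point_in_polygon(lat, lon, polygon):
    """Ray-casting test; polygon is a list of (lat, lon) vertices."""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        yi, xi = polygon[i]
        yj, xj = polygon[j]
        # Toggle whenever a horizontal ray from the point crosses an edge.
        if (yi > lat) != (yj > lat) and \
           lon < (xj - xi) * (lat - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def average_by_state(measurements, state_polygons):
    """Average measurement values per state.

    state_polygons maps a state name to one (lat, lon) vertex ring;
    measurements are dicts with "lat", "lon" and "value" keys
    (hypothetical field names).
    """
    sums, counts = {}, {}
    for m in measurements:
        for state, poly in state_polygons.items():
            if point_in_polygon(m["lat"], m["lon"], poly):
                sums[state] = sums.get(state, 0.0) + m["value"]
                counts[state] = counts.get(state, 0) + 1
                break
    return {s: sums[s] / counts[s] for s in sums}
```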

Thoughts? Would you use this to consume data, provided the technical hurdles are done away with?


1 thought on “Visualising Sensor data based on 3 dimensional spatial representation”

  1. Jude, this is a very interesting area to explore. I think you’re identifying two topics worth pursuing: can 3D models make for more compelling storytelling, and is it helpful to plot sensor data in three dimensions? I think there’s a good bit of work already done on the former question. You can explore how people have used Google Maps 3D in different situations and see whether there are compelling journalistic use cases. I’d like to hear a bit more about why you felt it was important to create your own tool in this space rather than using some existing data and tools – I suspect the answer is that it’s important for people to be able to build these models themselves, ala OSM, but I’d like to hear you make the argument.

    As for the question of visualizing sensors in 3D, I think there you’re in unexplored territory, which is very interesting. You should look at Joe Paradiso’s work on Doppellab, which looks at visualizing temperature and airflow within the Media Lab using a 3D model: http://www.aec.at/origin/en/2011/07/07/doppellab/

    I think you’d benefit from finding some more data sets, particularly ones with good elevation data. If there aren’t those sets, you may want to ask whether 3D visualization of sensors is as relevant a problem as you think it is.

    Nice work tackling an ambitious problem and producing some interesting and exciting new software.
