This project has indeed been eye-opening. The main motivation behind it was to better understand the attention economy of maps and spatial representation. Most maps used within newsrooms are two-dimensional; this project set out to find use cases for three-dimensional spatial representation. All in all, the project has been a success, but more work is needed on toolkits that would make this simple in a newsroom. What would the use case in a newsroom be? Journalists often use three-dimensional representations of objects to explain certain events within a narrative; a simple example is this flight disaster simulation. I also see this as a great tool for telling a narrative around flood simulation. It could also be used for games, where the three-dimensional structures are created as scenes within the game, allowing for an experience close to the actual physical space.
More and more technologies are being used in newsrooms to tell such stories, especially with the advent of data and a society with access to it. This experiment was a search for ways in which such visualisations could be employed.
Mapsense is a simple first draft to test the concept of three-dimensional spatial representation. It can be accessed at http://jmwenda.github.io/mapsense. The image below depicts a representation of MIT.
One can zoom and pan the map in different ways. The data used to achieve this was Boston/Cambridge raster data collected in 2009. The heights of the buildings were retrieved from the City of Cambridge, so the buildings are in scale with one another. Data extracted from OSM was also used as the base map.
On clicking elements on the map, one can get detailed information about the entity. Another great use case, especially in the realm of virtual reality, would be to texture the buildings with their appropriate colour schemes.
The following image depicts attribute data from playing around with the building data. One can see the properties of each building, which could also be helpful in other scenarios.
Another aspect to note: this virtual world can be modified to provide building-level experiences. I did attempt to have a three-dimensional model of the Media Lab displayed upon clicking the building, but this requires quite some processing and what I would call environment caching. To explain the concept simply: two-dimensional maps make use of tiling. As one zooms in and out, a tiling service runs in the background, doing the hard work of reprojecting what you see. The same needs to be employed in a three-dimensional space. This is feasible and is an area for further experimentation.
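To make the tiling idea concrete, here is a minimal sketch of the standard slippy-map scheme that 2D web maps use: the world is cut into a grid of 2^zoom by 2^zoom tiles, and a coordinate maps to a tile index at each zoom level. The function name and the sample coordinate (roughly MIT) are my own illustration, not part of Mapsense.

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Convert a WGS84 coordinate to slippy-map tile indices at a zoom level."""
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom  # tiles per axis at this zoom
    xtile = int((lon_deg + 180.0) / 360.0 * n)
    ytile = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return xtile, ytile

# Approximate MIT coordinates at a city-scale zoom level.
print(latlon_to_tile(42.3601, -71.0942, 15))
```

A 3D "environment cache" would do the analogous thing one dimension up: pre-compute chunks of geometry per tile and zoom, and stream only the chunks the camera can currently see.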
Through this project, it was my hope to integrate data from one of the open sensing projects. The results have been mixed, with challenges around processing and the structure of such sensing projects themselves. Looking at the data from Safecast, the biggest challenge was that most sensor readings are not provided with an elevation tag, making them impossible to place in a three-dimensional realm. With sensors, this could help in creating a virtual environment that citizens or the public could experience. Sadly, when two-dimensional objects are represented within a three-dimensional realm, they all sit at elevation zero; hence the attempt to give the sensors a slight elevation bias.
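The elevation-bias workaround can be sketched as a small pre-processing step. This is a hypothetical sketch, not Safecast's actual schema: I assume readings arrive as dicts with lat, lon and value keys, optionally carrying an elevation, and the 2-metre default is an assumed bias, not a measured value.

```python
DEFAULT_ELEVATION_M = 2.0  # assumed bias so points sit above the ground plane

def with_elevation(readings, bias=DEFAULT_ELEVATION_M):
    """Return readings as (lat, lon, elevation, value) tuples.

    Readings lacking an 'elevation' key get the default bias instead of
    collapsing onto elevation zero.
    """
    out = []
    for r in readings:
        elev = r.get('elevation')
        out.append((r['lat'], r['lon'], bias if elev is None else elev, r['value']))
    return out

readings = [
    {'lat': 42.36, 'lon': -71.09, 'value': 0.08},                    # no elevation tag
    {'lat': 42.37, 'lon': -71.10, 'value': 0.11, 'elevation': 12.0}, # tagged
]
print(with_elevation(readings))
```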
Another challenge with sensor data is that most of it does not provide quick ways to perform arbitrary queries. A case in point: from the Safecast data, I needed to extract all measurements within the US, aggregated by state. Given that the only geographic attributes available are latitude and longitude, one has to perform spatial queries against bounding boxes, or multipolygons, representing the different states and average the measurements. This is indeed a space that needs more work.
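The aggregation described above can be sketched with a plain point-in-polygon test and a per-region average. This is a toy illustration with made-up rectangular "states"; real state outlines are multipolygons with thousands of vertices, and at Safecast's scale you would want a spatial index rather than this brute-force loop.

```python
def point_in_polygon(lon, lat, polygon):
    """Ray-casting test; polygon is a list of (lon, lat) vertices."""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Toggle when a ray cast to the east crosses this edge.
        if (yi > lat) != (yj > lat) and \
           lon < (xj - xi) * (lat - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def average_by_region(measurements, regions):
    """Average (lon, lat, value) measurements falling inside each named polygon."""
    sums = {name: [0.0, 0] for name in regions}
    for lon, lat, value in measurements:
        for name, poly in regions.items():
            if point_in_polygon(lon, lat, poly):
                sums[name][0] += value
                sums[name][1] += 1
                break
    return {name: s / c for name, (s, c) in sums.items() if c}

# Toy "states" as rectangles, purely for illustration.
regions = {
    'A': [(0, 0), (10, 0), (10, 10), (0, 10)],
    'B': [(10, 0), (20, 0), (20, 10), (10, 10)],
}
measurements = [(5, 5, 1.0), (6, 4, 3.0), (15, 5, 10.0)]
print(average_by_region(measurements, regions))  # {'A': 2.0, 'B': 10.0}
```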
A closing thought: would you use this to consume data, provided the technical hurdles were done away with?