Visualizing Station Delays on the TTC

By: Alexander Shatrov

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2018.

Intro:

The topic of this geovisualization project is the TTC: more specifically, the Toronto subway system and its many, many, MANY delays. As someone who frequently has to suffer through them, I decided to turn this misfortune into something productive and informative, as well as something that would give a person who is not from Toronto an accurate picture of what using the TTC on a daily basis is like: a time-series map showing every single delay the TTC experienced over a specified time period. The software chosen for this task was Carto, due to its reputation for creating good time-series maps.

Obtaining the data:

First, an Excel file of TTC subway delays was obtained from Toronto Open Data, where the delays are organised by month; this project used the August 2018 data. Unfortunately, the file did not include XY coordinates or specific addresses, which made geocoding it difficult. Next, a shapefile of subway lines and stations was obtained from a website called the "Unofficial TTC Geospatial Data". This data was also incomplete: it had last been updated in 2012 and therefore did not include the 2017 extension of the Yonge-University-Spadina line. A partial shapefile of the extension was obtained from DMTI, but it too was incomplete.

To get around this, the attribute table of the stations shapefile was opened, the new stations were added, latitude-longitude coordinates for all of the stations were entered manually, and the table was then plotted in ArcGIS using the "Display XY Data" function to confirm that the points fell in the correct locations. Once the XY data was confirmed to be working, the delay spreadsheet was saved as a CSV file and joined to the station data, producing a single table listing both the delays and the coordinates at which they occurred. Not all of the delays were usable: about a quarter of them had been logged not with a specific station name but only with the overall line on which the delay happened. These were discarded, as there was no way to know where exactly on the line they occurred. Finally, a time-stamp column was created using the day and timeinday columns in the CSV file.
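For readers who want to script this preparation step, below is a minimal pandas sketch of the station join and time-stamp creation. The file and column names ("Station", "Day", "TimeInDay", "Latitude", "Longitude") are assumptions for illustration; the actual Open Data field names may differ, and this is not the exact workflow used above.

import pandas as pd

# Load the monthly delay log and the manually completed station table
# (file and column names here are assumptions, not the actual Open Data fields)
delays = pd.read_csv("ttc_delays_aug2018.csv")
stations = pd.read_csv("subway_stations_latlon.csv")   # Station, Latitude, Longitude

# An inner join on the station name attaches coordinates to each delay and
# drops records that were logged only against a line rather than a station
merged = delays.merge(stations, on="Station", how="inner")

# Build a single timestamp Carto can parse, e.g. "2018-08-07 14:35"
merged["timestamp"] = pd.to_datetime(
    "2018-08-" + merged["Day"].astype(str).str.zfill(2) + " " + merged["TimeInDay"],
    format="%Y-%m-%d %H:%M",
)

merged.to_csv("ttc_delays_geocoded.csv", index=False)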

Finally, the CSV file was uploaded to Carto, where its locations were geocoded using Carto’s geocode tool, seen below.

It should be noted that the CSV file was uploaded instead of the already-geocoded shapefile because exporting the shapefile caused an issue with the timestamp: the hours and minutes were stripped out, leaving only the month and day. No solution to this was found, so the CSV file was used instead. The subway lines were then added as well, although the portion of the recent extension that was still missing had to be drawn manually. Technically the delays were already arranged in chronological order, but a time-series map based only on that order made it difficult to tell on what day of the month, or at what time of day, a delay occurred. This is where the timestamp column came in. Carto at first did not recognize the created timestamp because it had been saved as a string, so another column was created and the string data was converted into an actual timestamp.

Creating the map:

Now the data was fully ready to be turned into a time-series map. Carto has greatly simplified the process of map creation since its early days: clicking on the layer to be mapped brings up a collection of tabs such as data and analysis. To create the map, the style tab was opened and the animation aggregation method was selected.

The color of the points was set by value, with the value being the code column, which records the reason for each delay. The animation itself was driven by the timestamp column, and options such as duration (how long the animation runs, here set to the maximum of 60 seconds) and trails (how long each event lingers on the map, here set to just 2 to keep the animation fast-paced) were configured. In order to properly separate the animation into specific days, the time-series widget was added in the widget tab, located next to the layer tab.

In the widget, the timestamp column was selected as the data source, the correct time zone was set, and the day bucket was chosen. Everything else was left as default.

The buckets option selects the time unit used for the time series. In theory it ranges from minutes to decades, but at the time this project was completed the smallest unit available was, for some reason, a day. This is part of why the timestamp column is useful: without it, the limitation of the bucket in the time-series widget would have reduced the map to a single giant pulse of every delay that happened that day, once a day. With the timestamp column, the animation feature in the style tab could build a chronological animation of all of the delays which, paired with the widget, shows the day on which each delay occurred. The lack of an hour bucket, however, means that figuring out which part of the day a delay occurred in requires some guesswork based on where the time indicator sits, as seen below.

Finally, a legend was needed so that a viewer can see what each color means. Since the point colors are based on the incident code, the intent was to list every code in a custom legend, created in the legend tab found in the same toolbar as style. This proved impossible, as the TTC has close to 200 different codes for various situations, so the legend instead includes the ten most common delay types and an "other" category encompassing the rest.

And that is all it took to create an interesting and informative time-series map. As you can see, no coding was involved. A few years ago, a map like this would likely have required some coding, but Carto has been making an effort to keep its software easy to learn and easy to use. The result of the steps described here can be seen below.

https://alexandershatrov.carto.com/builder/8574ffc2-9751-49ad-bd98-e2ab5c8396bb/embed

Visual Story of GHG Emissions in Canada

By Sharon Seilman, Ryerson University
Geovis Project Assignment @RyersonGeo, SA8905, Fall 2018

Background

Topic: 

An evaluation of annual Greenhouse Gas (GHG) emission changes in Canada, with an in-depth analysis of which provinces/territories contribute the most GHG emissions at national and regional scales, as well as by economic sector.

  • The timeline for this analysis was 1990 to 2015
  • Main data sources: Government of Canada Greenhouse Gas Emissions Inventory and Statistics Canada
Why? 

Greenhouse gases are compounds in the atmosphere that absorb infrared radiation, trapping and holding heat in the atmosphere. By increasing the heat in the atmosphere, greenhouse gases are responsible for the greenhouse effect, which ultimately leads to global climate change. GHG emissions are monitored through three elements: their abundance in the atmosphere, how long they stay in the atmosphere, and their global warming potential.

Audience: 

Government organizations, Environmental NGOs, Members of the public

Technology

An informative website was created with Webflow to visually tell the story of annual emission changes in Canada, convey their spread, and show the expected trajectory. Webflow is a software-as-a-service (SaaS) application that allows designers/users to build responsive websites without significant coding requirements. While the designer creates the page in the front end, Webflow automatically generates the HTML, CSS and JavaScript on the back end. Figure 1 below shows the Webflow editing interface. All of the content used in the website was created externally, prior to being integrated into the site.

Figure 1: Webflow Editing Interface
The website: 

The website itself was designed in a user-friendly manner that enables users to follow the story easily. As seen in Figure 2, the information starts at a high level and gradually narrows down (national level, national trajectory, regional level and economic sector breakdown), guiding the audience towards the final findings and discussion. The maps and graphs used in the website were created from raw data with various software packages, which are further elaborated in the next section.

Figure 2: Website created with the use of Webflow
Check out Canada’s GHG emissions story HERE!

Method

Below are the steps that were undertaken to create this website. Figure 3 shows a breakdown of these steps, which are further elaborated below.

Figure 3:  Project Process
  1. Understanding the Topic:
    • Prior to beginning the process of creating a website, it is essential to evaluate and understand the topic overall to undertake the best approach to visualizing the data and content.
    • Evaluate the audience that the website would be geared towards and visualize the most suitable process to represent the chosen topic.
    • For this particular topic of understanding GHG emissions in Canada, Webflow was chosen because it allows the audience to interact with the website in a manner that is similar to a story; providing them with the content in a visually appealing and user friendly manner.
  2. Data Collection:
    • For the undertaking of this analysis, the main data source used was the Greenhouse Gas Inventory from the Government of Canada (Environment and Climate Change). The inventory provided raw values that could be mapped and analyzed in various geographies and sectors. Figure 4 shows an example of what the data looks like at a national scale, prior to being extracted. Similarly, data is also provided at a regional scale and by economic sector.

      Figure 4: Raw GHG Values Table from the Inventory
    • The second source for this visualization was the geographic boundaries. Boundary shapefiles for Canada at both the national and regional scales were obtained from Statistics Canada. Additionally, the rivers (lines) shapefile from Statistics Canada was used to include water bodies in the maps that were created.
      • When downloading the files from Statistics Canada, the ArcGIS (.shp) format was chosen.
  3. Analysis:
    • Prior to undertaking any of the analysis, the data from the inventory report needed to be extracted to Excel. For the purpose of this analysis, national, regional and economic sector data were extracted from the report to Excel sheets:
      • National: from 1990 to 2015, annually
      • Regional: by province/territory, from 1990 to 2015, annually
      • Economic Sector: by sector, from 1990 to 2015, annually
    • Graphs:
      • Trend: after extracting the national-level data from the inventory, a line graph with an added trendline was created in Excel. This graph shows total emissions in Canada from 1990 to 2015 and the expected trajectory of emissions for the upcoming five years. In this particular graph, it is evident that emissions follow an increasing trajectory. Check out the trend graph here!
      • Economic Sector: similar to the trend graph, the annual economic sector data was extracted from the inventory to Excel. With the available data, a stacked bar graph was created for 1990 to 2015. This graph shows the breakdown of emissions by sector in Canada as well as the variation/fluctuation of emissions within sectors. It helps identify which sectors contribute the most and in which years those sectors saw a significant increase or decrease. With this graph, further analysis could be undertaken to understand what changes may have occurred in certain years to create such variation. Check out the economic sector graph here!
    •  Maps:
      • National map: the national map animation was created with ArcMap and an online GIF maker. After the data was extracted to Excel, it was saved as a .csv file and brought into ArcMap. Sixteen individual maps were made in ArcMap to visualize the varying emissions from 1990 to 2015. The provincial and territorial shapefile was dissolved using the Dissolve tool (from the ArcToolbox) to obtain a boundary file at the national scale (aligned with the regional boundaries used for the next map), and the uploaded table was then joined to the boundary file with a table join. Both the dissolved national boundary shapefile and the river shapefile were used for this process, together with the national emissions data originally exported from the inventory. Each map was then exported as a .jpeg image and uploaded to the GIF maker to create the animation shown on the website (a scripted alternative for assembling the GIF is sketched after this list). With this visualization, the viewer can see how emissions varied across the years in Canada. Check out the national animation map here!
      • Regional map: the regional map animation was created in the same way as the national one. However, regional emissions data was only available for three years (1990, 2005 and 2015). The extracted .csv file was uploaded and joined to the (undissolved) provinces and territories shapefile to create three choropleth maps. The three maps were then exported as .jpeg images and uploaded to the GIF maker to create the regional animation. From this animation, the viewer can clearly see which regions in Canada have increased, decreased or stayed the same in their emissions. Check out the regional animation map here!
  4. Final output/maps:
    • The graphs and maps discussed above were exported as images and GIFs to integrate into the website. By evaluating the various visualizations, conclusions were drawn about the current status of Canada as a nation with regard to its GHG emissions. Additional research was done to assess the targets and policies currently in place for GHG emission reductions.
  5. Design and Context:
    • Once the final outputs and maps were created and the content was drafted, the external content was uploaded through Webflow's media upload tool. The content was then organized alongside the graphs and maps to present a sequential evaluation of the material.
    • An introductory statement frames the content and Canada's place in the realm of global emissions. Emissions are then evaluated first at a national scale with the animated map, followed by the national trend, the regional animation and, finally, the economic sector breakdown. Each section has associated content and a description explaining what the visual shows.
    • The Learn More and Data Source buttons in the website include direct links to Government of Canada website about Canada’s emissions and the GHG inventory itself.
    • The concluding statement provides the viewer with an overall understanding of Canada’s status in GHG emissions from 1990 to 2015.
    • All of the font formatting and organizing of the content was done within the Webflow interface with the end user in mind.
  6. Webflow:
    • This particular format was chosen for the website because of its storytelling element. Giving viewers the option to scroll through the page and read its contents works much like a story, which suits the informative purpose of the site.
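As noted in the national map step above, the exported .jpeg frames can also be stitched into a GIF locally instead of with an online GIF maker. Below is a minimal Python sketch using the Pillow imaging library; the folder and file names are placeholders, not the actual project files.

from PIL import Image
from pathlib import Path

# Collect the exported map frames in chronological order
# (placeholder names such as emissions_1990.jpeg, emissions_1991.jpeg, ...)
frames = [Image.open(p) for p in sorted(Path("exported_maps").glob("emissions_*.jpeg"))]

# Save the first frame and append the rest as an animated GIF,
# showing each year for one second and looping forever
frames[0].save(
    "national_emissions.gif",
    save_all=True,
    append_images=frames[1:],
    duration=1000,  # milliseconds per frame
    loop=0,
)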

Lessons Learned: 

  • While this website provides useful information, it could be further enhanced by integrating an interactive map, which would require additional coding. That, however, would mean building the website outside of the Webflow interface.
  • The analysis could also be extended with the addition of municipal emissions values and policies (which were not available in the inventory itself).

Overall, the use of Webflow for the creation of this website gives users the flexibility to integrate various components and visualizations. The user-friendly interface enables users with minimal coding knowledge to create a website that can serve a variety of purposes.

Thank you for reading. Hope you enjoyed this post!

Visualizing Urban Land Use Growth in Greater São Paulo

By: Kevin Miudo

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2018

https://www.youtube.com/watch?v=Il6nINBqNYw&feature=youtu.be

Introduction

In this development blog for my map animation, I intend to discuss the steps involved in producing my final geovisualization product, which can be viewed in the embedded YouTube link above. It is my hope that you, the reader, learn something new about GIS technologies and can apply the knowledge contained within this blog to your own projects. Before discussing the technical aspects of the map animation's development, I would like to provide some context behind its creation.

Cities within developing nations are experiencing urban growth at a rapid rate. Both population and sprawl are increasing at unpredictable rates, with consequences for environmental health and sustainability. In order to explore this topic, I have chosen to create a time series map animation visualizing the growth of urban land use in a developing city in the Global South. The city I have chosen is São Paulo, Brazil, which has been undergoing rapid urban growth over the last 20 years. This increase in population and urban sprawl has significant consequences for climate change, and as such it is important to understand the spatial trend of growth in developing cities that do not yet have the same level of controls and policies regarding environmental sustainability and urban planning. A map animation visualizing not only the extent of urban growth, but also when and where sprawl occurs, can help the general public get an idea of how developing cities grow.

Data Collection

In-depth searches of online open data catalogues for vector-based land use data yielded few results. In the absence of detailed, well-collected and precise land use data for São Paulo, I chose to analyze urban growth through remote sensing. Imagery from Landsat satellites was collected and then processed in PCI Geomatica and ArcGIS Pro for land use classification.

Data collection relied on open data repositories. In particular, free remotely sensed imagery from Landsat 4, 5, 7 and 8 can be publicly accessed through the United States Geological Survey Earth Explorer web page. This open data portal allows the public to collect imagery from a variety of satellite platforms at varying processing levels. As this project aims to view land use change over time, Level-1 imagery was selected for Landsat 4-5 Thematic Mapper and Landsat 8 OLI/TIRS. Selected imagery had to have less than 10% cloud cover and had to be taken during the daytime so that spectral values would remain consistent across each unsupervised image classification.

Landsat 4-5 imagery at 30 m spatial resolution was used for the years between 2004 and 2010. Landsat 7 imagery (with its 15 m panchromatic band) was excluded from the search criteria because the scan-line corrector of Landsat 7 failed in 2003, making many of its images unsuitable for precise land use analysis. Landsat 8 imagery was collected for 2014 and 2017. All images were downloaded as Level-1 GeoTIFF Data Products. In total, seven images were collected, for the years 2004, 2006, 2007, 2008, 2010, 2014 and 2017.

Data Processing

Imagery at the Level-1 GeoTIFF Data Product level contains a separate .tif file for each image band produced by Landsat 4-5 and Landsat 8. In order to analyze land use, the image bands must be combined into a single .tiff. PCI Geomatica remote sensing software was employed for this step. Using the File -> Utility -> Translate command within the software, the user can create a new image based on one of the bands of the Landsat scene.

For this project, I selected the first spectral band from the Landsat 4-5 Thematic Mapper images, and then sequentially added bands 2, 3, 4, 5 and 7 to complete the final .tiff image for that year. Band 6 was skipped, as it is the thermal band at 120 m spatial resolution and is not necessary for land use classification. This process was repeated for each Landsat 4-5 image. Similarly, for the 2014 and 2017 Landsat 8 images, bands 2-7 were combined in the same manner to produce one image for each of those years.
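For anyone without access to PCI Geomatica, a comparable band-stacking step can be sketched in Python with the rasterio library. This is only an illustrative alternative with placeholder file names; it is not the workflow used in this project.

import rasterio

# Placeholder band files for one Landsat 4-5 scene (bands 1-5 and 7)
band_files = [
    "LT05_B1.TIF", "LT05_B2.TIF", "LT05_B3.TIF",
    "LT05_B4.TIF", "LT05_B5.TIF", "LT05_B7.TIF",
]

# Copy the georeferencing profile from the first band and set the band count
with rasterio.open(band_files[0]) as src:
    profile = src.profile.copy()
profile.update(count=len(band_files))

# Write each single-band file into one multi-band GeoTIFF
with rasterio.open("landsat_stack.tif", "w", **profile) as dst:
    for i, path in enumerate(band_files, start=1):
        with rasterio.open(path) as src:
            dst.write(src.read(1), i)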

Each combined raster image contained far more data than required to analyze the urban extent of São Paulo, so the full extent of each image was clipped. When doing your own map animation project, you may also wish to clip the data to your study area, as it is very common for raw imagery to contain sections of no data, or clouds, that you do not want to analyze. Using the clipping/subsetting option found under Tools in the main panel of PCI Geomatica Focus, you can clip any image to a subset of your choosing. For this project, I selected the 'lat/long' coordinate type and entered the extents of my 3000×3000 pixel subset. The input coordinates for my project were: Upper left: 46d59'38.30″ W, Upper right: 23d02'44.98″ S, Lower right: 46d07'21.44″ W, Lower left: 23d52'02.18″ S.

Land Use Classification

The 7 processed images were then imported into a new project in ArcGIS Pro. During import, raster pyramids were created for each image in order to increase processing speed. Within ArcGIS Pro, the Spatial Analyst extension was activated. This extension provides analytical techniques such as unsupervised land use classification using iso-clusters, and the unsupervised iso-clusters tool was run on each image layer as a raster input.

The tool generates a new raster that assigns every pixel with the same or similar spectral reflectance values to a class; the number of classes is chosen by the user. Twenty classes were selected as the unsupervised output for each raster (note that the more classes you select, the more precise your classification results will be). After this output was generated for each image, the 20 spectral classes were narrowed down into three simple land use classes: vegetated land, urban land cover, and water. As the project primarily seeks to visualize urban growth, and not every type of land use, only three classes were necessary. Furthermore, it is often difficult to discern agricultural land use from regular vegetated land cover, or industrial from residential land use, and so forth; such precision is out of scope for this exercise.

The 20 classes were assigned manually, using the true-colour .tiff image created in the image processing step as a reference. In cases where the resolution was too low to precisely determine which land use class a spectral class belonged to, Google Maps satellite imagery was used as a reference. This process was repeated for each of the 7 images.

After the 20 classes were assigned, the Reclassify tool under Raster Processing in ArcGIS Pro was used to aggregate similar classes together. This outputs a final, reclassified raster with a gridcode attribute that assigns each pixel value to a land use class. With the Reclassify tool, you assign each of the output spectral classes to new classes that you define; for this project, the three classes were urban land use, vegetated land, and water. This step was repeated for each of the 7 images.
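The same two classification steps can also be scripted with arcpy's Spatial Analyst functions. The sketch below is illustrative only: the workspace, file names and the 20-to-3 class mapping are assumptions, not the parameters actually used here.

import arcpy
from arcpy.sa import IsoClusterUnsupervisedClassification, Reclassify, RemapValue

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\saopaulo\rasters"  # assumed workspace

# 1. Unsupervised classification into 20 spectral classes
iso = IsoClusterUnsupervisedClassification("landsat_stack_2004.tif", 20)
iso.save("iso_2004.tif")

# 2. Collapse the 20 spectral classes into 3 land use classes:
#    1 = urban, 2 = vegetated land, 3 = water (this mapping is illustrative)
remap = RemapValue([[c, 1] for c in range(1, 8)] +
                   [[c, 2] for c in range(8, 17)] +
                   [[c, 3] for c in range(17, 21)])
reclassified = Reclassify("iso_2004.tif", "Value", remap)
reclassified.save("landuse_2004.tif")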

Cartographic Element Choices:

It was at this point within ArcGIS Pro that I decided to implement my cartographic design choices, prior to creating the final map animation.

For each layer, urban land use was given a different shade of red: the later the year, the darker and more opaque the red. Using saturation and lightness in this manner helps the viewer see where urban growth is occurring. The darker the shade of red, the more recent the growth of urban land use in the greater São Paulo region. In the final map animation, this is visualized through the progression of colour as time moves on in the video.

ArcPro Map Animation:

Creating an animation in ArcPro is very simple. First, locate the animation tab through the ‘View’ panel in ArcPro, then select ‘Add animation’. Doing so will open a new window below your work space that will allow the user to insert keyframes. The animation tab contains plenty of options for creating your animation, such as the time frame between key frames, and effects such as transitions, text, and image overlays.

For my map animation, I started with a zoomed-out view of South America in order to give the viewer some context for the study area, as the audience may not be very familiar with the geography of São Paulo. Then, using the pan tool, I zoomed into selected areas within my study area, creating new keyframes every so often so that the animation tool produces a fly-by effect. The end result explores the very same mapping extents I viewed while navigating through my data.

While making your own map animation, be sure to play through your animation frequently to confirm that the fly-by camera is moving in the direction you want. The time between keyframes can be adjusted in the animation panel, and effects such as text overlays can be added. Each time I activated another layer to show the growth of urban land use from year to year, I created a new keyframe and added a text overlay indicating the date of the processed image.

Once you are satisfied with your results, you can export your final animation in a variety of formats, such as .avi, .mov, .gif and more. You can even select the resolution, or use a preset that automatically configures your video format for particular purposes. I chose the YouTube export preset, which produced a final MPEG-4 file at 720p resolution.

I hope this blog is useful for creating your very own map animation from remotely sensed and classified raster data. Good luck!

Geovisualizing “Informality” – Using OpenStreetMap & Story Maps to tell the story of infrastructure in Kibera (Nairobi, Kenya)

by Melanie C. MacDonald
Geovis Project Assignment @RyersonGeo, SA8905, Fall 2017

From November 8th to 16th, 2017, I ran a small mapping campaign to generate building data in Kibera (Nairobi, Kenya) using OpenStreetMap (OSM). OSM is a collaborative online project whose aim is to create a free, editable 'world map' for anyone to use. The foundations of OSM are rooted both in participation and partnership (i.e. the open-source movement) and in restriction (i.e. the growing complexity of data and copyright constraints in many fields of work); collaboration was OSM's direct response to these growing restrictions and, as a result, I felt it was the best – and most interesting – technology suited to my geovisualization project. Overall, my personal campaign resulted in the contribution of 6770 buildings generated from known sources – mostly myself – and 1101 from sources 'unknown' (to me).

Importance:

Building data in informal settlements (or "slums") is difficult to generate and/or find. While my research efforts, which included an informal interview/discussion with colleagues in Kenya, revealed that building data for Kibera does in fact exist, it is prohibitively expensive and inaccessible to most people, including myself. (Note: this was the case despite my being a student and a former researcher in Nairobi.) Furthermore, copyright law protects any data produced in the private sector, making it more complicated to use such data to create a map for the public. Because I wanted to use this geovisualization project to create exactly that kind of map – accessible to anyone (using the technology available through Esri's Story Maps) and educational – the case for OSM became even stronger.

Steps taken: how and where to start?

The first step of my project was to learn how OpenStreetMap (OSM) works. Because OSM is intuitive, the steps I took were simple: (1) visit the website, www.openstreetmap.org; (2) create an account ("Sign Up"); (3) log in; (4) type "Kibera, Nairobi, Kenya" into the search field; (5) click "Edit"; (6) follow the tutorial that OSM offers before you make your first edit; (7) select "Area" and zoom all the way in to the rooftops of the buildings to create polygons that mark the geolocation of each structure (double-click to close the polygon); (8) select "Building" from the options in the left bar (note: if this process were done with people who live in these neighbourhoods, the name of what each building is could be included in the data, which would create more possibility for analysis in the future); (9) click the check-mark (perhaps the most important step for saving the data), and then "Save" on the top banner.

These steps were repeated until a chosen portion of Kibera was completed. The above instructions were emailed to a few willing participants, and a "call" for participation was also put out via Twitter periodically over the course of 6 days. My building extraction started at the beginning of an "access road" at the furthest south-eastern point of Kibera, in a village called Soweto-East, where I had conducted research about a contentious "slum-upgrading" programme 4 years ago.

Over the course of 6 days, I made 31,691 edits to OSM overall, which included all actions (deleting incorrect buildings, creating nodes, moving things, etc.). In total, I created 5471 buildings and friends and family created another 1299, for 6770 buildings in total. However, when I extracted the building data – first loading it into QGIS and then exporting that shapefile into ArcGIS – 7871 buildings were counted (extracted/cleaned) in the area south of the railway (which runs along the northern part of the outside boundary). I cannot account for who created the other 1101 buildings (perhaps a success attributable to the social media efforts?), but 86% of the area was 'mapped' over the 6-day period.
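As a side note, the extraction itself (OSM into QGIS and then ArcGIS) can also be scripted. Below is a minimal sketch with the OSMnx Python package; it is an alternative approach with an assumed place-name query, not the method used in this project, and the exact function name varies between OSMnx versions.

import osmnx as ox

# Download all features tagged as buildings within the Kibera boundary
# (recent OSMnx versions; older versions use ox.geometries_from_place instead)
buildings = ox.features_from_place("Kibera, Nairobi, Kenya", tags={"building": True})

# Keep polygon footprints only and count them
footprints = buildings[buildings.geometry.geom_type.isin(["Polygon", "MultiPolygon"])]
print(len(footprints), "building footprints downloaded")

# Save to a GeoPackage for use in QGIS or ArcGIS
footprints[["geometry"]].to_file("kibera_buildings.gpkg", driver="GPKG")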

It's often said, for perspective, that Kibera is "two-thirds the size of Central Park in New York", but the precise area it covers is less often (if ever) expressed. I wasn't able to contribute an absolute calculation either, but, not accounting for elevation or other factors, at its longest and widest the area of Kibera covered in this 6-day period was approximately 2000 m x 1500 m. It's imprecise, but imagine someone running around a 400 m track 5 times and you have the length of the area in focus – thousands of buildings that are homes, businesses, schools, medical clinics, and so on – the equivalent of maybe 10 football fields (12 or 13 acres).

 Accuracy, Precision & Impact

It was often difficult to determine where building outlines began and ended. Because of the corrugated metal material used to build homes, schools, businesses (and more), the light flares from the sun captured in the satellite imagery made for guesswork in some instances. The question then became: why bother? Is there a point, or purpose, to capturing these structures at all if it's impossible to be precise?

Much of the work to date with open-source data in this particular community in this particular part of the world is deeply rooted in protecting people; keeping people safe. Reading about the origins of mapping efforts by an organization called Map Kibera can reveal a lot about the impact(s) and challenges of creating geodata in informal settlements (or “slums”). The process of drawing thousands of polygons to represent buildings that are most often considered to be precarious or impermanent housing was enlightening. One of the main take-away ‘lessons’ was that data production and availability is critical – without data, we don’t have much to work with as spatial analysts.

Practical Implications: the “Story Map”

While new data production was one of the goals of this geovisualization project, the second goal was to find a way of communicating the results. Esri's Story Maps technology was the most useful option because it allowed me to use the OSM map as a basemap, which helped maintain the open-source 'look' of the map. Without much effort, the 7871 new buildings, covering 7 of the 13 villages in Kibera, were automatically visible on this basemap. Because I took stop-motion videos of the OSM building extraction process, I was able to create point data on my Story Map linking to these videos. With "education" as one of the goals of the project – about both the infrastructure in Kibera itself and how to use OSM in general – people unfamiliar with OSM tools, and with how they can be useful in the context of missing data in informal settlements (or "slums"), could familiarize themselves with both. In addition, I included personal photos from prior research/work in the area, further adding to the "story" of infrastructure in the community. The Story Map is available here.

Print: Formalizing the Informal

The initial goal of this geovisualization project was to demonstrate that there is beauty and art in the creation of data, particularly when it is collaborative and made to be openly accessible, unrestricted, and free for anyone to use. After proposing to create and extract building data from Kibera, I wanted to use a special printing technology to have the building data etched into an aluminum composite material called dibond. The idea was to have this piece of collaborative work (about a place commonly labelled a "slum") gallery-ready and ultimately "legitimized" simply by being etched into something permanent (this idea of "legitimacy" is tongue-in-cheek, to be clear). The technology available to etch into dibond is limited in the city, however, and when time limitations made the etching goal prohibitive, I decided to have the final product printed and mounted onto dibond as a compromise. In the end, the result of having the mounting material hidden was conceptually true to the goal of the project: to draw attention to the reality that real people with rich lives maintain these homes, businesses, schools, community centres, etc., regardless of the assumptions that corrugated metal may invite. Paired with the Esri Story Map, this print drew people into the digital story, which was loaded onto a computer for the day of the formal project presentation. Now the 24×36 print hangs on my wall, generating conversation about the entire process of this project and the potential impacts of open-source data. Having spent 3 years of my life examining the impacts of public participation in designing infrastructure changes (which hopefully lead to improvements in quality of life), this print – and process – could not have a better 'home'.

#AddressingTheSurface Translucent Maps inspired by GIS and Open Data

by Edgar Baculi #themapmaker
Geovisualization Project @RyersonGeo, SA8905, Fall 2017

#AddressingTheSurface was a collaborative geovisualization project between recent OCAD University graduate, graphic designer and fine artist Jay Ginsherman and Ryerson University Master of Spatial Analysis candidate Edgar Baculi, who provided the ideas and direction. The project was inspired by Ginsherman's previous work, entitled 'Liquid Shadows', which uses translucent images or maps along with a lighting device nicknamed the 'Lightbox'. This piece, along with Ginsherman's previous and ongoing work, can be found here: http://jginsherman.format.com/. While attending OCAD University's 102nd GradEx, Baculi encountered Ginsherman's work and the GIS-like experience it offered attendees. From this, the idea of using open data and actual GIS to produce a piece was born.

After consulting with Ginsherman, a piece based on Baculi's lived experience, open data and GIS was established. Having previously done research with open data, Baculi was familiar with exploring and downloading it. The Toronto Open Data Catalogue provided all the data relevant to the project. The data collection focused on datasets related to Toronto Community Housing, services of interest to these residents, and other reference locations.

The following datasets were downloaded and manipulated from the catalogue:
1. Toronto Community Housing Corporation Residences (with high, mid and low rise buildings selected and divided into three map layers)
2. The boundary of the city of Toronto (dissolved former municipality shape file)
3. City of Toronto Neighbourhoods
4. Street file
5. Fire Stations
6. Police Stations
7. Park Land
8. TTC Subway lines
9. Three heat/kernel density maps of services of interest to TCHC residents (based on Rent Bank Centres, Community Cooling Centres and Shelters)
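The three heat maps in item 9 were kernel density surfaces. A minimal arcpy sketch of that step is shown below, assuming the Spatial Analyst extension and placeholder shapefile names; the cell size, search radius and symbology actually used for the piece are not reproduced here.

import arcpy
from arcpy.sa import KernelDensity

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\addressing_the_surface"  # assumed workspace

# One kernel density surface per service layer (placeholder shapefile names)
for layer in ["rent_bank_centres.shp", "cooling_centres.shp", "shelters.shp"]:
    density = KernelDensity(layer, "NONE", 50, 1000)  # cell size and radius are assumptions
    density.save(layer.replace(".shp", "_density.tif"))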

A key aspect of this project was the use of the subtractive primary colours (magenta, yellow and cyan) for the heat maps, so that overlaps produce interesting new colours. The overlap of colours was intentionally designed to be open to interpretation by the map readers.

Using ArcGIS, Baculi adjusted the previously mentioned datasets with suitable symbology before sending them to Ginsherman. The discussions between Baculi and Ginsherman involved explaining how GIS works and the cartographic ideals behind the look of the maps, with good design to appeal to the audience. Baculi wanted to create a hands-on GIS experience, with a legend that built itself up and remained legible to the map reader. Ginsherman incorporated these ideals into the final look under Baculi's direction.

Once Baculi completed the GIS portion of the layers, they were sent to Ginsherman to refine the design and layout and to print. Ginsherman worked from PDFs of the layers in Adobe Illustrator, and ensured map alignment by keeping the work in the same Illustrator file and giving each map its own layer. Printing was done on a laser printer at the OCAD University Digital Print Centre. Draft layers were also created beforehand to test the colour combinations and the best level of transparency for the maps.

A key component of the piece was the Lightbox from Ginsherman’s previous work which was designed and built by Ginsherman and his father. The Lightbox is made of wood, acrylic glass, and LED lights which were screwed together. The Toronto boundary layer was the only layer not printed on a translucent sheet, but on the glass. The boundary along with the north arrow acted as guides to align the layering of the maps. The LED lights improved the clarity of the layering maps as well as directed attention to the piece.

The end result was presented on Ryerson’s 2017 GIS Day and consisted of a Lightbox with the Toronto boundary printed on top and a total of 12 translucent maps. A variety of combinations were possible for interaction and discussion for the attendees. Please see the YouTube video below!

Displaying Brooklyn’s Urban Layers by Mapping Over 200 Years of Buildings

Renad Kerdasi
Geovis Course Assignment
SA8905, Fall 2015 (Rinner)

Growth in Brooklyn
Located at the far western end of Long Island, Brooklyn is the most populous of New York City's five boroughs. The borough began to expand between the 1830s and 1860s, starting in downtown Brooklyn. It continued to expand outwards as a result of massive European immigration, the completion of the Brooklyn Bridge connecting it to Manhattan, and the expansion of industry. By the mid-1900s, most of Brooklyn was already built up as the population increased rapidly.

Data
The data in the time series map come from PLUTO, an NYC open data product created by the NYC Department of City Planning and released in 2015. The data contain information about each building located in the boroughs, including the year the building's construction was completed (as a 4-digit number) and the building footprint. The building years range from 1800 to 2015; there are some missing dates in the dataset as well as some inaccuracies in the recorded dates. The data are available in Shapefile and Windows Comma Separated formats on the NYC Planning website: http://www.nyc.gov/html/dcp/html/bytes/dwn_pluto_mappluto.shtml

The Making of the Time Series
To present the structural episodes of Brooklyn’s built environment, QGIS 2.10 was utilized with the Time Manager plugin. QGIS is an open source GIS application that provides data visualization, editing, and analysis through functions and plugins (https://www.qgis.org/en/site/about/). The Time Manager plugin animates vector features based on a time attribute (https://plugins.qgis.org/plugins/timemanager/). This tool was effective in presenting a time series of Brooklyn’s building construction dates.

To create the time series, the PLUTO shapefile was downloaded and prepared by removing unnecessary fields. The columns of interest are FID, Shape, and YearBuilt. Because the time column drives the animation, its format must fit what QGIS Time Manager expects: timestamps in YYYY-MM-DD format, whereas the building dates in the PLUTO shapefile are four-digit years. The dates therefore had to be modified to the Time Manager format before the data could be brought into QGIS.
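One way to handle this conversion is a short script. The geopandas sketch below assumes the field names shown above and simply appends a month and day to each four-digit year; it is an illustration, not necessarily how the original data preparation was done.

import geopandas as gpd

# Load the Brooklyn PLUTO shapefile (placeholder path)
bldgs = gpd.read_file("BKMapPLUTO.shp")

# Keep records with plausible construction years only
bldgs = bldgs[bldgs["YearBuilt"].between(1800, 2015)]

# Convert the 4-digit year into the YYYY-MM-DD format Time Manager expects
bldgs["BuiltDate"] = bldgs["YearBuilt"].astype(int).astype(str) + "-01-01"

# Write out a shapefile ready for QGIS Time Manager
bldgs.to_file("brooklyn_buildings_timed.shp")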


In QGIS, the Time Manager plugin must be installed first. The SHP can then be added into QGIS, along with any other shapefiles needed: roads, highways, state boundaries, etc. Note: to use Time Manager, the data must be in SHP format.


Once the data are added, the polygons (i.e. buildings) are styled based on age, which is effective for distinguishing the oldest buildings from the newest. QGIS offers a large number of options for applying different types of symbology to the data. The layer is styled based on the YearBuilt attribute, since the time series will show urban layers using building dates, and a Graduated style is chosen so that features in the layer are not all styled the same way. The other data files, such as roads, highways, and state boundaries, are styled as well.

Once all the data are added and styled, the map can be oriented and the Time Manager plugin applied. To truly see the urban layers, the map is zoomed in on the upper portion of Brooklyn. In the Time Manager settings, the layer with building dates is added, and the Start Time is set to the Year Built field, which holds the timestamp data. To keep features displayed permanently on the map, "No End Time" is selected for the End Time option. For the animation options, each time frame will be shown for 100 milliseconds, and the timestamp (i.e. built year) will be displayed on the map.


In the Time Manager dock, the time frame is changed to years, since the animation shows the year each building's construction was completed, and the size of the time frame is set to 5 years. With these settings, each frame displays 5 years of data every 100 milliseconds. Playing the video displays the animation inside QGIS, and one can see the time scrolling from 1800 to 2015 in the dock.


Time Manager also enables you to export the animation to an image series using the “Export Video” button. Actual video export is not implemented in Time Manager. To play the animation outside of QGIS, various software applications can be used on the resulting image series to create a video file.

In addition, QGIS only allows users to insert a legend and title in the Composer Manager window; currently, it is not possible to render the legend in the main map window. One approach to generating a video with a legend is to create a dummy legend and add that image to the PNGs that Time Manager produces. A dummy legend and a title for Brooklyn's urban layers were created outside of QGIS and added to each PNG.

Finally, to create a time-lapse and compile the images together, Microsoft Movie Maker was used. Other software applications can be used as well, including MEncoder and Avidemux.

Results

Link: https://youtu.be/52TnYAVxN3s

3D Hexbin Map Displaying Places of Worship in Toronto

Produced by: Anne Christian
Geovis Course Assignment, SA8905, Fall 2015 (Rinner)

Toronto is often seen as the city of many cultures, and with different cultures often come different beliefs. I wanted to explore the places of worship in Toronto and determine which areas have the highest concentrations and which the lowest. As I explored different ways to display this information effectively and uniquely, I discovered hexbin maps and 3D maps. While doing some exploratory research, I found that hexbin maps had been created before and 3D maps had been printed before, but I was unable to find anyone who had printed a 3D hexbin prism map, so I decided to take on this endeavor.

Hexbin maps are a great alternative technique for working with large data sets, especially point data. Hexagonal binning uses a hexagon-shaped grid, allowing one to divide the space on a map into equal units and display the information (in this case, the places of worship) that falls within each unit (in this case, each hexagon cell). The tools used for this project were QGIS, ArcGIS, and ArcScene, although it could probably be completed entirely within QGIS and other open-source software.

Below are the specific steps I followed to create the 3D hexbin map:

  1. Obtained the places of worship point data (2006) from the City of Toronto’s Open Data Catalogue.
  2. Opened QGIS, and added the MMQGIS plugin.
  3. Inputted the places of worship point data into QGIS.
  4. Used the “Create Grid Lines Layer” tool (Figure 1) and selected the hexagon shape, which created a new shapefile layer of a hexagon grid.

    Figure 1: Create Grid Lines Layer Tool
  5. Used the “Points in Polygon” tool (Figure 2), which counts the points (in this case the places of worship) that fall within each hexagon grid cell. I chose the hexagon grid as the input polygon layer and the places of worship as the input point layer. The number of places of worship within each hexagon was counted and added as a field in the new shapefile (a scripted sketch of this counting step appears after this list).

    Figure 2: Points in Polygon Tool
  6. Inputted the created shapefile with the count field into ArcGIS.
  7. Obtained the census tract shapefile from the Statistics Canada website (https://www12.statcan.gc.ca/census-recensement/2011/geo/bound-limit/bound-limit-2011-eng.cfm) and clipped out the city of Toronto.
  8. Used the clip tool to include only the hexagons that are within the Toronto boundary.
  9. Classified the data into 5 classes using the quantile classification method, and attributed one value for each class so that there are only 5 heights in the final model. For example, the first class had values 0-3 in it, and the value I attributed to this class was 1.5. I did this for all of the classes.
  10. The hexagons for the legend were created using the editor toolbar, whereby each of the 5 hexagons were digitized and given a height value that matched with the map prism height.
  11. Inputted the shapefile with the new classified field values into ArcScene, extruded the classified values, and divided each value by 280, because the resulting height works well and can be printed in a timely manner.
  12. Both the legend and hexagonal map shapefile were converted into wrl format in Arcscene. The wrl file was opened in Windows 10 3D Builder and converted into STL format.
  13. This file was then brought to the Digital Media Experience (DME) lab at Ryerson, and the Printrbot Simple was used to print the model using the Cura program. The model was rescaled where appropriate. My map took approximately 3 hours to print, but the time can vary depending on the spatial detail of what is being printed. The legend took approximately 45 minutes. Below is a short video of how the Printrbot created my legend. A similar process was used to created the map.
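As mentioned in step 5, the grid-counting portion of this workflow can also be scripted. The geopandas sketch below reproduces the counting logic of the "Points in Polygon" step under assumed file names; it is an alternative to the MMQGIS tool, not the exact method used above.

import geopandas as gpd

# Placeholder inputs: the hexagon grid and the places-of-worship points
hexgrid = gpd.read_file("toronto_hex_grid.shp")
worship = gpd.read_file("places_of_worship_2006.shp").to_crs(hexgrid.crs)

# Spatially join each point to the hexagon that contains it, then count per hexagon
# (predicate= is called op= in older geopandas releases)
joined = gpd.sjoin(worship, hexgrid, how="inner", predicate="within")
counts = joined.groupby("index_right").size()

# Attach the counts back to the grid; hexagons with no points get 0
hexgrid["pow_count"] = counts.reindex(hexgrid.index, fill_value=0).astype(int)
hexgrid.to_file("hex_grid_with_counts.shp")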

The final map and legend (displayed in the image below) provide a helpful and creative way to display data. The taller prisms indicate areas with the most places of worship, and the shorter prisms indicate the areas in Toronto with the fewest. This hexagonal prism map allows for effective numerical comparisons between different parts of Toronto.


Animating Toronto Parking Enforcement with heatmap.js

by Justin Pierre – Geovis course project for SA8905, Fall 2015 (Dr. Rinner)

Heatmap.js is a project developed by Patrick Wied for creating heatmaps on the web using JSON data and JavaScript. It's lightweight, free to use, and comes with tons of great customization options.

For my geovisualization project for SA8905 I created an animated heat map of parking tickets issued in Toronto during the 24 hour period of May 1st 2014. Parking ticket data is supplied on the Toronto Open Data Portal.

Thursday May 1st, 2014 was one of the busiest days of the year for parking tickets. There were 9,559 issued in 24 hours. 6am was the safest time with only 25 tickets issued and 9am was the busiest with 1,451.

To create the heatmap I geocoded the Toronto parking ticket data using the City of Toronto street data with address ranges. About 10% of the records had to be manually geocoded to intersections, which was a time-consuming process! Once I had the locations, it was simple to create a JSON object for each hour in Excel, like this:

var h=[ {
 max: 100000,
 data: [
{lat: 43.667229, lng: -79.382666, count: 1},
{lat: 43.728744, lng: -79.30461, count: 1},
{lat: 43.778933, lng: -79.418283, count: 1},
{lat: 43.647378, lng: -79.418484, count: 1},

etc…

h is an array where each element is a JSON object containing the lats and lngs of each parking ticket. The count is required for the heatmapping function and is always 1, unless you're this driver:

Using heatmap.js is super straightforward. Initialize your web map in Leaflet or OpenLayers (I used Leaflet), and configure some simple parameters:

var cfg = {
 "radius": .008,           //set for interpolation radius
 "maxOpacity": .8,         //set to .8 to show the basedata
 "scaleRadius": true,      //recalibrate radius for zoom
 "useLocalExtrema": true,  //reset data maximum based on view
 latField: 'lat',          //where is latitude referenced 
 lngField: 'lng',          //where is longitude referenced
 valueField: 'count'       //where is the numerical field
 };

Attach that to your heatmap object and point it at your datasource like so:

heatmapLayer = new HeatmapOverlay(cfg);
map.addLayer(heatmapLayer);
i=0;
heatmapLayer.setData(h[i]);

Remember that h[] is the array where the ticket data is stored and so h[0] is the first hour of data, midnight to 1am. This will create a static heatmap like this:


Now comes the part where we cycle through the hours of data with a setInterval() function:

setInterval(function() {
  i += 1;
  if (i > 23) i = 0;
  $(".heatmap-canvas").fadeOut("slow", function() {
    heatmapLayer.setData(h[i]);
    heatmapLayer._draw();
    $("#hour").html(i);
  });
  $(".heatmap-canvas").fadeIn("slow", function() {
  });
}, 2000);

Every 2,000 milliseconds (2 seconds), the page fades out the heatmap layer, switches the data to the next hour, and fades it back in. If the cycle reaches the end of the day, it resets. The $("#hour").html(i) bit changes the hour printed on the webpage itself.

You can check out the finished project at http://justinpierre.ca/tools/heatmap/ and be sure to let me know what you think at https://twitter.com/jpierre001.