A Glimpse of Short Term Rentals in Calgary Using Tableau

by Bryan Willis
Geovis Project Assignment @RyersonGeo, SA8905, Fall 2020

Project link: https://thebryanwillis.github.io/CalgaryShortTermRentals.html

Background

Over the years, many homeowners have converted their places of residence into short term rentals, allowing them to be rented out for short periods of time. Short term rentals have also grown in popularity due to their better pricing compared with hotels and the unique neighbourhood character they offer. Although Calgary has not seen an increase in short term rentals as dramatic as that of Toronto or Vancouver, its short term rental supply has continued to grow. The City of Calgary defines a short term rental as a place of residence that provides temporary accommodation and lodging for up to 30 days, and all short term rentals in Calgary must legally obtain a business license to operate.

This interactive dashboard aims to highlight some key components of short term rentals in Calgary, such as their locations, license status, housing-type composition, and licenses issued per month.

Data

The data used in this dashboard is the Short Term Rentals data set, acquired through the City of Calgary’s Open Data Portal.

Methods

  1. Data Cleaning – After downloading the data from the open data portal, the data needed to be cleaned for it to properly display the attributes we want. All rows containing NULL values were removed from the data set via MS Excel.
  2. Map Production – After importing the cleaned data into Tableau, we can quickly create a map showing the locations of the short term rentals. To do this, drag the auto-generated Latitude and Longitude fields into the middle of the sheet, which should automatically generate a map with the location points. To differentiate LICENSED and CANCELLED points, drag the License Status column into the ‘Color’ box.
  3. Monthly Line Graph – To produce the line graph showing the number of licenses issued by month, drag the license date field into the COLUMNS section at the top, right-click it, and select MONTH. For the ROWS section, use the same date field again, but after dragging, right-click it and select MEASURE and then COUNT. Lastly, drag License Status into the ‘Color’ box.
Finalized monthly line graph
  4. City Quadrant Table – To create this table, we first need to create a new column for the city quadrant. Right-click the white space under ‘Tables’ and click ‘Create Calculated Field’, which brings up a new window. In the new window, input RIGHT([Address],2) into the blank space. This creates a new field containing the last two letters of the Address field, which are the city quadrant (for example, an address ending in ‘NW’ falls in the northwest quadrant). Once this field is created, drag it into the ROWS section, then drag it into ROWS again, this time right-clicking it and selecting Measure and then Count. Finish by dragging License Status to the ‘Color’ box.
Finalized City Quadrant Table
  5. Dwelling Type Pie Chart – For the pie chart, first right-click on the ROWS section and click ‘New Calculation’. In the box, type avg(0) to create a new mark. There should now be an AGG(avg(0)) section under ‘Marks’; make sure its dropdown is set to ‘Pie’. Then drag the Type of Residence column into the ‘Angle’ and ‘Color’ boxes. To compute the percentage for each dwelling type, right-click on the angle tab containing the Type of Residence column, go to ‘Quick Table Calculation’, and select ‘Percent of Total’.
Finalized pie chart
  6. Dashboard Creation – Once the above steps are complete, a dashboard can be made by combining all 4 sheets in the Dashboard tab.
Finalized dashboard with the 4 created components

Limitations

The main limitations in this project come from the data. Older licensing data is removed from the data set when it is updated daily by city staff. This makes it impossible to compare full year-to-date data. In the data set used in the dashboard, the majority of the January data has already been removed, with the exception of January 26, 2020. Additionally, quite a few entries in the data set had null addresses, which made it impossible to pinpoint their locations. Lastly, as this data set is for 2020, the COVID-19 pandemic may have disrupted the number of short term rentals being licensed, due both to the city shifting priorities and to more people staying home, resulting in fewer vacant homes available for short term rentals.

Geovisualization of the York Region 2018 Business Directory


(Established Businesses across Region of York from 1806 through 2018)

Project Weblink (ArcGIS Online): Click here or direct weblink at https://ryerson.maps.arcgis.com/apps/opsdashboard/index.html#/82473f5563f8443ca52048c040f84ac1

Geovisualization Project @RyersonGeo
SA8905- Cartography and Geovisualization, Fall 2020
Author: Sridhar Lam

Introduction:

York Region, Ontario, as identified in Figure 1, is home to over one million people from a variety of cultural backgrounds and covers 1,776 square kilometres, stretching from Steeles Avenue in the south to Lake Simcoe and the Holland Marsh in the north. By 2031, projections indicate 1.5 million residents, 780,000 jobs, and 510,000 households. Over time, York Region has attracted a broad spectrum of business activity and now hosts over 30,000 businesses.

Fig.1: Region of York showing context within Ontario, Greater Toronto Area (GTA) and its nine Municipalities.
(Image-Sources: https://www.fin.gov.on.ca/en/economy/demographics/projections/ , https://peelarchivesblog.com/about-peel/ and https://www.forestsontario.ca/en/program/emerald-ash-borer-advisory-services-program)

Objective:

To create a geovisualization dashboard for the public to navigate, locate and compare established Businesses across the nine Municipalities within the Region of York.

The dashboard is intended to help Economic Development market research divisions sort and visualize businesses’ nature, year of establishment (1806 through 2018), and identify clusters (hot-spots) at various scales.

Data-Sources & References:

  1. Open-Data York Region
  2. York Region Official Plan 2010

Methodology:

First, the Business Directory (updated as of 2018) and the municipal boundary layer files, which are made available through York Region’s Open Data portal, are downloaded. As shown in Figure 2, the raw data is analyzed to identify the municipal distribution based on address / municipal location. The City of Markham and the City of Vaughan hold the major share.

Fig.2: The number of businesses and the percentage of share within the nine Municipalities of the York Region.

The raw data is further analyzed, as shown in Figure 3, to identify the major business categories; the chart below presents the top categories within the dataset.

Fig.3: Major Business Categories identified within the dataset.

Further, the raw data is analyzed, as shown in Figure 4, to identify the businesses by year of establishment; most of the businesses within the dataset were established from the 1990s onward.

Fig 4: Business Establishment Years identified within the dataset.

The business address data is checked for consistency, and the Geocodio service is used to geocode the address list for all the business locations. The resulting dataset is imported into ArcGIS, as shown in Figure 5, along with the municipal boundary layers, and checked for inconsistent data before being uploaded to ArcGIS Online as hosted layers.
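The post uses Geocodio’s web service; for readers who prefer to script this step, a minimal sketch against Geocodio’s HTTP API is below. The API version segment in the URL, the response fields, the placeholder key, and the sample address are assumptions to check against Geocodio’s current documentation, not the project’s actual workflow:

    // Node.js 18+ (global fetch). Geocode a single address with Geocodio.
    const API_KEY = 'YOUR_GEOCODIO_KEY'; // hypothetical placeholder

    async function geocode(address) {
      const url = 'https://api.geocod.io/v1.7/geocode' +
        '?q=' + encodeURIComponent(address) +
        '&api_key=' + API_KEY;
      const response = await fetch(url);
      const data = await response.json();
      const { lat, lng } = data.results[0].location; // best match is listed first
      return { address, lat, lng };
    }

    geocode('17250 Yonge Street, Newmarket, ON').then(console.log);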

Fig.5: Business Locations identified after geocoding of the addresses across the York Region.

Once hosted on ArcGIS Online, a new dashboard titled ‘Geovisualization of the York Region 2018 Business Directory’ is created. The dashboard components are tested for visual hierarchy, and the following components are carefully selected to display the data:

  1. Dashboard Title
  2. Navigation (as shown in figure 6, is placed on the left of the interface, which provides information and user-control to navigate)
  3. Pull-Down/ Slider Lists for the user to select and sort from the data
  4. Maps – One map to display the point data and the other to display cluster groups
  5. Serial Chart (List from the data)- To compare the selected data by the municipality
  6. Map Legend, and
  7. Embedded Content – A few images and videos to orient the context of the dashboard

The user is given a choice of ways to select the data:

Fig.6: User interface for the dashboard offering selection in dropdown and slider bar.

Thus a user of the dashboard can select or make choices using one or a combination of the following to display the results in the right-hand panes (map, data chart and cluster density map):

  1. Municipality: By each or all Municipalities within York Region
  2. Business Type: By each type or multiple selections
  3. Business Establishment Year Time-Range using the slider (the Year 1806 through 2018)

For the end-user of this dashboard, results are also provided based on the business locations identified after geocoding the addresses across the York Region, comparable and quantifiable by each of the nine municipalities, as shown in Figure 7.

Fig.7: Data-Chart displayed once the dashboard user makes a selection.

The point locations are plotted on one map while the clusters within the selected range (Region, Municipality, Business Type, or Year of Establishment selections) are shown simultaneously on the other, as in Figure 8.

Fig.8: Point data map and cluster map indicate the exact geolocation as well as the cluster for the selection made by the user across the York Region at different scales.

Results:

Overall, the dashboard provides an effective geovisualization with spatial context and location detail for the York Region’s 2018 businesses. The business type index, with an option to select one or multiple types at a time, and the timeline slider bar allow an end-user of the dashboard to drill down to the information they seek. The dashboard design offers a dark theme interface maintaining a visual hierarchy of the different map elements, such as the map title, legend, colour scheme and combinations (ensuring contrast and balance), font face and size, background and map contrast, and choice of hues, saturation and emphasis. The maps also allow the end-user to change the background base map layers to see the data in the context of their choice. As shown in Figure 9, with location data and quantifiable data at different scales, the dashboard interface offers visuals to display the 30,000+ businesses across the York Region.


Fig.9: Geovisualization Dashboard to display the York Region 2018 Business Directory across the Nine Municipalities of the York Region.

The weblink to access the ArcGIS Online Dashboard where it is hosted is: https://ryerson.maps.arcgis.com/apps/opsdashboard/index.html#/82473f5563f8443ca52048c040f84ac1

(Please note an ArcGIS Online account is required)

Limitation:

The 2018 business data across York Region contains over 38,000 data points, and the index/legend of the business types may look cluttered while a selection is made. The fixed width of the left navigation panel is a technical limitation, because the pull-down display cannot be made wider; however, the legend screen can be maximized to read all the business categories clearly. There may be errors, or incomplete or missing data, in the compilation of business addresses. The dashboard can be updated quickly, with a little effort, whenever a new release of the York Region business directory becomes available in the coming years.

An Interactive Introduction to Retail Geography

by Jack Forsyth
Geovis Project Assignment @RyersonGeo, SA8905, Fall 2020

Project Link: https://gis.jackforsyth.com/


Who shops at which store? Answers to this fundamentally geographic question often use a wide variety of models and data to understand consumer decision making to help locate new stores, target advertisements, and forecast sales. Understanding store trade areas, or where a store’s customers come from, plays an important role in this kind of retail analysis. The Trade Area Models web app lets users dip their toes into the world of retail geography in a dynamic, interactive fashion to learn about buffers, Voronoi polygons, and the Huff Model, some of the models that can underlie trade area modeling.

The Huff Model on display in the Trade Area Models web app

The web app features a tutorial that walks new users through the basics of trade area modeling and the app itself. Step by step, it introduces some of the underlying concepts in retail geography, and requires users to interact with the app to relocate a store and resize the square footage of another, giving them an introduction to the key interactions that they can use later when interacting with the models directly.

A tutorial screenshot showing users how to interact with the web app

The web app is designed to have a map dominate the screen. On the left of the browser window, users have a control panel where they can learn about the models displayed on the map, add and remove stores, and adjust model parameters where appropriate. As parameters are changed, users receive instant feedback on the map that displays the result of their parameter changes. This quick feedback loop is intended to encourage playful and exploratory interactions that are not available in desktop GIS software. At the top of the screen, users can navigate between tabs to see different trade area models, and they are also provided with an option to return to the tutorial, or read more about the web app in the About tab.

The Buffers tab allows for Euclidean distance and drive time buffers (pictured above)

Implementation

The Trade Area Models web app was implemented using HTML/CSS/JavaScript and third-party libraries including Bootstrap, jQuery, Leaflet, Mapbox, and Turf.js. Bootstrap and jQuery provided formatting and functionality frameworks that are common in web development. Leaflet provided the base for the web mapping components, including the map itself, most of the map-based user interactions, and the polygon layers. Mapbox was used for the base map layer, and its Isochrone API was used to visualize drive time buffers. Turf.js is a JavaScript-based geospatial analysis library that makes many GIS-related functions and analyses simple to perform in web browsers; it was used for distance calculation, buffering, and creating Voronoi polygons. Toronto (Census Metropolitan Area) census tract data for 2016 were gathered from the CensusMapper API, which provides an easy-to-use interface for extracting census data from Statistics Canada. Data retrieved from the API included geospatial boundaries, number of households, and median household income. The Huff Model was written from scratch in JavaScript, but uses Turf.js’s distance calculation functionality to compute the distance from each store to each census tract’s centroid. Source code is available at https://github.com/mappinjack/spatial-model-viz
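Since the Huff Model was hand-written, it may help to see its shape. In the classic formulation, the probability that a customer at location i patronizes store j is P_ij = (S_j^alpha / d_ij^beta) / sum_k (S_k^alpha / d_ik^beta), where S is attractiveness (here, square footage) and d is distance. A minimal JavaScript sketch using Turf.js follows; the store data, field names, and parameter defaults are illustrative assumptions, not the app’s actual code:

    // Huff Model sketch: P_ij = (S_j^a / d_ij^b) / sum_k (S_k^a / d_ik^b)
    const turf = require('@turf/turf');

    const stores = [
      { name: 'Store A', point: turf.point([-79.38, 43.65]), sqft: 20000 },
      { name: 'Store B', point: turf.point([-79.40, 43.70]), sqft: 35000 },
    ];

    function huffProbabilities(origin, stores, alpha = 1, beta = 2) {
      // Utility of each store as seen from the origin
      const utilities = stores.map((s) => {
        const d = turf.distance(origin, s.point); // kilometres by default
        return Math.pow(s.sqft, alpha) / Math.pow(d, beta);
      });
      const total = utilities.reduce((a, b) => a + b, 0);
      // Normalize utilities into probabilities that sum to 1
      return stores.map((s, i) => ({ name: s.name, p: utilities[i] / total }));
    }

    // e.g., evaluate a census tract centroid
    console.log(huffProbabilities(turf.point([-79.39, 43.66]), stores));

Swapping the turf.distance call for a drive-time lookup would be one way to address the Euclidean-distance limitation discussed below.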

Limitations

One of the key limitations of the app is a lack of specificity in the models. Buffer sizes and store square footage areas are abstracted out of the app for simplicity, but this results in a lack of quantitative feedback. The Huff Model uses Euclidean distance rather than drive time, which ignores the road network and alternative means of transit like subway or foot traffic. It also uses census tract centroids, which can lead to counterintuitive results in large census tracts. The sales forecasting aspect of the Huff Model tab makes large assumptions about the amount of money spent by each household on goods, and is affected by edge effects from both stores and customers that may fall outside the Toronto CMA. The drive time buffers also rely fully on the road network (rather than incorporating transit) and are limited by an upper bound of 60 minutes of travel time from the Mapbox Isochrone API.

Future work

The application in its current form is useful for spurring interest and discussion around trade area modeling, but would need to be more analytical to be useful for genuine analysis. A future iteration should remove the abstractions of buffer sizes and square footage estimates to allow an experienced user to enter exact values into the models directly. Further, more demographic data to support the Huff Model, along with parameter defaults for specific industries, would help users create meaningful models more quickly. Applying demographic filters to the sales forecasting would allow, for example, a store that sells baby apparel to more appropriately identify areas with more new families. Another useful addition would be the integration of real estate data showing retail space actually available for lease in the city, so that users can pick candidate store locations in a more meaningful way.

Summary

The Trade Area Models web app gives experienced and inexperienced analysts alike the opportunity to learn more about retail geography. While more analytical components have been abstracted out of the app in favour of simplicity, users can not only learn about buffers, Voronoi polygons, and the Huff Model, but interact with them directly and see how changes in store location and model parameters affect the retail landscape of Toronto.

An interactive demo of Voronoi polygons that includes adding and moving stores

100 Years of Wildfires in California – Tableau Dashboard Time Series

Shanice Rodrigues

GeoVis Project Assignment @RyersonGeo, SA8905, Fall 2020

Natural phenomena can be challenging to map as they are dynamic through time and space. One solution is dynamic visualization itself, through the time series maps offered in Tableau. With this application, an interactive dashboard can be created to relay your data in various ways, including time series maps, graphs, text and graphics. If you are interested in creating a dashboard in Tableau with interactive time series and visuals, keep reading.

In this example, we will be creating a time series dashboard for the distribution of California’s wildfires over time. The overall dashboard can be viewed HERE on Tableau Online.

First, let’s go over the history of these wildfires, which provides interesting context for what we observe from these fires over time.

History of Wildfires

There is a rich, complicated history between civilization and wildfires. While Indigenous communities found fires productive, creating soils rich in fertile ash that were ideal for crops, colonizers dismissed all fires as destructive phenomena that needed to be extinguished, especially after the massive fires of the early 1900s caused many fatalities, such as the fire in the Rocky Mountains that killed 85 people. The United States Forest Service (USFS) implemented a severe fire suppression policy, requiring fires of 10 acres or less to be put out beginning in 1926, and then, from 1935, requiring all fires to be put out by 10 A.M. the next day. With the immediate extinction of fires through the early to mid-1900s, natural fire fuels such as forest debris continued to build up. This is a likely cause of the massive fires that appeared in the late 1900s and persist to the current age, which continue to be both difficult and expensive to manage. The pattern is apparent in the bar graph below of the number of fires and acres burned over the years (1919-2019).

Dashboard Creation

Data Importation

Many types of spatial files can be imported into Tableau, such as shapefiles and KML files, to create point, line or polygon maps. For our purposes, we will be using wildfire perimeter data from the Fire and Resource Assessment Program (FRAP), as linked here or on ArcGIS here. This data contains fire perimeters in California dating back to 1878, up to the last full calendar year, 2019. Informative attribute data such as fire alarm dates, fire extinction dates, causes of fire and the acreage of fires are included. While there is also a file on prescribed burns, we will only be looking at the wildfire history file. The data were imported into Tableau as a ‘Spatial file’, where the perimeter polygons are automatically recognized as a Geometry column by Tableau.

Timeseries

The data table is shown on the “Data Source” tab, where it can be sorted by field, edited or even joined to other data tables. The “Sheet” tabs are used to produce the individual maps and graphs that can all be added to the “Dashboard” tab. First, we will create the wildfire time series for California. Conveniently, Tableau categorizes table columns by their data types, such as date, geometry, string or integer. We can add the “Year” column to the “Pages” card, which Tableau will use as the temporal reference for the time series.

The following time series toolbar will appear; wildfire polygons will appear on the map depending on the year they occurred, as controlled by the scroll bar. The map can be shown as a looped animation at different speeds.

Additionally, the “Geometry” field can be added to the “Marks” card; these geometries are the wildfire perimeter polygons. Tableau has also generated “Longitude” and “Latitude” fields, which span the spatial extent of the wildfire geometries and can be added to the “Columns” and “Rows” shelves.

In the upper-right “Show Me” panel, the map icon can be selected to generate the base map.

Proportionally Sized Point Features

Multiple features can be added to this map to improve the visualization. First, the polygon areas appear very small and hard to see on the map above, so it may be more effective to display them as point locations. In the “Marks” card, use the dropdown and select the ‘Shape” tab.

From the shape tab, there are multiple symbols to select from, or symbols can be uploaded from your computer into Tableau. Here, we chose a glowing point symbol to represent wildfire locations.

Additionally, more information can be added to the points, such as proportional symbol sizes according to the area burned by each fire (the GIS ACRES field). A new calculated field has to be created for the point size magnitudes, as shown:

The field is named “Area Burned (acres)” and is raised to the power of 10 so that the differences in magnitude between the wildfire points are noticeable and large enough on the map to be spotted, even at the lowest magnitude.
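Based on that description, the calculated field is presumably of roughly this form (a sketch: the FRAP acreage field name and the exact use of the exponent are assumptions):

    // Exaggerate differences in burned area so points remain distinguishable
    POWER([GIS ACRES], 10)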

Tool Tip

Another informative feature to add to the points is the “Tool Tip,” the attribute box shown for a feature the reader hovers over. Often, attribute fields already available in the data table can be used in the tool tip, such as fire names or the year of the fire. However, some fields need to be calculated, such as the duration of each wildfire. This can be calculated from the analysis tab as shown:

For the new field named “Fire Life Length (Days)” the following script was used:

Essentially this script finds the difference between the alarm date (when the fire started) and the contained date (when the fire ended) in unit “days.”  
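In Tableau’s calculation language, that difference is presumably computed along these lines (a sketch; the containment-date field name is an assumption):

    // Days between the fire's alarm (start) and containment (end) dates
    DATEDIFF('day', [Alarm_Date], [Cont_Date])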

For instance, here are some important attributes about each wildfire point that were added to the tool tip.

As shown, many formatting options, such as font, text size, and hovering behaviour, can be applied to the tool tip.

Graphics and Visualizations

The next aspect of the dashboard to incorporate is the graphs, to better inform the reader on the statistics of wildfire history. The first graph will show not only the number of fires annually, but also the acres burned, which indicates the sizes of the fires.

As with the map, the appropriate data fields need to be added to the columns and rows to generate a graph. Here the alarm date (the start of the fire) is added to the x-axis, whereas the number of fires and GIS Acres (acres burned) are added to the y-axis and filtered by “year.”

The field for the number of fires was a new field calculated with the following script:

Essentially, every row with a unique fire name is counted for every year under the “Alarm_Date” field to count the number of fires per year.
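A distinct count achieves exactly that; the calculation was presumably of this form (a sketch with an assumed field name):

    // One count per uniquely named fire
    COUNTD([Fire_Name])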

Another graph added to this dashboard informs the reader about the causes of fires and whether they vary temporally. Tableau offers many novel ways of turning mundane data into interesting visualizations that are both informative and appealing. Below is an example of a clustering graph, showing the number of fires by cause against month over the entire time series. A colour gradient was added to emphasize the causes that result in the most fires, displayed in bright yellow, against less common causes displayed in crimson red.

As with the map, the “(Alarm_Date)” field was added to the “Filters” card; however, since we want to look at the average causes per month rather than per year, we can use the dropdown to change the date part of interest to “MONTH.”

We also want to add the “Number of Fires” field to the “Marks” card to quantify how many fires are attributed to each cause. As shown, the same field can be added twice, for example once to control its size attribute and once to control its colour gradient attribute.

Putting it All Together

Finally, in the “Dashboard” tab, all the pages below, the time series map and the graphs, can be dragged and dropped into the viewer. The left toolbar can be used to import sheets, change the extent size of the dashboard, and add or edit graphics and text.

Hopefully you’ve learned some of the basics of map and statistical visualizations that can be done in Tableau using this tutorial. If you’re interested in the history, recommendations and limits of this visualization, it is continued below.

Data Limitations and Recommendations

Firstly, the wildfire data itself has shortcomings, particularly that fires may not have been well documented prior to the mid-1900s due to the lack of observational technology. Additionally, only large fires were detected by surveyors, whereas smaller fires went unreported. With today’s satellite imagery and LiDAR, fires of all sizes can be detected, so it may appear that fires of all sizes occur more frequently in the modern age than before. Beyond the data, there are limitations with Tableau itself. First, spatial data are all transformed to the spatial reference system WGS84 (EPSG:4326) when imported into Tableau, and the conversion can introduce inaccuracies. It would therefore be helpful for Tableau to support other reference systems and give the user the choice of whether to convert. Another limitation is the proportional symbols for wildfires. The proportional symbol field had to be calculated and raised to the “power of 10” to show up on the map, with no legend of the size range produced. It would be easier if Tableau offered a ‘Proportional Symbol’ option on the “Size” tab, as this is a basic parameter required for many maps and would communicate the data more easily to the reader. Hopefully Tableau can resolve these technical limitations to make mapping a more inclusive format that works for visualizing many dataset types.

Given the gaps in wildfire history data for California, many recommendations can be made. While this visualization looked at the general number of fires per month by cause, it would be interesting to go deeper with climate or weather data, such as whether an increasing number of thunderstorms or warmer summers are sparking more fires in the 2000s than in the 1900s. It would also be worth visualizing wildfire distributions against urban sprawl, such as whether fires within range of urban centres, and therefore of people, are ranked as more serious hazards than those in the wilderness. Especially since the majority of wildfires are caused by people, it would be important to map major camping grounds and residential areas and their potential association with the wildfires around them. Also worth examining is the time since areas last burned, as this quantifies how long vegetation has been regrowing and natural fuels building up, which can help predict the size of future wildfires should the area be sparked again. This is important for residential areas near zones of high natural-fuel buildup, and even for insurance companies seeking to locate fire-prone areas. Overall, improving a visualization such as this requires building the context surrounding it, such as filling gaps in wildfire history through reviewing historical literature and surveying, as well as deriving wildfire risk data from environmental and anthropogenic data.

Mapping Toronto Flood Events by using Esri Operations Dashboard

Dashboard Web application: Toronto Flood Events 2013-2017

By: Mohamad Fawaz Al-Hajjar

Geovisualization Project, @RyersonGeo, SA8905, Fall 2019

Introduction:

Toronto has been affected by many flood events, but the biggest modern event happened on July 8th, 2013, when a thunderstorm passed over the city and broke the rainfall record: Toronto received 126 mm of rain, causing major transit delays, power outages, flight cancellations and flooding in many areas throughout the city. To visualize such phenomena and monitor the number of events per Toronto ward, a web application dashboard was implemented to interactively visualize the historical data; the same approach could also be used to map real-time data, an optimal use of web dashboards.

Geovisualization Methodology

The technology used to interactively visualize flood events data in Toronto is Esri Operations Dashboard, which was released in December 2017 and has become an effective tool for Esri users, allowing them to publish their web maps via dashboards with simple configuration and without writing a single line of code. The project followed the methodology below.

  1. Data Review and Manipulation

After obtaining the open data from two main sources, the TRCA Open Data Portal and the Toronto Open Data Portal, along with other data sources, the data were reviewed and visualized in ArcMap 10.7.1. Some of the data had to be cleansed, such as the Flood Plain Mapping Index and property boundary shapefiles. Other data were derived from the polygon shapefile “flood-reporting-wgs84” for Toronto wards, where the total number of flood events is stored by year for 2013-2017. A derived data set of event points was produced by generating random points within the ward polygons using the Create Random Points tool in the ArcGIS ArcToolbox.

In addition, another data set was created: the property boundaries were intersected and clipped with the flood plain feature to generate the flooded properties per ward, and then spatially joined with the wards to inherit their attributes. This could be configured in the dashboard to show the number of flooded properties per ward.

List of Data Sets Used:

Stormevents (derived from Flood reporting polygon) (Toronto open data)

Property per ward (derived from Property boundary and Flood reporting polygons) (Toronto open data)

Flood Events renamed to (Flood reporting polygons) (Toronto open data)

Toronto Shelters (Toronto open data)

GTA Watercourses (TRCA open data)

GTA Flood Plain (TRCA open data)

GTA Waterbodies (TRCA open data)

2. Data Publishing:

After getting the data ready, a map was produced in ArcMap, where the data were symbolized and then published as a web map in ArcGIS Online, which becomes the core map for the Operations Dashboard.

3. Creating the Dashboard:

In order to generate an Esri Operations Dashboard, you need to be a member of an ArcGIS Online organization and have a published web map or hosted feature layer as an input to the dashboard.

Creating the dashboard went through many steps as described below:

  • Login to your ArcGIS Online organization using your username and password.
  • From the main interface, click the App Launcher button, as shown in the snapshot below
Application Launcher button

or you could also click on your web map application under Content in ArcGIS Online, then click the Create Web App drop-down list and choose Using Operations Dashboard

Create Web App
  • A Create Web App box will open, where you fill in the Title, Tags and Summary
  • The map will open in the dashboard, where you can start to add the widgets you need for your application from the drop-down menu, as shown in the snapshot below.
  • Widgets are added and configured as needed.

The Toronto Flood Events Dashboard includes the most important widgets (Map, Header, Serial Chart, Pie Chart, Indicator, and List).

Once a widget is selected, a configuration box opens; it is easy to configure, and the widget can then be dragged and docked as needed.

After adding multiple widgets, an important setting needs to be configured in the Map widget to set up what is called the Action Framework: when the map extent changes to a different geographic area, the other dashboard elements (Serial Chart, Pie Chart, Indicator, and List) change interactively.

  • From the Map widget, go to the Configure button, then select the Map Actions tab, hit the Add Action drop-down list, then filter to choose other dashboard elements from the configuration box. The option When Map Extent Changes appears, letting you filter and apply actions to the other elements as well. Indeed, this is the most powerful tool in the dashboard.
  • Another configuration can be made in the Header element, where you can insert a drop-down menu to map a certain feature by date, type, area or time; this is easily configured in the dashboard web application.
  • After configuring all required elements, hit Save; then you can share or publish your dashboard web application with users outside your organization.
To access the dashboard, click on the link below:
Toronto Flood Events 2013-2017

Geovisualization Project Limitations:

The project encountered two main limitations:

The data limitation:

Data limitations took most of the time to define. After identifying the available open data, much data cleansing and manipulation was undertaken, in terms of changing the spatial reference to fit online maps or changing the data format, and the results are still limited by the variables used. The derived event points were generated randomly from the polygon shapefile “flood-reporting-wgs84” for Toronto wards to show the number of events per ward, since the events are not available as points from the main source. Even though the points are therefore not accurate in location, they give an idea of the number of events per ward boundary in different years.

Technology Accessibility:

This limitation is clear when using Esri Operations Dashboard, which is only available to members of an ArcGIS Online organization; those who do have such access can still get the benefits of the dashboard by visiting its published location.

Visualizing Spatial Distribution of SARS in Carto

by Cheuk Ying Lee (Damita)
Geovis Project Assignment @RyersonGeo, SA8905, Fall 2019

Project Link: https://c14lee.carto.com/builder/5ebe8c01-fb32-40bf-9cae-3b5f7326d02b/embed

Background
In 2003, there was a SARS (Severe Acute Respiratory Syndrome) outbreak in Southern China. The first cases were reported in Guangdong, China, and quickly spread to other countries via air travel. I experienced all the preventive measures taken, and the school suspensions, yet was too young to realize the scale of the outbreak worldwide.

Technology
CARTO is used to visualize the spatial distribution of SARS cases by country and by time. CARTO is a software-as-a-service cloud computing platform that enables analysis and visualization of spatial data. CARTO requires a monthly subscription fee; however, a free account is available for students. With CARTO, a dashboard (incorporating interactive maps, widgets and selective layers) can be created.

Data
The data were obtained from the World Health Organization under SARS (available here). Two datasets were used. The first was a compiled dataset containing the number of cumulative cases and cumulative deaths for each affected country, listed by date from March 17 to July 11, 2003. The second was a summary table of SARS cases by country, containing total SARS cases by sex, age range, number of deaths, number of recoveries, percentage of affected healthcare workers, etc. The data were organized and entered into a spreadsheet in Microsoft Excel. Data cleaning and processing were performed using text functions in Excel, primarily to remove the superscripts after the country names so that the software could recognize them, and to change the data types from string to number.

Figure 1. Screenshot of the issues in the country names that have to be processed before uploading it to CARTO.

After trials of connecting the database to CARTO, it was found that CARTO only recognized “Hong Kong”, “Macau” and “Taiwan” as country names, so unnecessary characters had to be removed. After cleaning the data, the two datasets were uploaded and connected to CARTO. If the country names are recognized, the datasets automatically gain spatial information. The two datasets now appear in CARTO as follows:

Figure 2. Screenshot of the dataset containing the cumulative number of cases and deaths for each country by date.

Figure 3. Screenshot of the dataset containing the summary of SARS cases for each affected country.

Figure 4. Screenshot of the page to connect datasets to CARTO. A variety of file formats are accepted.

METHOD
After the datasets have been connected to CARTO, layers and widgets can be added. First, layers were added simply by clicking “ADD NEW LAYER” and choosing the datasets. Once a layer was successfully added, the data were ready to be mapped. To create a choropleth map of the number of SARS cases, choose the layer and, under STYLE, set the polygon colour to “by value” and select the field and colour scheme to be displayed.

Figure 5. Screenshot showing the settings of creating a choropleth map.

Countries are recognized as polygons in CARTO. In order to create a graduated symbol map showing the number of SARS cases, centroids of each country have to be computed first. This was done by adding a new analysis, “Create Centroids of Geometries”. After that, under STYLE, set the point size and point colour to “by value” and select the field and colour scheme.

Figure 6. Sets of screenshots showing steps to create centroids of polygons. Click on the layer and under ANALYSIS, add new analysis which brings you to a list of available analysis.


An animation was also created to show SARS-affected countries by date. Under STYLE, “animated” was selected for aggregation. The figure below shows the properties that can be adjusted. Play around with the duration, steps, trails, and resolution; these affect the appearance and smoothness of the animation.


Figure 7. Screenshot showing the settings for animation.

Figure 8. Screenshot showing all the layers used.

Widgets were added to enrich the content and information presented along with the map itself. Widgets are interactive tools that let users control and explore the displayed information by selecting filters of interest. Widgets were added simply by clicking “ADD NEW WIDGETS” and selecting the fields to be presented. Most were displayed as the category type. For each category-type widget, the data has to be configured by selecting the field the widget is aggregated by; most are aggregated by country, showing the widget’s information by country. Lastly, the animation was accompanied by a time-series widget.

Figure 9. Sets of screenshots showing the steps and settings to create new widgets.

Figure 10. A screenshot of some of the widgets I incorporated.

FINAL PROJECT

The dashboard includes an interactive map and several widgets where users can play with the different layers, pop-up information, widgets and time-series animation. Widget information changes along with changes in the map view. Widgets can be expanded and collapsed depending on the user’s preference.

LIMITATION
For the dataset of SARS accumulated cases by date, some dates were not available, which can affect the smoothness of the animation. In fact, the earliest reported SARS cases happened before March 17 (the earliest date for which statistics are available from WHO). Although the statistics still include information from before March 17, the timeline of how SARS spread before that date was not available. In addition, there were some inconsistencies in the data. The data provided at earlier dates contain less information, including only the accumulated cases and deaths of each affected country. Data provided at later dates contain new information, such as new cases since the last reported date and the number of recoveries, which was not used in the project in order to maintain consistency, but could otherwise be useful in illustrating the topic and telling a more comprehensive story.

CARTO only allows a maximum of 8 layers, which is adequate for this project but may limit the comprehensiveness of larger projects. The title is not visible at first glance on the dashboard, and the whole title cannot be shown if it is too long. This could cause confusion, since the topic is not specified clearly. Furthermore, the selectable layers and legend cannot be minimized; this obscures part of the map, affecting users’ perception because the available space is not used effectively. Lastly, the animation is only available for points, not polygons, which would otherwise be able to show the change in SARS cases (by colour) for each country by date (a time-series animated choropleth map) and increase the functionality and effectiveness of the animation.

Turbo Vs Snail

by Jazba Munir

The highways in Canada, including the Trans-Canada Highway (TCH) and the National Highway System (NHS), fall within provincial or territorial jurisdiction (Downs, 2004). The Greater Toronto Area (GTA) is surrounded by many of the 400-series highways. Some segments between interchanges experience higher traffic volume than others (Downs, 2004). The traffic volume during certain hours, such as the morning rush (6:30 – 9:30) and evening rush (4:30 – 6:30), results in traffic congestion. This congestion is experienced on Highway (Hwy) 401, the busiest highway in North America. In 2016, Toronto City Council approved road tolls for the Gardiner Expressway and Don Valley Parkway (DVP) to decrease the traffic volume and congestion on these two highways (Fraser, 2016). The proposal was never implemented; nonetheless, whether speed improves can be visualized in Tableau by using the dataset to compare a toll route with a non-toll route.

The steps in Tableau: the dataset used for the visualization can be organized and cleaned using Microsoft Excel or Tableau. The speed data were retrieved in point form; each point has x and y coordinates. The first step is to create a field ID to connect each point (x, y) to the next point (x, y), in order to create the line of Hwy 401. The street, highway and route layers provided by Tableau were used as a guideline to make sure that all the points are connected in the correct order (See Figure 1).

Figure 1: The layer added into the map sheet

The x and y fields are converted from measures to dimensions, since their default setting is measures. This change can be made by dragging and dropping x and y from measures to dimensions. Another way is to put longitude into Columns and latitude into Rows (See Figure 2).

Figure 2: Columns and Rows for Longitude and Latitude.

The difference between the two is that dimensions are used to slice and describe data records, whereas measures are the values of those records that get aggregated. For further assistance, please refer to: https://www.tableau.com/learn/training. Once all the points appear on the map in Tableau, use the first Marks selection to choose Line and connect the points (See Figure 3).

Figure 3: The option to connect the dots.

The speed data for the selected highway can be placed on Color, with a graduated colour scheme from red to green. In this scheme, red indicates a minimum speed of 80 km/h, whereas green indicates a maximum speed of 120 km/h. These speeds were selected as the standard for comparing the toll route with non-toll routes. These are some of the basic steps required for any spatial Tableau project. The Color, Size, Label and Detail options can be used to make the visualization much clearer (See Figure 4).

Figure 4: The option to add color, size and labels of the variables.

This shows the options for creating the comparison between the turbo and the snail. For further assistance, please refer to: https://www.tableau.com/learn/training. Once this is set up, another sheet was added to include a graph component. The speeds can be organized by hour, minute, year and road (toll vs non-toll), and represented using the Color option. The speed on the map is represented with the red-to-green colour gradient. The underlying map is the base layer map available through Tableau (See Figure 5).

Figure 5: Showing the speed in color red to green.

This indicates the difference in speed along different parts of the highway. All the other highways appear in yellow to show the intersections with each highway. Sheet 1 (the map) and Sheet 2 (the graph) are combined to create a dashboard, which helps to visualize the graph and map at once. The filters for each sheet are combined to better organize space for the sheets (See Figures 6 and 7).

Figure 6: Showing the filters added into the dashboard.

For further assistance, please refer to: https://www.tableau.com/learn/training. The dashboard makes it possible to read the speed and compare it by time and location. Based on the visualization, it can be concluded that the toll routes have no congestion, as their line is green. In contrast, the non-toll route appears red and light green in some sections. The colour makes it clear where congestion occurs.

Image 1: Dashboard combining the two sheets

In conclusion, the Tableau visualization helps to compare the toll route with the non-toll route. Based on the dashboard, the toll route is turbo speed whereas the non-toll routes are snails.

References

Downs (2004): https://www.brookings.edu/research/traffic-why-its-getting-worse-what-government-can-do/

Fraser (2016): https://www.cbc.ca/news/canada/toronto/city-council-meeting-road-tolls-1.3893884

Desperate Journeys

By Ibrahim T. Ghanem

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2019

Background:

Over the past 20 years, asylum seekers have invented many travel routes between Africa, Europe and the Middle East in order to reach a country of asylum. Many governmental and non-governmental organizations have provided information about these irregular travel routes used by asylum seekers. In this context, this geovisualization project aims at compiling and presenting two dimensions of the topic: (1) a comprehensive animated spider map presenting some of the travel routes between the three geographic areas mentioned above; and (2) a dashboard that connects those routes to other statistics about refugees in a user-friendly interface. In that sense, the best software fit for the project is Tableau.

Data and Technology

Spider maps in Tableau are perfect for connecting hubs to surrounding points, as they allow paths between many origins and destinations and can accommodate multiple layers. Below is a description of the major steps in the creation of the animated map and dashboard.

Also, dashboards are now very useful in combining different themes of data (i.e. pie charts, graphs, and maps); accordingly, they are used extensively in the non-profit world to present data about a certain cause. The geovisualization project applied a geocoding approach to come up with the animated map and the dashboard.

The Data used to create the project included the following:

-Origins and Destinations of Refugees

-Number of Refugees hosted by each country

-Count of Refugees arriving by Sea (2010-2015)

-Demographics of Refugees arriving by Sea – 2015

Below is a brief description of the steps followed to create the project.

Step 1: Data Sources:

The data was collected from the sources below.

United Nations High Commissioner for Refugees, Human Rights Watch, Vox, InfoMigrants, The Geographical Association of UK, RefWorld, Broder Free Association for Human Rights, and Frontex Europa.

However, most of the data were not geocoded. Accordingly, Google Sheets was used to geocode the 21 routes, and thereafter each route was given a distinguishing ID and a short description.

Step 2: Utilizing the Main Dataset:

Data is imported from an Excel sheet. In order to compute a route, Tableau requires data about origins and destinations with latitude and longitude. The data contain the following categories:

A - Route I.D.: a unique path I.D. for each of the 21 routes;

B - Order of Points: the order of stations travelled by refugees from their country of origin to the country of asylum;

C - Year: the year in which the route was invented;

D - Latitude/Longitude: the coordinates of each station;

E - Country: the country hosting refugees;

F - Population: the number of refugees hosted in each country.

Step 3: Building the Map View:

The map view was built by putting Longitude in Columns, Latitude in Rows, and Route I.D. on Detail, and selecting Line as the mark type. To enhance the layout, Order of Points was added to the Marks’ Path and changed to a dimension instead of SUM. Finally, to bring in the stations of travel, another layer was added by putting a second Longitude on Columns and changing it to Dual Axis. To create filtering by route and a timeline by year, Route was added to Filters while Year was added to Pages.

Step 4: Identifying Routes:

To differentiate routes from each other with distinct colours, the Route column was added to Colour, and the default palette was changed to Tableau 20. The layer format was changed to dark to create contrast between the route colours and the background.

Step 5: Editing the Map:

After finishing the map formatting, a video was captured with QuickTime and edited in iMovie to be cropped and merged.

Step 6: Creating the Choropleth map and Symbology:

In another sheet, a set of Excel data (obtained from UNHCR) was uploaded to create a choropleth map displaying the number of refugees hosted by each country in 2018. Count of Refugees was added to Columns while Country was added to Rows. A Marks colour ramp of orange-gold with 4 classes was used to indicate whether or not a country hosts a significant number of refugees. Hovering over each country displays its name and the number of refugees it hosts.

Step 7: Statistical Graphs:

A pie chart and a graph were added to display other statistics on the count of refugees arriving by sea from Africa to Europe, and the demographics of those refugees. Demographics were added to Label to display them on the charts.

Step 8: Creation of the Dashboard:

All four sheets were added to the dashboard section by dragging them into the layer view. To accommodate that amount of data, the size was set to Legal Landscape. The dashboard was titled Desperate Journeys.

Limitations

A - Tableau does not allow the map creator to change the projection of the maps; thus, the presentation of maps is limited. Below is a picture showing the final format of the dashboard:

B - Tableau has an online server that can host dashboards; nevertheless, it cannot publish animated maps. Thus, the animated map is uploaded here as a video. The link below leads the viewer to the dashboard:

https://prod-useast-a.online.tableau.com/t/desperatejourneysgeovis/views/DesperateJourneys_IbrahimGhanem_Geoviz/DesperateJourneys/ibrahim.ghanem@ryerson.ca/23c4337a-dd99-4a1b-af2e-c9f683eab62a?:display_count=n&:showVizHome=n&:origin=viz_share_link

C - Due to the unavailability of geocoded data, geocoding the refugees’ migration routes was time-consuming, as the exact routes taken by refugees had to be worked out. These locations were based on the reports and maps released by the sources mentioned at the very beginning of the post.

The Toronto Financial Institution Market: Bridging the gap between Cartography and Analytics using Tableau

Nav Salooja

“Geovis Project Assignment @RyersonGeo, SA8905, Fall 2019”

[Embedded Tableau dashboard: “The Toronto FI Market Dashboard” (TheTorontoFIMarketDashboard/TorontoFIMarket), hosted on Tableau Online]

Introduction & Background

Banking in the 21st century has evolved significantly, especially in the hyper-competitive Canadian market. Nationally, big banks have a limited population and wealth share to capture given Canada’s small population, and have been active in innovating their retail footprint. In this case study, TD Bank is the point of interest given its large branch network in the Toronto CMA. Within the City of Toronto the bank has 144 branches, which form the study area for the dashboard. The dashboard analyzes market potential, branch network distribution, banking product recommendations and client insights to help derive analytics through a centralized, interactive data visualization tool.

Technology

The technology selected for the geovisualization component is Tableau, given its friendly user interface, mapping capabilities, data manipulation and overall excellent visualization experience. However, Alteryx was used extensively to build the datasets that run in Tableau. As the data were extracted from various sources, the spatial elements and the combining of datasets were all handled in Alteryx. The data extracted for Expenditure, Income and Dwelling Composition were merged and indexed in Alteryx. The TD branches were web-scraped live from the branch locator, and the trading areas (1.5 km buffers) were also created in Alteryx, as were the statistical functions, such as the indexed data points used in the workbook. The geovisualization component is created entirely within the Tableau workbook, where multiple sheets are leveraged to create an interactive dashboard for the end user.

Figure 1 represents the Alteryx Workflow used to build the Market, Branch and Trade Area datasets
Figure 2 shows the build-out of the final data sets, fully manipulated to be Tableau-ready

Data Overview

Several data sets are used to build the multiple sheets in the Tableau workbook, ranging from Environics expenditure data and census data to web-scraped TD branch locations. In addition, client and trade area geography files were created. The client dataset was generated by leveraging a random name and Toronto address generator, and those clients were then profiled to their corresponding market. The data collected span a wide variety of sources and geographic extents to provide a fully functional view of the banking industry. This begins by extracting and analyzing the TD branches and their respective trade areas. The trading areas are created based on a limited buffer representing the immediate market opportunity for each branch. Average income and dwelling composition variables are then used at the Dissemination Area (DA) geography from the 2016 Census. Although income is represented as an actual dollar value, all market demographics are analyzed and indexed against Toronto CMA averages. These datasets, combined with market, client and TD-level data, provide the full conceptual framework for this dashboard.

Tables & Visualization Overview

Given the structure of the datasets, six tables in total are used to combine the data and drive the visualizations. The first two are branch-level datasets, beginning with the geographic locations of the branches in the City of Toronto: a point file taken from the TD store locator with fundamental information about each branch's name and location attributes. A second table analyzes the performance of these branches with respect to their client acquisition over a pre-determined timeframe.

Figure 3 is a visualization of the first table used and the distribution of the branch network within the market

The third table consists of client-level information for 'frequent' clients (clients transacting at branches 20+ times in a year). It builds on the respective geography and identifies who the client is and where they reside, along with critical information the bank can use for statistical analytics. The client table shows the exact location of those frequent clients, their names, unique identifiers, preferred branch, current location, average income, property/dwelling value and the mortgage payments the bank collects. This table is then combined with the branch data to understand the client demographic and the wealth opportunity these frequent clients represent at their respective branches.

Figure 4 is the visualization of the client-level data and its respective dashboard component
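
The client generator itself is not shown in the post; the sketch below only illustrates the idea using the Faker library (every name, field and threshold here is hypothetical):

```python
# Generate synthetic "frequent" clients with random Canadian names and
# addresses; 20+ branch visits per year is the blog's frequency cutoff.
import random
from faker import Faker

fake = Faker("en_CA")  # Canadian locale for plausible names/addresses

clients = [
    {
        "client_id": i,
        "name": fake.name(),
        "address": fake.address().replace("\n", ", "),
        "visits_per_year": random.randint(20, 60),
    }
    for i in range(1, 501)
]
```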

Tables four and five are the most comprehensive, as they visualize the geography of the market (the City of Toronto at the DA level). They provide a full trade-area breakdown of the demographics: DAs are attributed to their closest branch, letting users see where the bank has market coverage and where the gaps reside, as sketched below. Beyond the branch allocation, the geography carries a robust set of demographics such as growth (population, income), dwelling composition and structure, and average expenditure, along with the product recommendations the bank can target, driven by the expenditure data. Although the file contains a significant amount of data and can feel overwhelming, only selected data is visualized. It also holds the full breakdown of how many frequent clients reside in each market and what kinds of products are recommended on the basis of the market demographics, analyzed through dwelling composition, growth metrics and expenditure.

Figure 5 is the visualization of the market-level data and its respective dashboard component
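
A minimal sketch of that closest-branch allocation, assuming hypothetical DA and branch shapefiles, could use geopandas' sjoin_nearest:

```python
# Attribute each DA to its nearest branch via a nearest spatial join.
import geopandas as gpd

das = gpd.read_file("toronto_das.shp").to_crs("EPSG:32617")       # hypothetical
branches = gpd.read_file("td_branches.shp").to_crs("EPSG:32617")  # hypothetical

# Join on DA centroids so each DA is paired with exactly one branch.
da_centroids = das.copy()
da_centroids["geometry"] = das.geometry.centroid

# Each DA inherits the attributes (e.g. branch name) of its closest branch.
allocated = gpd.sjoin_nearest(da_centroids, branches, how="left")
```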

The final table provides a breakdown of the five primary product lines of business the bank offers, combined with the market-level data and cross-validated against the average expenditure dataset. This identifies which products can be recommended across the market based on current and anticipated expenditure and growth metrics. For example, markets with high population, income and dwelling growth but limited spend would be targeted with mortgage products, since the anticipated growth and limited spend suggest a demographic saving to buy a home in a growth market. These assumptions are applied across the market based on the actual indexed values, so every market (DA) receives a product recommendation.

Figure 6 is the visualization of the product recommendation and analysis data and its respective dashboard component
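
A sketch of that rule expressed as code; the thresholds, columns and fallback products below are purely illustrative, not the project's actual model:

```python
# Rule-based product recommendation on indexed DA demographics
# (100 = CMA average, per the indexing step described earlier).
import pandas as pd

markets = pd.read_csv("market_indices.csv")  # hypothetical indexed DA data

def recommend(row: pd.Series) -> str:
    # High growth but limited spend: a demographic saving to buy a home.
    if (row["pop_growth_index"] > 110
            and row["income_index"] > 110
            and row["expenditure_index"] < 90):
        return "Mortgage"
    # High-spend markets: day-to-day banking products.
    if row["expenditure_index"] > 120:
        return "Personal Banking"
    return "Insurance"  # placeholder; the real model spans 5 product lines

markets["recommendation"] = markets.apply(recommend, axis=1)
```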

Dashboard

Based on the data extraction, build-out and tables described above, the dashboard is fully interactive and driven by one primary parameter that controls all of its elements. Additional visualizations, namely the products visualization, the client distribution treemap and the branch trends bar graph, are combined here. The products visualization provides a full breakdown of the products that can be recommended, based on their value and categorization to the bank; value is driven by the revenue the product can bring, as investment products drive higher returns than liabilities. This is broken down into three graphs covering the value of each product, the number of times it is recommended, and the market coverage the recommendations provide across Stocks, Mortgages, Broker Fees, Insurance and Personal Banking products. The client distribution treemap shows, by branch, how many frequent clients reside in the branch's trade area; this supports anticipating branch traffic trends and capacity constraints, as branches with many frequent clients require larger square footage and staffing models to adequately service their markets. The final component represents client trends over a five-year run rate to identify the growth the bank experienced in the market and at the branch level through new client acquisition, showing the number of new clients acquired and how performance varies year over year to flag areas of high and low growth.

Combined with the three primary mapping visualizations, this creates a robust, fully interactive dashboard. Parameters are used heavily and are built on a select-by-branch basis to dynamically change all six live elements to match the user's input. This is one of Tableau's most significant capabilities: the flexibility to analyze the entire market, one branch at a time, or markets without a branch at all is extremely powerful for deriving insights and analytics. The dashboard zooms in and out as required when a specific branch is selected, highlighting its location, its frequent clients, the trade-area breakdown, the products to recommend, the branch's client acquisition trends and the actual number of frequent clients in the market. The same functionality can be expanded to analyze multiple branches or larger markets. Overall, the dashboard consists of the following elements:

1. Market DA Level Map
2. Branch Level Map
3. Client Level Map
4. Client Distribution (Tree-Map)
5. Branch Trending Graph
6. Product Recommendation Coverage, Value and Effectiveness

This, combined with the capacity to manipulate and store a live feed of data and the parameters used for this level of analysis, brings a new capability for visualizing large datasets and provides a robust, interactive playground for deriving insights and analytics.

The full Tableau workbook is hosted here (please note an online account is required): https://prod-useast-a.online.tableau.com/t/torontofimarketgeovisprojectsa8905fall2019/views/TheTorontoFIMarketDashboard/TorontoFIMarket?:showAppBanner=false&:display_count=n&:showVizHome=n&:origin=viz_share_link

Geovisualization of Crime in the City of Toronto Using a Time-Series Animation Heat Map in ArcGIS Pro

Hetty Fu

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2019

Background/Introduction

The Toronto Police Service has been tracking and storing historical crime information by location and time across the City of Toronto since 2014. This data can now be downloaded by the public as Excel tables and spatial shapefiles, and can be used to help forecast future crime locations and times. I decided to use data from the Police Services Data Portal to create a time-series map showing crime density from 2014 to 2018. The data I worked with covers auto theft, break and enter, robbery, theft and assault. The main idea of the video map is to display a series of heat (density) maps at three-month intervals between 2014 and 2018 across the City of Toronto, with a focus on downtown Toronto, as most crimes happen in the heart of the city.

The end result is an animated time-series map that shows density heat-map snapshots over the four-year period, three months at a time. Links to the exported videos are listed at the end of this blog post under Heat Map Videos Export.

Dataset

All datasets were downloaded through the Toronto Police Services Data Portal, which is accessible to the public.

The datasets used to create my maps are:

  1. Assault
  2. Auto Theft
  3. Robbery
  4. Break and Enter
  5. Theft

Process Required to Generate Time-Series Animation Heat Maps

Step 1: Create an additional field to store the date in ArcGIS Pro.

Add the shapefile downloaded from the Toronto Police Services Portal into ArcGIS Pro.

First create a new field by opening the table view and clicking on Add.

To extract only the date, use the Calculate Field geoprocessing tool with the formula

date2 = !occurrence![:10]

where occurrence is the existing text field containing the date as the 10-character string YYYY-MM-DD. Taking the first 10 characters removes the time of day, which is unnecessary for this analysis.
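
For reference, the same step can be scripted; below is a sketch as an arcpy call, where the layer name "Assault" is an assumption (the blog performs this interactively through the Geoprocessing pane):

```python
# Sketch of Step 1 in arcpy: add a text field and fill it with the
# first 10 characters of the occurrence field (YYYY-MM-DD).
import arcpy

# ArcGIS Pro can enable time on date-formatted string fields,
# so a 10-character text field is enough here.
arcpy.management.AddField("Assault", "date2", "TEXT", field_length=10)

arcpy.management.CalculateField(
    in_table="Assault",
    field="date2",
    expression="!occurrence![:10]",  # keep the date, drop the time of day
    expression_type="PYTHON3",
)
```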

Step 2: Create a layer using the new date field created.

Go into the properties of the edited layer. Under the Time tab, select the new date field created in Step 1 and enter the time extent of the dataset; in this case 2014-01-01 to 2018-12-31, as the data spans 2014 to 2018.

Step 3: Create Symbology as Heat Map

Go into the Symbology properties for the edited layer and select Heat Map from the drop-down menu. Set the radius to 80, which controls the size of the density concentration in the heat map. Choose a color scheme and set the method to Dynamic. The method determines how each color in the scheme relates to a density value: with Dynamic (versus Constant), the density is recalculated each time the map scale or extent changes to reflect only the features currently in view. The Dynamic method is useful for viewing the distribution of data in a particular area, but is not valid for comparing different areas across a map (ArcGIS Pro Help Online).

Step 4: Convert Map to 3D global scene.

Go to the View tab at the top and select Convert to Global Scene. This allows the user to present the animated heat map as a 3D map.

Step 5: Creating the 3D look.

Once the 3D scene is set, press and hold the middle mouse button and drag up or down to tilt the view and create the 3D effect.

Step 6: Setting the time-series map.

Under the Time tab, set the start time and end time to create the three-month interval snapshots. Ensure that "Use Time Span" is checked and that the start and end dates are set between 2014 and 2018. See the image below for the settings.

Step 7: Create Time Slider Steps for Animation Purposes

Under the Animation tab, select an appropriate "Append Time" (the transition time between each frame); one second is usually enough, and anything longer makes the animation too slow. Make sure to check "Maintain Speed" and "Append Front" before importing the Time Slider Steps. See the image below.

Step 8: Editing additional cosmetics onto the animation.

Once the animation is created, you may add additional layers to the frames, such as titles, a time bar and paragraphs.

There is a drop-down section in the Animation tab that allows you to add these cosmetic layers onto the frames.

The animation timeline, by frame, will look like the image below.

Step 9: Exporting to Video

There are many export types to choose from, such as YouTube, Vimeo, Twitter, Instagram, HD1080 and GIF. See the image below for the settings used to export the animation video. You can also choose the number of frames per second; since this is a time-series snapshot, no more than 30 frames per second is needed. Choose where you would like to save the video and, lastly, click Export.

Conclusion/Recommendation/Limitation

As this was one of my first times using the ArcGIS Pro software, I found it very intuitive to learn, as all the functions were easy to find and ready to use. I was lucky to find a dataset that required little formatting: the main fields I needed were already there, and the only change required was editing the date format. The amount of data was sufficient to create a time-series map showing enough data across the City of Toronto in three-month spans; with less data, I would have had to increase the time span. The 3D scene in ArcGIS Pro is very slow and created many problems when loading my video onto set time frames, and as a result of the high-quality 3D setting I chose, it took a couple of hours to render the video through the export tool. Since ArcGIS Pro was not made for video production, I also felt it lacked video-editing tools.

Heat Map Videos Export

  1. Theft in Downtown Toronto between 2014-2018. A time-series heat map animation using a 3-month interval.
  2. Robbery in Downtown Toronto between 2014-2018. A time-series heat map animation using a 3-month interval.
  3. Break and Enter in Downtown Toronto between 2014-2018. A time-series heat map animation using a 3-month interval.
  4. Auto Theft across the City of Toronto between 2014-2018. A time-series heat map animation using a 3-month interval.
  5. Assault across the City of Toronto between 2014-2018. A time-series heat map animation using a 3-month interval.