Interactive Map and Border Travels

Given the chance to build a geovisualisation, I set out to bring in data at a scope that rewards zooming, panning, and other interaction for a progressively deeper reading of the geography, while still letting the journey begin with an overview and a general understanding of the topic at hand.

Introduction to the geovisualisation

This blog post doesn't unveil a hidden-gem theme in border crossing; rather, it demonstrates how an interactive map can deliver the insights a user actually seeks, without being limited to the publisher's chosen extent or to printed information. Border crossing was selected as the topic of interest because it puts the user in a position similar to that of the travellers themselves: they can survey the crossing options and weigh their own preferences.

Giving the user this perspective meant first locating and providing the crossing points. The crossings selected were those along the US borders with Canada and with Mexico, a scope the viewer can engage with in detail, instead of limiting this surface transportation data to a single scale and extent determined by the creator rather than the user.

Border crossings are largely a matter of geography, and are best understood on a map rather than in any other data representation, unlike attributes such as sales data, which can still work in an aspatial sense, for example projected sales levels on a line graph.

To get specific, the data came from the U.S. Bureau of Transportation Statistics, cleaned to cover the period from the beginning of January 2010 to the end of September 2020. The data was geocoded with multiple providers and kept only where the results were consistent; a few locations appeared in the data but could not be positively identified.

Seal of the U.S. Bureau of Transportation Statistics
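The post doesn't include the geocoding code itself; a minimal sketch of the multi-provider consistency check, assuming the geopy library, might look like this (the port name and the 1 km agreement threshold are illustrative, not from the original project):

```python
# Sketch of a multi-provider geocoding consistency check.
# Assumes geopy (pip install geopy); threshold and place are illustrative.
from geopy.distance import geodesic
from geopy.geocoders import ArcGIS, Nominatim

providers = [Nominatim(user_agent="border-crossings-demo"), ArcGIS()]

def geocode_consistently(place, max_disagreement_km=1.0):
    """Return coordinates only when all providers roughly agree."""
    results = [g.geocode(place) for g in providers]
    if any(r is None for r in results):
        return None  # a provider could not identify the location
    coords = [(r.latitude, r.longitude) for r in results]
    if geodesic(coords[0], coords[1]).km > max_disagreement_km:
        return None  # providers disagree; flag for manual review
    return coords[0]

print(geocode_consistently("Sweetgrass, Montana"))
```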

To start enabling insights for you, the viewer, the first data set appended to the map is the border locations. These are points, and they begin to show the distribution of crossing opportunities between the North American countries. If a point could not be placed at the particular office that processed the border entries, the record was assigned to the city in which that office is located. An appropriate base layer was imported from Mapbox to best display the background map information.

Variation in the number of border crossings is represented by shifts in colour gradient and symbol size. With all the points plotted in proportion, patterns begin to emerge from the attached border attributes: for example, the crossing points in California are noticeably larger than the entries in Montana.

Mapped Data

But is there a measure of how visited the state itself is, rather than each entry point? Yes, indeed there is. In addition to the crossing points themselves, the states they belong to have also been measured. Each state with a crossing is drawn with a gradient for the average crossing value that state experienced. We knew that California had entry points with more crossings than the points shown in Montana; comparing the states themselves, we see that California altogether still experienced more crossings at the border than Montana did, despite having fewer border entry points.
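As a sketch of the aggregation behind that state-level comparison, assuming pandas and made-up port names and counts (the BTS data uses its own field names):

```python
import pandas as pd

# Hypothetical columns and values, for illustration only.
crossings = pd.DataFrame({
    "port":  ["Sweetgrass", "Raymond", "San Ysidro"],
    "state": ["Montana", "Montana", "California"],
    "value": [350_000, 90_000, 14_000_000],
})

# Average crossings per entry point, aggregated to the state level,
# is what drives the state choropleth described above.
state_avg = crossings.groupby("state")["value"].mean()
print(state_avg)
```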

Could there be a way to milk just a bit more of this basic information? Yes. This is where the map begins to benefit from being interactive.

Each point and each state can be hovered over to show its calculated values, clarifying how much more or less one case has when compared to another. A state may have a similar gradient, and an entry point may appear the same size, but hovering over them reveals which place each location belongs to, as well as its specific crossing value. Montana has one of the largest numbers of crossing points, with similar crossing frequencies across those entries; hovering over the points, we discover that Sweetgrass, Montana is the most popular point along the Montana border.

Similar values along the Montana border

In fact, this is how we discover another dimension of the data. Hovering over these cases shows a list of the transport modes that make up the total crossings: trucks, trains, automobiles, buses, and pedestrians.

More available data should simply mean more to learn, and stating the transport numbers without visuals would not be an engaging way to share a spatial understanding. With these five extra aspects of the border crossings available, the map can display the distribution of each particular mode.

Although the points in Alaska typically see some of the lowest total border crossings, selecting the entries by train draws attention to Skagway, Alaska as one of the most used border points for crossing into the US, even though it is not connected to the mainland. The mapped display suggests an explanation: the large rail entry at Skagway, Alaska appears related to the border crossings at Blaine, Washington, likely reflecting the train connection between Alaska and the continental USA.

Mapping truck crossing levels (above): crossings are made going east, past the small city of Calexico. Calexico East is seen to have a road connection between the two boundaries facing a single direction, suggesting little interaction intended along the way

Mapping pedestrian crossings (above): these are much more popular in Calexico itself, an area likely dense enough to support the operation of the airport shown in its region, and displaying an interweaving network of roads associated with everyday usage

Overall, this is where interactive mapping applies. Borders and their entry points have relationships largely shaped by geography. The totals of pedestrian or personal-vehicle crossings describe how attractive a region may be on one side rather than the other. Where these locations become attractive, and even the underlying reasons a crossing is selected, can be discovered in a map that is interactive for the user, on the grounds the user chooses.

While the thematic data layered on top highlights the topic, the base map helps explain the reasons behind it, and both are better understood when interactive. The point is not to answer one particular question, as a static map might, but to help address any number of speculative thoughts, enabling your exploration.

COVID-19 in Toronto: A Tale of Two Age Groups

By Meira Greenbaum

Geovis Project Assignment @RyersonGeo, SA8905, Fall 2020

Story Map Link

Introduction

The COVID-19 pandemic has affected every age group in Toronto, but not equally (breakdown here). As of November 2020, the 20-29 age group accounts for nearly 20% of cases, the highest proportion of any group, while the 70+ age group accounts for 15.4% of all cases. During the first wave, seniors were affected the most, as there were outbreaks in long-term care homes across the city. By the end of summer and early fall, a second wave appeared all but certain, and it was clear that an increasing number of cases were attributed to younger people, specifically those 20-29 years old. Data from after October 6th was not available when this project began, but since then Toronto has seen another outbreak in long-term care homes and an increasing number of cases each week. This story map investigates the spatial distribution and patterns of COVID-19 cases in the city's neighbourhoods using ArcGIS Pro and Tableau. Based on the findings, specific neighbourhoods with high rates can be analyzed further.

Why these age groups?

Although other age groups have seen spikes during the pandemic, their case trends have been more even. Both the 20-29 and 70+ groups saw significant increases and decreases between February and November. Seniors are more likely to develop severe symptoms from COVID-19, which is why identifying neighbourhoods with higher rates among seniors is important. The 20-29 group is important to track because its increases are more unique to the second wave and there is a clear cluster of neighbourhoods with high rates.

Data and Methods

The COVID-19 data for Toronto was provided by the Geo-Health Research Group. Each sheet within the Excel file contained a different age group and the number of cases each neighbourhood had per week from January to early October. The data had to be arranged differently for Tableau and for ArcGIS Pro. I was able to table-join the original Excel sheet, with the columns I needed (rates during the weeks of April 14th and October 6th for the specific age groups), to a Toronto neighbourhood shapefile in Pro and map the rates. The maps were then exported as individual web layers to ArcGIS Online, where the pop-ups were formatted. After this was done, the maps were added to the Story Map. This was a simple process because I was working within the ArcGIS suite throughout, so the maps moved from Pro to Online seamlessly.

For animations with a time and date component, Tableau requires the data to be vertical (i.e. it had to be transposed). This is an example of what the transformation looks like (not the actual values):
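The original screenshot isn't reproduced here; a minimal pandas sketch of that wide-to-long reshape, with made-up neighbourhood names and values, could be:

```python
import pandas as pd

# Wide format: one row per neighbourhood, one column per week (made-up values).
wide = pd.DataFrame({
    "Neighbourhood": ["Agincourt", "Annex"],
    "2020-04-14": [12, 5],
    "2020-10-06": [30, 9],
})

# Long ("vertical") format that Tableau's animation expects:
# one row per neighbourhood per date.
long = wide.melt(id_vars="Neighbourhood", var_name="Date", value_name="TotalRated")

# Append the time placeholder mentioned below so Tableau parses a full datetime.
long["Date"] = long["Date"] + "T00:00:00Z"
print(long)
```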

A time placeholder was added beside the date (T00:00:00Z) and the Excel file was imported into Tableau. The TotalRated variable is numeric and went in the "Columns" section. Neighbourhoods is a string column and was dragged to the "Colour" and "Label" boxes so the name of each neighbourhood would show while the animation plays. The row shelf was more complicated because it required a calculated field, as follows:

TotalRatedRanking is the name of the new calculation. (The exact formula isn't reproduced here; for a ranked animation like this one it is typically something along the lines of RANK_UNIQUE(SUM([TotalRated])).) It produced a new numeric variable, which was placed in the "Rows" box.

Right-clicking TotalRatedRanking brings up various options. To format the animation correctly, the "Discrete" option had to be chosen, as well as "Compute Using -> Neighbourhoods." The data then looked like the screenshot below, with an option to play the animation in the bottom right corner. This process was repeated for the other two animations.

Unfortunately, this workbook could not be imported directly into Tableau Public (where there would be a link to embed in the Story Map) because I was using the full desktop version of Tableau. To work around this, I re-created the visualization in Tableau Public (which does not support building the animation directly), then added the animation separately once the workbook was uploaded to my Tableau Public account. The animations then had to be embedded into the Story Map, which has an "Embed" option for external links. To do this, the "Share" button on Tableau Public had to be clicked, which produces a link. When embedded in the Story Map as-is, however, the animation is not shown because the link is not formatted correctly. To fix this, the link had to be altered manually (a quick Google search helped me solve it):

Limitations and Future Work

Creating an animation showing the rate of cases over time in each neighbourhood (for any age group or other category in the Excel spreadsheet) would have been beneficial. An animation in ArcGIS Pro would have been interesting as well (there was just not enough time to learn how ArcGIS animation works), and this is an avenue that could be explored further. The compromise was to focus on certain age groups, although patterns between the start (April) and end (October) points are less obvious. It would also be interesting to explore other variables in the spreadsheet, such as community spread and hospitalizations per neighbourhood. I tried using kepler.gl, a powerful data visualization tool developed by Uber, to create an animation from January to October for all cases, and this worked for the most part (video at the end of the Story Map). The neighbourhoods were represented as dots rather than polygons, which is not very intuitive for the viewer because the shape of the neighbourhood cannot be seen. Polygons can be imported into kepler.gl, but only as GeoJSON, a file format I am unfamiliar with.
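For anyone facing the same GeoJSON hurdle, the conversion is nearly a one-liner in geopandas; a sketch, assuming the neighbourhood boundaries exist as a shapefile (file names are placeholders):

```python
import geopandas as gpd

# Convert a neighbourhood shapefile to the GeoJSON format kepler.gl accepts.
# kepler.gl expects WGS84 coordinates, hence the reprojection to EPSG:4326.
neighbourhoods = gpd.read_file("toronto_neighbourhoods.shp")
neighbourhoods.to_crs(epsg=4326).to_file(
    "toronto_neighbourhoods.geojson", driver="GeoJSON"
)
```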

Health Care Access in the City of Toronto

By: Shabnam Sepehri

Geo-Visualization Project: SA8905, Fall 2020

Project Link: Final Map

Final Product: An Interactive Map

Context

There are many factors that contribute to an individual's access to health care. Statistics Canada has defined the 'Social determinants of health and health inequalities' as the 12 major factors that affect access. First on the list is income and social status, and near the bottom, at number 11, is race and ethnicity. For this project, I was curious to see how these two variables are distributed across the census tracts in the City of Toronto, and whether there are any overlaps with the locations of health care institutions. The software of choice is CARTO, a cloud-based Software-as-a-Service platform that enables the visualization and analysis of geographic data.

Data Acquisition

  • CHASS Data Centre: used to collect census data by census tract (2016): total population, total visible minority population, total Aboriginal identity population, and median total income;
  • Statistics Canada: used to obtain census tract boundary files;
  • City of Toronto Open Data: Address Point files;
  • Geospatial Map & Data Centre: used to collect physician data – Enhanced Points of Interest 3.1 (City of Toronto);
  • ArcMap (digitize): used to digitize hospital locations in the City of Toronto.

Process

After the data was acquired, the following steps were taken in ArcMap to organize it before importing it into CARTO. First, the census variables were joined to the census boundary shapefile. Then, a new column was created to calculate the combined visible minority and Aboriginal identity population density per 1,000 people.
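The derived column was built in ArcMap, but the same calculation is easy to sketch in pandas; the field names below are hypothetical stand-ins for the CHASS extract's own codes:

```python
import pandas as pd

# Hypothetical field names; the CHASS extract uses its own column codes.
tracts = pd.DataFrame({
    "ctuid":        ["5350001.01", "5350002.02"],
    "vis_minority": [1200, 450],
    "aboriginal":   [80, 30],
    "total_pop":    [4500, 2100],
})

# Combined visible minority + Aboriginal identity population per 1,000 residents.
tracts["density_per_1000"] = (
    (tracts["vis_minority"] + tracts["aboriginal"]) / tracts["total_pop"] * 1000
)
print(tracts[["ctuid", "density_per_1000"]])
```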

Next, the hospital locations were digitized using the 'Editor' toolbar. Following that, the physician locations were geocoded using the address repository acquired from Toronto Open Data. Lastly, the non-spatial data (e.g. total median income) was joined to the spatial data (census tract boundaries) to enable the layer visualization. After all the necessary formatting was done, the data was uploaded to CARTO.

Once in CARTO, I realized that the software lets the user carry out different spatial functions, such as geocoding. It also allows you to edit your dataset using SQL queries. This function is really useful in facilitating the data editing process and helps reduce the back-and-forth between different mapping software.

CARTO: dataset dashboard

CARTO allows you to import a total of four layers per map. The hospital locations, physician offices, and the census tracts were added as the four layers, with the census tracts uploaded twice to show the two different census variables. The census variables were visualized as choropleth maps, and the health institutions were visualized as points on top of the choropleth layers.

Interactivity

The interactive aspect of this map is mainly the user's ability to switch between layers and toggle each map component separately. Moreover, the 'pop-up' option was utilized for the hospital points to show the name and address of each location. Similarly, pop-ups were created for the choropleth maps to show the median income and population density of each individual census tract. The widget feature was used to create histograms showcasing the distribution of the two census variables among the census tracts; this allows the user to select tracts in different categories and zoom to those specific tracts on the map. For instance, someone may want to look at tracts with the highest median income and an average Aboriginal and visible minority population density. Lastly, the choropleth layers are turned off by default and may be switched on as per the user's interest.

Insight

The map shows that census tracts where the median income is relatively high tend to have a low Aboriginal and visible minority population density. The distribution of hospitals appears to be uniform throughout the city, with a few more concentrated in the downtown core. Conversely, physician offices appear to be more concentrated in tracts with higher income or close to the downtown core. That being said, this does not mean that higher income groups have better access than lower income groups. However, the map does identify areas with a low number of physician offices, and most often these areas tend to be classified as having a low to medium income. There are of course other variables that must be considered when assessing access; however, due to the limit on the number of layers, including them was not feasible for this project. Overall, this map can be used to identify ideal locations for future health facilities and to identify groups that have limited access to these resources.

Limitations & Future Work

Initially, I wanted to include more variables in the map; the goal was to map median income, visible minority & Aboriginal population density, educational attainment, and employment conditions. However, CARTO only allows four layers, which limited the diversity of the visualized variables. Ideally, exploring other geovisualization software such as Tableau, ArcGIS Online, or Esri Dashboards would aid in creating a more nuanced map.

Ideally, I would also want to map the change of these variables over time, for instance to show whether the distribution of median income and of visible minority & Aboriginal population density per census tract has stayed the same or has shifted in pattern. It would be interesting to capture which census tracts experienced better access over time due to changes in health determinants.

Toronto’s Waterfront Parking Lot Transformation

Author Name: Vera Usherovich

StoryMap Project link: https://arcg.is/004vSb

SA 8905 Fall 2020

Introduction:

During one of my study breaks, I was looking at aerial photographs of Toronto's waterfront. One thing in particular caught my attention: the parking lots. I did not grow up in Toronto and had no idea how drastically different the waterfront area used to look. I kept opening images from various years and comparing the changes. The waterfront was different each time; first the roundhouses disappeared, followed by the parking lots and industrial warehouses. That is the short answer to what inspired this StoryMap: I wanted to see how the surface of our city changed over time, and specifically the role of parking lots.

Key Findings

  1. There has been a 32% reduction in surface area dedicated to parking lots between 2003 and 2019.
  2. Even though there are fewer parking lots, the distribution of parking lot sizes remained similar between 2003 and 2019.
  3. Many of the parking lots in the Entertainment District turned into condos.

About the StoryMap

Data

For this project, I used aerial photographs from the City of Toronto Works and Emergency Services. I chose 2003 and 2019 as my years to compare.

Platform and Method

The digitization process was done in Esri's ArcMap. I then exported the layers to ArcGIS Online and made a map. This map was embedded into the StoryMap with adjustments to the layers. Additionally, I cross-referenced information with Google Maps to identify what has replaced the parking lots (broken into four categories: residential, commercial, public, and other).

Limitations

Note: The data showcased in this story and its maps is based on manual digitization of aerial photographs. Some features might have been inadvertently missed or incorrectly categorized.

Future Work

This could be done for a wider range of years. Also, a more comprehensive classification of what is no longer a parking lot could be described in greater detail.

A Pandemic in Review: a Trajectory of the Novel Coronavirus

Author: Swetha Salian

Geovisualization Project Assignment @SA8905, Fall 2020

Introduction to Covid-19

Covid-19 is a topic at the top of many of our minds right now and has been the subject of discussion all around the world. There are various sources of information out there, and as with most current issues, while legitimate sources exist, a great deal of misinformation is also disseminated. This led me to investigate the topic further and to explore the patterns of the disease, in an effort to understand what has transpired in the past year and where we may be headed as we enter the second year of this pandemic.

Let's begin with where it started, what the trajectory has looked like over the past year, and where things stand as the year comes to a close. Covid-19 is a disease caused by the new coronavirus SARS-CoV-2. The first report was of 'viral pneumonia' in Wuhan, China on December 31, 2019, and the disease spread to every continent except Antarctica, causing widespread infections and deaths. Investigations are ongoing, but as with other coronaviruses, it is believed to spread through large respiratory droplets containing the virus during person-to-person contact. In January 2020, the total number of cases across the globe was 37,907; within five months, by June 2020, the number had risen to 10,182,385. We currently sit at over 60 million cases across 202 countries and territories as of November 2020. The numbers still appear to be on the rise even with many countries taking various initiatives and measures in an effort to curb the spread of the disease. The data, however, shows that the death rate has been declining in the past few weeks, with a total of 1,439,784 deaths globally as of today, a ratio of approximately 2% of cumulative deaths to the total number of cases.

Using Tableau Desktop 2019.2, I created a time-lapse map of weekly reported COVID-19 cases from January 1 to November 15, along with a graph displaying weekly reported deaths for the same date range.

Link to my Tableau Public map: https://public.tableau.com/profile/swetha8500#!/vizhome/Salian_Swetha_Geoviz/Dashboard1

Data

I chose to acquire data from the WHO (World Health Organization) because of its reputable research and global outreach. The global literature cited in the WHO COVID-19 database is updated daily from searches of bibliographic databases, hand searching, and the addition of other expert-referred scientific articles.

The data for this project is a .csv file listing new and cumulative cases and new and cumulative deaths, sorted by country and reported date from January 1 through November 15. It covers 236 countries, territories and areas, for a total of 72,966 data entries for the year. For the time-lapse map of cases over the year I used the Cumulative_cases column; for the graphs of the weekly death count and the top 10 countries by death count, I used the New_deaths column.
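As a sketch of how those two columns can be pulled out with pandas (the file path is a placeholder, and the Date_reported column name is my assumption about the WHO file's date field):

```python
import pandas as pd

# WHO daily situation file; the path is a placeholder.
covid = pd.read_csv("WHO-COVID-19-global-data.csv", parse_dates=["Date_reported"])

# Time-lapse input: cumulative cases per country per reported date.
cases = covid[["Date_reported", "Country", "Cumulative_cases"]]

# Top 10 countries by total deaths over the study period (sum of New_deaths).
top10 = covid.groupby("Country")["New_deaths"].sum().nlargest(10)
print(top10)
```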

Creating a Dashboard in Tableau Desktop

Tableau is a data visualization software package that is fairly easy to use with minimal coding skills. It is also a great tool for importing large datasets and supports a variety of data sources, as shown in the image below.

The imported .csv file opens in the Data Source tab. From there you can open a New Worksheet, which is where each visualization is created separately; the last step is to bring them all together in a Dashboard tab.

In the side bar on the left are the Dimensions and Measures. Tableau is smart enough to generate longitude and latitude from country names: Rows and Columns are automatically filled with coordinates when Country is added. In the Pages section, drag Date reported; this can be filtered by how you want to display the data (I chose weekly). In the Marks section, drag Category from Dimensions onto Color, and Cumulative Cases onto Size, changing the measure to sum.

Adding Date reported to Pages generates a time slider, which lets you play the animation automatically, jump to a particular date, and set the speed to slow, medium or fast. The Category value generates a range for the number of cases reported weekly, which is what the changing colors on the map show. Highlight Country gives you the option to search for a particular country you want to view data for.

Create a new Dashboard, import the sheets you have worked on, and build a visual story. You have the option to add text, borders, background color, etc. to enhance the data.

As shown below, this is the static representation of the dashboard, which displays the weekly reported cases on the map and weekly reported deaths on the graph.

To publish to an online public portal follow the steps as shown below.

Limitations

As I was collecting data from the World Health Organization, I realized I couldn’t find comprehensive data on age groups and gender for cases or deaths. However, with the data I had, I was able to find a narrative for my story.

I hit a hiccup while trying to publish to Tableau Public from Desktop. After creating an account online, I was getting an error on the desktop, as shown below.

The solution is to go to the Data menu, scroll down to your data source (the .csv file's name, in my case), and select Use Extract. Extracts are saved subsets of data that you can use to improve performance or to take advantage of Tableau functionality not available or supported in your original data. When you create an extract, you can reduce the total amount of data by using filters and configuring other limits.

The 100 largest wildfires in the province of Quebec from 1976 to 2019.

Author: Samuel Emard

Source: Forest fires – Open Government Portal (canada.ca)

Project link: Top 100 Fires in the Province of Quebec (1976-2019) (arcgis.com)

Web Experience Direct link: https://experience.arcgis.com/experience/b7a0987afdb1486fb97532788261cfd6/

Project background

The idea for this project originated from a curiosity about the numerous environmental catastrophes that the populace is often unaware of, especially wildfires. In the last few years, every summer's news cycle has been dominated by terrible reports about wildfires rampaging through California, British Columbia or Alberta, and rightly so, but it is often only the largest that get mentioned on TV.

Being from the province of Quebec myself, I became curious about the wildfires in my home province, because I hadn't heard about them nearly as often as the ones in the US or the Canadian West. Fortunately, a dataset compiling the province's wildfires was available on the federal government's open data website. However, since 1976, which I assume is the year the government started compiling data on the phenomenon, 60,799 wildfires have occurred. Since this project focuses specifically on the online side of things, that many polygons would either be impossible to draw completely or take far too long to render. I juggled multiple possible solutions, such as using a smaller temporal scale, but it all ultimately depended on the platform I would choose to portray the data. Speaking of which, here's a small description of the ArcGIS Web Experience designer.

Technology

Finding a platform to portray the data depended on my familiarity with it. Unfortunately, online GIS wasn't my forte, and I only knew of ArcGIS Online and its Story Maps. However, I felt that Story Maps were not novel enough. That's when I happened upon the Dashboard and Web Experience creators available on ArcGIS Online. After fiddling with both, I settled on the Web Experience to portray the data.

The ArcGIS Online Web Experience is, according to Esri's own website, a tool that allows the "creation of unique web experiences using flexible layouts, content, and widgets that interact with 2D and 3D data". It creates a mobile-friendly product built from scratch without coding: interactive maps formatted to be viewable and interactable on desktop, tablet and phone. It has 26 widgets available, ranging from a legend to a 3D data viewer. For this project, I used a few simple widgets that enhance the experience for users, described further down.

Data and Methods

The data and methodology for this project are fairly straightforward, and most of the work went into the Web Experience designer (to ensure an optimal experience on desktop and mobile alike). The data came from a vast dataset on forest fires available on the federal government's open data website. On the dataset's page (link provided above), it is mentioned that the data was made available by multiple municipalities and governments (see Figure 2). However, the creators of the dataset are listed as the "Secteur des Forêts-Direction des inventaires forestiers" and "Direction de la protection des forêts", roughly the Forest Sector's Forest Inventory Branch and the Forest Protection Branch respectively.

Figure 2: Warning on data source on Open Data Website.

The dataset contains data on every forest fire that occurred in the province of Quebec between 1976 and 2019: the geometry of each polygon, the year the fire started, how it started, the year it was "extinguished" and the area of the fire in hectares. Sadly, some of the variables are abbreviated, their meanings aren't explained on the website, and so they couldn't be used in this project, but I didn't need them for what I intended to accomplish.

At first, I wanted to map all 60,799 polygons, but I decided otherwise due to the sheer size of the dataset. I then filtered the data by the year the fires started and extracted everything from 2013 to 2019, hoping to display all the fires of the last few years, but even that was too big: there were a little under 10,000 polygons, and ArcGIS Online was already warning that it couldn't draw the entire layer. Looking for a way around the problem of having too many polygons to draw, I figured that showing the 100 largest fires since 1976 would indeed be an interesting, and informative, way to show what I wanted.

To that end, I sorted by the area burnt, which is in hectares, and extracted the top 100 fires. The data extraction was done offline, in ArcGIS Pro, because it was simply faster and easier to manipulate the dataset there. I then uploaded the 100 largest fires to ArcGIS Online to make a map, because the Web Experience designer can't create its own map; I had to make one beforehand and then bring it into the Web Experience designer.
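The extraction was done in ArcGIS Pro, but an equivalent sketch with geopandas would look roughly like this (the file name is a placeholder; SUPERFICIE is the area field mentioned in the legend discussion below):

```python
import geopandas as gpd

# File name is a placeholder; the open-data layer uses French attribute
# names, such as SUPERFICIE for the burned area in hectares.
fires = gpd.read_file("feux_de_foret_quebec.shp")

# Keep only the 100 largest fires by burned area since 1976.
top100 = fires.nlargest(100, "SUPERFICIE")
top100.to_file("top_100_fires.shp")
```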

Once the map was done, I could then start working toward the creation of the web experience. Figure 3 shows the user interface of the Web Experience Designer.

Figure 3: Web Experience Desktop U.I.

The Web Experience Designer is fairly straightforward and is designed to be usable by people with no coding experience. All of its widgets and tools sit on the left side of the screen and are added with a simple drag and drop. Each widget or tool is then adjustable in its settings, which appear on the right side of the screen. For this project, I used the following widgets/tools: image (which serves, in fact, as the legend), table, share and button. Here's a small description of each and how I used it:

Image/Legend: Sadly, legends in ArcGIS Online are very hard to modify without modifying the entire dataset and its variables, and the Web Experience Designer can only use the legends from ArcGIS Online. In my case, the original legend only said "SUPERFICIE" as the field for the area of the fires. That wasn't what I wanted, so the workaround was to create the legend I wanted in ArcGIS Pro, screenshot it, and upload it as an image to the web experience. Figure 4 shows the end result.

Figure 4: Example of the W.E.D. legend on the image widget.

Table: The table widget is simple. It lets users see and interact with the dataset's attribute table, showing almost everything there is to see in the data. For simplicity's sake, I hid some of the more technical columns, especially those populated with geometry data; the table only shows the fire ID, the size and the year it started. The goal was to keep the experience as straightforward as possible. The table also allows selecting specific fires without clicking them on the map (though you can also select directly from the map).

Share: The share widget is a simple share button that any good online experience should have nowadays. It lets users share the link to the web experience on a multitude of social media platforms.

Button: This widget was added to let users go directly to the source of the dataset. The link to the open data portal was already in the web experience's description, but this button makes it easier to use on mobile devices, since one click brings up the link to the dataset's source.

With every widget working, the next step was to make sure the web experience looked right on each device (computer, tablet and phone). That meant adjusting the formatting of the web experience to fit the resolution and screen size of each.

Finally, the last step of the creation process was to make sure the map was correctly interactable. I tested my own web experience and verified that the polygons were selectable and that each polygon's information appeared on screen. I made sure the data table was correct (though it seems to bug a bit, as it is still in beta) and that the polygons were drawn correctly.

And there it was: the Web Experience was made. All that remained was to write descriptions and other small paragraphs on the info page and publish it. I thoroughly enjoyed using the Web Experience Designer to create an interactive map but, as much as I liked it, there were many limitations I had to overcome.

Limitations

The limitations of this project were many, but minor. The very first one I encountered was the lack of a clear description of the variables and the abbreviations used in the data. Maybe I missed it on the page or in the metadata, but I couldn't find an explanation for some of the abbreviations used to describe the origin of a fire (human-caused or natural) and some other variables. Knowing those could have led me to display the data in a much different way.

Another limit I encountered was the online capabilities of ArcGIS Online itself, such as the inability to draw large amounts of data and to modify a legend's title. I could easily work around these offline in ArcGIS Pro, but not everyone has that option, so I'd count them as limitations encountered in this project.

The Web Experience Designer, while quite advanced and easy to use, was a bit of a chore to understand in its intricacies, and it has a steep learning curve for the more in-depth features of the platform. By that, I mean that this project only uses a fraction of the options available. There are more widgets on offer, and every object in the experience can be given actions to perform, set off by specific triggers. For example, if the user clicks on a fire's polygon, it is possible to have that polygon's table entry appear (in a multitude of ways) on the map. There were many other actions and triggers to use, but the platform doesn't make it easy for new users to exploit the designer's full potential.

Future Work

In a perfect world where unlimited resources were available for this project, I would make the web experience display the 100 largest forest fires in the province of Quebec for every year since the start (1976).

In other words, I would set up a button for each year in the dataset. Users would simply click one (e.g. 2012) and the web experience would display the 100 largest forest fires of that chosen year. That way, users could see a much larger and more informative dataset. The top 100 forest fires shown also focus on the southern half of the province, since most of the population (about 95%) lives there. So, with unlimited resources, the dataset would also include the forest fires that occurred in the northern half of the province.

In a perfect world, the dataset would cover the entirety of Canada, so that a top 100 could be produced for each province and for every year since 1976. That would be a massive dataset, however.

Utility of the project

The goal of this project was to inform the population about the locations and sizes of wildfires in the province of Quebec. Specifically, it aims to show fellow Quebecers the largest forest fires that have occurred in their own province. The dataset can be updated every year, if needed, to display a more up-to-date picture of the wildfires. The interactive aspects let users see the information for every fire shown (ID, year, size, etc.). It could also be used by forestry companies and environmental agencies that wish to visualize the largest forest fires.

How Does Canada Generate Electricity?

by Arthur Tong

GeoVisualization Project @RyersonGeo, SA8905, FALL 2020

Project Weblink (Click Here)


  • INTRODUCTION

Getting electricity to a country's homes, buildings and industries is an extremely challenging task, especially for countries that are enormous in land area, where transporting power over long distances is much more difficult. To this day, produced electrical energy is either very inconvenient or expensive to store, and with demand increasing over the years in Canada, balancing the two in real time is crucial.

How electricity is generated depends largely on what technologies and fuels are available in an area. According to Natural Resources Canada (2020), "the most important energy source in Canada is moving water, which accounts for 59.3% of electricity supply, making it the second largest producer of hydroelectricity in the world with over 378 terawatt hours in 2014."

The goal of this interactive map project is to show most of the power plants in Canada with their respective sources and generating capacities (MW), which are proportional to the size of the circles shown in the project weblink above.


  • METHODOLOGY

In this section, I introduce the methodology for this project: first how the data was collected, then the steps needed to produce the final dashboard with Tableau Public.

Data Collection

For the purpose of this study, I needed pin-point (latitude/longitude) locations of all types of power plants across Canada: from primary energy such as nuclear and the renewables, to secondary energy produced from primary energy commodities like coal, natural gas and diesel. I tried various sources such as the Open Government Portal, but most of the open data available does not contain the power plants' exact locations.

Therefore, I had to manually pin-point all the data from external sources, based mostly on two websites: Global Energy Observatory (GEO) and The Wind Power. Other plants were identified by looking up the publicly or privately owned electricity utilities' websites for each province, for example BC Hydro, Ontario Hydro, TransAlta, etc., and their coordinates were retrieved using Google Maps. A similar interactive map, "Electricity Generating Stations in British Columbia Map", has been made by researchers from the University of Victoria; it provided most of the data for British Columbia and a framework for what other relevant data I would want to include for the other provinces (as shown in the figure below).

Figure 1: Snapshot of the columns included for the dataset.

In addition, all 13 provinces and territories were accounted for, and a total of 612 points were collected manually.


Construction of Tableau Dashboard

Tableau Public is the software used for this project. First, load the Excel data into Tableau through Data -> Open New Data Source -> Microsoft Excel. Here, make sure the latitude and longitude columns are assigned a geographic role, as shown in the snapshot below, so they can be used to map the data.

Figure 2: Snapshot showcasing the Geographic roles assigned to the Latitude and Longitude columns.

In a new worksheet, the sections on the left correspond to the columns of the table. Drag the non-generated latitude and longitude to Columns and Rows and choose the 'symbol map' under 'Show Me' in the top right. If an 'unknown locations' tab pops up at the bottom right, it means Tableau could not automatically match the province names in the column to its database; this can be fixed by clicking that tab and manually editing the unknown locations. After dragging in the essential elements you want to present, it will look something like the figure below. The base map can also be switched to a dark theme under Map -> Background Maps.

Figure 3: Tableau interactive map layout. 'Source' is presented by different colours, while 'capacity' is presented by the sizes of the circles.

Moving on, to create a bar or pie chart, hover over the bar on the left to choose which graph would best visualize the data you are trying to present, then drag the essential data into columns/rows.

Figure 4: Bar graph showing “Total capacity by all provinces”.

Last but not least, add a new 'dashboard' sheet and drag all the maps and graphs into it to form the final product. Organizing the dashboard layout can be frustrating without the proper frame; consider making elements like filters and smaller graphs into 'floating' items (by right-clicking them) so they can be placed on top of other elements on the dashboard. In this case, I made the bar graph floating so it is laid on top of the interactive map.

Figure 5: Dashboard Layout.

RESULTS & LIMITATIONS

Hydroelectricity contributes 56.67% of electricity generation capacity across the country, followed by natural gas (12.39%) and nuclear energy (11.29%). However, a lot of electricity generation in Alberta is still coal-based; coal takes up 46.21% of the total capacity in that province.
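As a sketch of the share calculations behind those figures, assuming the manually collected spreadsheet has columns along the lines of Province, Source and Capacity_MW (these names are my assumptions, not the actual dataset's):

```python
import pandas as pd

# Column and file names are assumptions based on the dataset described above.
plants = pd.read_excel("canada_power_plants.xlsx")

# National share of total generating capacity by source.
national_share = (
    plants.groupby("Source")["Capacity_MW"].sum()
    / plants["Capacity_MW"].sum() * 100
).round(2)

# Coal's share of Alberta's total capacity.
ab = plants[plants["Province"] == "Alberta"]
coal_share = (
    ab.loc[ab["Source"] == "Coal", "Capacity_MW"].sum()
    / ab["Capacity_MW"].sum() * 100
)
print(national_share)
print(round(coal_share, 2))
```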

Since all the data was collected manually, it may not be 100% accurate, but the idea is to give a sense of where each plant is approximately located. For example, a single wind farm of ten turbines may span a large area of mountain or field; the data collected was based on one wind turbine instead of plotting all ten of them.

Moreover, less developed regions like the Northwest Territories show a very low amount of electricity generated, in line with their lower population (one diesel power plant per small town, located using Google satellite imagery), but there could well be more power plants in the area that were not captured.

In conclusion, precise and consistent open data is lacking for the provinces, leaving potential for similar future studies if more data becomes available. A timeline perspective could also be added to this interactive map, so that as users drag along a time bar they can see the different types of power plants being built in different locations.

Geovisualization of the York Region 2018 Business Directory


(Established Businesses across Region of York from 1806 through 2018)

Project Weblink (ArcGIS Online): Click here or direct weblink at https://ryerson.maps.arcgis.com/apps/opsdashboard/index.html#/82473f5563f8443ca52048c040f84ac1

Geovisualization Project @RyersonGeo
SA8905- Cartography and Geovisualization, Fall 2020
Author: Sridhar Lam

Introduction:

York Region, Ontario, as identified in Figure 1, is home to over one million people from a variety of cultural backgrounds across 1,776 square kilometres, stretching from Steeles Avenue in the south to Lake Simcoe and the Holland Marsh in the north. By 2031, projections indicate 1.5 million residents, 780,000 jobs, and 510,000 households. Over time, York Region has attracted a broad spectrum of business activity and over 30,000 businesses.

Fig.1: Region of York showing context within Ontario, Greater Toronto Area (GTA) and its nine Municipalities.
(Image-Sources: https://www.fin.gov.on.ca/en/economy/demographics/projections/ , https://peelarchivesblog.com/about-peel/ and https://www.forestsontario.ca/en/program/emerald-ash-borer-advisory-services-program)

Objective:

To create a geovisualization dashboard for the public to navigate, locate and compare established Businesses across the nine Municipalities within the Region of York.

The dashboard is intended to help Economic Development market research divisions sort and visualize businesses’ nature, year of establishment (1806 through 2018), and identify clusters (hot-spots) at various scales.

Data-Sources & References:

  1. Open-Data York Region
  2. York Region Official Plan 2010

Methodology:

First, the Business Directory, updated as of 2018, and the municipal boundary layer files, both available from York Region's Open Data source, are downloaded. As shown in Figure 2, the raw data is analyzed to identify the municipal distribution based on address / municipal location, and it is identified that the City of Markham and the City of Vaughan have the major share.

Fig.2: The number of businesses and the percentage of share within the nine Municipalities of the York Region.

The raw data is further analyzed, as shown in Figure 3, to identify the major business categories, and the chart below presents the top categories within the dataset.

Fig.3: Major Business Categories identified within the dataset.

Further, the raw data is analyzed, as shown in Figure 4, by year of establishment, which shows that most of the businesses within the dataset were established after the 1990s.

Fig 4: Business Establishment Years identified within the dataset.

The business address data is checked for consistency, and the Geocodio service is used to geocode the address list for all the business locations. The resulting dataset is imported into ArcMap, as shown in Figure 5, along with the municipal boundary layers, and checked for inconsistent data before being uploaded onto ArcGIS Online as hosted layers.

Fig.5: Business Locations identified after geocoding of the addresses across the York Region.
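The geocoding itself was done through the Geocodio service; as a hedged sketch of what a single-address lookup could look like over Geocodio's HTTP API as I understand it (a v1.x /geocode endpoint taking q and api_key parameters; the key, version and address below are placeholders):

```python
import requests

API_KEY = "YOUR_GEOCODIO_KEY"  # placeholder

def geocode(address):
    """Look up one address; return (lat, lng) or None if no match."""
    resp = requests.get(
        "https://api.geocod.io/v1.7/geocode",  # version is an assumption
        params={"q": address, "api_key": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])
    if not results:
        return None
    loc = results[0]["location"]  # e.g. {'lat': ..., 'lng': ...}
    return loc["lat"], loc["lng"]

print(geocode("17150 Yonge St, Newmarket, ON"))  # illustrative address
```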

Once hosted on ArcGIS Online, a new dashboard titled 'Geovisualization of the York Region 2018 Business Directory' is created. The dashboard components are tested for visual hierarchy, and a careful selection is made of the following components to display the data:

  1. Dashboard Title
  2. Navigation (as shown in Figure 6, placed on the left of the interface, providing information and user controls for navigating)
  3. Pull-down/slider lists for the user to select and sort the data
  4. Maps – one map to display the point data and the other to display cluster groups
  5. Serial chart (list from the data) – to compare the selected data by municipality
  6. Map legend, and
  7. Embedded content – a few images and videos to orient the context of the dashboard

The user is given choices for selecting the data, as shown in Figure 6.

Fig.6: User interface for the dashboard offering selection in dropdown and slider bar.

Thus a user of the dashboard can select or make choices using one or a combination of the following, with the results displayed in the right panes (map, data chart and cluster density map):

  1. Municipality: By each or all Municipalities within York Region
  2. Business Type: By each type or multiple selections
  3. Business Establishment Year Time-Range using the slider (the Year 1806 through 2018)

For the end-user of this dashboard, results are also provided based on the business locations identified after geocoding the addresses across the York Region, comparable and quantifiable by each of the nine municipalities, as shown in Figure 7.

Fig.7: Data-Chart displayed once the dashboard user makes a selection.

The point locations are plotted on one map, while the clusters within the selected range (Region / Municipality / Business Type / Year of Establishment) are shown simultaneously on the other; see Figure 8.

Fig.8: Point data map and cluster map indicate the exact geolocation as well as the cluster for the selection made by the user across the York Region at different scales.

Results:

Overall, the dashboard provides an effective geovisualization with spatial context and location detail for York Region's 2018 businesses. The business type index, with the option to select one or multiple types at a time, and the timeline slider bar let an end-user drill down to the information they seek. The dashboard design offers a dark theme interface while maintaining a visual hierarchy of the different map elements: the map title, legend, colour scheme and combinations ensuring contrast and balance, font face and size, background and map contrast, choice of hues, saturation, emphasis, and so on. The maps also let the end-user change the background base layers to see the data in the context of their choice. As shown in Figure 9, with location data and quantifiable data at different scales, the dashboard interface offers visuals to display the 30,000+ businesses across the York Region.


Fig.9: Geovisualization Dashboard to display the York Region 2018 Business Directory across the Nine Municipalities of the York Region.

The weblink to access the ArcGIS Online Dashboard where it is hosted is: https://ryerson.maps.arcgis.com/apps/opsdashboard/index.html#/82473f5563f8443ca52048c040f84ac1

(Please note an ArcGIS Online account is required)

Limitation:

The 2018 business data for York Region contains over 38,000 data points, and the index/legend of business types may look cluttered while a selection is made. The fixed width of the left navigation panel is a technical limitation, because the pull-down display cannot be made wider; however, the legend screen can be maximized to read all the business categories clearly. There may be errors, or incomplete or missing data, in the compilation of business addresses. The dashboard can be updated with only a little effort whenever a new release of the York Region business directory appears in the coming years.

An Interactive Introduction to Retail Geography

by Jack Forsyth
Geovis Project Assignment @RyersonGeo, SA8905, Fall 2020

Project Link: https://gis.jackforsyth.com/


Who shops at which store? Answers to this fundamentally geographic question often use a wide variety of models and data to understand consumer decision making to help locate new stores, target advertisements, and forecast sales. Understanding store trade areas, or where a store’s customers come from, plays an important role in this kind of retail analysis. The Trade Area Models web app lets users dip their toes into the world of retail geography in a dynamic, interactive fashion to learn about buffers, Voronoi polygons, and the Huff Model, some of the models that can underlie trade area modeling.

The Huff Model on display in the Trade Area Models web app

The web app features a tutorial that walks new users through the basics of trade area modeling and the app itself. Step by step, it introduces some of the underlying concepts in retail geography, and requires users to interact with the app to relocate a store and resize the square footage of another, giving them an introduction to the key interactions that they can use later when interacting with the models directly.

A tutorial screenshot showing users how to interact with the web app

The web app is designed to have a map dominate the screen. On the left of the browser window, users have a control panel where they can learn about the models displayed on the map, add and remove stores, and adjust model parameters where appropriate. As parameters are changed, users receive instant feedback on the map that displays the result of their parameter changes. This quick feedback loop is intended to encourage playful and exploratory interactions that are not available in desktop GIS software. At the top of the screen, users can navigate between tabs to see different trade area models, and they are also provided with an option to return to the tutorial, or read more about the web app in the About tab.

The Buffers tab allows for Euclidean distance and drive time buffers (pictured above)

Implementation

The Trade Area Models web app was implemented using HTML/CSS/JavaScript and third party libraries including Bootstrap, JQuery, Leaflet, Mapbox, and Turf.js. Bootstrap and JQuery provided formatting and functionality frameworks that are common in web development. Leaflet provided the base for the web mapping components, including the map itself, most of the map-based user interactions, and the polygon layers. Mapbox was used for the base map layer and its Isochrone API was used to visualize drive time buffers. Turf.js is a JavaScript-based geospatial analysis library that makes performing many GIS-related functions and analysis simple to do in web browsers, and it was used for distance calculation, buffering, and creating Voronoi polygons. Toronto (Census Metropolitan Area) census tract data for 2016 were gathered from the CensusMapper API, which provides an easy to use interface to extract census data from Statistics Canada. Data retrieved from the API included geospatial boundaries, number of households, and median household income. The Huff Model was written from scratch in JavaScript, but uses Turf.js’s distance calculation functionality to understand the distance from each store to each census tract’s centroid. Source code is available at https://github.com/mappinjack/spatial-model-viz
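The app's Huff Model itself is JavaScript and isn't reproduced here; the sketch below restates the same probability formula in Python, with illustrative attractiveness (square footage) and distance-decay exponents alpha and beta (the values are placeholders, not the app's defaults):

```python
# Python sketch of the Huff Model probability the app computes in JavaScript:
#   P(i -> j) = (S_j**alpha / D_ij**beta) / sum_k (S_k**alpha / D_ik**beta)
# where S is store attractiveness (square footage) and D is distance.

def huff_probabilities(distances, sizes, alpha=1.0, beta=2.0):
    """Probability that a customer at one origin shops at each store."""
    utilities = [s**alpha / d**beta for s, d in zip(sizes, distances)]
    total = sum(utilities)
    return [u / total for u in utilities]

# A census tract centroid 2 km and 5 km from two stores of 10k and 40k sq ft.
print(huff_probabilities(distances=[2.0, 5.0], sizes=[10_000, 40_000]))
```

In the app, the distances come from Turf.js calculations between each store and each census tract centroid; scaling the probabilities by each tract's households and spending assumptions yields the sales forecast described below.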

Limitations

One of the key limitations of the app is a lack of specificity in the models. Buffer sizes and store square footage areas are abstracted out of the app for simplicity, but this results in a lack of quantitative feedback. The Huff Model also uses Euclidean distance rather than drive time, which ignores the road network and alternative modes of travel such as subway or foot traffic, and it relies on census tract centroids, which can lead to counterintuitive results in large census tracts. The sales forecasting aspect of the Huff Model tab makes large assumptions about the amount of money spent by each household on goods, and is affected by edge effects from stores and customers that may fall outside the Toronto CMA. The drive time buffers also rely entirely on the road network (rather than incorporating transit) and are limited to an upper bound of 60 minutes of travel time by the Mapbox Isochrone API.

Future work

The application in its current form is useful for spurring interest and discussion around trade area modeling, but should be more analytical to be useful for genuine analysis. A future iteration should remove the abstractions of buffer sizes and square footage estimates to allow an experienced user to directly enter exact values into the models. Further, more demographic data to support the Huff Model, and parameter defaults for specific industries would help users more quickly create meaningful models. Applying demographic filters to the sales forecasting would allow, for example, a store that sells baby apparel to more appropriately identify areas where there are more new families. Another useful addition to the app would be integration of real estate data to show retail space that is actually available for lease in the city so that users can pick their candidate store locations in a more meaningful way.

Summary

The Trade Area Models web app gives experienced and inexperienced analysts alike the opportunity to learn more about retail geography. While the more analytical components have been abstracted out of the app in favour of simplicity, users can not only learn about buffers, Voronoi polygons, and the Huff Model, but also interact with them directly and see how changes in store location and model parameters affect the retail landscape of Toronto.

An interactive demo of Voronoi polygons that includes adding and moving stores

100 Years of Wildfires in California – Tableau Dashboard Time Series

Shanice Rodrigues

GeoVis Project Assignment @RyersonGeo, SA8905, Fall 2020

Natural phenomena can be challenging to map, as they are dynamic through time and space. However, one solution is dynamic visualization itself, through the time series maps offered in Tableau. With this application, an interactive dashboard can be created that relays your data in various ways, including time series maps, graphs, text, and graphics. If you are interested in creating a dashboard in Tableau with interactive time series and visuals, keep reading.

In this example, we will be creating a time series dashboard of the distribution of California’s wildfires over time. The overall dashboard can be viewed on Tableau Public HERE.

First, let’s go over the history of these wildfires, which provides interesting context for what we observe from these fires over time.

History of Wildfires

There is a rich, complicated history between civilization and wildfires. While Indigenous communities found fires productive, yielding soils rich in fertile ash ideal for crops, colonizers dismissed all fires as destructive phenomena that needed to be extinguished, especially after the massive fires of the early 1900s caused many fatalities, such as one in the Rocky Mountains that killed 85 people. The United States Forest Service (USFS) responded by implementing a severe fire suppression policy, requiring fires of 10 acres or less to be put out beginning in 1926, and then, from 1935, all fires to be put out by 10 a.m. the next day. With the immediate extinction of fires through the early to mid-1900s, natural fire fuels such as forest debris continued to build up. This is the likely cause of the massive fires that appeared in the late 1900s and persist today, continuing to be both difficult and expensive to manage. The pattern is evident in the bar graph below of the number of fires and acres burned over the years (1919–2019).

Dashboard Creation

Data Importation

Many types of spatial files, such as shapefiles and KML files, can be imported into Tableau to create point, line, or polygon maps. For our purposes, we will be extracting wildfire perimeter data from the Fire and Resource Assessment Program (FRAP), as linked here or on ArcGIS here. The data set contains fire perimeters in California dating back to 1878, up to the last full calendar year, 2019. Informative attributes such as fire alarm dates, fire containment dates, causes of fire, and fire sizes in acres are included. While there is also a file on prescribed burns, we will only be looking at the wildfire history file. The data are imported into Tableau as a “Spatial file”, where the perimeter polygons are automatically recognized as a geometry column by Tableau.

Time Series

The data table is shown on the “Data Source” tab, where the table can be sorted by fields, edited, or even joined to other data tables. The “Sheet” tabs are used to produce the maps and graphs individually, which can all be combined in the “Dashboard” tab. First, we will create the wildfire time series for California. Conveniently, Tableau categorizes table columns by data type, such as date, geometry, string, or integer. We can add the “Year” column to the “Pages” card, which Tableau will use as the temporal reference for the time series.

The following time series toolbar will appear, and wildfire polygons will be drawn on the map according to the year selected on its scroll bar. The map can also be played as a looped animation at different speeds.

Additionally, the “Geometry” field, which holds the wildfire perimeter polygons, can be added to the “Marks” card. Tableau has also generated “Longitude” and “Latitude” fields spanning the full spatial extent of the wildfire geometries, which can be added to the “Columns” and “Rows” tabs.

In the upper-right “Show Me” panel, the map icon can be selected to generate the base map.

Proportionally Sized Point Features

Multiple features can be added to this map to improve the visualization. First, the polygon areas appear very small and hard to see on the map above, so it may be more effective to display them as point locations. In the “Marks” card, use the dropdown and select the “Shape” option.

From the Shape tab, there are multiple symbols to choose from, or custom symbols can be uploaded from your computer into Tableau. Here, we chose a glowing point symbol to represent wildfire locations.

Additionally, more information can be added to the points, such as proportional symbol sizes according to the area burned by each fire (the GIS ACRES field). A new calculated field will have to be created for the point size magnitudes, as shown:

The field is named “Area Burned (acres)” and is raised to the power of 10 so that the differences in magnitude between the wildfire points are noticeable and large enough on the map to be spotted, even at the lowest magnitude.
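Assuming Tableau’s POWER function and a field reference of [GIS ACRES], the calculation would look something along these lines:

// Hypothetical form of the "Area Burned (acres)" calculated field;
// the exponent is a display choice, not anything analytical
POWER([GIS ACRES], 10)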

Tool Tip

Another informative feature to add to the points is the “Tool Tip”, the attribute box shown for a feature the reader hovers over. Often, attribute fields already available in the data table can be used in the tool tip, such as fire names or the year of the fire. However, some fields, such as the length of each wildfire, need to be calculated. This can be done from the Analysis tab as shown:

For the new field named “Fire Life Length (Days)” the following script was used:

Essentially, this script finds the difference between the alarm date (when the fire started) and the contained date (when the fire ended) in units of days.
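Assuming the FRAP field names ALARM_DATE and CONT_DATE, a Tableau calculation to this effect is:

// Days between the fire's alarm date and its containment date
DATEDIFF('day', [ALARM_DATE], [CONT_DATE])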

For instance, here are some important attributes about each wildfire point that were added to the tool tip.

As shown, a wide range of formatting options, such as font, text size, and hover behaviour, can be applied to the tool tip.

Graphics and Visualizations

The next aspects of the dashboard to incorporate are the graphs, which better inform the reader about the statistics of wildfire history. The first graph will show not only the number of fires annually, but also the acres burned, to convey the sizes of the fires.

Similarly to the map, the appropriate data fields need to be added to the columns and rows to generate a graph. Here the alarm date (the start of the fire) is added to the x-axis, while the number of fires and GIS ACRES (acres burned) are added to the y-axis, filtered by year.

The number of fires comes from a new field calculated with the following script:

Essentially, every row with a unique fire name is counted for every year under the “Alarm_Date” field, giving the number of fires per year.
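Assuming the FRAP field FIRE_NAME, a distinct count along these lines produces that value:

// Count each uniquely named fire once
COUNTD([FIRE_NAME])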

Another graph to add to this dashboard informs the reader about the causes of fires and whether they vary temporally. Tableau offers many novel ways of turning mundane data into visualizations that are both informative and appealing. Below is an example of a clustering graph, showing the number of fires by cause against month over the entire time series. A colour gradient was added to emphasize the causes that result in the most fires, displayed in bright yellow against the less common causes in crimson red.

Similarly to the map, “Alarm_Date” was added to the “Filters” card; however, since we want to look at causes per month rather than per year, we can use the dropdown to change the date of interest to “MONTH.”

We also want to add the “Number of Fires” field to the “Marks” card to quantify how many fires are attributed to each cause. As shown, the same field can be added twice: once to control the size attribute and once to control the colour gradient.

Putting it All Together

Finally, in the “Dashboard” tab, all of the sheets below, the time series map and the graphs, can be dragged and dropped into the view. The left toolbar can be used to import sheets, change the size of the dashboard, and add or edit graphics and text.

Hopefully this tutorial has taught you some of the basics of map and statistical visualization in Tableau. If you’re interested in the data limitations and recommendations for this visualization, it continues below.

Data Limitations and Recommendations

Firstly, the wildfire data itself has shortcomings, particularly that fires may not have been well documented prior to the mid-1900s due to the lack of observational technology. Additionally, only large fires were detected by surveyors, whereas smaller fires went unreported. With today’s satellite imagery and LiDAR technology, fires of all sizes can be detected; it may therefore appear that fires of all sizes happen more frequently in the modern age than before. Beyond the data, there are limitations with Tableau itself. First, all spatial data are transformed to the WGS84 (EPSG:4326) spatial reference system when imported into Tableau, and the conversion can introduce inaccuracies in the spatial data. It would therefore be helpful for Tableau to support other reference systems and give the user the choice of whether or not to convert. Another limitation is with the proportional symbols for wildfires. The symbol size field had to be calculated and raised to a power just to show up on the map, with no legend of the size range produced. It would be easier if Tableau offered a “Proportional Symbol” option on the “Size” tab, as this is a basic requirement of many maps and would communicate the data to the reader more clearly. Hopefully Tableau can resolve these technical limitations, making mapping a more accessible format for visualizing many dataset types.

With gaps in California’s wildfire history data, many recommendations can be made. While this visualization looked at the general number of fires per month by cause, it would be interesting to go deeper with climate or weather data, such as whether an increasing number of thunderstorms or warmer summers is sparking more fires in the 2000s than in the 1900s. Additionally, wildfire distributions could be visualized alongside urban sprawl, to show whether fires within range of urban centres, and therefore of people, are ranked as more serious hazards than those in the wilderness. Especially since the majority of wildfires are caused by people, it would be important to point out major campgrounds and residential areas and their potential association with the wildfires around them. Also useful would be mapping the time since areas last burned, as this quantifies how long vegetation has had to regrow and natural fuels to build up, which can help predict the size of future wildfires if sparked; this is important for residential areas near zones of high natural-fuel buildup, and even for insurance companies looking to locate large fire-prone areas. Overall, improving a visualization such as this requires building the context around it, such as filling gaps in wildfire history through reviewing historical literature and surveying, as well as deriving wildfire risk data from environmental and anthropogenic sources.