Monthly Archives: April 2017

Hexmaps in QGIS

Hexes! If you like boardgames you probably love those awesome maps where terrain has been transformed into a grid of hexagons (popularly known as hexes). Beyond this geeky interest, hex-based maps can be used to create interesting visualizations where you want to colour the map based on a specific variable.

These visualizations are technically known as choropleth maps and they divide the space into a set of polygons that could be anything: country boundaries, Voronoi diagrams or regular tiles.

A choropleth map that visualizes the fraction of Australians that identified as Anglican at the 2011 census by Toby Hudson

Regular tiles are interesting if you don’t have a relevant distribution of pre-existing polygons. But what type of tile should you use? The typical approach is a grid of squares, but as boardgamers already know this is far from perfect. The issue is that the distance between a square’s centre and its neighbours depends on their configuration: it is larger for the neighbours in the corners than for the ones at the right/left/top/bottom, as Pythagoras knew some centuries ago. Hexagons capture the spatial relation between tiles better because the 6 neighbours of each hexagon all sit at the same distance from its centre. Also, did I mention that hexmaps look awesome?
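
If you want to put numbers on this, here is a tiny R sketch (plain arithmetic, assuming a grid spacing of 1):

[generic linenumbers="False"] # square grid: orthogonal vs diagonal neighbour distances
spacing <- 1
spacing             # distance to the right/left/top/bottom neighbours
sqrt(2) * spacing   # distance to the corner neighbours, ~1.41 times further away
# in a hexagonal grid all 6 neighbours sit at exactly the same
# centre-to-centre distance, so no neighbour is privileged
[/generic]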

Ok, let’s see how we can create a hexagon-based map with QGIS.

Roman camps in Scotland

We know that Roman legions ventured beyond the Antonine Wall; the sources talk about military campaigns, and the archaeological evidence supports this idea because several temporary marching camps have been found in Scotland. But where do we find these camps? To answer this question let’s create a hexmap where the hexes are coloured based on the number of temporary camps they contain.

Load the dataset

This zip file contains 2 vector files in Shapefile format:
– scotland_boundaries.shp has the boundaries of the region
– roman_camps.shp is the set of identified Roman temporary camps compiled by Canmore.

Roman temporary camps in Scotland

Install the mmqgis plugin

Go to Plugins -> Manage and Install Plugins and install mmqgis. This plugin extends the functionality of QGIS for vector map layers.

Install mmqgis using the plugin manager

Create an hexagonal grid

Once mmqgis is enabled you can use its functionality to create the hexes; go to MMQGIS -> Create -> Create Grid Layer and select Hexagons as the layer type.

You should set the extent to the scotland_boundaries layer because we want the grid to cover the entire region. Finally, you can define the size of the hexes; I chose 25km here because it is roughly the distance a legion could cover in a day.

Parameters for a 25km-based hexagonal grid of Scotland

Intersect the grid with the boundaries

You probably got a hexagonal grid covering a large rectangle; this is kind of useless because, to my knowledge, the Romans did not have submarines, so we should remove from the grid everything that is not land. In essence, we want to remove everything outside the scotland_boundaries layer. You can do this with the Intersection command in Vector -> Geoprocessing Tools.

You need to specify the hex grid as the Input layer and the boundaries as the Intersect layer. Please be aware that this process will take a while, especially if you defined a small hex size.

Count the number of camps per hex

The last step is to create a new hexagonal layer with an attribute storing the number of camps per hex. This algorithm is not in the menus, so you should open the Toolbox inside the Processing menu. Go to QGIS -> Vector analysis tools and select Count points in polygon.

The algorithm is hidden in the toolbox

The input parameters for the algorithm are straightforward; fill them and create this new layer.

Not much to explain here…

Visualize the result

Double-click on the new layer and set the Style type to Graduated, based on the attribute NUMPOINTS. Classify using a decent color ramp and you are done!

Standard Deviation is a good color mode for this type of data

Looks like an 80s-Avalon Hill-style wargame!
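
By the way, if you prefer scripting to clicking through dialogs, the whole workflow above (hexagonal grid, clip to the land boundaries, count camps per hex, quick plot) can also be reproduced in R. This is not the method used in this post, just a rough sketch with the sf package, assuming the two Shapefiles from the zip file are in your working directory and use a metric CRS:

[generic linenumbers="False"] library(sf)

boundaries <- st_read("scotland_boundaries.shp")
camps <- st_read("roman_camps.shp")

# hexagonal grid covering the boundaries; cellsize in CRS units (here 25 km)
hexes <- st_sf(geometry = st_make_grid(boundaries, cellsize = 25000, square = FALSE))

# keep only the parts of the grid that overlap land
hexes <- st_intersection(hexes, st_union(boundaries))

# count the camps falling inside each hex and plot the result
hexes$NUMPOINTS <- lengths(st_intersects(hexes, camps))
plot(hexes["NUMPOINTS"])
[/generic]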

Discussion

You can use this method to overlay several layers of information:

Blending hexes with the Stamen Terrain layer

What can we say from this visualization? Some interesting spatial patterns are clearly visible:
1. The Firth of Forth seems to concentrate the majority of camps
2. The Romans definitely did not like Western Scotland. They probably did not want to move far from the coast where their fleet supported them
3. The route followed by the armies was probably used as the basis for the Gask Ridge fortification line

Acknowledgements

This post was heavily inspired by the entry written by Anita Graser in her blog.

Identifying gaps in your data

One of the first things you want to do when you explore a new dataset is to identify possible gaps. Sample size and the number of variables are relevant but…how many observations do you have for each variable? This distinction is even more relevant for archaeologists because (if we are being honest…) most of our data has huge gaps.

Just to make the post clear:
– The Sample is the set of entities you collected.
– Variables are measures and properties of this sample.
– Observations are the values of the variables for each item in your sample.

The identification of variables with a decent number of observations is crucial for several processes. Let’s say that you have a bunch of archaeological sites and you want to create a map where the size of each dot (i.e. site) is proportional to the area of the site. This would be a bad idea if 90% of your sample does not have an assigned area because these points will be ignored.

This is even more relevant if you want to do some modelling (e.g. a linear regression). Many statistical models silently drop observations that have missing values, so you have to be very careful about this. Let’s see how we can explore the issue.
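
A minimal illustration of that behaviour, with made-up numbers:

[generic linenumbers="False"] # lm() silently drops every row with a missing value in the model variables
df <- data.frame(y = c(10, 12, 9, 14), x = c(1.2, NA, 0.8, 1.5))
model <- lm(y ~ x, data = df)
nobs(model)    # 3, not 4: the row with the missing x was quietly removed
[/generic]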

Example: Arrowheads in the UK

Loading the dataset

We downloaded a dataset of arrowheads collected by the Portable Antiquities Scheme:

[generic linenumbers="False"] arrowheads <- read.csv("https://git.io/v9JJd", na.strings="")
[/generic]

As you can see, I specified that empty strings of text should be read as NA (i.e. Not Available). If you don’t do that then they will be read as empty strings, which is different from not having a value.
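
If the difference sounds abstract, a made-up vector makes it obvious:

[generic linenumbers="False"] x <- c("Arrowhead", "", NA)
x == ""      # TRUE for the empty string, NA for the missing value
is.na(x)     # FALSE FALSE TRUE: only the last element is really missing
[/generic]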

If we take a look at the newly created arrowheads data frame we will see a bunch of interesting metrics:

[generic linenumbers="False"] str(arrowheads)
[/generic]

You should get something like:

[generic linenumbers="False"] 'data.frame': 1079 obs. of 13 variables:
$ id : int 522443 179174 233283 199204 106547 45059 508485 646649 401936 133638 …
$ classification : Factor w/ 81 levels "Arrowhead","Barb and Tanged",..: NA NA 10 NA NA NA NA NA NA NA …
$ subClassification: Factor w/ 19 levels "barbed and tanged",..: NA NA NA NA NA NA NA NA NA NA …
$ length : num 28 55.2 45.3 100.3 39 …
$ width : num 18 11 25 7.6 11 …
$ thickness : num 3 2 5.11 6.7 NA 3.54 3.47 6.5 1 NA …
$ diameter : num 2.2 4.5 4.58 5.1 6 6.82 6.96 7.5 8 8 …
$ weight : num NA 6.1 2.78 16.69 4.76 …
$ quantity : int 1 1 1 1 1 1 1 1 1 1 …
$ broadperiod : Factor w/ 12 levels "BRONZE AGE","EARLY MEDIEVAL",..: 1 4 1 4 1 4 4 4 4 4 …
$ fromdate : int -2150 NA -2150 1250 -2150 1066 1066 1200 1066 1200 …
$ todate : int -1500 NA -1500 1499 -800 1350 1400 1300 1500 1499 …
$ district : Factor w/ 188 levels "Arun","Ashford",..: 45 181 74 NA 26 93 177 137 186 53 …
[/generic]

See all these NA values? These are the gaps in our data. We can suspect that diameter or subClassification will not be very useful, but with over 1000 arrowheads it is difficult to know which variables should be used in the analysis.
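
Before reaching for any plot, a quick numeric answer is one line away:

[generic linenumbers="False"] # number of non-missing observations per variable
colSums(!is.na(arrowheads))
# or the proportion of missing values per variable
round(colMeans(is.na(arrowheads)), 2)
[/generic]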

Visualizing gaps

How can you identify these gaps? My preferred method is to visualize them using the Amelia package (yes, awesome name for an R package on missing data…). Its use is straightforward:

[generic linenumbers="False"] install.packages("Amelia")
library("Amelia")
missmap(arrowheads)
[/generic]

Missingness map

The structure is classic R: rows are sample units while columns are variables. Red cells are the ones that contain a value while the rest are empty.

Interpretation

The map of missing values allows us to make informed decisions on how to proceed with the analysis. In this case:
– We should not use diameter for analysis because it is missing for most of the sample.
– We have almost complete information on broad spatial and temporal coordinates (broadperiod and district).
– classification and subClassification are quite useless here.
– Measures that can be used are: weight, thickness, width and length.

Impact

You can easily see the impact by creating one plot that uses diameter and another one that does not:
[generic linenumbers="False"] ggplot(arrowheads, aes(x=width, y=diameter, col=broadperiod)) + geom_point() + theme_bw() + facet_wrap(~broadperiod)
[/generic]

Scatterplot width vs diameter

Not looking good… R even warns you that 1051 points were removed from the plot, and most of the periods are gone too. Compare it with:

[generic linenumbers="False"] ggplot(arrowheads, aes(x=width, y=length, col=broadperiod)) + geom_point() + theme_bw() + facet_wrap(~broadperiod)
[/generic]

Scatterplot width vs length

Only 84 rows contained missing values, much better!

Loading and visualizing Shapefiles in R

Geographical Information Systems such as QGIS are common in the typical archaeologist’s toolbox. However, the complexities of our datasets mean that we need additional tools to identify patterns in time and space.

Imagine that you are working with a set of points in a vector format such as a Shapefile. It can be a set of settlements, C14 dates or shipwrecks…it doesn’t matter; you will want to combine spatial analysis with other methods. This is particularly true when you are trying to get a feeling of the dataset by performing Exploratory Data Analysis, so you need to compile summary statistics and create some meaningful visualisations.

My personal approach is to combine QGIS with R and exploit the awesome ggplot package to get some insight into the dataset beyond spatial patterns. R has some nice spatial functionality, but the cool thing is to integrate these methods with other plots to get a general picture of the case study.

Example: Aircraft crashes in the Orkney islands

The Orkney islands were a key location for the British war effort during the two World Wars because they hosted Scapa Flow: the main naval base of the Royal Navy. For this reason the islands saw intense aerial activity of all sorts: squadrons defending the base, German bombers attacking it, aircraft carrier operations…you name it. Inevitably this activity generated aircraft crashes due to accidents and combat, and these events have been compiled by different initiatives such as the Project Adair-Whitaker or the ARGOS group.

I created a nice Shapefile of aircraft crash sites based on the information provided by Canmore. How can I explore the dataset beyond the spatial dynamics? How many German aircraft were shot down? Which types and squadrons sustained more losses? Let’s take a look!

Download Data

The first thing to do is to load the Shapefile. This format consists of several files, so I created a zip file with the whole set. First of all we need to download it to a temporary directory and extract its contents:

[generic linenumbers="False"] tmp <- tempdir()
url <- "https://github.com/xrubio/pastByNumbers/raw/master/data/aircraft_orkney.zip"
file <- basename(url)
# mode="wb" avoids corrupting the binary zip file on Windows
download.file(url, file, mode="wb")
unzip(file, exdir = tmp)
[/generic]

Load the spatial data

We will use the rgdal library to load the contents of the Shapefile into R:
[generic linenumbers="False"] library(rgdal)
aircraftShp <- readOGR(dsn = tmp, layer="aircraft_orkney")
str(aircraftShp)
[/generic]

aircraftShp is a variable of type SpatialPointsDataFrame from the sp package. It is a container for some interesting information such as the bounding box delimiting our spatial entities or the Coordinate Reference System being used. Ideally we would like to get the attributes of the spatial entities as a data frame so we can use them with most R packages:

[generic linenumbers="False"] aircraft <- as.data.frame(aircraftShp)
str(aircraft)
[/generic]

Maps and planes

First of all let’s plot the spatial coordinates of the aircraft crash sites against a nice background. I created a rectangular extent based on the bounding box of aircraftShp:

[generic linenumbers="False"] aircraftShp@bbox
# extent as c(left, bottom, right, top), slightly padded around the bounding box
location <- c(-3.85, 58.7, -1.95, 59.4)
[/generic]
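
One sanity check worth doing at this point: ggmap works in longitude/latitude, so the layer should be in WGS84. A quick sketch with sp (the reprojection line is only needed if the reported CRS is something else):

[generic linenumbers="False"] library(sp)
proj4string(aircraftShp)    # should report a +proj=longlat / WGS84 CRS
# if not, reproject before plotting:
# aircraftShp <- spTransform(aircraftShp, CRS("+proj=longlat +datum=WGS84"))
[/generic]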

We can use ggmap to download the background:
[generic linenumbers="False"] library(ggplot2)
library(ggmap)
bgMap <- get_map(location=location, source="stamen", maptype="watercolor")
ggmap(bgMap) + geom_point(data=aircraft, aes(x=coords.x1, y=coords.x2))
[/generic]

Aircraft crash sites around Scapa Flow

Exploring other dimensions

This is ok but it is something we can already do with any GIS so…what’s the deal? The thing is that aircraft is an R Data Frame so we can plot any variable using a graphic library, something that R does way better than any GIS. For example, let’s take a look at the temporal dimension:

[generic linenumbers="False"] ggplot(aircraft, aes(x=year, fill=force)) + geom_histogram(binwidth=1) + facet_wrap(~war, scales="free_x")
[/generic]

Losses by force and year

We can also create a table of planes used by the different British squadrons that suffered losses:

[generic linenumbers="False"] ggplot(na.omit(aircraft), aes(y=type, col=force, x=as.factor(sqdn))) + geom_raster()
[/generic]

Aircraft types per squadron
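
If you want the actual counts rather than a plot, a plain cross-tabulation of the same two columns does the job:

[generic linenumbers="False"] # counts of each aircraft type per squadron; rows with NA are dropped by default
with(aircraft, table(type, sqdn))
[/generic]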

Combining everything

We can finally create an infographic combining space, time and other stuff with the library gridExtra:

First, we create nicer versions of the plots we just saw:

[generic linenumbers="False"] g1 <- ggmap(bgMap, extent="panel", darken = c(.4,"white")) +
    geom_point(data=aircraft, aes(x=coords.x1, y=coords.x2, col=war), size=2) +
    geom_text(data=aircraft, aes(x=coords.x1, y=coords.x2, label=type), family = "Trebuchet MS",
              color="grey40", check_overlap=T, nudge_y=0.01) +
    theme(legend.position="bottom") + ggtitle("aircraft losses") + xlab("") + ylab("") +
    scale_color_manual(values=c("indianred2", "goldenrod1"))

g2 <- ggplot(na.omit(aircraft), aes(y=type, col=force, x=as.factor(sqdn))) +
    geom_point(size=3) + theme_bw() + theme(legend.position="bottom") +
    xlab("squadron") + ylab("") + ggtitle("Britain – aircraft types per squadron")

g3 <- ggplot(aircraft, aes(x=year, fill=force)) +
    geom_histogram(binwidth=1, col="black") + facet_wrap(~war, scales="free_x", ncol=1) +
    theme_bw() + theme(legend.position="bottom") +
    ggtitle("losses per year") + xlab("") + ylab("")
[/generic]

And we can then combine the 3 plots into a nice visualization:
[generic linenumbers="False"] library(gridExtra)
grid.arrange(arrangeGrob(arrangeGrob(g1,g2, heights=c(2/3,1/3), ncol=1), g3, widths=c(2/3, 1/3), ncol=2))
[/generic]

Aircraft losses in the Orkney islands during the two World Wars

Interpretation

These visualizations allow us to identify several interesting things going on:

  • The Second World War had way more accidents than the First one.
  • It would seem that the Luftwaffe stopped flying over Scapa Flow after the first years of WW2.
  • The Fleet Air Arm had a very large increase in losses after 1941.
  • Most squadrons flew only one type of plane, the exception being the 771 Naval Air Squadron. This makes total sense given the diversity of missions flown by this squadron.

Calibrating C14 dates

C14 sampling is the most popular technique archaeologists use to estimate absolute dates for sites. The method measures the radiocarbon age of an organic sample, which then needs to be calibrated to calendar years (see the Wikipedia entry for details).

I have always been puzzled that C14 samples are typically calibrated using specialized proprietary software. I don’t usually work with C14, but I was curious to see if you can calibrate samples using R (the language I use for everyday statistical stuff). The answer is that someone already created a package for this (surprise, surprise). It is called Bchron and was developed by Andrew Parnell.

Install and load the library

[generic linenumbers="False"] install.packages("Bchron")
library(Bchron)
[/generic]

Load a dataset

You will need a data frame with at least 2 columns: a) age and b) standard deviation. I used data from the amazing Maes Howe cairn in the Orkney islands, collected from the Scottish Radiocarbon Database in Canmore.

[generic linenumbers="False"] c14 <- read.csv("https://git.io/vS0mY")
[/generic]

Calibrate!

Bchron provides a function called BchronCalibrate that receives 4 parameters:

  1. age
  2. standard deviation
  3. calibration curve
  4. sample ID (optional)

You can calibrate a single date:
[generic linenumbers="False"] singleResult <- BchronCalibrate(3765, 70, "intcal13", "Q-1481")
plot(singleResult)
[/generic]

The cool thing here is that you can also pass a vector of values for each parameter instead of a single value. If you do this, all the dates will be calibrated at the same time:

[generic linenumbers="False"] results <- BchronCalibrate(c14$age, c14$ageSd, rep("intcal13", nrow(c14)), c14$labId)
[/generic]

As you can see in this code block, we pass the columns age and ageSd as the first 2 parameters. The third parameter is a vector of "intcal13" values as long as our sample size (so you are saying that each sample should be calibrated using the "intcal13" curve). We use the column labId as the last argument to help us identify each sample.

So…that's it! You can cycle through the calibrated samples with:
[generic linenumbers="False"] plot(results,pause=T)
[/generic]

You can also export the calibrated samples to a pdf document:
[generic linenumbers="False"] pdf("calibrated.pdf")
plot(results)
dev.off()
[/generic]