Lab 04 - University of Edinburgh Art Collection

The University of Edinburgh Art Collection “supports the world-leading research and teaching that happens within the University. Comprised of an astonishing range of objects and ideas spanning two millennia and a multitude of artistic forms, the collection reflects not only the long and rich trajectory of the University, but also major national and international shifts in art history.”

See the sidebar here and note that there are 2970 pieces in the art collection we’re collecting data on.

In this workshop we’ll scrape data on all art pieces in the Edinburgh College of Art collection.

Before getting started, let’s check that a bot has permissions to access pages on this domain.
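One way to run this check is with the paths_allowed() function from the robotstxt package, which reads the site's robots.txt for us. (The URL below is an assumption; substitute the actual URL of the collection pages you're scraping.)

```r
library(robotstxt)

# returns TRUE if bots are permitted to access this path,
# according to the site's robots.txt
# (the URL here is a placeholder -- use the collection site's actual URL)
paths_allowed("https://collections.ed.ac.uk/art")
```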

## [1] TRUE

Learning goals

Complete the following steps before you join the live workshop!

Workshop prep

You have three tasks you should complete before the workshop:

Complete the following steps during the live workshop with your team.

As usual, start out by cloning your lab repo, named lab-04-uoe-art-YOUR_TEAMNAME. Each team member should clone the repo and you should take turns working on various parts of the lab. Note that each team member should make commits to the repository to be eligible for points for this assignment. And remember that when each team member takes over, their first action should be to pull from repo before adding more content.

R scripts vs. R Markdown documents

Today we will be using both R scripts and R Markdown documents:

Here is the organization of your repo, and the corresponding section in the lab that each file will be used for:

|-lab-04.Rmd                      # analysis
|-scripts                         # webscraping
|  |- 01-scrape-page-one.R        # scraping a single page
|  |- 02-scrape-page-function.R   # functions
|  |- 03-scrape-page-many.R       # iteration

Scraping a single page

Tip: To run the code you can highlight or put your cursor next to the lines of code you want to run and hit Command+Enter (macOS) or Ctrl+Enter (Windows/Linux).

Work in scripts/01-scrape-page-one.R.

We will start off by scraping data on the first 10 pieces in the collection from here.

First, we define a new object called first_url, which is the link above. Then, we read the page at this url with the read_html() function from the rvest package. The code for this is already provided in 01-scrape-page-one.R.

# set url
first_url <- "*:*/Collection:%22edinburgh+college+of+art%7C%7C%7CEdinburgh+College+of+Art%22?offset=0"

# read html page
page <- read_html(first_url)

For the ten pieces on this page we will extract title, artist, and link information, and put these three variables in a data frame.


Let’s start with titles. We make use of the SelectorGadget to identify the tags for the relevant nodes:

page %>%
  html_nodes(".iteminfo") %>%
  html_node("h3 a")
## {xml_nodeset (10)}
##  [1] <a href="./record/112340?highlight=*:*">untitled                         ...
##  [2] <a href="./record/112342?highlight=*:*">Untitled                         ...
##  [3] <a href="./record/112343?highlight=*:*">Untitled                         ...
##  [4] <a href="./record/112344?highlight=*:*">Untitled                         ...
##  [5] <a href="./record/112354?highlight=*:*">Untitled                         ...
##  [6] <a href="./record/112356?highlight=*:*">Untitled                         ...
##  [7] <a href="./record/112352?highlight=*:*">Untitled                         ...
##  [8] <a href="./record/112349?highlight=*:*">Untitled                         ...
##  [9] <a href="./record/112351?highlight=*:*">Anatomy Test Jeckon H            ...
## [10] <a href="./record/112353?highlight=*:*">Untitled                         ...

Then we extract the text with html_text():

page %>%
  html_nodes(".iteminfo") %>%
  html_node("h3 a") %>%
  html_text()
##  [1] "untitled                                                            (1984)"
##  [2] "Untitled                            (2019)"                                
##  [3] "Untitled                                                            (1961)"
##  [4] "Untitled                            (2019)"                                
##  [5] "Untitled                            (2019)"                                
##  [6] "Untitled                            (2019)"                                
##  [7] "Untitled                            (2019)"                                
##  [8] "Untitled                            (2019)"                                
##  [9] "Anatomy Test Jeckon H                            (2019)"                   
## [10] "Untitled                            (2019)"

And get rid of all the spurious white space in the text with str_squish(), which trims leading and trailing whitespace and collapses repeated interior whitespace into a single space.

Take a look at the help for str_squish() to find out more about how it works and how it’s different from str_trim().
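As a quick illustration of the difference (the example string below is made up):

```r
library(stringr)

x <- "  Untitled                            (2019)  "

# str_trim() removes leading/trailing whitespace only
str_trim(x)
## [1] "Untitled                            (2019)"

# str_squish() also collapses interior runs of whitespace
str_squish(x)
## [1] "Untitled (2019)"
```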

page %>%
  html_nodes(".iteminfo") %>%
  html_node("h3 a") %>%
  html_text() %>%
  str_squish()
##  [1] "untitled (1984)"              "Untitled (2019)"             
##  [3] "Untitled (1961)"              "Untitled (2019)"             
##  [5] "Untitled (2019)"              "Untitled (2019)"             
##  [7] "Untitled (2019)"              "Untitled (2019)"             
##  [9] "Anatomy Test Jeckon H (2019)" "Untitled (2019)"

And finally save the resulting data as a vector of length 10:

titles <- page %>%
  html_nodes(".iteminfo") %>%
  html_node("h3 a") %>%
  html_text() %>%
  str_squish()

Put it all together

  1. Fill in the blanks to organize everything in a tibble.
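If you have saved the three vectors as titles, artists, and links (these names are assumptions based on the steps above), the tibble might be assembled like so:

```r
library(tibble)

# combine the three vectors into a 10 x 3 data frame
first_ten <- tibble(
  title  = titles,
  artist = artists,
  link   = links
)
```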

Scrape the next page

  1. Click on the next page, and grab its URL. Fill in the blank to define a new object: second_url. Copy-paste the code from the top of the R script to scrape the new set of art pieces, and save the resulting data frame as second_ten.

✅ ⬆️ If you haven’t done so recently, commit and push your changes to GitHub with an appropriate commit message. Make sure to commit and push all changed files so that your Git pane is cleared up afterwards.


Work in scripts/02-scrape-page-function.R.

You’ve been using R functions; now it’s time to write your own!

Let’s start simple. Here is a function that takes in an argument x, and adds 2 to it.

add_two <- function(x){
  x + 2
}

Let’s test it:

add_two(3)
## [1] 5

add_two(10)
## [1] 12

The skeleton for defining functions in R is as follows:

function_name <- function(input){
  # do something with the input(s)
  # return something
}

Then, a function for scraping a page should look something like:

Reminder: Function names should be short but evocative verbs.

function_name <- function(url){
  # read page at url
  # extract title, link, artist info for n pieces on page
  # return an n x 3 tibble
}

  1. Fill in the blanks using code you already developed in the previous exercises. Name the function scrape_page.
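Under those steps, one possible sketch of the function looks like the following. Note that the ".artist" selector is an assumption; verify the actual node with SelectorGadget before relying on it.

```r
library(rvest)
library(stringr)
library(tibble)

scrape_page <- function(url) {
  # read page at url
  page <- read_html(url)

  # titles live inside the h3 links of each .iteminfo node
  titles <- page %>%
    html_nodes(".iteminfo") %>%
    html_node("h3 a") %>%
    html_text() %>%
    str_squish()

  # the links are the href attributes of the same nodes
  # (note these are relative links)
  links <- page %>%
    html_nodes(".iteminfo") %>%
    html_node("h3 a") %>%
    html_attr("href")

  # ".artist" is an assumed selector -- confirm it with SelectorGadget
  artists <- page %>%
    html_nodes(".iteminfo") %>%
    html_node(".artist") %>%
    html_text() %>%
    str_squish()

  # return an n x 3 tibble
  tibble(title = titles, artist = artists, link = links)
}
```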

Test out your new function by running the following in the console. Does the output look right? Discuss with teammates whether you’re getting the same results as before.

scrape_page(first_url)
scrape_page(second_url)

✅ ⬆️ If you haven’t done so recently, commit and push your changes to GitHub with an appropriate commit message. Make sure to commit and push all changed files so that your Git pane is cleared up afterwards.


Work in scripts/03-scrape-page-many.R.

We went from manually scraping individual pages to writing a function to do the same. Next, we will work on making our workflow a little more efficient by using R to iterate over all pages that contain information on the art collection.

Reminder: The collection has 2970 pieces in total.

That means we need to develop a list of URLs (of pages that each have 10 art pieces), write some code that applies the scrape_page() function to each page, and combine the resulting data frames from each page into a single data frame with 2970 rows and 3 columns.

List of URLs

Click through the first few of the pages in the art collection and observe their URLs to confirm the following pattern:

[sometext]offset=0     # Pieces 1-10
[sometext]offset=10    # Pieces 11-20
[sometext]offset=20    # Pieces 21-30
[sometext]offset=30    # Pieces 31-40
...
[sometext]offset=2960  # Pieces 2961-2970

We can construct these URLs in R by pasting together two pieces: (1) a common (root) text for the beginning of the URL, and (2) numbers starting at 0, increasing by 10, all the way up to 2960. Two new functions are helpful for accomplishing this: glue() for pasting pieces of text together and seq() for generating a sequence of numbers.
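A sketch of that construction (the root text is a placeholder, as above; fill in the actual beginning of the URL):

```r
library(glue)

# everything in the URL up to and including "offset="
root <- "[sometext]offset="

# 0, 10, 20, ..., 2960 -- one offset per page of ten pieces
offsets <- seq(from = 0, to = 2960, by = 10)

# paste the root onto each offset, giving a vector of 297 URLs
urls <- glue("{root}{offsets}")
```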

  1. Fill in the blanks to construct the list of URLs.


Finally, we’re ready to iterate over the list of URLs we constructed. We will do this by mapping the function we developed over the list of URLs. There are a series of mapping functions in R (which we’ll learn about in more detail tomorrow), and they each take the following form:

map([x], [function to apply to each element of x])
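For instance, to apply sqrt() to each element of a numeric vector and get a numeric vector back, you could use map_dbl() from purrr:

```r
library(purrr)

# apply sqrt to each element, returning a numeric vector
map_dbl(c(1, 4, 9), sqrt)
## [1] 1 2 3
```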

In our case x is the list of URLs we constructed and the function to apply to each element of x is the function we developed earlier, scrape_page. And since we want a data frame as the result, we use the map_dfr() function:

map_dfr(urls, scrape_page)

  1. Fill in the blanks to scrape all pages, and to create a new data frame called uoe_art.

Write out data

  1. Finally write out the data frame you constructed into the data folder so that you can use it in the analysis section.

Aim to make it to this point during the workshop.

✅ ⬆️ If you haven’t done so recently, commit and push your changes to GitHub with an appropriate commit message. Make sure to commit and push all changed files so that your Git pane is cleared up afterwards.


Work in lab-04.Rmd for the rest of the lab.

Now that we have a tidy dataset that we can analyze, let’s do that!

We’ll start with some data cleaning, to clean up the dates that appear at the end of some title text in parentheses. Some of these are years, others are more specific dates, some art pieces have no date information whatsoever, and others have some non-date information in parentheses. This should be interesting to clean up!

First thing we’ll try is to separate the title column into two: one for the actual title and the other for the date if it exists. In human speak, we need to

“separate the title column at the first occurrence of ( and put the contents on one side of the ( into a column called title and the contents on the other side into a column called date.”

Luckily, there’s a function that does just this: separate()!

And once we have completed separating the single title column into title and date, we need to do further clean-up in the date column to get rid of extraneous )s with str_remove(), capture year information, and save the data as a numeric variable.

Hint: Remember escaping special characters from yesterday’s lecture? You’ll need to use that trick again.
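Putting those pieces together, the wrangling might be sketched as follows (assuming your data frame is called uoe_art; the exact pipeline is for you to fill in):

```r
library(tidyverse)

uoe_art <- uoe_art %>%
  # split title at the first "(" -- escaped, since ( is a special character
  separate(title, into = c("title", "date"), sep = "\\(") %>%
  # drop the stray ")" and coerce what's left to a number
  mutate(year = str_remove(date, "\\)") %>% as.numeric())
```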

  1. Fill in the blanks to implement the data wrangling we described above. Note that this will result in some warnings when you run the code, and that’s OK! Read the warnings and explain what they mean, and why we are OK with leaving them in, given that our objective is just to capture the year where it’s convenient to do so.
  1. Print out a summary of the data frame using the skim() function. How many pieces have artist info missing? How many have year info missing?

  2. Make a histogram of years. Use a reasonable binwidth. Do you see anything out of the ordinary?
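One way to draw the histogram with ggplot2 (the binwidth of 10 is just a starting point, and the uoe_art data frame with a year column is assumed from the earlier steps):

```r
library(ggplot2)

# distribution of years; tweak binwidth until the shape is readable
ggplot(uoe_art, aes(x = year)) +
  geom_histogram(binwidth = 10)
```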

Hint: You’ll want to use mutate() and if_else() or case_when() to implement the correction.

  1. Find which piece has the out-of-the-ordinary year and go to its page on the art collection website to find the correct year for it. Can you tell why our code didn’t capture the correct year information? Correct the error in the data frame and visualize the data again.

🧶 ✅ ⬆️ If you haven’t done so recently, knit, commit, and push your changes to GitHub with an appropriate commit message. Make sure to commit and push all changed files so that your Git pane is cleared up afterwards.

  1. Who is the most commonly featured artist in the collection? Do you know them? Any guess as to why the university has so many pieces from them?
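One way to approach this, sketched with dplyr (assuming the uoe_art data frame with an artist column):

```r
library(dplyr)

# count pieces per artist, most frequent first
uoe_art %>%
  count(artist, sort = TRUE)
```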

Hint: str_subset() can be helpful here. You might also consider how to capture titles where the word appears as both “child” and “Child”.

  1. Final question! How many art pieces have the word “child” in their title? Try to figure it out, and ask for help if you’re stuck.
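A sketch along the lines of the hint above: regex() with ignore_case = TRUE handles both capitalizations (the uoe_art data frame is assumed).

```r
library(stringr)

# titles containing "child" in any capitalization
matches <- str_subset(uoe_art$title, regex("child", ignore_case = TRUE))
length(matches)
```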

🧶 ✅ ⬆️ Knit, commit, and push your final changes to GitHub with an appropriate commit message. Make sure to commit and push all changed files so that your Git pane is cleared up afterwards.