Goal

The goal of this session is to learn how to get data from the World Wide Web using R. Although we are going to cover a few concepts first, the core of this session will be spent on getting data from websites that do not offer any interface to automate information retrieval, such as Web services (REST, SOAP) or application programming interfaces (APIs). In that case it is necessary to scrape the information embedded in the website itself.

When you want to extract information or download data from a website, and the data are too large to download efficiently by hand or need to be updated frequently, you should first:

  1. Check if the website offers Web services or APIs developed to retrieve the data
  2. Check if an R package (or a package in another language you know) has been developed by others as a wrapper around these Web services or APIs to facilitate their use
  3. Nothing found? Well, let’s code this ourselves then!

Which R packages are available?

As usual, a good starting point is the CRAN Task View to get an idea of the R packages available: https://CRAN.R-project.org/view=WebTechnologies

Some of the key packages are httr, xml2, rvest, and curl.

There are also functions in the utils package, such as download.file(). Note that these functions do not handle https well, which was part of the motivation behind the curl R package.
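As a quick illustration, fetching a single file with download.file() looks like this (the URL and destination file name below are placeholders):

# Download a single file to disk (placeholder URL and file name)
download.file("http://example.com/data.csv", destfile = "data.csv")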

In this session we are going to use rvest: first in a short tutorial, followed by a challenge in groups.

Some background

HTTP: Hypertext Transfer Protocol

URL

At the heart of web communications is the request message, which is sent to a Uniform Resource Locator (URL). The basic structure of a URL is: protocol://host:port/resource-path?query

The protocol is typically http, or https for secure communications. The default port is 80, but a different one can be set explicitly. The resource path is the local path to the resource on the server.
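As a small sketch of how these pieces fit together, httr::parse_url() splits a URL into its components (the URL below is made up):

library("httr")

# Split a made-up URL into its components
parsed <- parse_url("http://www.example.com:80/path/to/resource?query=value")

parsed$scheme    # the protocol, here "http"
parsed$hostname  # the host, here "www.example.com"
parsed$port      # the port, here "80"
parsed$path      # the resource path on the server
parsed$query     # the query parameters, as a named list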

Request

The actions that should be performed on the host are specified via HTTP verbs. Today we are going to focus on two actions that are often used in web forms:

  • GET: fetch an existing resource. The URL contains all the necessary information the server needs to locate and return the resource.
  • POST: create a new resource. POST requests usually carry a payload that specifies the data for the new resource.
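A minimal sketch of both verbs with the httr package (the URL, query string, and form fields below are made up for illustration):

library("httr")

# GET: all the information needed is in the URL, here including a query string
resp_get <- GET("http://example.com/search", query = list(q = "web scraping"))

# POST: the data is carried in the body of the request, e.g. like a web form
resp_post <- POST("http://example.com/form",
                  body = list(name = "R", topic = "scraping"),
                  encode = "form")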

Response

Status codes:

  • 1xx: Informational Messages
  • 2xx: Successful; best known is 200: OK, the request was successfully processed
  • 3xx: Redirection
  • 4xx: Client Error; the famous 404: resource not found
  • 5xx: Server Error
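With httr, the status code of a response can be checked before parsing its content. Continuing with the hypothetical resp_get object from the sketch above:

# Check the status of a response
status_code(resp_get)          # e.g. 200 if the request succeeded
http_status(resp_get)$message  # human-readable version, e.g. "Success: (200) OK"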

HTML

The HyperText Markup Language (HTML) describes and defines the content of a webpage. Other technologies besides HTML are generally used to describe a webpage’s appearance/presentation (CSS) or functionality (JavaScript).

“Hyper Text” in HTML refers to links that connect webpages to one another, either within a single website or between websites. Links are a fundamental aspect of the Web.

HTML uses “markup” to annotate text, images, and other content for display in a Web browser. HTML markup includes special “elements” such as <head>, <title>, <body>, <header>, <footer>, <article>, <section>, <p>, <div>, <span>, <img>, and many others.

Using your web browser, you can inspect the HTML content of any webpage on the World Wide Web.
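As a small illustration, a chunk of HTML can also be read directly into R from a character string with read_html() (the snippet below is made up):

library("rvest")

# A made-up HTML snippet read into R as an XML document
html_doc <- read_html("<html><body>
  <h1>My title</h1>
  <p class='intro'>A first paragraph with a <a href='http://example.com'>link</a>.</p>
</body></html>")

html_doc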

XML

The eXtensible Markup Language (XML) provides a general approach for representing all types of information, such as data sets containing numerical and categorical variables. XML provides the basic, common, and quite simple structure and syntax for all “dialects” or vocabularies. For example, HTML, SVG and EML are specific vocabularies of XML.

XPath

XPath is quite simple, yet very powerful. With a syntax similar to a file system hierarchy, it allows you to identify nodes of interest by specifying paths through the tree, based on node names, node content, and a node’s relationship to other nodes in the hierarchy. We typically use XPath to locate nodes in a tree and then use R functions to extract data from those nodes and bring the data into R.
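Here is a small sketch with the xml2 package, using XPath expressions on a made-up XML document:

library("xml2")

# A made-up XML document
xml_doc <- read_xml("<catalog>
  <book year='1999'><title>R for beginners</title></book>
  <book year='2017'><title>Web scraping with R</title></book>
</catalog>")

# XPath: all <title> nodes anywhere in the tree
xml_text(xml_find_all(xml_doc, "//title"))

# XPath: only the <book> nodes whose year attribute is greater than 2000
xml_find_all(xml_doc, "//book[@year > 2000]")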

CSS

Cascading Style Sheets (CSS) is a stylesheet language used to describe the presentation of a document written in HTML or XML. CSS describes how elements should be rendered on screen, on paper, in speech, or on other media. In CSS, selectors are used to target the HTML elements on a web page that we want to style. There is a wide variety of CSS selectors available, allowing for fine-grained precision when selecting elements to style.
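The same selectors can be used to locate elements when scraping. A small sketch with rvest, reusing the hypothetical html_doc object from the HTML example above:

# CSS selector: all <p> elements with class "intro"
html_nodes(html_doc, "p.intro")

# CSS selector: the <a> elements nested inside those paragraphs
html_nodes(html_doc, "p.intro a")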

Web scraping workflow

  1. Check for existing API and existing R packages
  2. Information identification: use your web browser inspector and/or http://selectorgadget.com/ to inspect the content and structure of the webpages you want to extract information from
  3. Choice of strategy: e.g. XPath, CSS selectors, …
  4. Information extraction: Choose the relevant R package(s) to accomplish your data extraction and code it


rvest

rvest is a set of wrapper functions around the xml2 and httr packages.

Main functions

  • read_html: read a webpage into R as XML (document and nodes)
  • html_nodes: extract pieces out of HTML documents using XPath and/or CSS selectors
  • html_attr: extract attributes from HTML, such as href
  • html_text: extract text content

For more information on the package, see its documentation on CRAN: https://CRAN.R-project.org/package=rvest

Quick example

Let us get started with organizing your evenings. The Funk Zone is a pretty fun part of Santa Barbara, where you can find wineries, bars and restaurants. Check this out: http://santabarbaraca.com/explore-and-discover-santa-barbara/neighborhoods-towns/santa-barbara/the-funk-zone/

We are going to scrape the name of the places and their websites out of this webpage and compile this information into a csv, so you will be able to quickly choose where to go to relax at the end of the day.

1. Look at the website structure using our web browser inspector:

(Screenshots: web browser inspector showing the structure of the webpage)

2. Use this information to extract the bar names from the webpage:

#install.packages("rvest")
library("rvest")

URL <- "http://santabarbaraca.com/explore-and-discover-santa-barbara/neighborhoods-towns/santa-barbara/the-funk-zone/"

# Read the webpage into R
webpage <- read_html(URL)

# Parse the webpage for bars
bars <- html_nodes(webpage, ".neighborhoods-towns-business .neighborhoods-towns-business-title")

# Extract the name of the bar
bar_names <- html_text(bars)

bar_names
##  [1] "Cutler’s Artisan Spirits"                 
##  [2] "DV8 Cellars"                              
##  [3] "Eat This, Shoot That! Food and Wine Tours"
##  [4] "Kunin Wines"                              
##  [5] "The Lark"                                 
##  [6] "Municipal Winemakers"                     
##  [7] "Les Marchands"                            
##  [8] "Loquita Santa Barbara"                    
##  [9] "Riverbench Vineyard & Winery"             
## [10] "Santa Barbara Wine Collective"            
## [11] "The Valley Project"                       
## [12] "Santa Barbara Winery"

3. Extract the URLs of the websites:

# Parse the page for the nodes containing the website URLs
websites <- html_nodes(webpage, ".neighborhoods-towns-business .website-button")

# Extract the reference 
websites_urls <- html_attr(websites, "href")

websites_urls
##  [1] "http://www.cutlersartisan.com "        
##  [2] "http://www.dv8cellars.com "            
##  [3] "http://www.eatthisshootthat.com "      
##  [4] "http://www.kuninwines.com "            
##  [5] "http://www.thelarksb.com "             
##  [6] "http://www.lesmarchandswine.com "      
##  [7] "http://www.loquitasb.com "             
##  [8] "http://www.riverbench.com "            
##  [9] "http://santabarbarawinecollective.com "
## [10] "http://www.thevalleyprojectwines.com " 
## [11] "http://www.sbwinery.com "

We got back only 11 URLs, but 12 bar names!? Looking at the website, we realize that for one of the bars no website was provided. Yes, web scraping is messy. A quick Google search lets us find the missing URL: http://www.municipalwinemakers.com/. Let us add it to the list:

# Get the index of the missing bar
mb_ind <- which(bar_names == "Municipal Winemakers")

# add the URL
websites_urls <- append(websites_urls, "http://www.municipalwinemakers.com/", after = mb_ind-1)

4. Create a data frame and save it as a csv file:

# Create the data frame
my_cool_bars <- data.frame(funkzone_bar = bar_names, 
                           website = websites_urls)

# my_cool_bars

# write it to csv
write.csv(my_cool_bars, "~/oss/places_you_will_go.csv", row.names = FALSE)

Challenge

In groups of two, work on downloading all the fishery shapefiles for the Gulf of Mexico from this NOAA website: http://sero.nmfs.noaa.gov/maps_gis_data/fisheries/gom/GOM_index.html


Final Thoughts

Please always check that the data you are scraping are publicly available and that you are not gathering any personal or confidential information. Also, please do not overload the web server you are scraping: when retrieving a large amount of data, it is often recommended to insert pauses between the requests sent to the web server so it can handle other requests, as sketched below.
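A minimal sketch of such a pause with base R, assuming a hypothetical character vector urls_to_scrape of pages to visit:

# urls_to_scrape is a hypothetical character vector of pages to read
for (url in urls_to_scrape) {
  page <- read_html(url)
  # ... extract and store the information you need ...
  Sys.sleep(2)  # pause for 2 seconds before the next request
}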


References and sources