Chapter 4: Crawling the Web and Extracting Data
Intro
Today’s session will be dedicated to getting data from the web. This process is also called scraping, since we scrape data off a page’s surface (and remodel it for our purposes). The following picture shows the web scraping cheat sheet that outlines the process of scraping the web. On the left side, you can see the first step in scraping the web, which is requesting the information from the server. This is basically what goes on under the hood when you make requests using a browser. The response is the website, an HTML document that gets parsed into an XML representation, which is then the starting point for your subsequent queries and data extraction.
In the first part of this chapter, you will learn different techniques to get your hands on data. In particular, this will encompass making simple URL requests with read_html(), using session()s to navigate around on a web page, and submitting html_form()s to fill in forms on a web page. The second part will be dedicated to choosing particular contents of the page.
Getting started with rvest
Making requests
The most basic form of making a request is by using read_html() from the xml2 package.
needs(httr, rvest, tidyverse)
page <- read_html("https://en.wikipedia.org/wiki/Tidyverse")

page |> str()
List of 2
$ node:<externalptr>
$ doc :<externalptr>
- attr(*, "class")= chr [1:2] "xml_document" "xml_node"
page |> as.character() |> write_lines("wiki.html")

#page |> html_text()
This is perfectly fine for making requests to static pages where you do not need to take any further action. Sometimes, however, this is not enough, for instance when you need to accept cookies or navigate around on the page.
session()s
However, the slickest way to do this is by using a session(). In a session, R behaves like a normal browser: it stores cookies and allows you to navigate between pages by going session_forward() or session_back(), following session_follow_link()s on the page itself, session_jump_to() a different URL, or submitting html_form()s with session_submit().
First, you start the session by simply calling session().
my_session <- session("https://scrapethissite.com/")
Some servers may not want robots to make requests and will block you for this reason. To circumvent this, we can set a “user agent” in a session. The user agent contains data that the server receives from us when we make the request. Hence, by adapting it, we can trick the server into thinking that we are humans instead of robots. Let’s check the current user agent first:
my_session$response$request$options$useragent
[1] "libcurl/8.7.1 r-curl/5.2.3 httr/1.4.7"
Not very human. We can set it to a common one using the httr package (which powers rvest).
user_a <- user_agent("Mozilla/5.0 (Macintosh; Intel Mac OS X 12_0_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36")
session_with_ua <- session("https://scrapethissite.com/", user_a)
session_with_ua$response$request$options$useragent
[1] "Mozilla/5.0 (Macintosh; Intel Mac OS X 12_0_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36"
You can check the response using my_session$response$status_code – 200 is good.
my_session$response$status_code
[1] 200
When you want to save a page from the session, do so using read_html().
page <- read_html(session_with_ua)
If you want to open a new URL, use session_jump_to().
session_with_ua <- session_with_ua |>
  session_jump_to("https://www.scrapethissite.com/pages/")
session_with_ua
<session> https://www.scrapethissite.com/pages/
Status: 200
Type: text/html; charset=utf-8
Size: 10603
You can also click buttons on the page using CSS selectors or XPATHs (more on them next session!):
session_with_ua <- session_with_ua |>
  session_jump_to("https://www.scrapethissite.com/") |>
  session_follow_link(css = ".btn-primary")
Navigating to </lessons/>.
session_with_ua
<session> http://www.scrapethissite.com/lessons/sign-up/
Status: 200
Type: text/html; charset=utf-8
Size: 24168
Want to go back? Use session_back(); thereafter, you can go session_forward(), too.
session_with_ua <- session_with_ua |>
  session_back()
session_with_ua
<session> https://www.scrapethissite.com/
Status: 200
Type: text/html; charset=utf-8
Size: 8117
session_with_ua <- session_with_ua |>
  session_forward()
session_with_ua
<session> http://www.scrapethissite.com/lessons/sign-up/
Status: 200
Type: text/html; charset=utf-8
Size: 24168
You can look at what your scraper has done with session_history().

session_with_ua |> session_history()
https://www.scrapethissite.com/
https://www.scrapethissite.com/pages/
https://www.scrapethissite.com/
- http://www.scrapethissite.com/lessons/sign-up/
Exercise
- Start a session with the tidyverse Wikipedia page. Adapt your user agent to some sort of different value. Proceed to Hadley Wickham’s page. Go back. Go forth. Jump to Pierre Bourdieu’s Wikipedia page. Check the session_history() to see if it has worked.
Solution. Click to expand!
needs(tidyverse, rvest, httr)
tidyverse_wiki <- "https://en.wikipedia.org/wiki/Tidyverse"
pierre_wiki <- "https://en.wikipedia.org/wiki/Pierre_Bourdieu"
user_agent <- user_agent("Hi, I'm Felix and I'm trying to steal your data.") #can be changed

wiki_session <- session(tidyverse_wiki, user_agent)

wiki_session_jumped <- wiki_session |>
  session_jump_to(tidyverse_wiki) |>
  session_back() |>
  session_forward() |>
  session_jump_to(pierre_wiki)

wiki_session_jumped |> session_history()
Forms
Sometimes we also want to provide certain input, e.g., to provide login credentials or to scrape a website more systematically. That information is usually provided using so-called forms. A <form> element can contain various other elements such as text fields or check boxes. Basically, we use html_form() to extract the form, html_form_set() to define what we want to submit, and html_form_submit() to finally submit it. For a basic example, we search for something on Google.
google <- read_html("http://www.google.com")
search <- html_form(google) |> pluck(1)

search |> str()
List of 5
$ name : chr "f"
$ method : chr "GET"
$ action : chr "http://www.google.com/search"
$ enctype: chr "form"
$ fields :List of 10
..$ ie :List of 4
.. ..$ type : chr "hidden"
.. ..$ name : chr "ie"
.. ..$ value: chr "ISO-8859-1"
.. ..$ attr :List of 3
.. .. ..$ name : chr "ie"
.. .. ..$ value: chr "ISO-8859-1"
.. .. ..$ type : chr "hidden"
.. ..- attr(*, "class")= chr "rvest_field"
..$ hl :List of 4
.. ..$ type : chr "hidden"
.. ..$ name : chr "hl"
.. ..$ value: chr "de"
.. ..$ attr :List of 3
.. .. ..$ value: chr "de"
.. .. ..$ name : chr "hl"
.. .. ..$ type : chr "hidden"
.. ..- attr(*, "class")= chr "rvest_field"
..$ source:List of 4
.. ..$ type : chr "hidden"
.. ..$ name : chr "source"
.. ..$ value: chr "hp"
.. ..$ attr :List of 3
.. .. ..$ name : chr "source"
.. .. ..$ type : chr "hidden"
.. .. ..$ value: chr "hp"
.. ..- attr(*, "class")= chr "rvest_field"
..$ biw :List of 4
.. ..$ type : chr "hidden"
.. ..$ name : chr "biw"
.. ..$ value: NULL
.. ..$ attr :List of 2
.. .. ..$ name: chr "biw"
.. .. ..$ type: chr "hidden"
.. ..- attr(*, "class")= chr "rvest_field"
..$ bih :List of 4
.. ..$ type : chr "hidden"
.. ..$ name : chr "bih"
.. ..$ value: NULL
.. ..$ attr :List of 2
.. .. ..$ name: chr "bih"
.. .. ..$ type: chr "hidden"
.. ..- attr(*, "class")= chr "rvest_field"
..$ q :List of 4
.. ..$ type : chr "text"
.. ..$ name : chr "q"
.. ..$ value: chr ""
.. ..$ attr :List of 8
.. .. ..$ class : chr "lst"
.. .. ..$ style : chr "margin:0;padding:5px 8px 0 6px;vertical-align:top;color:#000"
.. .. ..$ autocomplete: chr "off"
.. .. ..$ value : chr ""
.. .. ..$ title : chr "Google Suche"
.. .. ..$ maxlength : chr "2048"
.. .. ..$ name : chr "q"
.. .. ..$ size : chr "57"
.. ..- attr(*, "class")= chr "rvest_field"
..$ btnG :List of 4
.. ..$ type : chr "submit"
.. ..$ name : chr "btnG"
.. ..$ value: chr "Google Suche"
.. ..$ attr :List of 4
.. .. ..$ class: chr "lsb"
.. .. ..$ value: chr "Google Suche"
.. .. ..$ name : chr "btnG"
.. .. ..$ type : chr "submit"
.. ..- attr(*, "class")= chr "rvest_field"
..$ btnI :List of 4
.. ..$ type : chr "submit"
.. ..$ name : chr "btnI"
.. ..$ value: chr "Auf gut GlÃck!"
.. ..$ attr :List of 5
.. .. ..$ class: chr "lsb"
.. .. ..$ id : chr "tsuid_MPc-Z-7QNtSmi-gPzO7AkQI_1"
.. .. ..$ value: chr "Auf gut GlÃck!"
.. .. ..$ name : chr "btnI"
.. .. ..$ type : chr "submit"
.. ..- attr(*, "class")= chr "rvest_field"
..$ iflsig:List of 4
.. ..$ type : chr "hidden"
.. ..$ name : chr "iflsig"
.. ..$ value: chr "AL9hbdgAAAAAZz8FQNQ2qfB0K5md5v5lQg6dFVSKDDmm"
.. ..$ attr :List of 3
.. .. ..$ value: chr "AL9hbdgAAAAAZz8FQNQ2qfB0K5md5v5lQg6dFVSKDDmm"
.. .. ..$ name : chr "iflsig"
.. .. ..$ type : chr "hidden"
.. ..- attr(*, "class")= chr "rvest_field"
..$ gbv :List of 4
.. ..$ type : chr "hidden"
.. ..$ name : chr "gbv"
.. ..$ value: chr "1"
.. ..$ attr :List of 4
.. .. ..$ id : chr "gbv"
.. .. ..$ name : chr "gbv"
.. .. ..$ type : chr "hidden"
.. .. ..$ value: chr "1"
.. ..- attr(*, "class")= chr "rvest_field"
- attr(*, "class")= chr "rvest_form"
search_something <- search |> html_form_set(q = "something")
resp <- html_form_submit(search_something, submit = "btnG")
read_html(resp)
{html_document}
<html lang="de">
[1] <head>\n<meta http-equiv="Content-Type" content="text/html; charset=UTF-8 ...
[2] <body jsmodel="hspDDf ">\n<header id="hdr"><script nonce="E8dZsiTbWtPAcs2 ...
vals <- list(q = "web scraping", hl = "fr")

search_1 <- search |> html_form_set(!!!vals)
search_2 <- search |> html_form_set(q = "web scraping", hl = "fr")

resp <- html_form_submit(search_1)
read_html(resp)
{html_document}
<html lang="fr-DE">
[1] <head>\n<meta http-equiv="Content-Type" content="text/html; charset=UTF-8 ...
[2] <body jsmodel="hspDDf ">\n<header id="hdr"><script nonce="Dl3pcMQYvrMrYg1 ...
If you are working with a session, the workflow is as follows:
- Extract the form.
- Set it.
- Start your session on the page with the form.
- Submit the form using session_submit().
google_form <- read_html("http://www.google.com") |>
  html_form() |>
  pluck(1) #another way to do [[1]]

search_something <- google_form |> html_form_set(q = "something")

google_session <- session("http://www.google.com") |>
  session_submit(search_something, submit = "btnG")

google_session |>
  read_html()
{html_document}
<html lang="de">
[1] <head>\n<meta http-equiv="Content-Type" content="text/html; charset=UTF-8 ...
[2] <body jsmodel="hspDDf ">\n<header id="hdr"><script nonce="8lTYn3RxVB2C0GM ...
Exercise
- Start a session on “https://www.scrapethissite.com/pages/forms/”, fill out, and submit the form to search for a Hockey team called Toronto Maple Leafs. Store the resulting output in “base_session”.

You can check your code by looking at the output of base_session |> read_html() |> html_table() |> pluck(1) and checking whether there are only Maple Leafs entries.
Solution. Click to expand!
url <- "https://www.scrapethissite.com/pages/forms/"

search_form <- read_html(url) |>
  html_form() |>
  pluck(1) #extract the form

set_form <- search_form |>
  html_form_set(q = "Toronto Maple Leafs") #set the search form

base_session <- session(url) |>
  session_submit(set_form)

base_session |>
  read_html() |>
  html_table() |>
  pluck(1)
Scraping hacks
Some web pages are a bit fancier than the ones we have looked at so far (i.e., they use JavaScript). rvest works nicely for static web pages, but for more advanced ones you need different tools such as Selenium – see chapter 7.

A web page may sometimes give you time-outs (i.e., it doesn’t respond within a given time). This can break your loop. Wrapping your code in safely() or insistently() from the purrr package might help. The former moves on and notes down what has gone wrong; the latter keeps sending requests until it has been successful. They both work best if you put your scraping code in functions and wrap those with either insistently() or safely().
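A minimal sketch of how that could look – the rate_backoff() settings here are illustrative assumptions, not recommendations:

```r
library(purrr)
library(rvest)

# put the scraping step into its own function ...
read_page <- function(url) {
  read_html(url)
}

# insistently() retries the request with growing pauses before giving up
insistent_read <- insistently(read_page,
                              rate = rate_backoff(pause_base = 2, max_times = 5))

# safely() catches remaining errors and returns list(result = , error = )
safe_read <- safely(insistent_read)

res <- safe_read("https://www.r-bloggers.com")
# on success, res$result holds the page and res$error is NULL; on failure, vice versa
```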
Sometimes a web page keeps blocking you. Consider using a proxy server.
my_proxy <- httr::use_proxy(url = "http://example.com",
                            user_name = "myusername",
                            password = "mypassword",
                            auth = "basic") # one of basic, digest, digest_ie, gssnegotiate, ntlm, any

my_session <- session("https://scrapethissite.com/", my_proxy)
Find more useful information – including the stuff we just described – and links on this GitHub page.
Extracting Data
In the prior section, you learned how to make calls to web pages and get responses. Now it will be all about how you can extract content from web pages in a structured way. The (in our opinion) easiest way to achieve that is by harnessing the way the web is written.

Before we start to extract data from the web, we will briefly touch upon how the web is written, since we will harness this structure to extract content in an automated manner. Basic commands will be shown thereafter.
#install.packages("needs")
needs::needs(janitor, polite, rvest, tidyverse)
HTML 101
Web content is usually written in HTML (Hyper Text Markup Language). An HTML document is comprised of elements whose so-called tags determine how their content appears. The opening tag is the name of the element (p in this case) in angle brackets, and the closing tag is the same with a forward slash before the name. p stands for a paragraph element and would look like this (since RMarkdown can handle HTML tags, the second line will showcase how it would appear on a web page):
<p> My cat is very grumpy. </p>
My cat is very grumpy.
The <p> tag makes sure that the text is standing by itself and that a line break is included thereafter:
<p>My cat is very grumpy</p>. And so is my dog.
would look like this:
My cat is very grumpy
. And so is my dog.
There are many types of tags indicating different kinds of elements (about 100). Every page’s content must be in an <html> element with two children, <head> and <body>. The former contains the page title and some metadata, the latter the contents you are seeing in your browser. So-called block tags, e.g., <h1> (heading 1), <p> (paragraph), or <ol> (ordered list), structure the page. Inline tags (<b> – bold, <a> – link) format text inside block tags.
You can nest elements, e.g., if you want to make certain things bold, you can wrap text in <b>:
<p>My cat is <b> very </b> grumpy</p>
My cat is very grumpy
Then, the <b> element is considered the child of the <p> element.
Elements can also bear attributes:
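An example – a hypothetical snippet, chosen to match the attribute discussed in the next paragraph – could be:

```html
<p class="editor-note">My cat is very grumpy.</p>
```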
Those attributes will not appear in the actual content. Moreover, they are super-handy for us as scrapers. Here, class is the attribute name and "editor-note" the value. Another important attribute is id. Combined with CSS, they control the appearance of the element on the actual page. A class can be used by multiple HTML elements whereas an id is unique.
Extracting content in rvest
To scrape the web, the first step is to simply read in the web page. rvest then stores it in the XML format – just another format to store information. For this, we use rvest’s read_html() function.

To demonstrate the usage of CSS selectors, I create my own basic web page using the rvest function minimal_html():
basic_html <- minimal_html('
  <html>
<head>
<title>Page title</title>
</head>
<body>
<h1 id="first">A heading</h1>
<p class="paragraph">Some text & <b>some bold text.</b></p>
<a> Some more <i> italicized text which is not in a paragraph. </i> </a>
<a class="paragraph">even more text & <i>some italicized text.</i></p>
<a id="link" href="www.nyt.com"> The New York Times </a>
</body>
')
basic_html
{html_document}
<html>
[1] <head>\n<meta http-equiv="Content-Type" content="text/html; charset=UTF-8 ...
[2] <body>\n <h1 id="first">A heading</h1>\n <p class="paragraph">Some ...
#https://htmledit.squarefree.com
CSS is the abbreviation for Cascading Style Sheets; it is used to define the visual styling of HTML documents. CSS selectors map elements in the HTML code to the relevant styles in the CSS. Hence, they define patterns that allow us to easily select certain elements on the page. CSS selectors can be used in conjunction with the rvest function html_elements(), which takes as arguments the read-in page and a CSS selector. Alternatively, you can also provide an XPath, which is usually a bit more complicated and will not be covered in this tutorial.
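Just to give you an impression, the XPath equivalent of a simple tag selector looks like this (using the basic_html page from above):

```r
# "//p" means: any <p> element, anywhere in the document
basic_html |> html_elements(xpath = "//p")
```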
- p selects all <p> elements.
basic_html |> html_elements(css = "p")
{xml_nodeset (1)}
[1] <p class="paragraph">Some text & <b>some bold text.</b></p>
- .title selects all elements that are of class “title”.
basic_html |> html_elements(css = ".title")
{xml_nodeset (0)}
There are no elements of class “title”. But some of class “paragraph”.
basic_html |> html_elements(css = ".paragraph")
{xml_nodeset (2)}
[1] <p class="paragraph">Some text & <b>some bold text.</b></p>
[2] <a class="paragraph">even more text & <i>some italicized text.</i>\n ...
- p.paragraph analogously takes every <p> element which is of class “paragraph”.
basic_html |> html_elements(css = "p.paragraph")
{xml_nodeset (1)}
[1] <p class="paragraph">Some text & <b>some bold text.</b></p>
- #link scrapes elements that are of id “link”.
basic_html |> html_elements(css = "#link")
{xml_nodeset (1)}
[1] <a id="link" href="www.nyt.com"> The New York Times </a>
You can also connect children with their parents by using a combinator. For instance, to extract the italicized text from “a.paragraph”, I can use the descendant combinator (a space): “a.paragraph i”.
basic_html |> html_elements(css = "a.paragraph i")
{xml_nodeset (1)}
[1] <i>some italicized text.</i>
You can also look at the children by using html_children():

basic_html |> html_elements(css = "a.paragraph") |> html_children()
{xml_nodeset (1)}
[1] <i>some italicized text.</i>
read_html("https://rvest.tidyverse.org") |>
html_elements("#installation , p")
{xml_nodeset (8)}
[1] <p>rvest helps you scrape (or harvest) data from web pages. It is designe ...
[2] <p>If you’re scraping multiple pages, I highly recommend using rvest in c ...
[3] <h2 id="installation">Installation<a class="anchor" aria-label="anchor" h ...
[4] <p>If the page contains tabular data you can convert it directly to a dat ...
[5] <p></p>
[6] <p>Developed by <a href="https://hadley.nz" class="external-link">Hadley ...
[7] <p></p>
[8] <p>Site built with <a href="https://pkgdown.r-lib.org/" class="external-l ...
Unfortunately, web pages in the wild are usually not as easily readable as the small example I came up with. Hence, I would recommend using the SelectorGadget – just drag it into your bookmarks list. Its usage could hardly be simpler:
- Activate it – i.e., click on the bookmark.
- Click on the content you want to scrape – the things the CSS selector selects will appear green.
- Click on the green things that you don’t want – they will turn red; click on what’s not green yet but what you want – it will turn green.
- Copy the CSS selector the gadget provides you with and paste it into the html_elements() function.
read_html("https://en.wikipedia.org/wiki/Hadley_Wickham") |>
html_elements(css = "p:nth-child(4)") |>
html_text()
[1] "Hadley Alexander Wickham (born 14 October 1979) is a New Zealand statistician known for his work on open-source software for the R statistical programming environment. He is the chief scientist at Posit PBC and an adjunct professor of statistics at the University of Auckland, Stanford University, and Rice University. His work includes the data visualisation system ggplot2 and the tidyverse, a collection of R packages for data science based on the concept of tidy data.\n"
Tying it Together: Scraping HTML pages with rvest
So far, I have shown you how HTML is written and how to select elements. However, what we want to achieve is extracting the data the elements contain in a proper format and storing it in some sort of tibble. Therefore, we need functions that allow us to grab the data.
The following overview, taken from the web scraping cheatsheet, shows you the basic “flow” of scraping web pages plus the corresponding functions. In this tutorial, I will limit myself to rvest functions. Those are of course perfectly compatible with, for instance, RSelenium, as long as you feed in the content in XML format (i.e., by using read_html()).
In the prior chapter, I have introduced you to acquiring the contents of singular pages. Given that you now know how to choose the content you want, all that you are lacking for successful scraping is the tools to extract these contents in a proper format.
html_text() and html_text2()
Extracting text from HTML is easy. You use html_text() or html_text2(). The former is faster but will give you not-so-nice results; the latter will give you the text as it would be rendered in a web browser.
The following example is taken from the documentation:
# To understand the difference between html_text() and html_text2()
# take the following html:
html <- minimal_html(
  "<p>This is a paragraph.
This is another sentence.<br>This should start on a new line<p/>"
)
# html_text() returns the raw underlying text, which includes white space
# that would be ignored by a browser, and ignores the <br>
html |> html_element("p") |> html_text() |> writeLines()
This is a paragraph.
This is another sentence.This should start on a new line
# html_text2() simulates what a browser would display. Non-significant
# white space is collapsed, and <br> is turned into a line break
html |> html_element("p") |> html_text2() |> writeLines()
This is a paragraph. This is another sentence.
This should start on a new line
A “real example” would then look like this:
us_senators <- read_html("https://en.wikipedia.org/wiki/List_of_current_United_States_senators")

text <- us_senators |>
  html_elements(css = "p:nth-child(6)") |>
  html_text2()
Extracting tables
The general output format we strive for is a tibble. Oftentimes, data is already stored online in a table format, basically ready for us to analyze. In the next example, I want to get a table from the Wikipedia page on United States senators that I have used before. For this first, basic example, I do not use selectors for extracting the right table. You can use rvest::html_table(). It will give you a list containing all tables on this particular page. We can inspect it using str(), which returns an overview of the list and the tibbles it contains.

Here, the table I want is the sixth one. We can grab it by either using double square brackets – [[6]] – or purrr’s pluck(6).
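The list of tables itself comes from a single html_table() call on the us_senators page object from above (a sketch; the object name tables matches the one used in the next chunk):

```r
tables <- us_senators |> html_table()

tables |> str(max.level = 1) # overview of the list and the tibbles it contains
```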
senators <- tables |>
  pluck(6)
glimpse(senators)
Rows: 100
Columns: 12
$ State <chr> "Alabama", "Alabama", "Alaska", "Alaska",…
$ Portrait <lgl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, N…
$ Senator <chr> "Tommy Tuberville", "Katie Britt", "Lisa …
$ Party <lgl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, N…
$ Party <chr> "Republican", "Republican", "Republican",…
$ Born <chr> "(1954-09-18) September 18, 1954 (age 70)…
$ `Occupation(s)` <chr> "Investment management firm partner\nColl…
$ `Previous electiveoffice(s)` <chr> "None", "None", "Alaska House of Represen…
$ Education <chr> "Southern Arkansas University (BS)", "Uni…
$ `Assumed office` <chr> "January 3, 2021", "January 3, 2023", "De…
$ Class <chr> "2026Class 2", "2028Class 3", "2028Class …
$ `Residence[6]` <chr> "Auburn[7]", "Montgomery", "Girdwood", "A…
## alternative approach using css
senators <- us_senators |>
  html_elements(css = "#senators") |>
  html_table() |>
  pluck(1) |>
  janitor::clean_names()
You can see that the tibble contains “dirty” names and that the party column appears twice – which will make it impossible to work with the tibble later on. Hence, I use clean_names() from the janitor package to fix that.
Extracting attributes
You can also extract attributes such as links using html_attrs(). An example would be to extract the headlines and their corresponding links from r-bloggers.com.
rbloggers <- read_html("https://www.r-bloggers.com")
A quick check with the SelectorGadget told me that the element I am looking for is of class “.loop-title” and its child is “a”, an anchor (link) element. With html_attrs() I can extract the attributes. This gives me a list of named vectors containing the names of the attributes and their values:
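The extraction step itself could look like this (a sketch; the object name r_blogger_postings matches the one used in the tibble call further down):

```r
# select the <a> children of the ".loop-title" elements
r_blogger_postings <- rbloggers |>
  html_elements(css = ".loop-title a")

# a list of named character vectors, one per element
r_blogger_postings |> html_attrs() |> head(2)
```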
Links are stored in the attribute “href” – hyperlink reference. html_attr() allows me to extract a single attribute’s value. Hence, building a tibble with the articles’ titles and their corresponding hyperlinks is straightforward now:
tibble(
title = r_blogger_postings |> html_text2(),
link = r_blogger_postings |> html_attr(name = "href")
)
# A tibble: 20 × 2
title link
<chr> <chr>
1 How to Perform a Wald Test in R | wald.test function in R http…
2 Expand your Bluesky network with R (repost) http…
3 Mastering Conditional Logic and Small Change Operators in C http…
4 era 0.5.0: chronological ordering and extremes http…
5 Vendée Globe 2024 http…
6 Crafting Custom and Reproducible PDF Reports with Quarto and Typst in … http…
7 You’ve Been Waiting for Native Mobile Apps with R? The Wait Is Over. http…
8 How to Combine Vectors in R: A Comprehensive Guide with Examples http…
9 How to Compare Two Vectors in base R With Examples http…
10 GLMNet in Python: Generalized Linear Models http…
11 Using LLM agents to review tutorials ‘in character’ as learners http…
12 Understanding and extending the methods of comparing spatial patterns … http…
13 Design effects for stratified sub-populations by @ellis2013nz http…
14 Linux Environment Variables: A Beginner’s Guide to printenv, set, expo… http…
15 R Dev Day @ SIP 2024 http…
16 Package Updates http…
17 Create and Interpret a Interactive Volcano Plot in R | What & How http…
18 How to Keep Certain Columns in Base R with subset(): A Complete Guide http…
19 Time Series Machine Learning: Shanghai Composite http…
20 Understanding Logical Operators in C Programming http…
Another approach for this would be using the polite package and its function html_attrs_dfr(), which binds together all the different attributes column-wise and the different elements row-wise.
rbloggers |>
  html_elements(css = ".loop-title a") |>
  html_attrs_dfr() |>
  select(title = 3,
         link = 1) |>
  glimpse()
Rows: 20
Columns: 2
$ title <chr> "How to Perform a Wald Test in R | wald.test function in R", "Ex…
$ link <chr> "https://www.r-bloggers.com/2024/11/how-to-perform-a-wald-test-i…
Exercise
- Download the links and names of the top 250 IMDb movies. Put them in a tibble with the columns rank – in numeric format (you know regexes already), title, url to the IMDb entry, rating – in numeric format, and number_votes – the number of votes a movie has received, in numeric format. Also, what do you notice?
Solution. Click to expand!
imdb_top250 <- read_html("https://www.imdb.com/chart/top/?ref_=nv_mv_250")

movies <- tibble(
  rank = imdb_top250 |>
html_elements(".cli-title .ipc-title__text") |>
html_text2() |>
str_extract("^[0-9]+(?=\\.)") |>
parse_integer(),
title = imdb_top250 |>
html_elements(".cli-title .ipc-title__text") |>
html_text2() |>
str_remove("^[0-9]+\\. "),
url = imdb_top250 |>
html_elements(".cli-title a") |>
html_attr("href") |>
str_c("https://www.imdb.com", x = _),
rating = imdb_top250 |>
html_elements(".ratingGroup--imdb-rating") |>
html_text() |>
str_extract("[0-9]\\.[0-9]") |>
parse_double(),
no_votes = imdb_top250 |>
html_elements(".ratingGroup--imdb-rating") |>
html_text() |>
str_remove("^[0-9]\\.[0-9]") |>
str_remove_all("[() ]")
)
Automating scraping
Well, grabbing singular points of data from websites is nice. However, if you want to do things such as collecting large amounts of data or multiple pages, you will not be able to do this without some automation.

An example here would again be the r-bloggers page. It provides you with plenty of R-related content. If you were now eager to scrape all the articles, you would first need to acquire all the different links leading to the blog postings. Hence, you would need to navigate through the site’s pages first to acquire the links.

In general, there are two ways to go about this. The first is to manually create a list of URLs the scraper will visit and take the content you need, therefore not needing to identify where it needs to go next. The other one would be automatically acquiring its next destination from the page (i.e., identifying the “go on” button). Both strategies can also be nicely combined with some sort of session().
Looping over pages
For the first approach, we need to check the URLs first. How do they change as we navigate through the pages?
url_1 <- "https://www.r-bloggers.com/page/2/"
url_2 <- "https://www.r-bloggers.com/page/3/"

initial_dist <- adist(url_1, url_2, counts = TRUE) |>
  attr("trafos") |>
  diag() |>
  str_locate_all("[^M]")

str_sub(url_1, start = initial_dist[[1]][1] - 5, end = initial_dist[[1]][1] + 5) # makes sense for longer urls
[1] "page/2/"
str_sub(url_2, start = initial_dist[[1]][1]-5, end = initial_dist[[1]][1]+5)
[1] "page/3/"
There is some sort of underlying pattern here, and we can harness it. url_1 refers to the second page, url_2 to the third. Hence, if we just combine the basic URL and, say, the numbers from 1 to 10, we can visit all the pages (exercise 3a) and extract the content we want.
urls <- str_c("https://www.r-bloggers.com/page/", 1:10, "/") # this is the stringr equivalent of paste0()
urls
[1] "https://www.r-bloggers.com/page/1/" "https://www.r-bloggers.com/page/2/"
[3] "https://www.r-bloggers.com/page/3/" "https://www.r-bloggers.com/page/4/"
[5] "https://www.r-bloggers.com/page/5/" "https://www.r-bloggers.com/page/6/"
[7] "https://www.r-bloggers.com/page/7/" "https://www.r-bloggers.com/page/8/"
[9] "https://www.r-bloggers.com/page/9/" "https://www.r-bloggers.com/page/10/"
You can run this in a for-loop; here’s a quick revision. For the loop to run efficiently, space for every object should be pre-allocated (i.e., you create a list beforehand, whose length can be determined by an educated guess).
## THIS IS PSEUDO CODE!!!
result_list <- vector(mode = "list", length = length(urls)) # pre-allocate space!!!
starting_link <- "https://www.r-bloggers.com/page/1/"

for (i in seq_along(urls)){
  page <- read_html(urls[[i]]) # read in urls[[i]]
  result_list[[i]] <- extract_content(page) # store content of page in result_list
}
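Filled in with the r-bloggers selectors from earlier, a runnable version of the pseudo code could look like this (extract_content() is replaced by an inline tibble call; the Sys.sleep(2) pause is an assumption for politeness):

```r
result_list <- vector(mode = "list", length = length(urls)) # pre-allocate space

for (i in seq_along(urls)) {
  page <- read_html(urls[[i]])
  result_list[[i]] <- tibble(
    title = page |> html_elements(css = ".loop-title a") |> html_text2(),
    link  = page |> html_elements(css = ".loop-title a") |> html_attr("href")
  )
  Sys.sleep(2) # be polite: pause between requests
}

all_postings <- bind_rows(result_list)
```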
Exercise
- Scrape 5 pages of the latest UN press releases in an automated fashion. Make sure to take breaks between requests by including Sys.sleep(2). For each iteration, store the articles and links in a tibble containing the columns title, link, and date (bonus: store it in date format). (Tip: wrap the code that extracts and stores content in a tibble in a function.)
- Do so using running numbers in the URLs.
- Do so by using session() in a loop. (Note: make sure to specify css =.)
Solution. Click to expand!
extract_press_releases <- function(page){
  tibble(
title = page |>
html_elements(".field__item a") |>
html_text2(),
link = page |>
html_elements(".field__item a") |>
html_attr("href"),
date = page |>
html_elements(".field--type-datetime") |>
html_text2() |>
as.Date(format = "%d %B %Y")
)
}
#a
urls <- str_c("https://press.un.org/en/content/secretary-general/press-release?page=", 0:4)

pages <- map(urls,
             \(x){
               Sys.sleep(2)
               read_html(x) |>
                 extract_press_releases()
             }
)
#b
un_session <- session("https://press.un.org/en/content/secretary-general/press-release")
i <- 1
page_list <- vector(mode = "list", length = 5L)

while (i < 6) {
  page_list[[i]] <- read_html(un_session) |>
    extract_press_releases()
  un_session <- un_session |>
    session_follow_link(css = ".me-s .page-link")
  i <- i + 1
  Sys.sleep(2)
}
Conclusion
To sum it up: when you have a good research idea that relies on Digital Trace Data that you need to collect, ask yourself the following questions:
- Is there an R package for the web service?
- If 1. == FALSE: Is there an API where I can get the data? (If TRUE, use it) – next chapter.
- If 1. == FALSE & 2. == FALSE: Is screen scraping an option, and is there any structure in the data that you can harness?
If you have to rely on screen scraping, also ask yourself how you can minimize the number of requests you make to the server. Going back and forth on web pages or navigating through them might not be the best option since it requires multiple requests. The most efficient way is usually to try to get a list of URLs of some sort which you can then just loop over.