Scraping Data from Websites with R


Webscraping using readLines and RCurl

There is a massive amount of data available on the web. Some of it is in the form of precompiled, downloadable datasets which are easy to access. But the majority of online data exists as web content such as blogs, news stories and cooking recipes. With precompiled files, accessing the data is fairly straightforward; just download the file, unzip if necessary, and import into R. For "wild" data, however, getting it into an analyzable format is more difficult. Accessing online data of this sort is sometimes referred to as "webscraping". Two R facilities, readLines() from the base package and getURL() from the RCurl package, make this task possible.

readLines

For basic webscraping tasks the readLines() function will usually suffice. readLines() allows simple access to webpage source data on non-secure servers. In its simplest form, readLines() takes a single argument – the URL of the web page to be read:

web_page <- readLines("http://www.example.com")  # the URL here is only a placeholder

As an example of a (somewhat) practical use of webscraping, imagine a scenario in which we wanted to know the 10 most frequent posters to the R-help listserve for January 2009. Because the listserve is on a secure site (i.e. it has https:// rather than http:// in the URL) we can't easily access the live version with readLines(). So for this example, I've posted a local copy of the list archives on this site.

One note: by itself, readLines() can only acquire the data. You'll need to use grep(), gsub() or equivalents to parse the data and keep the parts you need.

# Get the page's source (the address of the local archive copy is a placeholder here)
web_page <- readLines("http://www.example.com/jan09_rhelp_archive.html")

# Pull out the appropriate lines: the archive pages mark author names with an <I> tag
author_lines <- web_page[grep("<I>", web_page)]

# Delete unwanted characters in the lines we pulled out
authors <- gsub("<I>", "", author_lines, fixed = TRUE)

# Present only the ten most frequent posters
author_counts <- sort(table(authors), decreasing = TRUE)
author_counts[1:10]

We can see that Gabor Grothendieck was the most frequent poster to R-help in January 2009.

The RCurl package

To get more advanced http features such as POST capabilities and https access, you'll need to use the RCurl package. For webscraping tasks with RCurl, use the getURL() function. After the data has been acquired via getURL(), it needs to be restructured and parsed. The htmlTreeParse() function from the XML package is tailored for just this task. Because getURL() can access a secure site, we can use the live archive as the example this time.

# Install the RCurl package if necessary
install.packages("RCurl")
library("RCurl")

# Install the XML package if necessary
install.packages("XML")
library("XML")

# Get the January 2009 archives
jan09 <- getURL("https://stat.ethz.ch/pipermail/r-help/2009-January/date.html", ssl.verifypeer = FALSE)

jan09_parsed <- htmlTreeParse(jan09)

# Continue on, similar to the above ...
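One way to continue from here, sketched under the same assumption about the archive's markup as in the readLines() example (author names inside <I> tags): re-parse the page with internal nodes so that XPath queries work, then extract and tabulate the author names.

# Sketch of one possible continuation: parse with internal nodes so the
# document can be queried with XPath, then pull out the author tags
jan09_parsed <- htmlTreeParse(jan09, useInternalNodes = TRUE)
authors <- xpathSApply(jan09_parsed, "//i", xmlValue)

# Tidy up and tabulate, as in the readLines() example
authors <- trimws(authors)
author_counts <- sort(table(authors), decreasing = TRUE)
author_counts[1:10]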

For basic webscraping tasks readLines() will be enough and avoids overcomplicating the task. For more difficult procedures, or for tasks requiring other http features, getURL() or other functions from the RCurl package may be required. For more information on cURL, visit the cURL project page.
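As a quick illustration of one of those extra features, here is a minimal sketch of an HTTP POST using RCurl's postForm(); the URL and form fields below are placeholders for illustration only, not a real service used in this article.

# Minimal sketch of a POST request with RCurl (placeholder URL and fields)
result <- postForm("https://www.example.com/submit",
                   name = "r-help", month = "2009-January",
                   .opts = list(ssl.verifypeer = FALSE))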

