
Sunday, September 20, 2015

Producing Google Map Embeds with R Package googleVis

(1) To produce the HTML code for a Google Map with the R package googleVis, do something like this:
 

library(googleVis)

df <- data.frame(Address = c("Innsbruck", "Wattens"),
                 Tip = c("My Location 1", "My Location 2"))

mymap <- gvisMap(df, "Address", "Tip",
                 options = list(showTip = TRUE, mapType = "normal",
                                enableScrollWheel = TRUE))

plot(mymap)  # preview in the browser
(2) Then copy and paste the generated HTML into your blog or website, customizing it for your purposes.
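To grab the raw HTML rather than just previewing it, you can print the chart part of the gvis object or write it to a file; a minimal sketch (the file name "mymap.html" is just an example):

# print only the chart's HTML/JavaScript to the console
print(mymap, tag = "chart")

# or write it to a file for copy-pasting
cat(mymap$html$chart, file = "mymap.html")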

Webscraping Google Scholar & Showing the Result as a Word Cloud Using R

NOTE: Please see the update HERE and HERE!

...When reading Scott Chamberlain's last post about web-scraping, I felt it was time to pick up and complete an idea that I had been brooding over for some time:

When a scientist sets out on a new project, the first thing to do is to evaluate whether other people have already answered the very questions he is about to work on. For example, I was interested in whether any research had been done on amphibian diversity at regional/geographical scales correlated with environmental/landscape parameters. Usually I would go to Google Scholar, search something like - intitle:amphibians AND intitle:richness OR intitle:diversity AND environment OR landscape - and then browse through the results. But this is often tedious, and a way to do a quick visual examination would be of great benefit.
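For reference, a search like this can also be issued programmatically; a minimal sketch of constructing the query URL (the exact Scholar URL scheme is an assumption here and may change):

# the example query from above
query <- "intitle:amphibians AND intitle:richness OR intitle:diversity AND environment OR landscape"

# percent-encode the query and paste it into the Scholar search URL
url <- paste0("http://scholar.google.com/scholar?q=",
              URLencode(query, reserved = TRUE))
url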

The code I present solves this task. It may be awkward in places, and there might be a more effective way to achieve the same result - but it may serve as a starter, and I would very much appreciate people more adept than me picking up the torch...

For my example search, the result shows that not very much has been going on regarding amphibian diversity correlated with environment and landscape...

See code HERE.
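For illustration, here is a minimal sketch of the word-cloud step only (the scraping itself lives in the linked script), assuming a character vector of scraped paper titles and the tm and wordcloud packages:

library(tm)
library(wordcloud)  # also attaches RColorBrewer

# 'titles' stands in for the scraped paper titles; two made-up examples:
titles <- c("Amphibian richness along a landscape gradient",
            "Environmental correlates of amphibian diversity")

# clean the text and count word frequencies
corpus <- Corpus(VectorSource(titles))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeWords, stopwords("english"))
tdm <- TermDocumentMatrix(corpus)
freq <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)

# draw the cloud; min.freq = 1 only because the toy sample is tiny
wordcloud(names(freq), freq, min.freq = 1, colors = brewer.pal(8, "Dark2"))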

PS: I'd be happy about collaboration / tips / editing - so feel free to contact me and I will add you to the list of editors; you could then edit / comment / add to the script on Google Docs.

...some drawbacks need to be considered:
  • The maximum number of search results is 100.
  • Only titles are considered. Additionally considering abstracts might yield more representative results, but abstracts are truncated in the search results and I don't know if it is possible to retrieve them in full.
  • Also, long titles may be truncated.
  • A more illustrative result would be achieved if one could drop all words other than nouns, verbs, and adjectives - I don't know how to do this, but I am sure it is possible (see the sketch after this list).
  • More drawbacks? You tell me...
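A minimal sketch of such a part-of-speech filter, assuming the udpipe package (which is not used in the original script):

library(udpipe)

# download and load an English part-of-speech model (one-time download)
model_info <- udpipe_download_model(language = "english")
ud_model <- udpipe_load_model(model_info$file_model)

# annotate the titles and keep only nouns, verbs, and adjectives
titles <- c("Amphibian richness along a landscape gradient")
ann <- as.data.frame(udpipe_annotate(ud_model, x = titles))
keep <- subset(ann, upos %in% c("NOUN", "VERB", "ADJ"))
keep$token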


Saturday, September 19, 2015

A Little Webscraping-Exercise...

In R it's quite easy to pull anything out of a webpage, and I'll show a little exercise in doing so. Here I retrieve all blog addresses from R-bloggers using the function readLines() and some subsequent string processing.

# get the page's HTML code
web_page <- readLines("http://www.r-bloggers.com")

# find the positions of all <ul>/</ul> tags
# (missing line, added on Oct. 24th):
ul_tags <- grep("ul>", web_page)

# the blog list starts two lines below the "Contributing Blogs" heading
# and ends two lines above the next closing list tag:
pos_1 <- grep("Contributing Blogs", web_page) + 2
pos_2 <- ul_tags[which(ul_tags > pos_1)[1]] - 2

blog_list_1 <- web_page[pos_1:pos_2]

# extract the 2nd element of the sublists produced by strsplit,
# i.e., the text between the first pair of double quotes (the href):
blog_list_2 <- unlist(lapply(strsplit(blog_list_1, "\""), "[[", 2))

# exclude elements without a proper address:
blog_list_3 <- blog_list_2[grep("http:", blog_list_2)]

# plot the results as three columns of text:
len <- length(blog_list_3)
x <- rep(1:3, ceiling(len/3))[1:len]
y <- 1:len

par(mar = c(0, 5, 0, 5), xpd = TRUE)
plot(x, y, ylab = "", xlab = "", type = "n",
     bty = "n", axes = FALSE)
text(x, y, blog_list_3, cex = 0.5)
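Note that this parsing depends on the exact HTML of the page and will break whenever the layout changes. A more robust sketch using the rvest package (an assumption - rvest is not used in the original post):

library(rvest)

# parse the page and pull the href attribute of every link
page <- read_html("http://www.r-bloggers.com")
links <- html_attr(html_elements(page, "a"), "href")

# keep unique, proper web addresses
blogs <- unique(links[grepl("^http", links)])
head(blogs)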