
How to check if a file exists with HTTP and R

So, there’s probably an easier way to do this (please let me know if you know it)…

Suppose you're working with a system that creates (binary) files and posts them for download on a website. You know the names of the files that will be created, but they may not have been generated yet (they're produced on the fly and appear in a more or less random order over time). There are several of them, and you want to know which ones are available so far; once enough have been uploaded, you want to run an analysis.

I spent quite a bit of time trying to work this out, and eventually came up with the following solution:

library(RCurl)

newurl <- c("http://cran.r-project.org/web/packages/RCurl/RCurl.pdf",
            "http://cran.r-project.org/web/packages/RCurl/RCurl2.pdf")

for (n in seq_along(newurl)) {
   z <- raw(0)
   # failonerror makes curl treat HTTP errors (e.g. 404) as R errors;
   # try() stops such an error from killing the loop
   try(z <- getBinaryURL(newurl[n], failonerror = TRUE), silent = TRUE)
   if (length(z) > 0) {
      print(paste(newurl[n], " exists", sep = ""))
   } else {
      print(paste(newurl[n], " doesn't exist", sep = ""))
   }
}

This uses RCurl to download each file into the variable z, and then checks whether z now contains the file's contents.

If a file doesn't exist, getBinaryURL() throws an error, and your loop (if you are checking several files) would stop. Wrapping getBinaryURL() in try() means the error won't stop the loop from moving on to the next file (if you don't trust me, try running the above without the try() wrapper). You can see how a loop like this could quickly work through several files and download the ones that exist.

I'd really like to be able to do this without actually downloading the whole file (e.g. fetch just the first 100 bytes) to see how many of the files of interest have been created, and if enough have, then download them all. I just can't work out how yet – I tried the range option of getBinaryURL(), but that just crashed R. This would be useful if you are collecting data in real time and you know you need at least (for example) 80% of the data to be available before you jump into a computationally expensive algorithm.
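For what it's worth, this is the form I'd expect a byte-range request to take. It's only a sketch: it assumes the server honours HTTP Range requests, and given the crash mentioned above it may not behave on every setup. The URL is just the example from earlier.

library(RCurl)

# Ask the server for only the first 100 bytes of the file. This relies on
# the server supporting HTTP Range requests (most static file servers do).
url <- "http://cran.r-project.org/web/packages/RCurl/RCurl.pdf"
z <- raw(0)
try(z <- getBinaryURL(url, range = "0-99", failonerror = TRUE), silent = TRUE)
length(z) > 0   # TRUE if some bytes came back, i.e. the file exists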

So, there must be an easier way to do all this, but can I find it? …

  1. Chris
    September 2, 2010 at 7:09 pm

    I'm running some simulations in R, and if an output file gets too big, I want to kill the process via a call to taskkill (Windows) through the shell() function. I simply use a combination of the file.exists() function and the file.info()$size value. Of course, I'm using local files, not files stored on a server, so perhaps something is different that I don't understand?

    • Chris
      September 2, 2010 at 7:10 pm

      I suppose I should also note that I run this function in a separate R task from the simulator in a while(TRUE!=FALSE){} loop. I run the loop every 60 seconds, with the Sys.sleep() function.

      • September 2, 2010 at 7:33 pm

        Thanks – good point. I tried file.exists for remote computers and didn't have any joy. I guess I could run the download for a very short period (enough for a few bytes) and then cease it. I'll re-post if I get this to work.

      • March 8, 2012 at 5:37 pm

        > while(TRUE!=FALSE){}
        Ew. Aside from the fact that TRUE!=FALSE is just TRUE, I think you want repeat{}.

  2. Mark Davis
    September 2, 2010 at 10:16 pm

    Hi, I think the HTTP HEAD method might be something you could look into; it should return the status of the request just like a GET, but without the body.

    This is a brilliant resource on RCurl
    http://www.omegahat.org/RCurl/philosophy.html

    Using that, I quickly tested the following; it seems OK:
    h <- getCurlHandle()
    getURL("http://cran.r-project.org/web/packages/RCurl/index.html", header = 1, nobody = 1, curl = h)
    getCurlInfo(h, "response.code")

    You'll be looking for 200 for files that exist, and 404 (or another error code) when the resource doesn't.

    Hope that helps.

  3. Aaron
    September 3, 2010 at 12:28 am

    Reading http://w-shadow.com/blog/2007/08/02/how-to-check-if-page-exists-with-curl/ it looks like you could set some curl options to only download the HTTP header, and if that succeeds the file is actually there (otherwise you would get a 404).

    Something along the lines of:

    try(z <- getBinaryURL(newurl[n], nobody=TRUE, header=TRUE, failonerror = TRUE))
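
Putting the suggestions above together: the sketch below uses the HEAD-style request from the comments to count how many of the expected files are already up, and only goes ahead once, say, 80% of them are available. url_exists() is just my own helper name, not part of RCurl, and the 80% threshold is the example figure from the post.

library(RCurl)

# Returns TRUE if a HEAD-style request (nobody = 1) comes back with HTTP 200.
url_exists <- function(u) {
  h <- getCurlHandle()
  try(getURL(u, nobody = 1, header = 1, curl = h), silent = TRUE)
  isTRUE(getCurlInfo(h)$response.code == 200)
}

newurl <- c("http://cran.r-project.org/web/packages/RCurl/RCurl.pdf",
            "http://cran.r-project.org/web/packages/RCurl/RCurl2.pdf")

# Proceed only once at least 80% of the expected files exist on the server.
if (mean(sapply(newurl, url_exists)) >= 0.8) {
  # download the files (e.g. with getBinaryURL()) and run the analysis here
}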
