Posts Tagged ‘statistics’

Logistic regression – simulation for a power calculation…

November 18, 2010

Please note I’ve spotted a problem with the approach taken in this post – it seems to underestimate power in certain circumstances. I’ll post again with a correction or a fuller explanation when I’ve sorted it.

So, I posted an answer on Cross Validated regarding logistic regression. I thought I’d post it here in a little more depth, with a few illustrative figures. It’s based on the approach which Stephen Kolassa described.

Power calculations for logistic regression are discussed in some detail in Hosmer and Lemeshow (Ch 8.5). One approach with R is to simulate the dataset a few thousand times, and see how often a model fitted to the simulated data detects the effect at your significance level. If it does so 95% of the time, then you have 95% power.

In this code we use the approach which Kleinman and Horton use to simulate data for a logistic regression. We then initially calculate the overall proportion of events. To change the proportion of events, adjust the intercept (the odds ratio shifts it too, as the plot below shows). The independent variable is assumed to be normally distributed with mean 0 and variance 1.

nn <- 950
runs <- 10000
intercept <- log(9)
odds.ratio <- 1.5
beta <- log(odds.ratio)
proportion <- replicate(
              n = runs,
              expr = {
                  ## simulate the covariate
                  xtest <- rnorm(nn)
                  ## linear predictor, then inverse logit to a probability
                  linpred <- intercept + (xtest * beta)
                  prob <- exp(linpred)/(1 + exp(linpred))
                  ## draw the binary outcome
                  runis <- runif(length(xtest), 0, 1)
                  ytest <- ifelse(runis < prob, 1, 0)
                  ## proportion of events (ytest == 1) in this run
                  mean(ytest)
                  }
            )
summary(proportion)

This plot shows how the intercept and odds ratio affect the overall proportion of events per trial:

When you’re happy that the proportion of events is right (given some prior knowledge of the dataset), you can then fit a model to each simulated dataset and calculate a p value for it. We use R’s inbuilt function replicate to do this 10,000 times, and count the proportion of runs in which the model gets it right (i.e. p < 0.05). The proportion of runs with p < 0.05 is essentially the power of the logistic regression for your number of cases, odds ratio and intercept.

result <-  replicate(
              n = runs,
              expr = {
                  ## simulate a dataset exactly as before
                  xtest <- rnorm(nn)
                  linpred <- intercept + (xtest * beta)
                  prob <- exp(linpred)/(1 + exp(linpred))
                  runis <- runif(length(xtest), 0, 1)
                  ytest <- ifelse(runis < prob, 1, 0)
                  ## fit the model and test whether the slope's p value is < 0.05
                  model <- glm(ytest ~ xtest, family = "binomial")
                  summary(model)$coefficients[2, 4] < 0.05
                  }
            )
## the proportion of significant runs is the estimated power
print(mean(result))

I checked it against the examples given in Hsieh, 1999. It worked pretty well: the simulated power was within ~1% of the power for the examples given in Table II of that paper.

We can do some interesting things with R. I simulated a range of odds ratios and a range of sample sizes. The plot of these looks like this (each line represents an odds ratio):

We can also keep the odds ratio constant, but adjust the proportion of events per trial. That looks like this (each line represents an event rate); a sketch of how these sweeps can be run follows below:
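For anyone who wants to reproduce those sweeps, here’s a rough sketch of how the simulation can be wrapped in a function and looped over a grid. The function name power.sim and the grids below are illustrative choices of mine, not from any package:

power.sim <- function(nn, odds.ratio, intercept = log(9), runs = 1000) {
  beta <- log(odds.ratio)
  ## one run: generate data, fit the model, test the slope's p value
  result <- replicate(runs, {
    xtest <- rnorm(nn)
    linpred <- intercept + xtest * beta
    prob <- exp(linpred)/(1 + exp(linpred))
    ytest <- ifelse(runif(nn) < prob, 1, 0)
    fit <- glm(ytest ~ xtest, family = "binomial")
    summary(fit)$coefficients[2, 4] < 0.05
  })
  mean(result)
}

## sweep sample sizes for several odds ratios
sizes <- seq(100, 1000, by = 100)
odds.ratios <- c(1.2, 1.5, 2)
power <- sapply(odds.ratios,
                function(or) sapply(sizes, power.sim, odds.ratio = or))
## one line per odds ratio
matplot(sizes, power, type = "l", xlab = "Sample size", ylab = "Power")

Using fewer runs (say 1,000 rather than 10,000) keeps the sweep manageable, at the cost of noisier curves.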

As ever, if anyone can spot an error or suggest a simpler way to do this then let me know. I haven’t tested my simulation against any packages which calculate power for logistic regression, but if anyone can it would be great to hear from you.


How to check if a file exists with HTTP and R

September 1, 2010

So, there’s probably an easier way to do this (please let me know if you know it)…

Suppose you’re working with a system which creates (binary) files and posts them for download on a website. You know the names of the files that will be created, but they may not have been made yet (they’re generated on the fly, and appear in a vaguely random order over time). There are several of them, and you want to know which ones are there yet; when enough have been uploaded, you run an analysis.

I spent quite a bit of time trying to work this out, and eventually came up with the following solution:

require(RCurl)
newurl <- c("http://cran.r-project.org/web/packages/RCurl/RCurl.pdf",
            "http://cran.r-project.org/web/packages/RCurl/RCurl2.pdf")
## loop in reverse so the missing file is tried first - without try()
## the error would stop the loop before the real file was checked
for (n in 2:1){
   z <- ""
   try(z <- getBinaryURL(newurl[n], failonerror = TRUE))
   ## a successful download leaves a raw vector (length > 1) in z
   if (length(z) > 1) {print(paste(newurl[n], " exists", sep = ""))
      } else {print(paste(newurl[n], " doesn't exist", sep = ""))}
   }

What this does is use RCurl to download the file into the variable z; the script then checks whether z now contains the file.

If the file doesn’t exist, getBinaryURL() returns an error, and your loop (if you are doing several files) will quit. Wrapping getBinaryURL() in try() means that the error won’t stop the loop from trying the next file (if you don’t trust me, try the above without the try wrapper). You can see how wrapping this in a loop could quickly go through several files and download the ones which exist.

I’d really like to be able to do this without actually downloading the whole file (e.g. fetching just the first 100 bytes), to see how many of the files of interest have been created, and if enough have, then download them all. I just can’t work out how yet – I tried the range option of getBinaryURL(), but this just crashed R. This would be useful if you are collecting data in real time and you know you need at least (for example) 80% of the data to be available before you jump into a computationally expensive algorithm.
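Since writing this, I suspect RCurl’s url.exists() would do the trick – as I understand it, it issues a HEAD request, so only the headers come down the wire. I haven’t tested it against the setup above, so treat this as a sketch:

require(RCurl)
## url.exists() sends a HEAD request, so the file body is never downloaded
ready <- sapply(newurl, url.exists)
print(ready)
## e.g. only kick off the expensive analysis once 80% of the files exist
if (mean(ready) >= 0.8) print("enough files are available")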

So, there must be an easier way to do all this, but can I find it? …


Summary plots

August 2, 2010

So, when you first look at some data, it’s helpful to get a feel for it. One way to do this is to do a plot or two. I’ve found myself repeatedly making the same series of plots for different datasets, so in the end I wrote this short piece of code to put all the plots together as a time-saving device. Not pretty, but it gets the job done.

The output looks like this:

So on the top is a histogram with a fitted normal curve overlaid. On the right is a normal QQ plot, with an Anderson-Darling p value, and at the bottom left a boxplot annotated with the median and quartiles. Then in the middle on the left, the same data are put into different numbers of bins, to see how this affects the look of the data. And on the right, we pretend that each value is the next one in a time series with equal time intervals between readings, and plot it. Below this are the ACF and PACF plots.

Hope someone else finds this useful. If there’s easier ways to do this, let me know. To use the code – put your data into a text file as a series of numbers called data.txt in the working directory, and run this code:

## univariate data summary
require(nortest)
data <- as.numeric(scan("data.txt"))
# first job is to save the graphics parameters currently used
def.par <- par(no.readonly = TRUE)
par("plt" = c(.2,.95,.2,.8))
# lay out the 10 panels: histogram, QQ plot, boxplot, four re-binned
# histograms, time series, ACF and PACF
layout( matrix(c(1,1,2,2,1,1,2,2,4,5,8,8,6,7,9,10,3,3,9,10), 5, 4, byrow = TRUE))

#histogram on the top left
h <- hist(data, breaks = "Sturges", plot = FALSE)
# overlay a normal curve scaled to the histogram counts
xfit <- seq(min(data), max(data), length = 100)
yfit <- dnorm(xfit, mean = mean(data), sd = sd(data))
yfit <- yfit * diff(h$mids[1:2]) * length(data)
plot(h, axes = TRUE, main = "Sturges")
lines(xfit, yfit, col = "blue", lwd = 2)
leg1 <- paste("mean = ", round(mean(data), digits = 4))
leg2 <- paste("sd = ", round(sd(data),digits = 4)) 
legend(x = "topright", c(leg1,leg2), bty = "n")

## normal qq plot
qqnorm(data, bty = "n", pch = 20)
qqline(data)
p <- ad.test(data)
leg <- paste("Anderson-Darling p = ", round(p$p.value, digits = 4))
legend(x = "topleft", leg, bty = "n")

## boxplot (bottom left)
boxplot(data, horizontal = TRUE)
leg1 <- paste("median = ", round(median(data), digits = 4))
lq <- quantile(data, 0.25)
leg2 <- paste("25th quantile =  ", round(lq,digits = 4)) 
uq <- quantile(data, 0.75)
leg3 <- paste("75th quantile = ", round(uq,digits = 4)) 
legend(x = "top", leg1, bty = "n")
legend(x = "bottom", paste(leg2, leg3, sep = "; "), bty = "n")


## the same data in 12, 10, 8 and 6 equal-width bins
h2 <- hist(data, breaks = (0:12 * (max(data) - min(data))/12) + min(data), plot = FALSE)
plot(h2, axes = TRUE, main = "12 bins")

h3 <- hist(data, breaks = (0:10 * (max(data) - min(data))/10) + min(data), plot = FALSE)
plot(h3, axes = TRUE, main = "10 bins")

h4 <- hist(data, breaks = (0:8 * (max(data) - min(data))/8) + min(data), plot = FALSE)
plot(h4, axes = TRUE, main = "8 bins")

h5 <- hist(data, breaks = (0:6 * (max(data) - min(data))/6) + min(data), plot = FALSE)
plot(h5, axes = TRUE, main = "6 bins")

## the time series, ACF and PACF
plot (data, main = "Time series", pch = 20)
acf(data, lag.max = 20)
pacf(data, lag.max = 20)

## reset the graphics display to default
par(def.par)

Visualizing 3d data – plotting quartiles separately

July 30, 2010

In this previous post, we looked at displaying three-dimensional data. One major problem is that when the density of data is high, it can be difficult to see what’s going on in a 3-dimensional plot.

One way of looking at the data in more detail is to break it up.  Take a look at this graph:

This is a plot of air quality data for Nottingham, UK, taken hourly in 2009 (the code to create it in base R is at the bottom of the page). On the left is a scatterplot of NO2 against ozone (plot A). The colours indicate the quartile of the ozone level into which each point falls. On the right are plots of NO vs NO2 for the same data, with a separate plot for each quartile of the ozone data. The points are all colour co-ordinated, so the red points indicating the upper quartile of the ozone data in plot A are matched by red points in plot B.

So you can see, by comparing plots E and D, that at the lowest quartile of ozone levels there is a greater spread of both NO2 and NO.

How this is done is pretty simple (most of the code is there to make things vaguely pretty). Essentially, the values for x, y and z are put into a matrix xyz. The rows of the matrix are ordered according to the z variable. The row numbers which delineate each quartile are calculated, and then the plots B to E of x vs y are drawn, using only the rows for that quartile. The axes are plotted so that they are on the same scale for each of the plots. There’s not much room for the axis labels – so these are added afterwards with the legend command.

Then on the left, the plot of y (on the horizontal axis) against z (on the vertical axis) is drawn, with some added lines to show where the boundaries of each quartile lie. The colours are stored in the cols column of the xyz matrix. Like most of my code, the graph is portable: you just need to input different values for x, y and z and re-label the names for each variable. The original dataset is the same one which I have used for my previous posts. It is from the UK air quality database. If you copy this file into your working directory and run the code below, you’ll repeat the plot.

Any suggestions for improvements / comments would be most appreciated!

## name the columns of the data
columns <- c("date", "time", "NO", "NO_status", "NO_unit", "NO2",
	"NO2_status", "NO2_unit", "ozone", "ozone_status", "ozone_unit", 
	"SO2", "SO2_status", "SO2_unit")
## read in the data, store it in variable data
data <- read.csv("27899712853.csv", header = FALSE, skip = 7, 
	col.names = columns, stringsAsFactors = FALSE)

## now make the x,y and z variables

x <- data$NO
y <- data$NO2
z <- data$ozone
cols <- rep(1,length(z))

## bind the variables and the colour column into one matrix
xyz <- cbind(x, y, z, cols)
colq1 <- 6
colq2 <- 4
colq3 <- 3
colq4 <- 2

xl <- "NO"
yl <- "NO2"
zl <- "Ozone"

point <- 20 

# re order by z
xyz <- xyz[order(xyz[,3]),]
# now define the row numbers for the quartile boundries
maxxyz <-  nrow(xyz)
q1xyz <- round(maxxyz/4)
medianxyz <-  round(maxxyz/2)
q3xyz <- round(maxxyz*3/4)

# assign a colour to each quartile (the cols column of xyz)
xyz[1:q1xyz, 4] <- colq1
xyz[(q1xyz + 1):medianxyz, 4] <- colq2
xyz[(medianxyz + 1):q3xyz, 4] <- colq3
xyz[(q3xyz + 1):nrow(xyz), 4] <- colq4

# define the maximum values for x, y and z
# these are used to ensure all the axes are on the same scale
maxx <- max(x)
maxy <- max(y)
maxz <- max(z)


# now make the plot
# first job is to save the graphics parameters currently used
def.par <- par(no.readonly = TRUE)
# define the margins around each plot
par("mar" = c(2,2,0.5,0.5))
# make the layout for the plot
layout(matrix(c(5,1,5,2,5,3,5,4), 4, 2, byrow = TRUE))

# now do the four plots on the right
plot(xyz[q3xyz:maxxyz,1],xyz[q3xyz:maxxyz,2], col = colq4, 
	xlab = xl, ylab = yl, pch=point, xlim = c(0,maxx), 
	ylim = c(0,maxy))
legend(x = "right", yl, bty = "n")
legend(x = "topright", "B", bty = "n")

plot(xyz[medianxyz:q3xyz,1],xyz[medianxyz:q3xyz,2], col = colq3,
	pch=point, xlim = c(0,maxx), ylim = c(0,maxy))
legend(x = "right", yl, bty = "n")
legend(x = "topright", "C", bty = "n")

plot(xyz[q1xyz:medianxyz,1],xyz[q1xyz:medianxyz,2], col = colq2, 
	pch=point, xlim = c(0,maxx), ylim = c(0,maxy))
legend(x = "right", yl, bty = "n")
legend(x = "topright", "D", bty= "n")

plot(xyz[1:q1xyz,1], xyz[1:q1xyz,2], col = colq1, pch=point, 
	xlim = c(0,maxx), ylim = c(0,maxy))
legend(x = "right", yl, bty = "n")
legend(x = "bottom", xl, bty = "n")
legend(x = "topright", "E", bty = "n")

# now do the plot on the left
plot(xyz[,2],xyz[,3], col = xyz[,4], pch=point, xlim = c(0,maxy))
legend(x = "bottom", yl, bty = "n")
legend(x = "right", zl, bty = "n")
legend(x = "topright", "A", bty = "n")

abline(h=xyz[q1xyz,3],col=3,lty=2)
abline(h=xyz[medianxyz,3],col=4)
abline(h=xyz[q3xyz,3],col=5,lty=2)

## reset the graphics display to default
par(def.par)

Turning your data into a 3d chart

July 23, 2010

Some charts are to help you analyse data. Some charts are to wow people. 3d charts are often the latter, but occasionally the former. In this post, we’ll look at how to turn your data into a 3d chart.

Let’s use the data from this previous post. Use the code which turns the .csv spreadsheet into 3 variables, x, y, and z.

3d charts generally need other packages. We’ll kick off with scatterplot3d, which perhaps makes things too easy:

library(scatterplot3d)
## highlight.3d colours points by depth; angle and scale.y adjust the view
scatterplot3d(x, y, z, highlight.3d = TRUE, angle = 75, scale.y = .5)

The difficulty with 3d plots is that by definition, you’re looking at a 3d plot on a 2d surface. Wouldn’t you like to be able to rotate that plot around a bit? We’ll use the package rgl. Then type:

library(rgl)
plot3d(x,y,z)

This pulls up an interactive window which you can rotate. Very helpful? Perhaps, but there are too many points. Perhaps you only want to look at the middle 33% of them (i.e. plot a subset of the data)?

startplot <- 33
endplot <- 66
## convert the percentages into start and end indices
a <- round(startplot/100 * length(x))
b <- round(endplot/100 * length(x))
plot3d(x[a:b], y[a:b], z[a:b], col = heat.colors(1000))

This looks much better. We’ve said we’d start 33% of the way through the x, y, z co-ordinates and end at 66%, using the startplot and endplot variables. This is helpful – remember this is one year of data, and we’ve just displayed the middle of the year. The heat colours also help to distinguish between points, but in this case they don’t add any extra data – more on that in posts to come.
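If you do want the colours to carry information, one option (a sketch, untested on this dataset) is to index the palette by the z value, so that colour encodes ozone level rather than just the order of the points:

## bin z into 100 levels and use the bin number as an index into the palette
colidx <- as.integer(cut(z[a:b], breaks = 100))
plot3d(x[a:b], y[a:b], z[a:b], col = heat.colors(100)[colidx])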


Quick scatterplot with associated histograms

July 22, 2010

R can produce some beautiful graphics, and there are some excellent packages, such as lattice and ggplot2, to represent data in original ways. But sometimes, all you want to do is explore the relationship between pairs of variables with the minimum of fuss.

In this post we’ll use the data which we imported in the previous post to make a quick graphic. I’ll assume you’ve already got as far as importing the data and placing the variable for NO concentration into x and ozone into y.

We’re going to make a scatterplot with the histogram of x below the x axis, and the histogram of y rotated anti-clockwise through 90 degrees and alongside the y axis (all will become clear).  The first thing is to set up the graphics display:

## start by saving the original graphical parameters
def.par <- par(no.readonly = TRUE)
## then change the margins around each plot to 1
par("mar" = c(1,1,1,1))
## then set the layout of the graphic
layout(matrix(c(2,1,1,2,1,1,4,3,3), 3, 3, byrow = TRUE))

The layout command tells R to split the graphical output into a 3 by 3 array of panels. Each panel is given a number corresponding to the order in which graphics are plotted into it. To see this array, type:

matrix(c(2,1,1,2,1,1,4,3,3), 3, 3, byrow = TRUE)

This output shows that the display is split into four zones: a large area on the top right for plot 1, a smaller panel on the top left for plot 2, the bottom right for plot 3, and the bottom left for plot 4 (which stays empty here).
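Incidentally, a handy way to check a layout before plotting into it is layout.show(), which draws the outline of each panel with its number (call layout() again afterwards, before doing the real plots):

layout(matrix(c(2,1,1,2,1,1,4,3,3), 3, 3, byrow = TRUE))
layout.show(4)  # outline panels 1 to 4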

So then, we need something for the top right – a straightforward scatterplot of x vs y. We set the maxima for the axes with the xlim and ylim parameters of plot, using the maxx and maxy variables, which contain the maximum values held in each vector:

maxx <- max(x)
maxy <- max(y)
plot(x, y, xlab = "", ylab = "", pch = 20, bty = "n", 
   xlim = c(0, maxx), ylim = c(0, maxy))

Then, we need to create a histogram of the y values, and plot it to the left of the scatterplot, appropriately orientated. To do this we first store a histogram in the variable yh, and then plot it with the barplot command. The reason for this is that barplots can be easily rotated:

breaks <- 50
yh <- hist(y, breaks = (maxy/breaks) * (0:breaks), plot = FALSE)
## note: $density is the current name for what older versions of R
## called $intensities
barplot(-(yh$density), space = 0, horiz = TRUE, axes = FALSE)

The breaks variable stores the number of bins into which the histogram is divided, maxy is the maximum value of the vector y, and yh is the histogram object. barplot then extracts the bar heights (yh$density) from the histogram object and draws them as a bar chart, flipped on its side. The negative sign before yh$density points the bars to the left rather than the right.
We do the same for the x values, and then reset the graphics display to the defaults.

xh <- hist(x, breaks = (maxx/breaks) * (0:breaks), plot = FALSE)
barplot(-(xh$density), space = 0, horiz = FALSE, axes = FALSE)
## reset the graphics display to default
par(def.par)

We get this output:

The advantage of this over the straight scatterplot is that you can see the density of overlapping points on the histogram. I’ve set the number of bins in the histogram to 50 – it’s worth playing around with this with your data. There are more elegant ways of doing this, but if you have paired variables x and y, and you want to quickly look at their distributions and association, this code works fine.


Matrix scatterplot of the Airquality data using lattice

In this post we will build on the last one and create a matrix scatterplot. The package lattice allows for some really excellent graphics. In case you haven’t already seen it, I recommend the R Graph Gallery for some examples of what it can do – browse the graphics by the package used to create them. We’ll use the same dataset as last time, where we made a plot of NO levels in the atmosphere vs ozone levels for Nottingham, UK.

First step is to load the lattice package.

require("lattice")

Download the dataset from here, and put the file in your working directory. Now we’ll read the dataset into the data frame data.

columns <- c("date", "time", "NO", "NO_status", "NO_unit",
      "NO2", "NO2_status", "NO2_unit", "ozone", "ozone_status",
      "ozone_unit", "SO2", "SO2_status", "SO2_unit")
data <- read.csv("27899712853.csv", header = FALSE,
      skip = 7, col.names = columns, stringsAsFactors = FALSE)
x <- data$NO
y <- data$ozone
z <- data$SO2

So that it’s easier to follow, I’ve extracted 3 vectors from the data frame: x, y, and z. These are the columns of the data for NO, ozone and SO2. Hopefully this will help you follow things. When working with graphs, I usually do this (in the last post I extracted x and y). If I make a nice graphic I can then “cut and paste” it into another program, and just change the data in x, y and z – and hey presto, the same graphic is instantly used with new data.

For a matrix scatterplot, we need to make a matrix of the variables to compare. We join the vectors into a matrix and then name the columns.

mat <- cbind(x,y)
mat <- cbind(mat,z)
colnames(mat) <- c("NO", "ozone", "SO2")

You can look at the first 10 lines of mat with

mat[1:10,]

Finally we create the matrix plot:

title <- "Matrix scatterplot of air pollutants"
print(splom(mat, main = title))

The final result is here:

For those unfamiliar with matrix scatterplots – this plot is essentially the 3 scatterplots of x vs y, x vs z and y vs z. The middle left plot is the scatterplot created in this previous post. The package lattice can do lots more than this – get help on it with the command

?lattice
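As a taster of what else it can do – and this is only a sketch, so do check the arguments against the documentation – splom() takes a groups argument, which colours the points in every panel by a grouping variable. The grouping variable here (whether SO2 is above its median) is just for illustration:

## colour NO vs ozone points by whether SO2 is above its median
high.so2 <- mat[, "SO2"] > median(mat[, "SO2"], na.rm = TRUE)
print(splom(as.data.frame(mat[, c("NO", "ozone")]),
      groups = high.so2, auto.key = TRUE))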