Archive for the ‘Uncategorized’ Category

Machine learning techniques in the biomedical literature

December 16, 2011

There are relatively few articles published on applying machine learning techniques to what many would consider “classical” biomedical study designs (e.g. a sample size of 200 and about 10 parameters). But they may start being published, and this is a list to get going with. Not all of the articles below fit the above criteria, but I’ve kept them here as they’re interesting (at least to me).

This post was motivated by this question on Cross Validated. I will add to the list as I find more articles or as people point them out to me. It’s very short at the moment! Let me know of any broken links.

Statnikov A, Wang L, Aliferis CF. A comprehensive comparison of random forests and support vector machines for microarray-based cancer classification. BMC Bioinformatics. 2008 Jul 22;9:319.

Van Loon K, Guiza F, Meyfroidt G, Aerts JM, Ramon J, Blockeel H, Bruynooghe M, Van Den Berghe G, Berckmans D. Dynamic data analysis and data mining for prediction of clinical stability. Stud Health Technol Inform. 2009;150:590-4.

Luaces O, Taboada F, Albaiceta GM, Domínguez LA, Enríquez P, Bahamonde A; GRECIA Group. Predicting the probability of survival in intensive care unit patients from a small number of variables and training examples. Artif Intell Med. 2009 Jan;45(1):63-76. Epub 2009 Jan 29.

Wu TT, Chen YF, Hastie T, Sobel E, Lange K. Genome-wide association analysis by lasso penalized logistic regression. Bioinformatics. 2009 Mar 15;25(6):714-21. Epub 2009 Jan 28.

Schwaighofer A, Schroeter T, Mika S, Blanchard G. How wrong can we get? A review of machine learning approaches and error bars. Comb Chem High Throughput Screen. 2009 Jun;12(5):453-68.

Huang H, Chanda P, Alonso A, Bader JS, Arking DE. Gene-based tests of association. PLoS Genet. 2011 Jul;7(7):e1002177. Epub 2011 Jul 28.

Liu Z, Shen Y, Ott J. Multilocus association mapping using generalized ridge logistic regression. BMC Bioinformatics. 2011 Sep 29;12:384.


Hug CW. Predicting the risk and trajectory of intensive care patients using survival models. Thesis, Massachusetts Institute of Technology; 2006.

Talks / slides / videos:
Victoria Stodden’s slides


Logistic regression – simulation for a power calculation…

November 18, 2010

Please note: I’ve spotted a problem with the approach taken in this post – it seems to underestimate power in certain circumstances. I’ll post again with a correction or a fuller explanation when I’ve sorted it.

So, I posted an answer on Cross Validated regarding logistic regression. I thought I’d post it in a little more depth here, with a few illustrative figures. It’s based on the approach which Stephen Kolassa described.

Power calculations for logistic regression are discussed in some detail in Hosmer and Lemeshow (Ch 8.5). One approach with R is to simulate a dataset a few thousand times, and see how often the fitted model yields a significant p value. If it does 95% of the time, then you have 95% power.

In this code we use the approach which Kleinman and Horton use to simulate data for a logistic regression. We then initially calculate the overall proportion of events. To change the overall proportion of events, adjust the intercept; to change the effect size, adjust odds.ratio. The independent variable is assumed to be normally distributed with mean 0 and variance 1.

nn <- 950
runs <- 10000
intercept <- log(9)
odds.ratio <- 1.5
beta <- log(odds.ratio)
proportion  <-  replicate(
              n = runs,
              expr = {
                  xtest <- rnorm(nn)
                  linpred <- intercept + (xtest * beta)
                  prob <- exp(linpred)/(1 + exp(linpred))
                  runis <- runif(length(xtest), 0, 1)
                  ytest <- ifelse(runis < prob, 1, 0)
                  ## proportion of events (ytest == 1) in this simulated dataset
                  mean(ytest)
              })

This plot shows how the intercept and odds ratio affect the overall proportion of events per trial:

When you’re happy that the proportion of events is right (given some prior knowledge of the dataset), you can then fit a model to each simulated dataset and calculate its p value. We use R’s inbuilt function replicate to do this 10,000 times, and count the proportion of runs in which p < 0.05. That proportion is essentially the power of the logistic regression for your number of cases, odds ratio and intercept.

result <-  replicate(
              n = runs,
              expr = {
                  xtest <- rnorm(nn)
                  linpred <- intercept + (xtest * beta)
                  prob <- exp(linpred)/(1 + exp(linpred))
                  runis <- runif(length(xtest), 0, 1)
                  ytest <- ifelse(runis < prob, 1, 0)
                  ## TRUE when the slope's p value is below 0.05
                  summary(glm(ytest ~ xtest, family = "binomial"))$coefficients[2, 4] < 0.05
              })
## the power is the proportion of significant runs
power <- mean(result)

I checked it against the examples given in Hsieh, 1999. It worked pretty well, calculating power to within ~1% of the values given in Table II of that paper.
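It is also worth knowing how much Monte Carlo noise is left in an estimate like this when choosing runs. A small sketch, not from the original post (the result vector below is a stand-in for the real simulation output):

```r
## Monte Carlo standard error of a simulated power estimate: with `runs`
## simulations and estimated power p, the standard error is
## sqrt(p * (1 - p) / runs).
runs <- 10000
set.seed(1)
result <- runif(runs) < 0.80           # stand-in for the simulation output
power <- mean(result)                  # estimated power
se <- sqrt(power * (1 - power) / runs) # Monte Carlo standard error
round(c(power = power, se = se), 3)
```

With 10,000 runs the standard error is around 0.004, so quoting power to the nearest percentage point is about as precise as this simulation justifies.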

We can do some interesting things with R. I simulated a range of odds ratios and a range of sample sizes. The plot of these looks like this (each line represents an odds ratio):

We can also keep the odds ratio constant, but adjust the proportion of events per trial. This looks like this (each line represents an event rate):

As ever, if anyone can spot an error or suggest a simpler way to do this then let me know. I haven’t tested my simulation against any packages which calculate power for logistic regression, but if anyone can it would be great to hear from you.

Science is vital – what we don’t know yet

This post is not about R (for a change). For working UK scientists, science is vital – sign the on-line petition to preserve science funding.

For my contribution of what we don’t know yet –

We don’t know whether we can use biomarkers of kidney injury to personalise the doses of medications, maximising the dose for the patient whilst minimising any renal side effects.

Categories: Uncategorized

An HSV colour wheel in R

If you’ve read any of my previous posts, you’ll notice that they’re rather scanty on colour. There’s a reason for this: mainly that getting good colour output takes some time. I recently read a commentary in Nature Methods (sorry if you don’t have access to it, but it looks like it may be the first part of an interesting series of articles) which discusses colour in graphics. The author suggests a colour wheel, and I thought I’d have a go in R:

You have to click on it to read the text, sorry. There are probably much easier ways to do it, and it takes a silly amount of time to render (several seconds! – all those nested loops), but the code below makes the colour wheel. If you set the variables t.hue, t.sat and t.val, the bottom right box is the resulting colour (the box just to the bottom right of the colour wheel is the hue with sat and val set to 1.0). To the right is the plot of val, and below is the plot of sat. As you go anti-clockwise from the x axis round the wheel, the hue increases from 0.0 to 1.0.

So you can play around with colour, see what works and what doesn’t. This uses the HSV approach, which seemed okay for my purposes. rgb2hsv() converts rgb into hsv (obviously), if you are more familiar with the RGB approach. There are lots of other resources for colour in R, one of my favourites is here, and of course you can always search R-bloggers.
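As a quick illustration of the conversion, here is where a familiar RGB colour lands in HSV space and back again:

```r
## Pure red in RGB is hue 0, full saturation, full value in HSV,
## and hsv() maps it straight back to the hex code.
rgb2hsv(255, 0, 0)        # h = 0, s = 1, v = 1
hsv(h = 0, s = 1, v = 1)  # "#FF0000"
```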

## colour plot

t.hue <- 0.65     ## this is the user entered hue, sat and value
t.sat <- 0.5
t.val <- 0.9
def.par <- par(no.readonly = TRUE)
layout( matrix(c(1,1,2,1,1,2,3,3,4), 3, 3, byrow = TRUE))

## prepare the plot for the wheel 
x <- (-100:100)*0.01
y <- (-100:100)*0.01
## blank plot to prepare the axis
plot(x,y, pch = 20, col = 0, bty = "n", xaxt = "n", yaxt = "n", ann = F) 

## make the wheel
for (x in (-100:100)*0.01){
  for (y in (-100:100)*0.01){
    theta <- atan2(y,x)     # theta is the angle
    hue <-  Mod(theta/(pi)) # make the hue dependent upon the angle
    sat <- (x^2 + y^2)      # make the saturation depend upon distance from origin
    if (x^2 + y^2 <= 1){
       if (y > 0) {points(x,y, pch = 19, col = hsv(h = hue/2, s = sat, v = 1))}
       if (y < 0) {points(-x,y, pch = 19, col = hsv(h = hue/2 + 0.5, s = sat, v = 1))}
    }
  }
}
legend("center", "hue", bty = "n")
text(0.9,0, labels = "0.0")
text(0,0.9, labels = "0.25")
text(-0.9,0, labels = "0.5")
text(0,-0.9, labels = "0.75")
## bottom right colour box inset into wheel
for (x in (80:100)*0.01){
  for (y in (-80:-100)*0.01){
    points(x, y, pch = 19, col = hsv(t.hue, s = 1, v = 1))
  }
}

## right sided v scale
x <- (0:100)*0.01
y <- (0:100)*0.01
plot(x,y, pch = 20, col = 0, xaxt = "n", yaxt = "n", bty = "n", ann = F)
for (x in (50:100)*0.01){
  for (y in (0:100)*0.01){
    points(x, y, pch = 19, col = hsv(h = t.hue, s = 1, v = y))
  }
}
legend("topleft", "value", bty = "n")
arrows(0.0, t.val, 0.5, t.val, length = 0.01, angle = 20)

## bottom saturation scale
x <- (0:100)*0.01
y <- (0:100)*0.01
plot(x,y, pch = 20, col = 0, xaxt = "n", yaxt = "n", bty = "n", ann = F)
for (x in (0:100)*0.01){
  for (y in (0:50)*0.01){
    points(x, y, pch = 19, col = hsv(h = t.hue, s = x, v = 1))
  }
}
legend("topleft", "saturation", bty = "n")
arrows(t.sat, 1.0, t.sat, 0.5, length = 0.01, angle = 20)

## bottom right plot
x <- (0:100)*0.01
y <- (0:100)*0.01
plot(x,y, pch = 20, col = 0, xaxt = "n", yaxt = "n", bty = "n", ann = F)
for (x in (0:25)*0.01){
  for (y in (0:100)*0.01){
    points(x, y, pch = 19, col = hsv(h = t.hue, s = t.sat, v = t.val))
  }
}
legtr <- paste("hue=", t.hue, sep = "")
legr  <- paste("sat=", t.sat, sep = "")
legbr <- paste("val=", t.val, sep = "")
legend("topright", legtr, bty = "n")
legend("right", legr, bty = "n")
legend("bottomright", legbr, bty = "n")

## reset the graphics display to default
par(def.par)

Summary plots

August 2, 2010

So, when you first look at some data, it helps to get a feel for it. One way to do this is to do a plot or two. I found myself repeatedly making the same series of plots for different datasets, so in the end I wrote this short piece of code to put all the plots together, as a time-saving device. Not pretty, but it gets the job done.

The output looks like this:

So, on the top left is a histogram with a fitted normal density curve, and on the top right a normal QQ plot with an Anderson-Darling p value. At the bottom left is a boxplot with the median and quartiles. In the middle, the same data are put into different numbers of bins, to see how this affects the look of the data. On the right, we pretend that each value is the next one in a time series with equal time intervals between readings, and plot these. Below this are the ACF and PACF plots.

Hope someone else finds this useful. If there’s easier ways to do this, let me know. To use the code – put your data into a text file as a series of numbers called data.txt in the working directory, and run this code:

## univariate data summary
library(nortest)   # for the Anderson-Darling test, ad.test(), used below
data <- as.numeric(scan("data.txt"))
# first job is to save the graphics parameters currently used
def.par <- par(no.readonly = TRUE)
par("plt" = c(.2,.95,.2,.8))
layout( matrix(c(1,1,2,2,1,1,2,2,4,5,8,8,6,7,9,10,3,3,9,10), 5, 4, byrow = TRUE))

# histogram on the top left, with a fitted normal density curve
h <- hist(data, breaks = "Sturges", plot = FALSE)
xfit <- seq(min(data), max(data), length = 100)
yfit <- dnorm(xfit, mean = mean(data), sd = sd(data))
yfit <- yfit*diff(h$mids[1:2])*length(data)
plot (h, axes = TRUE, main = "Sturges")
lines(xfit, yfit, col="blue", lwd=2)
leg1 <- paste("mean = ", round(mean(data), digits = 4))
leg2 <- paste("sd = ", round(sd(data), digits = 4))
legend(x = "topright", c(leg1,leg2), bty = "n")

## normal qq plot
qqnorm(data, bty = "n", pch = 20)
p <- ad.test(data)   # from the nortest package
leg <- paste("Anderson-Darling p = ", round(p$p.value, digits = 4))
legend(x = "topleft", leg, bty = "n")

## boxplot (bottom left)
boxplot(data, horizontal = TRUE)
leg1 <- paste("median = ", round(median(data), digits = 4))
lq <- quantile(data, 0.25)
leg2 <- paste("25th quantile =  ", round(lq,digits = 4)) 
uq <- quantile(data, 0.75)
leg3 <- paste("75th quantile = ", round(uq,digits = 4)) 
legend(x = "top", leg1, bty = "n")
legend(x = "bottom", paste(leg2, leg3, sep = "; "), bty = "n")

## the various histograms with different bins
h2 <- hist(data,  breaks = (0:12 * (max(data) - min (data))/12)+min(data), plot = FALSE)
plot (h2, axes = TRUE, main = "12 bins")

h3 <- hist(data,  breaks = (0:10 * (max(data) - min (data))/10)+min(data), plot = FALSE)
plot (h3, axes = TRUE, main = "10 bins")
h4 <- hist(data,  breaks = (0:8 * (max(data) - min (data))/8)+min(data), plot = FALSE)
plot (h4, axes = TRUE, main = "8 bins")

h5 <- hist(data,  breaks = (0:6 * (max(data) - min (data))/6)+min(data), plot = FALSE)
plot (h5, axes = TRUE,main = "6 bins")

## the time series, ACF and PACF
plot (data, main = "Time series", pch = 20)
acf(data, lag.max = 20)
pacf(data, lag.max = 20)

## reset the graphics display to default
par(def.par)

Turning your data into a 3d chart

July 23, 2010

Some charts are to help you analyse data. Some charts are to wow people. 3d charts are often the latter, but occasionally the former. In this post, we’ll look at how to turn your data into a 3d chart.

Let’s use the data from this previous post. Use the code which turns the .csv spreadsheet into 3 variables, x, y, and z.

3d charts generally need other packages. We’ll kick off with scatterplot3d, which perhaps makes things too easy:

library(scatterplot3d)
scatterplot3d(x, y, z, highlight.3d = TRUE, angle = 75, scale.y = .5)

The difficulty with 3d plots is that, by definition, you’re looking at a 3d plot on a 2d surface. Wouldn’t you like to be able to rotate that plot around a bit? We’ll use the package rgl. Then type:

library(rgl)
plot3d(x, y, z)

This pulls up an interactive window which you can rotate. Very helpful? Perhaps, but there are too many points. Perhaps you only want to look at the middle 33% of them (i.e. look at a subset of the plot)?

startplot <- 33
endplot <- 66
a <- round(startplot/100*length(x))
b <- round(endplot/100*length(x))
plot3d(x[a:b],y[a:b],z[a:b], col = heat.colors(1000))

This looks much better. With the startplot and endplot variables we’ve said we’ll start 33% of the way through the x, y, z co-ordinates and end at 66%. This is helpful – remember this is one year of data, and we’ve just displayed the middle of the year. The heat-map colouring also helps to distinguish between points, although in this case it doesn’t add any extra information – more on that in posts to come.
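The percentage-window arithmetic is easy to reuse. A small sketch with a hypothetical helper (pct_window is my name, not from the post) that returns the indices for any percentage window of a vector:

```r
## Return the indices covering the from%-to% window of a vector.
pct_window <- function(v, from, to) {
  a <- round(from/100 * length(v))
  b <- round(to/100 * length(v))
  seq(max(a, 1), b)
}

idx <- pct_window(1:365, 33, 66)
range(idx)   # roughly the middle third of a year of daily readings
```

You could then plot with plot3d(x[idx], y[idx], z[idx], ...) rather than repeating the index arithmetic for each vector.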


Quick scatterplot with associated histograms

July 22, 2010

R can produce some beautiful graphics, and there are some excellent packages, such as lattice and ggplot2, for representing data in original ways. But sometimes all you want to do is explore the relationship between pairs of variables with the minimum of fuss.

In this post we’ll use the data which we imported in the previous post to make a quick graphic.  I’ll assume you already got as far as importing the data and placing the variable for NO concentration into x and ozone into y.

We’re going to make a scatterplot with the histogram of x below the x axis, and the histogram of y rotated anti-clockwise through 90 degrees and alongside the y axis (all will become clear).  The first thing is to set up the graphics display:

## start by saving the original graphical parameters
def.par <- par(no.readonly = TRUE)
## then change the margins around each plot to 1
par("mar" = c(1,1,1,1))
## then set the layout of the graphic
layout(matrix(c(2,1,1,2,1,1,4,3,3), 3, 3, byrow = TRUE))

The layout command tells R to split the graphical output into a 3 by 3 array of panels. Each panel is given a number corresponding to the order in which graphics are plotted into it. To see this array, type:

matrix(c(2,1,1,2,1,1,4,3,3), 3, 3, byrow = TRUE)

This output shows that the display is split into four zones. The top right is a large area for plot one, the top left is a smaller panel for plot two, the bottom right is for plot three, and the bottom left corner (zone four) is left empty.
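A quick way to see these zones on the graphics device itself is layout.show(), which draws and numbers each panel:

```r
## Set up the 3x3 layout and outline its four numbered zones.
layout(matrix(c(2,1,1,2,1,1,4,3,3), 3, 3, byrow = TRUE))
layout.show(4)   # draws the outline of zones 1 to 4
```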

So then, we need something for the top right: a straightforward scatter plot of x vs y. (We set the maximum for each axis with the xlim and ylim parameters of plot, using the maxx and maxy variables, which hold the maximum values in each vector.)

maxx <- max(x)
maxy <- max(y)
plot(x, y, xlab = "", ylab = "", pch = 20, bty = "n", 
   xlim = c(0, maxx), ylim = c(0,maxy))

Then, we need to create a histogram of the y values, and plot it to the left of the histogram appropriately orientated. To do this we first store a histogram into the variable yh, and then plot it with the barplot command. The reason for this is that barplots can be easily rotated:

breaks <- 50
yh <- hist(y, breaks = (maxy/breaks)*(0:breaks), plot = FALSE)
barplot(-(yh$density), space = 0, horiz = TRUE, axes = FALSE)

The breaks variable stores the number of bins into which the histogram is divided, maxy is the maximum value of the vector y, and yh is the histogram object. barplot then takes the bar heights from the histogram object (yh$density; older versions of R called this yh$intensities) and draws them as a bar chart, flipped on its side. The negative sign before the densities points the bars to the left rather than the right.
We do the same for the x values, and also then reset the graphics display to defaults.

xh <- hist(x, breaks = (maxx/breaks)*(0:breaks), plot = FALSE)
barplot(-(xh$density), space = 0, horiz = FALSE, axes = FALSE)
## reset the graphics display to default
par(def.par)

We get this output:

The advantage of this over the straight scatterplot is that you can see the density of overlapping points on the histogram. I’ve set the number of bins in the histogram to 50 – it’s worth playing around with this with your data. There are more elegant ways of doing this, but if you have paired variables x and y, and you want to quickly look at their distributions and association, this code works fine.
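The rotated-histogram trick also works on its own, if you just want one flipped histogram: store the histogram without plotting, then hand its densities to barplot() with horiz = TRUE (the simulated y below is just for illustration):

```r
## Rotated histogram in isolation: compute first, then draw on its side.
set.seed(1)
y <- rnorm(500)
yh <- hist(y, breaks = 30, plot = FALSE)
barplot(-yh$density, space = 0, horiz = TRUE, axes = FALSE)
```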
