Categorically Variable

Real Time Lightning Data

A map of real time lightning data from the World Wide Lightning Location Network (WWLLN) is now available here.

 Read more

How Much Time to Conceive?

This morning my wife presented me with an interesting statistic: a healthy couple has a 25% chance of conception every month [1], which should result in a 75% to 85% chance of conception after a year. It occurred to me that it really can’t be that simple: there are surely a lot of variables which influence this probability. Certainly age should be a factor and, after a short search, I found some more age-specific information which indicated that for a woman in her thirties the monthly probability is only around 15% [2,3].
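
As a quick back-of-envelope check (my own sketch, not the analysis from the full post): if each month were an independent trial with success probability p, the probability of conceiving within a year would be 1 - (1 - p)^12. A constant 25% monthly probability would then give an annual figure of around 97%, well above the quoted 75% to 85%, which already hints that the simple story doesn’t hold together.

> p = 0.25
> 1 - (1 - p)^12    # about 0.97, not the quoted 75% to 85%
> #
> p = 0.15          # taking the thirties figure as a monthly probability
> 1 - (1 - p)^12    # about 0.86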

 Read more

Processing EXIF Data

I got quite inspired by the EXIF with R post on the Timely Portfolio blog and decided to do a similar analysis with my photographic database.

 Read more

Updated Comrades Winners Chart

On Friday I received my copy of The Official Results Brochure for the 2013 Comrades Marathon. It always makes for a diverting half an hour’s reading, and the tables at the front provide some very interesting statistics. It seemed like a good opportunity to update my Chart of Comrades Winners.

 Read more

Contour and Density Layers with ggmap

 Read more

Amy Cuddy: Your body language shapes who you are...

… but be careful with your plots: they might be misinterpreted.

Amy Cuddy gives a great talk. It provided me with lots to think about, and I will happily confess that I have struck a few power poses (but only after ensuring that I am quite alone)!

 Read more

Deriving a Priority Queue from a Plain Vanilla Queue

Following up on my recent post about implementing a queue as a reference class, I am going to derive a Priority Queue class.
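
As a taste of how the derivation works (a speculative sketch only: it assumes the Queue class from the post below, and the field and method names here are illustrative rather than those of the actual post), setRefClass() supports inheritance through its contains argument, so the subclass only needs a priorities field and a couple of overridden methods.

PriorityQueue <- setRefClass(Class = "PriorityQueue",
                             contains = "Queue",
                             fields = list(
                               priorities = "numeric"
                             ),
                             methods = list(
                               push = function(item, priority = 0) {
                                 'Inserts item, keeping the queue sorted by descending priority.'
                                 callSuper(item)
                                 priorities <<- c(priorities, priority)
                                 ord <- order(priorities, decreasing = TRUE)
                                 data <<- data[ord]
                                 priorities <<- priorities[ord]
                               },
                               pop = function() {
                                 'Removes and returns the item at the head of the queue.'
                                 value <- callSuper()
                                 priorities <<- priorities[-1]
                                 value
                               }
                             )
)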

 Read more

Implementing a Queue as a Reference Class

I am working on a simulation for an Automatic Repeat-reQuest (ARQ) algorithm. After trying various options, I concluded that I would need an implementation of a queue to make this problem tractable. R does not have a native queue data structure, so this seemed like a good opportunity to implement one and learn something about Reference Classes in the process.

The Implementation

We use setRefClass() to create a generator function which will create objects of the Queue class.

Queue <- setRefClass(Class = "Queue",
                     fields = list(
                       name = "character",
                       data = "list"
                     ),
                     methods = list(
                       size = function() {
                         'Returns the number of items in the queue.'
                         return(length(data))
                       },
                       #
                       push = function(item) {
                         'Inserts element at back of the queue.'
                         data[[size()+1]] <<- item
                       },
                       #
                       pop = function() {
                         'Removes and returns head of queue (or raises error if queue is empty).'
                         if (size() == 0) stop("queue is empty!")
                         value <- data[[1]]
                         data[[1]] <<- NULL
                         value
                       },
                       #
                       poll = function() {
                         'Removes and returns head of queue (or NULL if queue is empty).'
                         if (size() == 0) return(NULL)
                         else pop()
                       },
                       #
                       peek = function(pos = c(1)) {
                         'Returns (but does not remove) specified positions in queue (or NULL if any one of them is not available).'
                         if (size() < max(pos)) return(NULL)
                         #
                         if (length(pos) == 1) return(data[[pos]])
                         else return(data[pos])
                       },
                       initialize=function(...) {
                         callSuper(...)
                         #
                         # Initialise fields here (placeholder)...
                         #
                         .self
                       }
                     )
)

Methods of the Generator Function

Let’s look at some of the methods for the generator function. The first of these is new(), which will create a fresh instance of the class. We will return to more examples of this later.

> Queue$new()
Reference class object of class "Queue"
Field "name":
character(0)
Field "data":
list()

Next we have methods() and fields() which, not surprisingly, return vectors of the names of the class’s methods and data fields respectively.

> Queue$methods()
 [1] "callSuper"    "copy"         "export"       "field"        "getClass"     "getName"      "getRefClass" 
 [8] "import"       "initFields"   "initialize"   "peek"         "poll"         "pop"          "push"        
[15] "setName"      "show"         "size"         "trace"        "untrace"      "usingMethods"
> Queue$fields()
       name        data 
"character"      "list" 

Lastly there is the help() method which, by default, returns some general information about the class.

> Queue$help()
Usage:  $help(topic) where topic is the name of a method (quoted or not)
The definition of class Queue follows.
Reference Class "Queue":

Class fields:
                          
Name:       name      data
Class: character      list

 Class Methods:  
    "callSuper", "copy", "export", "field", "getClass", "getName", "getRefClass", "import", "initFields", "initialize", 
"peek", "poll", "pop", "push", "setName", "show", "size", "trace", "untrace", "usingMethods"


 Reference Superclasses:  
    "envRefClass"

However, if provided with a method name as an argument, it will return a help string for the specified method. These help strings are embedded in the class definition using a syntax which will be familiar to Python programmers.

> Queue$help(push)
Call:
$push(item)


Inserts element at back of the queue.

Creating Objects (and some Reference Class Features)

Each Queue object can be assigned a name. I am not sure whether this will be required, but it seemed like a handy feature to have. This can be done after the object has been created…

> q1 <- Queue$new()
> q1$name <- "test queue"
> q1$name
[1] "test queue"
> 
> q1
Reference class object of class "Queue"
Field "name":
[1] "test queue"
Field "data":
list()

… or during the creation process.

> q2 <- Queue$new(name = "another test")
> q2$name
[1] "another test"

We have seen that the name field can be accessed using the dollar operator. We can also use the accessors() method for the generator function to create methods for reading and writing to the name field.

> Queue$accessors("name")
>
> q2$getName()
[1] "another test"
> q2$setName("a new name")
> q2$getName()
[1] "a new name"

Standard Queue Functionality

Let’s get down to the actual queue functionality. First we add elements to the queue using the push() method.

> q2$push("item number one")
> q2$push(2)
> q2$push("third item")
> q2$size()
[1] 3

The queue now contains three items (two strings and a number). The items are stored in the data field. Ideally this should not be directly accessible, but the Reference Class implementation does not provide for private data. So you can access the items directly (but this defeats the object of a queue!).

> q2$data
[[1]]
[1] "item number one"
[[2]]
[1] 2
[[3]]
[1] "third item"

Next we can start to access and remove items from the queue. The first way to do this uses pop(), which returns the item at the head of the queue and at the same time removes it from the queue.

> q2$pop()
[1] "item number one"
> q2$pop()
[1] 2
> q2$size()
[1] 1

What if we just want to have a look at the first item in the queue but not remove it? Use peek().

> q2$peek()
[1] "third item"
> q2$size()
[1] 1

Next we have poll(), which is very much like pop() in that it removes the first item from the queue. But, as we will see in a moment, it behaves differently when the queue is empty.

> q2$poll()
[1] "third item"
> q2$size()
[1] 0

So we have ejected all three items from the queue and we should expect the behaviour of these functions to be a little different now. pop() generates an error message: obviously we cannot remove an item from the queue if it is empty!

> try(q2$pop())
Error in q2$pop() : queue is empty!

This might not be the best behaviour for harmonious code, so peek() and poll() more elegantly indicate that the queue is exhausted by returning NULL.

> q2$peek()
NULL
> q2$poll()
NULL

Non-Standard Queue Functionality

I am going to need to be able to take a look at multiple items in the queue, so the peek() method accepts an argument which specifies item locations. To illustrate, first let’s push a number of items onto a queue.

> q3 <- Queue$new(name = "letter queue")
>
> sapply(letters, function(n) {q3$push(n)})
  a   b   c   d   e   f   g   h   i   j   k   l   m   n   o   p   q   r   s   t   u   v   w   x   y   z
"a" "b" "c" "d" "e" "f" "g" "h" "i" "j" "k" "l" "m" "n" "o" "p" "q" "r" "s" "t" "u" "v" "w" "x" "y" "z"
>
> q3$size()
[1] 26

As we have already seen, the default peek() behaviour is just to take a look at the item at the head of the queue.

> q3$peek()
[1] "a"

But we can also choose items further down the queue as well as ranges of items.

> q3$peek(2)
[1] "b"
> q3$peek(4:7)
[[1]]
[1] "d"
[[2]]
[1] "e"
[[3]]
[1] "f"
[[4]]
[1] "g"

If the requested range of locations extends beyond the end of the queue then the method returns NULL as before.

> q3$peek(24:28)
NULL

Conclusion

That is all the infrastructure I will need for my ARQ project. Well, almost. I would actually prefer a Priority Queue. I’ll be back with the implementation of that in a day or two. In the meantime though, if anyone has any ideas for additional functionality or has suggestions for how this code might be improved, I would be very pleased to hear them.

This code is now published in the liqueueR package.

 Read more

Lightning Activity Predictions For Single Buoy Moorings

A short talk that I gave at the LIGHTS 2013 Conference (Johannesburg, 12 September 2013). The slides are a little short on text because I like the audience to hear the content rather than read it. The objective with this project was to develop a model which would predict the occurrence of lightning in the vicinity of a Single Buoy Mooring (SBM). Analysis and visualisations were all done in R. I used data from the World Wide Lightning Location Network (WWLLN) and considered four possible models: a Neural Network, a Conditional Inference Tree, a Support Vector Machine and a Random Forest. Of the four, the Random Forest produced the best performance. The preliminary results from the Random Forest model are very promising: there is good agreement between predicted and observed lightning occurrence in the vicinity of the SBM.

 Read more

Iterators in R

According to Wikipedia, an iterator is “an object that enables a programmer to traverse a container”. A collection of items (stashed in a container) can be thought of as being “iterable” if there is a logical progression from one element to the next (so a list is iterable, while a set is not). An iterator is then an object for moving through the container, one item at a time.

Iterators are a fundamental part of contemporary Python programming, where they form the basis for loops, list comprehensions and generator expressions. Iterators facilitate navigation of container classes in the C++ Standard Template Library (STL). They are also to be found in C#, Java, Scala, Matlab, PHP and Ruby.

Although iterators are not part of the native R language, they are implemented in the iterators and itertools packages.

iterators Package

The iterators package is written and maintained by Revolution Analytics. It contains all of the basic iterator functionality.

> library(iterators)

Iterating over a Vector

The iter() function transforms a container into an iterator. Using it we can make an iterator from a vector.

> name = c("Bob", "Mary", "Jack", "Jane")
>
> iname <- iter(name)
> nextElem(iname)
[1] "Bob"
> nextElem(iname)
[1] "Mary"
> nextElem(iname)
[1] "Jack"
> nextElem(iname)
[1] "Jane"

Here iteration is performed manually by calling nextElem(): each call yields the next element in the vector. And when we fall off the end of the vector, we get a StopIteration error.

> nextElem(iname)
Error: StopIteration

Iterating over Data Frames and Matrices

Similarly, we can make an iterator from a data frame.

> (people = data.frame(name, ages = c(17, 23, 41, 19), gender = rep(c("M", "F"), 2)))
  name ages gender
1  Bob   17      M
2 Mary   23      F
3 Jack   41      M
4 Jane   19      F

We can iterate over the data frame by column (the default behaviour)…

> ipeople <- iter(people)
> nextElem(ipeople)
[1] Bob Mary Jack Jane
Levels: Bob Jack Jane Mary
> nextElem(ipeople)
[1] 17 23 41 19
> nextElem(ipeople)
[1] M F M F
Levels: F M

… or by row.

> ipeople <- iter(people, by = "row")
> nextElem(ipeople)
  name ages gender
1  Bob   17      M
> nextElem(ipeople)
  name ages gender
2 Mary   23      F
> nextElem(ipeople)
  name ages gender
3 Jack   41      M
> nextElem(ipeople)
  name ages gender
4 Jane   19      F

To my mind iterating by row is likely to be the most common application, so I am a little puzzled by the fact that it is not the default. No doubt the creators had a solid reason for this design choice though!

Not surprisingly, we can iterate through the rows and columns of a matrix as well. But, taking things one step further, we will do this in parallel using the foreach library. First we set up the requisites for multicore processing.

> library(foreach)
> #
> library(doMC)
> registerDoMC(cores=4)

Then, for illustration purposes, we create a matrix of normally distributed random numbers. Our parallel loop will use an iterator to run through the matrix, computing the mean for every row. Of course, there are other ways of accomplishing the same task, but this syntax is rather neat and intuitive.

> Z <- matrix(rnorm(10000), ncol = 100)
> foreach(i = iter(Z, by = "row"), .combine = c) %dopar% mean(i)
  [1]  0.0603617  0.1588855 -0.0973709  0.0624385  0.0446618 -0.1528632  0.1295854
  [8]  0.0222499 -0.0420512 -0.0202990 -0.0855573 -0.0506006 -0.0062855 -0.0621060
 [15] -0.0248356 -0.0108870  0.2863956  0.0515027  0.1393728 -0.1280088 -0.0092294
 [22]  0.0373697 -0.1844472  0.1223116  0.0094714  0.0557162 -0.0061684 -0.0404645
 [29] -0.1935638  0.0626071  0.0246498  0.0438532 -0.0254658  0.0733297 -0.1075596
 [36]  0.0659071  0.1146148  0.0684804 -0.0621941 -0.1250420  0.0303364 -0.0975498
 [43] -0.0908564 -0.2332864 -0.1978676  0.1444749  0.0202549  0.0246314  0.0871942
 [50]  0.0152622 -0.0353871 -0.0708653  0.0927205 -0.0601456 -0.0190797 -0.0397878
 [57] -0.2066360 -0.0078547 -0.1445867  0.0955066  0.0130583 -0.0656415  0.1003008
 [64]  0.0794965 -0.1044313 -0.1024588  0.0019395 -0.0390669 -0.0722961  0.0772205
 [71] -0.0137769  0.1186736  0.0714488 -0.0525488  0.1204149 -0.0682227 -0.0110207
 [78] -0.0079334  0.0589447 -0.0464116 -0.0293395  0.2112834  0.0112325  0.0907571
 [85]  0.0036712 -0.0375137  0.0817137  0.0092214  0.0068116 -0.0887101 -0.0623266
 [92] -0.1580448  0.0870227 -0.0517170  0.1160297 -0.2065726 -0.0517491 -0.0312476
 [99] -0.0363837 -0.1619014

Another way of slicing up a matrix uses iapply(), which is analogous to the native apply() function. First, let’s create a small and orderly matrix.

> (Z <- matrix(1:100, ncol = 10))
      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
 [1,]    1   11   21   31   41   51   61   71   81    91
 [2,]    2   12   22   32   42   52   62   72   82    92
 [3,]    3   13   23   33   43   53   63   73   83    93
 [4,]    4   14   24   34   44   54   64   74   84    94
 [5,]    5   15   25   35   45   55   65   75   85    95
 [6,]    6   16   26   36   46   56   66   76   86    96
 [7,]    7   17   27   37   47   57   67   77   87    97
 [8,]    8   18   28   38   48   58   68   78   88    98
 [9,]    9   19   29   39   49   59   69   79   89    99
[10,]   10   20   30   40   50   60   70   80   90   100

Choosing margin = 1 gives us the rows, while margin = 2 yields the columns.

> irow <- iapply(Z, 1)
> nextElem(irow)
[1] 1 11 21 31 41 51 61 71 81 91
> nextElem(irow)
[1] 2 12 22 32 42 52 62 72 82 92
> icol <- iapply(Z, 2)
> nextElem(icol)
[1] 1 2 3 4 5 6 7 8 9 10
> nextElem(icol)
[1] 11 12 13 14 15 16 17 18 19 20

Iterating through a File

An iterator to read lines from a file is extremely useful. This is certainly the iterator pattern that I have used most often in Python. In R a file iterator is created using ireadLines().

> ifile <- ireadLines('/etc/passwd')
> nextElem(ifile)
[1] "root:x:0:0:root:/root:/bin/bash"
> nextElem(ifile)
[1] "daemon:x:1:1:daemon:/usr/sbin:/bin/sh"
> nextElem(ifile)
[1] "bin:x:2:2:bin:/bin:/bin/sh"

Each call to nextElem() returns a new line from the file. We can put this into a loop and traverse the entire file.

> while (TRUE) {
+   d = try(nextElem(ifile))
+   if (class(d) == "try-error") break
+   print(d)
+ }
[1] "sys:x:3:3:sys:/dev:/bin/sh"
[1] "sync:x:4:65534:sync:/bin:/bin/sync"
[1] "games:x:5:60:games:/usr/games:/bin/sh"
[1] "man:x:6:12:man:/var/cache/man:/bin/sh"
#
# content omitted for brevity...
#
[1] "ntp:x:121:129::/home/ntp:/bin/false"
[1] "nx:x:122:1010::/var/NX/nx:/etc/NX/nxserver"
Error : StopIteration

Hmmmm. The code for the loop is a little clunky and it ends in a rather undignified fashion. Surely there is a better way to do this? Indeed there is! More on that later.

Filtering with an Iterator

iter() allows you to specify a filter function which can be used to select particular items from a container. This function should return a Boolean (or a value which can be coerced to Boolean) indicating whether (TRUE) or not (FALSE) an item should be accepted. To illustrate this, we will pick out prime numbers from the first one hundred integers. The gmp package has an isprime() function which is perfectly suited to this job.

> library(gmp)

First we’ll look at the basic functionality of isprime().

> isprime(13)
[1] 2
> isprime(8)
[1] 0
> isprime(2147483647)
[1] 1

A return value of 2 indicates that a number is definitely prime, while 0 indicates a composite number. For small numbers these are the only two options, however, for larger numbers the Miller-Rabin primality test is applied, in which case a return value of 1 indicates that a number is probably prime.

So, here we go, grabbing the first four primes:

> iprime <- iter(1:100, checkFunc = function(n) isprime(n))
> nextElem(iprime)
[1] 2
> nextElem(iprime)
[1] 3
> nextElem(iprime)
[1] 5
> nextElem(iprime)
[1] 7

An Iterable Version of split()

The native function split() accepts two arguments: the first is a vector and the second is a factor which dictates how the vector’s elements will be divided into groups. The return value is a list of vectors corresponding to each of the groups. The isplit() function accepts the same arguments and results in an iterator which steps through each of the groups, returning a list with two fields: the key (one of the levels of the factor) and the corresponding values extracted from the vector.

> ipeople <- isplit(people$name, people$gender)
> #
> p = nextElem(ipeople)
> p
$value
[1] Mary Jane
Levels: Bob Jack Jane Mary

$key
$key[[1]]
[1] "F"


> p = nextElem(ipeople)
> p$key
[[1]]
[1] "M"

> p$value
[1] Bob  Jack
Levels: Bob Jack Jane Mary

Functions as Iterators

It is also possible to have a function packaged as an iterator. The function is called every time that the iterator is advanced.

> ifunc <- iter(function() sample(name, 1))
> nextElem(ifunc)
[1] "Bob"
> nextElem(ifunc)
[1] "Mary"
> nextElem(ifunc)
[1] "Mary"
> nextElem(ifunc)
[1] "Jack"

Here each successive element returned by the iterator is generated by a call to an anonymous function which extracts a single sample from our list of names. I am not quite sure how I would use this facility in a practical situation (why is this superior to simply calling the function?), but I am sure that a good application will reveal itself in due course.

Rolling Your Own: A Fibonacci Iterator

An iterator to generate Fibonacci Numbers is readily implemented in Python.

def fibonacci():
    a, b = 1, 1
    while True:
        yield a
        a, b = b, a+b

for n in fibonacci():
    print(n)

An analogous iterator can be implemented in R. Perhaps the syntax is not quite as succinct, but it certainly does the job.

> fibonacci <- function() {
+   ab = c(0, 1)
+   n <- function() {
+     ab <<- c(ab[2], sum(ab))
+     ab[1]
+   }
+   obj <- list(nextElem = n)
+   class(obj) <- c('fibonacci', 'abstractiter', 'iter')
+   obj
+ }

Now we can instantiate a Fibonacci iterator and start generating terms…

> ifib = fibonacci()
> nextElem(ifib)
[1] 1
> nextElem(ifib)
[1] 1
> nextElem(ifib)
[1] 2
> nextElem(ifib)
[1] 3
> nextElem(ifib)
[1] 5
> nextElem(ifib)
[1] 8
> #
> unlist(as.list(ifib, n = 10))
[1] 13 21 34 55 89 144 233 377 610 987

Again, you might be wondering what advantages this provides over simply having a function which generates the sequence. Consider a situation where you need to have two independent sequences. Certainly the state of each sequence could be stored and passed as an argument. An object oriented solution would be natural. However, iterators will do the job very nicely too.

> i1 = fibonacci()
> i2 = fibonacci()
> unlist(as.list(i1, n = 7))
[1] 1 1 2 3 5 8 13
> unlist(as.list(i2, n = 10))
[1] 1 1 2 3 5 8 13 21 34 55

We have generated some terms from each of the iterators. Not surprisingly, when we return to generate further terms, we find that each of the iterators picks up at just the right position in the sequence.

> unlist(as.list(i1, n = 7))
[1] 21 34 55 89 144 233 377
> unlist(as.list(i2, n = 10))
[1] 89 144 233 377 610 987 1597 2584 4181 6765

itertools Package

The itertools package is written and maintained by Steve Weston (Yale University) and Hadley Wickham (RStudio). It provides a range of extensions to the basic iterator functionality.

> library(itertools)

Stop When You Reach the End

The Fibonacci iterator implemented above will keep on generating terms indefinitely. What if we wanted it to terminate after a finite number of terms? We could introduce a parameter for the required number of terms and generate a StopIteration error when we try to exceed this limit.

> fibonacci <- function(count = NA) {
+   ab = c(0, 1)
+   n <- function() {
+     if (!is.na(count)) {
+       if (count > 0) count <<- count - 1
+       else stop('StopIteration')
+     }
+     #
+     ab <<- c(ab[2], sum(ab))
+     ab[1]
+   }
+   obj <- list(nextElem = n)
+   class(obj) <- c('fibonacci', 'abstractiter', 'iter')
+   obj
+ }

In practice this works fine…

> ifib = fibonacci(4)
> nextElem(ifib)
[1] 1
> nextElem(ifib)
[1] 1
> nextElem(ifib)
[1] 2
> nextElem(ifib)
[1] 3
> nextElem(ifib)
Error in obj$nextElem() : StopIteration

… but it generates a rather rude error message if you try to iterate beyond the prescribed number of terms. How can we handle this more elegantly? Enter the ihasNext() wrapper which enables a hasNext() function for an iterator.

> ifib = ihasNext(fibonacci(4))
> while (hasNext(ifib)) {
+   print(nextElem(ifib))
+ }
[1] 1
[1] 1
[1] 2
[1] 3

So, instead of blindly trying to proceed to the next item, we first check whether there are still items available in the iterator. This pattern can be applied to any situation where the iterator has only a finite number of terms. Iterating through the lines in a file would be a prime example!

> ifile <- ihasNext(ireadLines('/etc/passwd'))
> while (hasNext(ifile)) {
+   print(nextElem(ifile))
+ }
[1] "root:x:0:0:root:/root:/bin/bash"
[1] "daemon:x:1:1:daemon:/usr/sbin:/bin/sh"
[1] "bin:x:2:2:bin:/bin:/bin/sh"
#
# content omitted for brevity...
#
[1] "ntp:x:121:129::/home/ntp:/bin/false"
[1] "nx:x:122:1010::/var/NX/nx:/etc/NX/nxserver"

That’s much better!

Limiting the Limitless

Adapting our Fibonacci iterator to stop after a finite number of terms was a bit of a kludge. Not to worry: the ilimit() wrapper allows us to stipulate a limit on the number of terms.

> ifib <- ilimit(fibonacci(), 12)
> unlist(as.list(ifib))
[1] 1 1 2 3 5 8 13 21 34 55 89 144

Recycle Everything

We have seen how to truncate an effectively infinite series of iterates. How about replicating a finite series? Using recycle() you can run through the iterable a number of times. As a first example, we create an iterator which will run through an integer sequence (1, 2 and 3) twice.

> irec <- recycle(1:3, 2)
> nextElem(irec)
[1] 1
> nextElem(irec)
[1] 2
> nextElem(irec)
[1] 3
> nextElem(irec)
[1] 1
> nextElem(irec)
[1] 2
> nextElem(irec)
[1] 3

Next we generate the first six terms from the Fibonacci sequence and repeat them twice.

> irec <- recycle(ilimit(fibonacci(), 6), 2)
> unlist(as.list(irec))
[1] 1 1 2 3 5 8 1 1 2 3 5 8

Enumerating

The enumerate() function returns an iterator which returns each element of the iterable as a list with two elements: an index and the corresponding value. So, for example, we can number each of the elements in our list of names.

> iname <- enumerate(name)
> #
> cat(sapply(iname, function(n) sprintf("%d -> %s\n", n$index, n$value)), sep = "")
1 -> Bob
2 -> Mary
3 -> Jack
4 -> Jane

Zip Them Up!

A generalisation of enumeration is created by zipping up arbitrary iterables.

> ifib <- izip(a = letters[1:6], b = fibonacci())
> #
> cat(sapply(ifib, function(n) sprintf("%s -> %d", n$a, n$b)), sep = "\n")
a -> 1
b -> 1
c -> 2
d -> 3
e -> 5
f -> 8

Naturally, we can use this to replicate the functionality of enumerate().

> iname <- izip(index = 1:4, value = name)
> cat(sapply(iname, function(n) sprintf("%d -> %s\n", n$index, n$value)), sep = "")
1 -> Bob
2 -> Mary
3 -> Jack
4 -> Jane

Getting Chunky!

What about returning groups of values from an iterator? The ichunk() wrapper takes two arguments: the first is an iterator, the second is the number of terms to be wrapped in a chunk. Here is an example of generating groups of three successive terms from the Fibonacci sequence.

> ifib <- ichunk(fibonacci(), 3)
> unlist(nextElem(ifib))
[1] 1 1 2
> unlist(nextElem(ifib))
[1] 3 5 8
> unlist(nextElem(ifib))
[1] 13 21 34

Until I Say “Stop!”

Finally, an iterator which continues to generate terms until a time limit is reached. How many terms will our Fibonacci iterator generate in 0.1 seconds?

> length(as.list(timeout(fibonacci(), 0.1)))
[1] 701

Conclusion

I have already identified a few older projects where the code can be significantly improved by the use of iterators. And I will certainly be using them in new projects. I hope that you too will be able to apply them in your work. Iterate and prosper!

 Read more

Introduction to Fractals

A short while ago I was contracted to write a short piece entitled “Introduction to Fractals”. The article can be found here. Admittedly it is hard to do justice to the topic in less than 1000 words.

 Read more

Percolation Threshold: Including Next-Nearest Neighbours

In my previous post about estimating the Percolation Threshold on a square lattice, I only considered flow from a given cell to its four nearest neighbours. It is a relatively simple matter to extend the recursive flow algorithm to include other configurations as well.

Malarz and Galam (2005) considered the problem of percolation on a square lattice for various ranges of neighbour links. Below is their illustration of (a) nearest neighbour “NN” and (b) next-nearest neighbour “NNN” links.

Implementing Next-Nearest Neighbours

There were at least two options for modifying the flow function to accommodate different configurations of neighbours: either they could be given as a function parameter or defined as a global variable. Although the first option is the best from an encapsulation perspective, it does make the recursive calls more laborious. I got lazy and went with the second option.

> neighbours = c("NN")
> 
> flow <- function(g, i = NA, j = NA) {
+   # -> Cycle through cells in top row
+   #
+   if (is.na(i) || is.na(j)) {
+     for (j in 1:ncol(g)) {
+       g = flow(g, 1, j)
+     }
+     return(g)
+   }
+   #
+   # -> Check specific cell
+   #
+   if (i < 1 || i > nrow(g) || j < 1 || j > ncol(g)) return(g)
+   #
+   if (g[i,j] == OCCUPIED || g[i,j] == FLOW) return(g)
+   #
+   g[i,j] = FLOW
+   
+   if ("NN" %in% neighbours) {
+     g = flow(g, i+1, j)           # down
+     g = flow(g, i-1, j)           # up
+     g = flow(g, i, j+1)           # right
+     g = flow(g, i, j-1)           # left
+   }
+   if ("NNN" %in% neighbours) {
+     g = flow(g, i+1, j+1)         # down+right
+     g = flow(g, i-1, j+1)         # up+right
+     g = flow(g, i+1, j-1)         # down+left
+     g = flow(g, i-1, j-1)         # up+left
+   }
+   
+   g
+ }
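
Note that the code in this post leans on a few pieces from the previous post: the cell state constants and the create.grid() and percolates() functions. To make this post more self-contained, here are minimal sketches of what those pieces might look like (reconstructions consistent with the results below, not the original definitions).

> EMPTY    = 0
> OCCUPIED = 1
> FLOW     = 2
> 
> create.grid <- function(n, p) {
+   # An n-by-n grid in which each cell is occupied (blocked) with probability p.
+   matrix(ifelse(runif(n * n) < p, OCCUPIED, EMPTY), nrow = n)
+ }
> 
> percolates <- function(g) {
+   # The grid percolates if flow entering at the top reaches the bottom row.
+   g = flow(g)
+   any(g[nrow(g), ] == FLOW)
+ }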

We will add a third example grid to illustrate the effect.

> set.seed(1)
> g1 = create.grid(12, 0.6)
> g2 = create.grid(12, 0.4)
> g3 = create.grid(12, 0.5)

Generating the compound plot with two different values of the global variable required a little thought. But fortunately the scoping rules in R allow for a rather nice implementation.

> grid.arrange(
+   {neighbours = c("NN"); visualise.grid(flow(g3))},
+   {neighbours = c("NN", "NNN"); visualise.grid(flow(g3))},
+   ncol = 2)

Here we have two plots for the same grid showing (left) NN and (right) NN+NNN percolation. Including the possibility of “diagonal” percolation extends the range of cells that are reachable, and this grid, which does not percolate with just NN, does support percolation with NN+NNN.

No modifications are required to the percolation function.

> neighbours = c("NN")
> 
> percolates(g1)
[1] FALSE
> percolates(g2)
[1] TRUE
> percolates(g3)
[1] FALSE
> 
> neighbours = c("NN", "NNN")
> 
> percolates(g1)
[1] FALSE
> percolates(g2)
[1] TRUE
> percolates(g3)
[1] TRUE

Effect on Percolation Threshold

Finally we can see the effect of including next-nearest neighbours on the percolation threshold. We perform the same logistic fit as previously.
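
The fit itself is not repeated here. In rough outline (a sketch only: the grid size, range of occupation probabilities and number of replicates are illustrative choices, not necessarily those behind the results below) it amounts to simulating many grids and fitting a binomial GLM to the percolation outcomes.

> p = rep(seq(0.50, 0.70, 0.01), each = 100)
> percolated = sapply(p, function(prob) percolates(create.grid(25, prob)))
> #
> logistic.fit = glm(percolated ~ p, family = binomial)

The critical probability is then the point at which the fitted percolation probability crosses 50%.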

> (pcrit = - logistic.fit$coefficients[1] / logistic.fit$coefficients[2])
(Intercept) 
    0.59107 
> 1-pcrit
(Intercept) 
    0.40893

This agrees well with the result for NN+NNN from Table 1 in Malarz and Galam (2005).

A Larger Lattice

To finish off, let’s look at a larger lattice generated at the percolation threshold.

> set.seed(1)
>
> g4 = create.grid(100, pcrit)
>
> percolates(g4)
[1] TRUE

References

  • Malarz, K., & Galam, S. (2005). Square-lattice site percolation at increasing ranges of neighbor bonds. Physical Review E, 71(1), 016125. doi:10.1103/PhysRevE.71.016125.
 Read more

Percolation Threshold on a Square Lattice

Manfred Schroeder touches on the topic of percolation a number of times in his encyclopaedic book on fractals (Schroeder, M. (1991) Fractals, Chaos, Power Laws: Minutes from an Infinite Paradise). Percolation has numerous practical applications, the most interesting of which (from my perspective) is the flow of hot water through ground coffee! The problem of percolation can be posed as follows: suppose that a liquid is poured onto a solid block of some substance. If the substance is porous then it is possible for the liquid to seep through the pores and make it all the way to the bottom of the block. Whether or not this happens is determined by the connectivity of the pores within the substance. If it is extremely porous then it is very likely that there will be an open path of pores connecting the top to the bottom and the liquid will flow freely. If, on the other hand, the porosity is low then such a path may not exist. Evidently there is a critical porosity threshold which divides these two regimes.

 Read more

Eric Berlow and Sean Gourley: Mapping Ideas Worth Spreading

 Read more

Plotting Times of Discrete Events

I recently enjoyed reading O’Hara, R. B., & Kotze, D. J. (2010). Do not log-transform count data. Methods in Ecology and Evolution, 1(2), 118–122. doi:10.1111/j.2041-210X.2010.00021.x.

 Read more

Applying the Same Operation to a Number of Variables

Just a quick note on a short hack that I cobbled together this morning. I have an analysis where I need to perform the same set of operations on a number of variables. To do this in a compact and robust way, I wanted to write a loop that would run through the variables and apply the operations to each of them in turn. This can be done using get() and assign().

Simple Illustration

To illustrate the procedure, I will use the simple example of squaring the numerical values stored in three variables. First we initialise the variables.

> x = 1
> y = 2
> z = 3

Then we loop over the variable names (as strings), creating a temporary copy of each one and applying the operation to the copy. Then the copy is assigned back to the original variable.

> for (n in c("x", "y", "z")) {
+   v = get(n)
+   #
+   v = v**2
+   #
+   assign(n, v)
+ }

Finally we check that the operation has been executed as expected.

> x
[1] 1
> y
[1] 4
> z
[1] 9

This is perhaps a little wasteful in terms of resources (creating the temporary variables), but it does the job. Obviously in practice you would only implement this sort of solution if there were a large number of variables to be transformed or if the transformation required a relatively complicated set of operations.

Alternative Implementations

Following up on the numerous insightful responses to this post: there are a number of other ways of skinning the same cat. But I should point out that the solution above is still optimal for my particular application, where the series of operations applied to each variable involved conditional branches, making a vectorised solution rather messy. Furthermore, I did not want to have to pack and unpack from a list. For comparison though, the list-based approach is sketched below.
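
Here is roughly what that list-based alternative looks like (a sketch using base R’s lapply() and list2env(), applied to the x, y and z from the example above).

> vars = lapply(list(x = x, y = y, z = z), function(v) v**2)
> list2env(vars, envir = globalenv())

The transformation itself is tidy, but the variables have to be packed into a list and then written back out, which is precisely the overhead I wanted to avoid.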

Usage Case

To give a better idea of the type of scenario that I was looking at, consider a situation in which you have a number of data frames. Let’s call them A, B, C and D. The data in each is similar, yet each pertains to a distinct population. And, for whatever reason, you want to keep these data separate rather than consolidating them into a single data frame. Now suppose further that you wanted to perform a set of operations on each of them:

  • retain only a subset of the columns;
  • rename the remaining columns; and
  • derive new columns using transformations of the existing columns.

Using the framework above you could achieve all of these objectives without any replication of code, along the lines of the sketch below.
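
A sketch of that scenario (the data frames, column names and transformations here are entirely hypothetical). First, some dummy populations:

> for (n in c("A", "B", "C", "D")) {
+   assign(n, data.frame(height.cm = rnorm(10, 170, 10),
+                        mass.kg = rnorm(10, 75, 10),
+                        extra.column = runif(10)))
+ }

Then the same get()/assign() loop as above performs all three steps on each data frame in turn:

> for (n in c("A", "B", "C", "D")) {
+   v = get(n)
+   #
+   v = v[, c("height.cm", "mass.kg")]      # retain only a subset of the columns
+   names(v) = c("height", "mass")          # rename the remaining columns
+   v$bmi = v$mass / (v$height / 100)**2    # derive a new column
+   #
+   assign(n, v)
+ }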

 Read more

Mounting a sshfs volume via the crontab

I need to mount a directory from my laptop on my desktop machine using sshfs. At first I was not mounting it terribly often, so I did it manually each time that I needed it. However, the frequency increased over time and eventually I was mounting it every day (or multiple times during the course of a day!). This was a perfect opportunity to employ some automation.

 Read more

Top 250 Movies at IMDb

Some years ago I allowed myself to accept a challenge to read the Top 100 Novels of All Time (complete list here). This list was put together by Richard Lacayo and Lev Grossman at Time Magazine.

To start with I could tick off a number of books that I had already read. That left me with around 75 books outstanding. So I knuckled down. The Lord of the Rings had been on my reading list for a number of years, so this was my first project. A little unfair for this trilogy to count as just one book… but I consumed it with gusto! One down. Other books followed. They were also great reads. And then I hit a couple of books that were just, well, to put it plainly, heavy going. I am sure that they were great books and my lack of enjoyment was entirely a reflection on me and not the quality of the books. No doubt I learned a lot from reading them. But it was hard work! At this stage it occurred to me that the book list was constructed from a rather specific perspective of what constituted a great book. A perspective which is quite different from my own. So I had to admit defeat: my literary tastes will have to mature a bit before I attack this list again!

Then last week I was reading through a back issue of The Linux Journal and came across an article which used shell tools to download and process the IMDb list of Top 250 Movies. This list is constructed from IMDb users’ votes and so represents a fairly democratic and egalitarian perspective. Working through a list of movies seems to me to be a lot easier than a list of books, so this appealed to my inner sloth. And gave me an idea for a quick little R script.

We will use the XML library to retrieve the page from IMDb and parse out the appropriate table.

> library(XML)
>
> url <- "http://www.imdb.com/chart/top"
> best.movies <- readHTMLTable(url, which = 2, stringsAsFactors = FALSE)
>
> head(best.movies)
  Rank Rating                                 Title     Votes
1   1.    9.2       The Shawshank Redemption (1994) 1,065,332
2   2.    9.2                  The Godfather (1972)   746,693
3   3.    9.0         The Godfather: Part II (1974)   484,761
4   4.    8.9                   Pulp Fiction (1994)   825,063
5   5.    8.9 The Good, the Bad and the Ugly (1966)   319,222
6   6.    8.9                The Dark Knight (2008) 1,039,499

The output reflects the content of the rating table exactly. However, the rank column is redundant since the same information is captured by the row labels. We can remove this column to make the data more concise.

> best.movies[, 1] <- NULL
>
> head(best.movies)
  Rating                                 Title     Votes
1    9.2       The Shawshank Redemption (1994) 1,065,332
2    9.2                  The Godfather (1972)   746,693
3    9.0         The Godfather: Part II (1974)   484,761
4    8.9                   Pulp Fiction (1994)   825,063
5    8.9 The Good, the Bad and the Ugly (1966)   319,222
6    8.9                The Dark Knight (2008) 1,039,499

There are still a few issues with the data:

  • the years are bundled up with the titles;
  • the rating data are strings;
  • the votes data are also strings and have embedded commas.

All of these problems are easily fixed though.

> pattern = "(.*) \\((.*)\\)$"
>
> best.movies = transform(best.movies,
+                       Rating = as.numeric(Rating),
+                       Year   = as.integer(substr(gsub(pattern, "\\2", Title), 1, 4)),
+                       Title  = gsub(pattern, "\\1", Title),
+                       Votes  = as.integer(gsub(",", "", Votes))
+ )
>
> best.movies = best.movies[, c(4, 2, 3, 1)]
>
> head(best.movies)
  Year                          Title   Votes Rating
1 1994       The Shawshank Redemption 1065332    9.2
2 1972                  The Godfather  746693    9.2
3 1974         The Godfather: Part II  484761    9.0
4 1994                   Pulp Fiction  825063    8.9
5 1966 The Good, the Bad and the Ugly  319222    8.9
6 2008                The Dark Knight 1039499    8.9

I am happy to see that The Good, the Bad and the Ugly rates at number 5. This is one of my favourite movies! Clearly I am not alone.

Finally, to gain a little perspective on the relationship between the release year, votes and rating we can put together a simple bubble plot.

> library(ggplot2)
>
> ggplot(best.movies, aes(x = Year, y = Rating)) +
+   geom_point(aes(size = Votes), alpha = 0.5, position = "jitter", color = "darkgreen") +
+   scale_size(range = c(3, 15)) +
+   theme_classic()

When I have some more time on my hands I am going to use the IMDb API to grab some additional information on each of these movies and see if anything interesting emerges from the larger data set.

 Read more

Flushing Live MetaTrader Logs to Disk

The logs generated by expert advisors and indicators when running live on MetaTrader are displayed in the Experts tab at the bottom of the terminal window. Sometimes it is more convenient to analyse these logs offline (especially since the order of the records in the terminal runs in a rather counter-intuitive bottom-to-top order!). However, because writing to the log files is buffered, there can be a delay before what you see in the terminal is actually written to disk.

 Read more

Clustering Lightning Discharges to Identify Storms

A short talk that I gave at the LIGHTS 2013 Conference (Johannesburg, 12 September 2013). The slides are relatively devoid of text because I like the audience to hear the content rather than read it. The central message of the presentation is that clustering lightning discharges into storms is not a trivial task, but still a worthwhile challenge because it can lead to some very interesting science!

 Read more
