Package party: Conditional Inference Trees
I am going to be using the party package for one of my projects, so I spent some time today familiarising myself with it. The details of the package are described in Hothorn, T., Hornik, K., & Zeileis, A., “party: A Laboratory for Recursive Partytioning”, a vignette which is available from CRAN.
The main workhorse of the package is ctree, so that is where I will be focusing my attention. The online documentation for ctree is, to be honest, like much of the R documentation: somewhat dense. Or maybe it is just me being dense? Anyway, the examples provided do illustrate what ctree can do, but do not give much explanation of just what it is doing. So I am going to try to unpack those examples in detail.
The air quality data set looks like this.
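A quick look at its structure (this is the airquality data that ships with base R, so nothing beyond a standard installation is needed):

```r
# airquality is in the datasets package, loaded by default
data(airquality)
head(airquality)
str(airquality)  # columns: Ozone, Solar.R, Wind, Temp, Month, Day
```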
The first thing that we will do is remove all of the records with missing ozone data.
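One way to do this (the object name aq is my own choice):

```r
# keep only the rows for which Ozone is not missing
aq <- subset(airquality, !is.na(Ozone))
nrow(aq)  # 116 of the original 153 rows remain
```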
Next we use ctree to construct a model of ozone as a function of all other covariates.
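A sketch of the call (again, air.ct is just my name for the fitted model):

```r
library(party)

# model Ozone as a function of all of the other columns
aq <- subset(airquality, !is.na(Ozone))
air.ct <- ctree(Ozone ~ ., data = aq)
print(air.ct)  # textual description of the fitted tree
```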
The textual description of the model gives a lot of detail, but it is a little difficult to get the big picture. A plot helps. This is essentially a decision tree but with extra information in the terminal nodes.
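The plot is produced along these lines:

```r
library(party)

air.ct <- ctree(Ozone ~ ., data = subset(airquality, !is.na(Ozone)))
plot(air.ct)  # decision tree with boxplots of Ozone in the terminal nodes
```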
This tells us that the data has been divided up into five classes (in nodes labelled 3, 5, 6, 8 and 9). Let’s consider a measurement with temperature of 70 and wind speed of 12. At the highest level the data is divided into two categories according to temperature: either ≤ 82 or > 82. Our measurement follows the left branch (temperature ≤ 82). The next division is made according to wind speed, giving two categories according to whether the speed is ≤ 6.9 or > 6.9. Our measurement follows the right branch (speed > 6.9). Then we encounter the final division, which once again depends on temperature and has two categories: either ≤ 77 or > 77. Our measurement has temperature ≤ 77, so it gets classified in node 5. Looking at the boxplot for ozone in node 5, this suggests that we would expect the conditions for our measurement to be associated with a relatively low level of ozone.
We can look at the details of individual nodes in the tree, but this does not reveal much more information than was already given in the plot.
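For example, party provides a nodes() function for extracting individual nodes by number (a sketch; here I pull out terminal node 5, the one our example measurement fell into):

```r
library(party)

air.ct <- ctree(Ozone ~ ., data = subset(airquality, !is.na(Ozone)))
nodes(air.ct, 5)  # details of terminal node 5
```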
What about using this model on new data? First we construct a new data frame.
Note that since the classification model does not depend on solar radiation, month or day, we do not need to specify meaningful values for them (they will not have any impact on the outcome of the model!). One of the characteristics of party is that it does not include in the model any covariates which appear to be independent of the response variable.
It is very important to ensure that the column classes in the new data are precisely the same as those in the original training data; if they are not, prediction will fail.
Now we can predict the ozone levels and category node numbers for these new data. The measurement used above to discuss the plot (temperature 70, wind speed 12) appears on row 2 of these new data.
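A sketch of both steps, constructing the new data and predicting from it. Apart from the row-2 temperature and wind speed discussed above, the values are made up for illustration; note that the integer/numeric classes match those of airquality:

```r
library(party)

aq <- subset(airquality, !is.na(Ozone))
air.ct <- ctree(Ozone ~ ., data = aq)

# hypothetical new observations; Solar.R, Month and Day must be present
# (with the same classes as the training data) but do not affect the result
air.new <- data.frame(Solar.R = 150L,
                      Wind    = c(8.0, 12.0, 5.0),
                      Temp    = c(85L, 70L, 90L),
                      Month   = 6L,
                      Day     = 1L)

predict(air.ct, newdata = air.new)                 # predicted ozone levels
predict(air.ct, newdata = air.new, type = "node")  # terminal node numbers
```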
Finally we can use the model to generate the category node numbers for the original data and add that as a new column to the data frame.
Like many things in R, you could have achieved this result by other means.
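For instance (a sketch): calling predict() with type = "node" on the training data and calling where() on the fitted model should give the same node numbers, so either can be used to add the column:

```r
library(party)

aq <- subset(airquality, !is.na(Ozone))
air.ct <- ctree(Ozone ~ ., data = aq)

aq$node <- predict(air.ct, type = "node")  # terminal node for each training row

# an equivalent alternative using where()
stopifnot(all(aq$node == where(air.ct)))
```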
The iris data set gives the dimensions of sepals and petals measured on 50 samples from each of three species of iris (setosa, versicolor and virginica).
We will construct a model of iris species as a function of the other covariates.
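The call mirrors the ozone example (iris.ct is my name for the model):

```r
library(party)

# model Species as a function of the four sepal and petal measurements
iris.ct <- ctree(Species ~ ., data = iris)
plot(iris.ct)  # terminal nodes are shown as bar plots over the three species
```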
The structure of the tree is essentially the same. Only the representation of the nodes differs because, whereas ozone was a continuous numerical variable, iris species is a categorical variable. The nodes are thus represented as bar plots. Node 2 is predominantly setosa, node 5 is mostly versicolor and node 7 is almost all virginica. Node 6 is half versicolor and half virginica and corresponds to a category with long, narrow petals. It is interesting to note that the model depends only on the dimensions of the petals and not on those of the sepals.
We can assess the quality of the model by constructing a confusion matrix. This shows that the model performs perfectly for setosa irises. For versicolor it also performs very well, misclassifying only one sample as virginica. For virginica it fails to correctly classify 5 samples. The model seems to perform well overall; however, this assessment is based on the training data, so it is not really objective!
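The confusion matrix can be built by tabulating the predicted species against the actual species (a sketch; predict() with no new data returns the fitted values for the training data):

```r
library(party)

iris.ct <- ctree(Species ~ ., data = iris)

# rows: predicted species; columns: actual species
table(predicted = predict(iris.ct), actual = iris$Species)
```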
Finally, we can use the model to predict the species for new data.
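A sketch with some made-up measurements (the first row has the short petals typical of setosa, the second the long, broad petals typical of virginica):

```r
library(party)

iris.ct <- ctree(Species ~ ., data = iris)

# hypothetical new flowers; column names and classes match the iris data
iris.new <- data.frame(Sepal.Length = c(5.0, 6.5),
                       Sepal.Width  = c(3.5, 3.0),
                       Petal.Length = c(1.4, 5.5),
                       Petal.Width  = c(0.2, 2.0))

predict(iris.ct, newdata = iris.new)  # predicted species for each row
```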
The mammography data details a study on the benefits of mammography.
We will model the first field which indicates when last the sample had a mammogram.
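A sketch of the model call. I am assuming the data set is mammoexp, which in current R ships with the TH.data package (older versions of party bundled it directly); the first field is ME, the mammography experience:

```r
library(party)

# mammography experience data; location is an assumption (TH.data package)
data("mammoexp", package = "TH.data")

# ME records when the subject last had a mammogram
mammo.ct <- ctree(ME ~ ., data = mammoexp)
plot(mammo.ct)
```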
Again the nodes in the model appear in the form of a bar plot since they represent a categorical variable. The model classifies the data according to SYMPT and PB, where the former is an ordinal variable which reflects agreement with the statement “You do not need a mammogram unless you develop symptoms” and the latter is an indicator of the perceived benefit of mammography.
German Breast Cancer Study Group
The GBSG2 data contains the data for the German Breast Cancer Study Group 2. Yes, I know, this is the second breast-related topic. It’s not a preoccupation: it’s in the documentation!
We will be focusing on the recurrence free survival time of the samples. This is censored data: for some of the samples the time to recurrence is known; for others there had been no recurrence at the time of the study.
The samples for which recurrence time is not known are indicated by a “+” in the survival data. Next we construct the model and generate a plot.
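A sketch of those steps. The censored response is built with Surv() from the survival package, combining the recurrence-free survival time with the censoring indicator; censored times print with a trailing “+”:

```r
library(party)
library(survival)  # for Surv()

data("GBSG2", package = "TH.data")

head(Surv(GBSG2$time, GBSG2$cens))  # censored observations print with a "+"

# model the censored survival time as a function of all other covariates
gbsg2.ct <- ctree(Surv(time, cens) ~ ., data = GBSG2)
plot(gbsg2.ct)  # terminal nodes show Kaplan-Meier survival curves
```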
The model divides the data into four categories according to the values of three covariates: pnodes (number of positive lymph nodes), horTh (hormonal therapy: yes or no) and progrec (progesterone receptor). The nodes reflect the distribution of survival times. For example, those samples with more than three positive nodes and progesterone receptor levels ≤ 20 had the worst distribution of survival times. Those samples with three or fewer positive nodes which did receive hormonal therapy had significantly better survival times.
I hope that you have found this informative. I know that I have learned a lot while putting it together.