evals is aimed at collecting as much information as possible while evaluating R code. It can evaluate a character vector of R expressions, and for each expression it returns a list of the information captured while running it: the source expression (src), the resulting R object (result), the way that object is printed to the standard output (output), the class of the object (type), any messages, warnings and errors raised during evaluation (msg), and anything written to the standard output (stdout).

Besides capturing evaluation information, evals can automatically detect whether an R expression sends anything to a graphical device, and can save the resulting image in a variety of file formats.

Another interesting feature that evals provides is caching the results of evaluated expressions. For more details read the section about caching.

evals has a large number of options that allow users to customize each call exactly as needed. Here we focus on the most useful features; the full list of options, with explanations, can be viewed by calling ?evalsOptions. evals also supports permanent options that persist across all subsequent calls, which can be set by calling evalsOptions.
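For example, a minimal sketch of setting a permanent option (graph.dir is a real option that also appears in the graph examples below; the zero-argument call listing all options is an assumption based on how panderOptions behaves):

evalsOptions()                          # list all available options and their current values
evalsOptions('graph.dir', 'my_plots')   # save generated plots under my_plots for all later calls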

Let’s start with a basic example of evaluating 1:10 and collecting all information about it:

evals('1:10')
#> [[1]]
#> $src
#> [1] "1:10"
#> 
#> $result
#>  [1]  1  2  3  4  5  6  7  8  9 10
#> 
#> $output
#> [1] " [1]  1  2  3  4  5  6  7  8  9 10"
#> 
#> $type
#> [1] "integer"
#> 
#> $msg
#> $msg$messages
#> NULL
#> 
#> $msg$warnings
#> NULL
#> 
#> $msg$errors
#> NULL
#> 
#> 
#> $stdout
#> NULL
#> 
#> attr(,"class")
#> [1] "evals"

Not all of this information may be useful, so evals makes it possible to capture only selected parts of it by specifying the output parameter:

evals('1:10', output = c('result', 'output'))
#> [[1]]
#> $result
#>  [1]  1  2  3  4  5  6  7  8  9 10
#> 
#> $output
#> [1] " [1]  1  2  3  4  5  6  7  8  9 10"
#> 
#> attr(,"class")
#> [1] "evals"

One of the neat features of evals is that it catches errors and warnings without interrupting the evaluation, and saves them.

evals('x')[[1]]$msg
#> $messages
#> NULL
#> 
#> $warnings
#> NULL
#> 
#> $errors
#> [1] "object 'x' not found"
evals('as.numeric("1.1a")')[[1]]$msg
#> $messages
#> NULL
#> 
#> $warnings
#> [1] "NAs introduced by coercion"
#> 
#> $errors
#> NULL

Graphs and Graphical Options

As mentioned before, evals captures output sent to graphical devices and saves it:

evals('plot(mtcars)')[[1]]$result
#> [1] "my_plots/test.jpeg"
#> attr(,"class")
#> [1] "image"

You can specify the output directory using the graph.dir parameter and the output type using the graph.output parameter. Currently the output type can be any format supported by grDevices: png, bmp, jpeg, jpg, tiff, svg or pdf.

evals('plot(mtcars)', graph.dir = 'my_plots', graph.output = 'jpg')[[1]]$result
#> [1] "my_plots/test.jpeg"
#> attr(,"class")
#> [1] "image"

Moreover, evals provides facilities to unify the look of images produced by different plotting packages, as described below.

Style unification

evals provides powerful facilities to unify the styling of images produced by different packages such as ggplot2 or lattice.

Let’s prepare the data for plotting first:

## generating dataset
set.seed(1)
df <- mtcars[, c('hp', 'wt')]
df$factor <- sample(c('Foo', 'Bar', 'Foo bar'), size = nrow(df), replace = TRUE)
df$factor2 <- sample(c('Foo', 'Bar', 'Foo bar'), size = nrow(df), replace = TRUE)
df$time <- 1:nrow(df)

Now let’s plot the histograms:

evalsOptions('graph.unify', TRUE)
evals('histogram(df$hp, main = "Histogram with lattice")')[[1]]$result
#> [1] "my_plots/test.jpeg"
#> attr(,"class")
#> [1] "image"
evals('ggplot(df) + geom_histogram(aes(x = hp), binwidth = 50) + ggtitle("Histogram with ggplot2")')[[1]]$result
#> [1] "my_plots/test.jpeg"
#> attr(,"class")
#> [1] "image"
evalsOptions('graph.unify', FALSE)

Options for unification can be set with panderOptions, for example:

panderOptions('graph.fontfamily', "Comic Sans MS")
panderOptions('graph.fontsize', 18)
panderOptions('graph.fontcolor', 'blue')
panderOptions('graph.grid.color', 'blue')
panderOptions('graph.axis.angle', 3)
panderOptions('graph.boxes', TRUE)
panderOptions('graph.legend.position', 'top')
panderOptions('graph.colors', rainbow(5))
panderOptions('graph.grid', FALSE)
panderOptions('graph.symbol', 22)

More information and examples on style unification can be obtained by Pandoc.brewing the dedicated tutorial, which is available here.
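If you have pander installed, one way to render such a demo locally is with Pandoc.brew; the graphs.brew file name below is an assumption about which example document ships with the package:

Pandoc.brew(system.file('examples', 'graphs.brew', package = 'pander'))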

Logging

To make execution and debugging easier to follow, evals offers logging via the log parameter. Logging in evals relies on the futile.logger package, which provides a logging API similar to log4j. A basic example:

x <- evals('1:10', log = 'foo')
#> INFO [2015-11-21 21:54:00] Command run: 1:10

futile.logger’s thresholds range from most verbose to least verbose: TRACE, DEBUG, INFO, WARN, ERROR, FATAL. By default the threshold is set to INFO, which hides some non-essential information. To permanently set the threshold for a logger, use flog.threshold:

evalsOptions('log', 'evals')
flog.threshold(TRACE, 'evals')
#> NULL
x <- evals('1:10', cache.time = 0)
#> INFO [2015-11-21 21:54:00] Command run: 1:10
#> TRACE [2015-11-21 21:54:00] Cached result
#> DEBUG [2015-11-21 21:54:00] Returned object: class = integer, length = 10, dim = , size = 88 bytes

futile.logger also provides the very useful ability to write logs to files instead of printing them to the console:

t <- tempfile()
flog.appender(appender.file(t), name = 'evals')
#> NULL
x <- evals('1:10', log = 'evals')
readLines(t)
#> [1] "INFO [2015-11-21 21:54:00] Command run: 1:10"          
#> [2] "TRACE [2015-11-21 21:54:00] Returning cached R object."
# revert back to console
flog.appender(appender.console(), name = 'evals')
#> NULL

Result Caching

evals uses a custom caching algorithm to cache the results of evaluated R expressions.

How it works

In short, evals computes a hash for each evaluated expression based on the structure and content of the command and on the values of the variables it refers to, rather than on the variable names themselves. If evaluating the expression takes longer than the cache.time threshold, the result (including any changes made to the evaluation environment and any images produced) is stored under that hash, either in an environment in the current R session or on disk, depending on cache.mode. When the same hash is seen again, the stored result is returned instead of re-running the code.
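A rough, illustrative sketch of the hashing idea follows; it is not the actual implementation inside evals, and digest::digest merely stands in for whatever hashing is done internally:

library(digest)
x <- mtcars$hp; y <- 1e3
X <- mtcars$hp * 1; Y <- 1000
# hashing the called function together with the *values* of its arguments
# produces the same key for both spellings, so the second evaluation could
# simply look up the result stored under that key
key1 <- digest(list('rep', x, y))
key2 <- digest(list('rep', X, Y))
identical(key1, key2)  # TRUE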

Examples

We will set cache.time to 0 to cache all expressions, regardless of how long they take to evaluate. We will also use the logging facilities described above to make it easier to see how caching works.

evalsOptions('cache.time', 0)
evalsOptions('log', 'evals')
flog.threshold(TRACE, 'evals')
#> NULL

Let’s start with a small example.

system.time(evals('1:1e5'))
#> INFO [2015-11-21 21:54:01] Command run: 1:1e+05
#> TRACE [2015-11-21 21:54:01] Cached result
#> DEBUG [2015-11-21 21:54:01] Returned object: class = integer, length = 100000, dim = , size = 400040 bytes
#>    user  system elapsed 
#>   0.698   0.095   0.800
system.time(evals('1:1e5'))
#> INFO [2015-11-21 21:54:01] Command run: 1:1e+05
#> TRACE [2015-11-21 21:54:01] Returning cached R object.
#>    user  system elapsed 
#>   0.006   0.000   0.006

Results cached by evals can be stored either in an environment in the current R session or permanently on disk, by setting the cache.mode parameter appropriately.

res <- evals('1:1e5', cache.mode = 'disk', cache.dir = 'cachedir')
#> INFO [2015-11-21 21:54:01] Command run: 1:1e+05
#> TRACE [2015-11-21 21:54:02] Cached result
#> DEBUG [2015-11-21 21:54:02] Returned object: class = integer, length = 100000, dim = , size = 400040 bytes
list.files('cachedir')
#> [1] "969961637690311bb25db1992f93e5c52b368d0d"

Since the hash used for caching is computed from the structure and content of the R commands rather than from the variable names or the literal expressions, evals can reuse cached results even for differently written but equivalent calls:

x <- mtcars$hp
y <- 1e3
system.time(evals('sapply(rep(x, y), mean)'))
#> INFO [2015-11-21 21:54:03] Command run: sapply(rep(x, y), mean)
#> TRACE [2015-11-21 21:54:03] Cached result
#> DEBUG [2015-11-21 21:54:03] Returned object: class = numeric, length = 32000, dim = , size = 256040 bytes
#>    user  system elapsed 
#>   0.318   0.016   0.336

Let us now create some aliases for these functions and variables, so that the call is no longer literally identical to the one above:

f <- sapply
g <- rep
h <- mean
X <- mtcars$hp * 1
Y <- 1000
system.time(evals('f(g(X, Y), h)'))
#> INFO [2015-11-21 21:54:03] Command run: f(g(X, Y), h)
#> TRACE [2015-11-21 21:54:03] Returning cached R object.
#>    user  system elapsed 
#>   0.006   0.000   0.006

Another important feature of evals is that it keeps track of changes in the evaluation environment. For example:

x <- 1
res <- evals('x <- 1:10;')
#> INFO [2015-11-21 21:54:03] Command run: x <- 1:10
#> TRACE [2015-11-21 21:54:03] Cached result

x <- 1:10 will be cached, so if the same assignment occurs again we won’t need to evaluate it. But what about the change to x when the result is retrieved from the cache? evals takes care of that.

So in the following example we can see that x <- 1:10 is not evaluated, but retrieved from the cache together with the change to x in the environment.

evals('x <- 1:10; x[3]')[[2]]$result
#> INFO [2015-11-21 21:54:03] Command run: x <- 1:10
#> TRACE [2015-11-21 21:54:03] Returning cached R object.
#> INFO [2015-11-21 21:54:03] Command run: x[3]
#> TRACE [2015-11-21 21:54:03] Cached result
#> DEBUG [2015-11-21 21:54:03] Returned object: class = integer, length = 1, dim = , size = 48 bytes
#> [1] 3

evals is also able to cache output sent to graphical devices during evaluation:

system.time(evals('plot(mtcars)'))
#> INFO [2015-11-21 21:54:03] Command run: plot(mtcars)
#> TRACE [2015-11-21 21:54:03] Image file written: my_plots/test.jpeg
#> TRACE [2015-11-21 21:54:03] Cached result
#>    user  system elapsed 
#>   0.223   0.002   0.227
system.time(evals('plot(mtcars)'))
#> INFO [2015-11-21 21:54:04] Command run: plot(mtcars)
#> TRACE [2015-11-21 21:54:04] Image found in cache: my_plots/test.jpeg
#>    user  system elapsed 
#>   0.006   0.000   0.006