Multi-machine logtailing

A while back I watched a lecture which gave a crash course on using SSH in the context of remote working. While there wasn't a great deal of new material in there for me personally, there was a claim about the average frequency of brute force attacks against machines on the public internet, as a motivation for key-based authentication. The details of the claim aren't important here, but it got me thinking about how I might follow the access logs on multiple machines at once in real time. I've previously run one or two medium-traffic websites which got a non-trivial number of requests per minute, and I did occasionally find it interesting to casually watch the rate at which the logs scrolled past on my terminal.

I had better things to be doing at the time, but I decided to see how easy it would be to come up with a little script to log into all of my machines running HTTP daemons, and follow all the access logs simultaneously.

multitee

In the middle of this, I needed something which can read from multiple data sources and multiplex them into a single stream of data. My tool of choice here is a program called multitee. It was originally written by Dan Bernstein in the 90s, according to the manual page of the version in Debian, and Paul Jarc has a reimplementation whose information page can be found here. It gets its name from the tee program, which copies its standard input to standard output and to a file. multitee, however, is parameterisable in which file descriptors it handles: it takes a list of file descriptors to read input from, and, for each input descriptor, a list of output file descriptors to copy the input data to. A given output file descriptor may appear multiple times, which means that multiple input streams will be copied to that output.

Here's an example, taken from the manual page:

$ multitee 0-1,4,5 4>foo 5>bar

The 0-1,4,5 argument means "read input on file descriptor 0 (which is standard in), and write any data there to file descriptors 1, 4, and 5", with the shell then redirecting descriptors 4 and 5. You can give multiple specifications like this, for example:

$ multitee 7-3,4 8-3,4 9-3,4

This will copy any data read on file descriptors 7, 8, or 9 to both file descriptors 3 and 4.

So in my case, I needed to arrange for each of my data sources to arrive on a different file descriptor, and then get multitee to combine them onto standard output. One way to do this is to create a named pipe in the filesystem for each data source; I could then log into each remote system using ssh, with standard output redirected into the pipe, and then pool all the pipes into multitee using shell redirections.
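As a rough sketch, that named-pipe variant might look something like this in a plain shell (the FIFO filenames here are arbitrary, and the hostnames and log paths are the same ones I'll use further down):

$ mkfifo alpha.fifo beta.fifo
$ ssh -n root@alpha tail -f /var/log/nginx/access.log >alpha.fifo &
$ ssh -n root@beta tail -f /var/log/apache2/access.log >beta.fifo &
$ multitee 4-1 5-1 4<alpha.fifo 5<beta.fifo

It works, but it leaves FIFOs lying around in the filesystem which need cleaning up afterwards.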

But pipes don't necessarily need to exist in the filesystem, thanks to the pipe(2) system call. It's therefore also possible to create a number of anonymous (unnamed) pipes within a single process, spawn a series of ssh processes with the file descriptors all arranged correctly, and then hand off to multitee to do the rest.

An execline primer

Creating some pipes and forking some processes should be relatively straightforward to put together in your language of choice. For this kind of ad-hoc, fine-grained process control, I'm quite fond of a little scripting language called execline.

Execline is based around the idea of command chaining, which is found in commands like nice, wherein each command modifies its environment in some manner and then executes the rest of its command line arguments using the execve() system call. For example, in the following command, nice would first change the process priority, and then execute into ls:

$ nice -n5 ls -la

Execline has a number of commands for manipulating process state such as file descriptors and environment variables, and they all operate by making a change to the process state, and then executing into another program. This has a certain elegance which I find appealing. For example, here's a pair of snippets of code in POSIX shell and in execline, which are more or less equivalent.

$ grep -r foo >results 2>/dev/null                         # bourne-ish syntax
$ redirfd -w 1 results redirfd -w 2 /dev/null grep -r foo  # execline syntax

Both of these redirect standard output to a file called results, standard error to /dev/null, and then "run" grep -r foo. (I use quotes with "run" here, as a POSIX shell will first fork a child process and then execute the grep command in the child, whereas grep will be executed directly in the execline example.)

Execline also has a script launcher, execlineb, which understands a concept of "blocks", delimited with curly braces, which permits the construction of more complex logic, such as conditional execution and pipelines (similar to what one can do in the shell). For example, there's an if command, which forks a child to execute a series of commands (provided in a block), and then executes into commands given in the rest of its argument vector if the child exits with a success code:

$ # only attempt to remove the file if we know it exists
$ execlineb -c 'if { test -f unwanted_file } rm unwanted_file'

(The if command doesn't process the curly braces directly; the script launcher has to be used to convert the brace-delimited block into an internal representation of blocks used by the execline commands.)
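If I remember the encoding correctly, the launcher prefixes each word of a block with a single space and appends an empty argument to mark the block's end, so the if example above ends up being invoked with an argument vector roughly like this:

if " test" " -f" " unwanted_file" "" rm unwanted_file

The if program strips the leading spaces, runs the block's words in a child process, and then executes into whatever follows the empty argument.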

There is also a pipeline command, which forks a child process to run a command line given in a block, and then executes into a second command line given in the rest of its argument vector, with the former's standard output piped to the latter's standard input.

$ # counting the number of times i've been mentioned on irc
$ execlineb -c 'pipeline { grep -r sysvinit irclogs/ } wc -l'

Execline provides a number of other useful tools which can be composed as part of this multi-machine logtailing script:

piperw: creates an anonymous pipe and makes its reading and writing ends available on two file descriptors of your choosing.
background: forks a child process to run the command line given in a block.
fdclose: closes a given file descriptor.
fdmove: moves an open file descriptor from one number to another.

All of these commands execute into the command line given in (the rest of) their arguments once they've done their job.

Putting it all together

Now that we have something which can copy data between arbitrary file descriptors, and some tools to create and arrange some processes and pipes, we can put them together into an execline script.

In the first instance, let's create a pipe and then fork a process to ssh to a remote machine, with the pipe's writing end attached to ssh's standard output.

#!/usr/bin/execlineb -P

# the -P flag above disables argument handling, which we don't need here

# create a pipe, with fd 4 for reading, and fd 3 for writing
piperw 4 3

# now, let's fork a child for running ssh
background {
    # in the child process, we don't care about the reading end of the pipe
    fdclose 4
    
    # move the writing end onto standard output
    fdmove 1 3
    
    # invoke ssh, with standard input closed
    ssh -n root@alpha tail -f /var/log/nginx/access.log
}

# we're in the parent process here. we don't care about the writing end of the pipe
fdclose 3

Seeing as we want to tail the logs for multiple machines at once, we should fork some more processes for other remote systems.

# create another pipe, with a different reading fd this time
piperw 5 3

# fork and ssh, as before
background {
    fdclose 5
    fdmove 1 3
    
    # this machine runs apache instead of nginx
    ssh -n root@beta tail -f /var/log/apache2/access.log
}

# close the writing end of the pipe
fdclose 3

# and same again for our third machine...
piperw 6 3
background {
    fdclose 6
    fdmove 1 3
    ssh -n root@gamma tail -f /var/log/nginx/access.log
}
fdclose 3

Now that we've forked some processes, each with a pipe back to the parent still open, we can execute into multitee to perform the data copying:

multitee 4-1 5-1 6-1

And that's it! Putting this code into a script and then running it should give a single, unified stream of log lines from the HTTP daemons on all three remote systems alpha, beta, and gamma. It should hopefully also be obvious how this can be extended to further machines — copy, paste, change the file descriptor numbers, and then add an extra argument to multitee.
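For example, a hypothetical fourth machine (I've called it delta here, and guessed at an nginx log path; both are placeholders) would get its own copy of the pipe-and-fork stanza, plus one more argument on the multitee line:

# a fourth pipe, this time reading on fd 7
piperw 7 3
background {
    fdclose 7
    fdmove 1 3
    ssh -n root@delta tail -f /var/log/nginx/access.log
}
fdclose 3

# ...and one extra input specification for multitee
multitee 4-1 5-1 6-1 7-1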

More elaborate things can certainly be achieved in execline, making use of its variable substitution facilities for dynamically modifying a command line before executing it (which I haven't shown here), but it also does simple things like this sort of low-level process state control really well.

