May 18th, 2005 Edition

by Adam Israel


We're back with an action-packed edition of Linux.Ars. This week, we take a look at a tool to help you monitor, analyze and react to log events (such as login failures) more effectively using the Simple Event Correlator. We also investigate how XAMPP, an Apache/MySQL/PHP/Perl compilation for multiple platforms, will help you get your web developer groove on.

Developer's Corner


The idea of open source bounties isn't new. Someone offers money in exchange for a feature they would like to see implemented.

There are bounties offered by GNOME, Mark Shuttleworth, Ubuntu, KDE, and Mozilla, among various others.

Even Computer Associates got in on the action last year, announcing a US$1 million bounty for the creation of migration tools from five popular databases to its own Ingres package, which CA open-sourced last year. The winners of the CA bounty were just announced.

The bounty system has been the subject of a fair bit of controversy and concern. How do you make sure that efforts aren't wasted by multiple teams working on the same goal? Who makes sure that patches make it upstream to the maintainer?

Bounties aren't necessarily bad. They provide an outlet for feature requests that otherwise might languish. It's also a way for non-programmers to stimulate the development of features that they want but otherwise couldn't develop themselves. I don't think there is anything wrong with offering money in bounty form in exchange for writing a particular piece of code. Even open source programmers need to eat.

Getting involved

So you want to involve yourself in open source development but you don't know where to start? Never fear, you aren't alone. Trying to contribute can be a little daunting at first. Havoc Pennington has a nice primer on working on free software. The key is to start small and pay attention. Pick a project that interests you. Join the mailing list -- that's where most of the development discussion likely occurs. Idle in the project's IRC channel. Write a patch; fix a bug; add a feature.

Finding a project that suits you and will hold your interest may be more challenging. Most desktop environments, such as KDE, GNOME, and Xfce, have pages dedicated to getting involved. Individual projects usually have some standardized way for developers to communicate, most frequently an IRC channel or a mailing list.

Once you find the project developers, lurk, reading mailing list archives and watching conversations in the IRC channel. Every project has a slightly different pulse. Get to know how the team works before you jump into the fire.

It's not always easy knowing where to begin or who to talk to. Don't be afraid to ask questions. By and large the open source community is friendly and sociable, and most frequently more than willing to assist with development, answer questions and discuss various approaches in solving a problem.

Tools, Tips, and Tweaks

Simple Event Correlator

by Tatsuya Murase

Frequently, it is useful for security professionals, network administrators and end users alike to monitor the logs that various programs in the system write for specific events -- for instance, recurring login failures that might indicate a brute-force attack. Doing this manually would be a daunting, if not infeasible, task. A tool to automate log monitoring and event correlation can prove to be invaluable in sifting through continuously-generated logs.

The Simple Event Correlator (SEC) is a Perl script that implements an event correlator. You can use it to scan through logfiles of any type and pick out events that you want to report on. Tools like logwatch can do much the same thing, but what sets SEC apart is its ability to generate and store contexts. A context is an arbitrary set of things that describe a particular event. Since SEC is able to essentially remember (and even forget) these contexts, the level of noise generated is remarkably low, and even a large amount of input can be handled by a relatively small number of rules.

For instance, let's start with something basic, like looking for direct ssh root logins to a machine (security best practice is to disallow such logins entirely, but let's ignore that for the sake of this example):

Feb  1 11:54:48 192.0.2.5 sshd[20994]: [ID 800047] Accepted publickey for root from 192.0.2.99 port 33890 ssh2

Ok, so we can create an SEC configuration file (let's call it root.conf) that contains the following:

type=Single
ptype=RegExp
pattern=(^.+\d+ \d+:\d+:\d+) (\d+\.\d+\.\d+\.\d+) sshd\[\d+\]: \[.+\] Accepted (.+) for root from (\d+\.\d+\.\d+\.\d+)
desc=direct ssh root login on $2 from $4 (via $3) @ $1
action=add root-ssh_$2 $0; report root-ssh_$2 /usr/bin/mail -s "Direct root login on $2 from $4"

This is an example of a rule in SEC. The first line gives the rule type, in this case "Single", which tells SEC that we just want to deal with single instances of this event. The second line, ptype, tells SEC how we want to search for patterns. In this case we've chosen "RegExp", which says to use Perl's powerful regular expression engine. We can choose other types of matches, such as substring matches, tell the rule to utilize a Perl function or module, or tell it to look at the contents of a variable you can set.

The next line, pattern, is a big regular expression (regex) that matches log entries where someone logs in directly as root. We've grouped the timestamp, the source and destination IPs, and the method used to log in, for use later in an email. (If you're familiar with Perl, you can see SEC uses the same regex grouping.)

The next line is the description of this rule. The final line is the action we intend to take. In this case, we add the entire log entry to a context called root-ssh_$2, where $2 expands to the IP address of the machine being logged into. Finally, the rule sends mail containing the contents of the context, which will include the matching log entry.
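If you're curious how those $-variables expand, here's a quick Python sketch of the same match (the log line and addresses are made-up placeholders; Python's re engine handles this particular pattern the same way Perl's does):

```python
import re

# The rule's pattern, translated verbatim from the SEC config above.
pattern = re.compile(
    r"(^.+\d+ \d+:\d+:\d+) (\d+\.\d+\.\d+\.\d+) sshd\[\d+\]: "
    r"\[.+\] Accepted (.+) for root from (\d+\.\d+\.\d+\.\d+)"
)

# A sample line shaped like the one above (addresses are placeholders).
line = ("Feb  1 11:54:48 192.0.2.5 sshd[20994]: [ID 800047] "
        "Accepted publickey for root from 192.0.2.99 port 33890 ssh2")

m = pattern.search(line)
# SEC's $1..$4 correspond to m.group(1)..m.group(4) here.
desc = (f"direct ssh root login on {m.group(2)} from {m.group(4)} "
        f"(via {m.group(3)}) @ {m.group(1)}")
print(desc)
# direct ssh root login on 192.0.2.5 from 192.0.2.99 (via publickey) @ Feb  1 11:54:48
```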

To run this thing we do:

sec -detach -conf=root.conf -input=/var/log/messages

SEC will start up and begin looking for direct root logins in the background. We can also tell SEC to watch multiple files (the input filename is expanded with Perl's glob() function):

sec -detach -conf=root.conf -input=/var/log/incoming/logins*

Say this rule chugs away and sends you email every morning at 5 AM, when a cron job on one machine logs into another (as root!) to run backups. You don't want that email every morning, so we can suppress it using the aptly named Suppress rule type. To do that, we insert the following rule above our existing "look for root logins" rule:

type=Suppress
ptype=RegExp
pattern=^.+\d+ \d+:\d+:\d+ \d+\.\d+\.\d+\.\d+ sshd\[\d+\]: \[.+\] Accepted .+ for root from

Then we can send SIGABRT to the sec process we started previously:

kill -SIGABRT `ps ax | grep sec | grep root.conf | grep -v grep | awk '{print $1}'`

which will tell that SEC process to re-read its configuration file and continue.

Now let's look at using SEC to watch for a brute force attack via ssh:

# create the context on the initial triggering cluster of events
type=SingleWithThreshold
ptype=RegExp
pattern=(^.+\d+ \d+:\d+:\d+) (\d+\.\d+\.\d+\.\d+) sshd\[\d+\]: \[.+\] Failed (.+) for (.*?) from (\d+\.\d+\.\d+\.\d+)
context=!SSH_BRUTE_FROM_$5
desc=Possible brute force attack (ssh) user $4 on $2 from $5
action=create SSH_BRUTE_FROM_$5 60 (report SSH_BRUTE_FROM_$5 /usr/bin/mail -s "ssh brute force attack on $2 from $5"); add SSH_BRUTE_FROM_$5 5 failed ssh attempts within 60 seconds detected; add SSH_BRUTE_FROM_$5 $0
window=60
thresh=5
# add subsequent events to the context
type=Single
ptype=RegExp
pattern=(^.+\d+ \d+:\d+:\d+) (\d+\.\d+\.\d+\.\d+) sshd\[\d+\]: \[.+\] Failed (.+) for (.*?) from (\d+\.\d+\.\d+\.\d+)
context=SSH_BRUTE_FROM_$5
desc=Possible brute force attack (ssh) user $4 on $2 from $5
action=add SSH_BRUTE_FROM_$5 "Additional event: $0"; set SSH_BRUTE_FROM_$5 30

This actually specifies two rules. The first is another rule type within SEC: SingleWithThreshold. It adds two more options to the Single rule we used above: window and thresh. Window is the timespan the rule looks over, and thresh is the threshold for the number of events that must appear within the window to trigger the rule's action. We're also using the context option, which tells this rule to trigger only if the context doesn't already exist. The rule will trigger if it matches 5 failed login events within 60 seconds. The action line creates the context ($5 representing the IP of the attacker), which expires in 60 seconds; upon expiration it sends out an email with a description and the matching log entries. The second rule adds additional events to the context and extends the context's lifetime by 30 seconds, as long as the context already exists; otherwise it does nothing.
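SEC implements all of this internally, but the core of SingleWithThreshold plus a guarding context can be sketched in a few lines of Python (a toy model for illustration only, not SEC's actual code):

```python
from collections import defaultdict, deque

WINDOW = 60  # seconds, like the rule's window= option
THRESH = 5   # events, like the thresh= option

failures = defaultdict(deque)  # source IP -> timestamps of recent failures
contexts = {}                  # context name -> expiry time

def failed_login(src_ip, now):
    """Process one failed-login event; return True if the alarm fires."""
    ctx = "SSH_BRUTE_FROM_" + src_ip
    if ctx in contexts and now < contexts[ctx]:
        contexts[ctx] = now + 30       # context alive: extend it, like "set <ctx> 30"
        return False
    q = failures[src_ip]
    q.append(now)
    while q and now - q[0] > WINDOW:   # slide the correlation window
        q.popleft()
    if len(q) >= THRESH:
        contexts[ctx] = now + WINDOW   # like "create <ctx> 60"
        q.clear()
        return True
    return False

# Six failures from one source, 5 seconds apart: only the fifth trips
# the alarm; the sixth just extends the already-raised context.
results = [failed_login("192.0.2.99", t) for t in (0, 5, 10, 15, 20, 25)]
print(results)  # [False, False, False, False, True, False]
```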

The dynamic creation and handling of contexts is at the heart of SEC's power, and is what sets it apart from other "log watcher" style programs.

For example, a printer with a paper jam may issue incessant log messages until someone gets over to the printer to deal with it. If a log watcher were set to send an email every time it matched the paper-jam message, that's a lot of email, most of which will get deleted; it would be worse still if the email went to a pager. SEC can instead create a context stating, "I've seen a paper jam event and have already sent out a page," which the rule can check for in the future, suppressing further emails while the context exists.
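That suppression amounts to checking a timed flag before acting; a minimal Python illustration (a toy model, not SEC's implementation, with a made-up 600-second lifetime):

```python
# The first jam sends a page and raises a context; repeats stay silent
# until the context expires.
jam_context_expiry = None

def paper_jam(now, lifetime=600):
    """Return True if this jam event should trigger a page."""
    global jam_context_expiry
    if jam_context_expiry is not None and now < jam_context_expiry:
        return False                     # context alive: suppress the repeat
    jam_context_expiry = now + lifetime  # like SEC's "create" action
    return True

pages = [paper_jam(t) for t in (0, 60, 120, 700)]
print(pages)  # [True, False, False, True]
```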

Another good example, included with SEC, is a simple horizontal portscan detector, which will trigger an alarm if 10 hosts have been scanned within 60 seconds; this has traditionally been a difficult thing to detect well.

John P. Rouillard has written an extensive paper demonstrating much of the power of SEC's contexts, and we highly recommend reading it for many more of the gory details on log monitoring in general and SEC in particular.

In addition to contexts, SEC also includes some handy rule types beyond what we've shown so far (from the sec manual page):

SingleWithScript - match input event and depending on the exit value of an external script, execute an action.
SingleWithSuppress - match input event and execute an action immediately, but ignore following matching events for the next t seconds.
Pair - match input event, execute an action immediately, and ignore following matching events until some other input event arrives. On the arrival of the second event execute another action.
PairWithWindow - match input event and wait for t seconds for other input event to arrive. If that event is not observed within a given time window, execute an action. If the event arrives on time, execute another action.
SingleWith2Thresholds - count matching input events during t1 seconds and if a given threshold is exceeded, execute an action. Then start the counting of matching events again and if their number per t2 seconds drops below the second threshold, execute another action.
Calendar - execute an action at specific times. 

The Calendar rule type, for instance, allows us to look for the absence of a particular event (e.g. a nightly backup being kicked off). Or you can use it to create a particular context, like this example adapted from the sec man page:

type=Calendar
time=0 23 * * *
desc=NightContext
action=create %s 32400

This way, you can have your other rules check to see if this context is active and take different actions at night versus during the day.

More examples

Let's say we want to analyze Oracle database TNS-listener logs. Specifically, we want to find people logging into the database as one of the superuser accounts (SYSTEM, SYS, etc), which is a Bad Thing (tm):

24-FEB-2005 00:26:52 * (CONNECT_DATA=(SID=fprd)(CID=(PROGRAM=O:\FPRD\FS750\bin\CLIENT\WINX86\PSTOOLS.EXE)(HOST=PSRPT3)(USER=report))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.0.2.44)(PORT=3011)) * establish * fprd * 0

In my environment, we chop up the listener logs every day and run the following rules on each day's log:

type=Single
ptype=RegExp
pattern=^(\d{2}-\p{IsAlpha}{3}-\d{4} \d{1,2}:\d{1,2}:\d{1,2}).*CID=\((.*)\)\(HOST=(.*)\)\(USER=(SYSTEM|INTERNAL|SYS).*HOST=(\d+.\d+.\d+.\d+).*
desc=$4 login on $5 @ $1 from $3 ($2)
action=add $4_login $0; create FOUND_VIOLATIONS
type=Single
ptype=SubStr
pattern=SEC_SHUTDOWN
context=SEC_INTERNAL_EVENT && FOUND_VIOLATIONS
desc=Write all contexts to stdout
action=eval %o ( use Mail::Mailer; my $mailer = new Mail::Mailer; \
$mailer->open({ From => "root\@syslog", \
                To => "admin\@example.com", \
                Subject => "SYSTEM Logins Found" }) or die "Can't open: $!\n"; \
while ($context = each(%main::context_list)) { \
    print $mailer "Context name: $context\n"; \
    print $mailer '-' x 60, "\n"; \
    foreach $line (@{$main::context_list{$context}->{"Buffer"}}) { \
        print $mailer $line, "\n"; \
    } \
    print $mailer '=' x 60, "\n"; \
} \
$mailer->close; )

We run this configuration using the following Perl script, which picks out yesterday's logfile (the most recently completed day) to parse:

use strict;
use Date::Manip;

my $filedate   = ParseDate("yesterday");
my $fileprefix = UnixDate($filedate, "%Y-%m-%d");
my $logdir     = "/var/log/oracle-listener";

opendir(LOGDIR, $logdir) or die "Cannot open $logdir! $!\n";
my @todaysfiles = grep /$fileprefix/, readdir LOGDIR;
closedir LOGDIR;

if (scalar(@todaysfiles) > 1) { print "More than one file matches for today\n"; }

foreach (@todaysfiles) {
    my $secout = `sec -conf=/home/tmurase/sec/oracle.conf -intevents -cleantime=300 -input=$logdir/$_ -fromstart -notail`;
    print $secout, "\n";
}
The Perl script invokes SEC with the -intevents flag, which generates internal events that we can catch with SEC rules. In this case, we want to catch the SEC_SHUTDOWN event that SEC generates when it shuts down after finishing parsing the file. Another option, -cleantime=300, gives us five minutes of grace time before the SEC process terminates.

Here the first rule simply adds events to an automatically named context, much as we did above, and creates the context FOUND_VIOLATIONS as a flag for the next rule to evaluate. The second rule checks for the existence of both FOUND_VIOLATIONS and the SEC_INTERNAL_EVENT context, which is raised during the shutdown sequence, and looks for the SEC_SHUTDOWN event to come across the input using a simple substring pattern. (This technique of dumping out all contexts before shutdown is pulled from SEC FAQ 3.23.)

As you can see, the action line of the second rule has a lot going on. What we're doing is calling a small Perl script from within SEC that will generate an email with all of the database access violations the first rule collected.

Another thing we often wish to monitor closely is nightly backups. Namely, we want to make sure they've actually started, and that they actually managed to finish.

Say that a successful run looks like this in the logs:

Apr  9 00:01:10  localhost /USR/SBIN/CRON[15882]: (root) CMD ( /root/bin/ / )

time passes...

Apr  9 03:14:15  localhost[15883]: finished successfully

An unsuccessful run would be, for our purposes, the absence of these two log entries. We can kick off a Calendar rule to set a context that indicates we are waiting for the first log entry to show up:

type=Calendar
time=55 23 * * *
desc=Wait4Backup
action=create %s 3600 shellcmd /usr/local/scripts/

Here, at 23:55 each night, we create the context "Wait4Backup" with a one-hour lifetime, so it expires at 55 minutes after midnight, whereupon it executes a shell script that will presumably do some cleanup actions and notifications. The time parameter of the Calendar rule uses a crontab-esque format, with ranges and lists of numbers allowed.

We'll want to delete the Wait4Backup context and create a new context when the log entry for the start of the backup shows up:

type=Single
ptype=RegExp
pattern=(^.+\d+ \d+:\d+:\d+) (.+?) .*CRON\[(\d+)\]: \(root\) CMD \( /root/bin/ / \)
desc=Nightly backup on $2 starting at $1, pid $3
action=delete Wait4Backup; create BackupRun_$2 18000 shellcmd /usr/local/scripts/

With this rule, we've created a 5 hour window in which the backup should finish before this new context expires and reports a failure.

Now for the last part: what to do when the backup finishes.

type=Single
ptype=RegExp
pattern=(^.+\d+ \d+:\d+:\d+) (.+?) .*\[(\d+)\]: finished successfully
desc=Nightly backup on $2 finished successfully
action=delete BackupRun_$2; shellcmd /usr/local/scripts/
type=Single
ptype=RegExp
pattern=(^.+\d+ \d+:\d+:\d+) (.+?) .*\[(\d+)\]: error: (.*)
desc=Nightly backup on $2 failed: $4
action=delete BackupRun_$2; shellcmd /usr/local/scripts/ $4

The first rule takes care of what happens when the backup finishes successfully; the second handles the case where the backup script reports an error. With these four rules, SEC covers the various possible states of our simple backup script, even catching the absence of the script starting on time.
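To make the flow concrete, here is a toy Python model of the context lifecycle these rules implement (names and timings follow the example above; this is an illustration, not SEC code):

```python
# Wait4Backup is armed nightly, swapped for BackupRun_<host> when the
# backup starts, and a context that expires unconsumed signals failure.
contexts = {}  # context name -> expiry time (seconds relative to 23:55)
alerts = []

def create(name, now, lifetime):
    contexts[name] = now + lifetime

def delete(name):
    contexts.pop(name, None)

def tick(now):
    """Expire overdue contexts; this is when failure actions would run."""
    for name, expiry in list(contexts.items()):
        if now >= expiry:
            del contexts[name]
            alerts.append(name + " expired: backup problem reported")

create("Wait4Backup", now=0, lifetime=3600)            # Calendar rule at 23:55
delete("Wait4Backup")                                  # backup starts at 00:01
create("BackupRun_localhost", now=360, lifetime=18000) # 5-hour window to finish
delete("BackupRun_localhost")                          # "finished successfully" at 03:14
tick(now=20000)                                        # well past every deadline
print(alerts)  # [] -- nothing expired, so no failure was reported
```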

Go forth and watch logs!

SEC is a powerful tool that builds on simple statements to handle the kind of application and log monitoring that rivals commercial tools such as Tivoli or HP OpenView. It does not have a GUI frontend or convenient reports, however, so a little more time must be spent generating and formatting output from SEC's information. For those looking for more examples, a new rules collection has recently been started online.

Cool App of the Week

by Paul Ehrenreich

XAMPP is a software bundle from Apache Friends that takes Apache, PHP, MySQL, Perl, and other web technologies and wraps them into one neat, ready-to-run installation package. With XAMPP, there's no need to configure or tweak everything to make it work. You just download it, unpack it, run it, and you have a full-blown environment ready to use.

XAMPP is currently supported on four different platforms: Linux (tested on SUSE, Red Hat, Mandrake, and Debian), Windows (98, NT, 2000, XP), Solaris, and Mac OS X.

XAMPP comes with a wide range of applications and components that can be used to build and test your own applications, including Apache, MySQL, PHP, Perl, phpMyAdmin, and OpenSSL; the exact package list differs somewhat between the Linux and Windows versions.

If you are interested in Python or Java web development instead of Perl or PHP, you can download Python and Java add-ons for the Windows version of XAMPP. The Python add-on includes Python 2.3.3 and mod_python 3.1.3. For Java development, there is an add-on providing Tomcat 5.0.28 with mod_jk2 2.0.4. There is also an add-on that will let you use Cocoon, but you will need to have the Tomcat add-on installed in order to use it.

Installation is a snap no matter which platform you are using. On Windows you have two options: download and run the installer, or grab a zip archive that you simply unzip into a directory of your choice and run. On Linux, just download the tarball, extract it to a directory, and run the startup script as root. After the services start, open up a browser and point it to http://localhost, and you will be greeted with the XAMPP home page.

From here you can click on your language of choice, and XAMPP's control panel will open.

This is where you can view all sorts of information about your XAMPP environment, like the status of all the services that are running. You can also view security information to make sure you have passwords and permissions set correctly.

One thing you should be aware of is that no passwords are set for things like MySQL or phpMyAdmin by default, so you will need to set these manually. To set the passwords, follow the link http://localhost/xampp/xamppsecurity.php. This will let you set your MySQL and phpMyAdmin passwords and create an .htaccess file to restrict access.

If you are using this on a Linux platform, you can run XAMPP's security check with "lampp security", which will verify that passwords are indeed set and will ask whether you want to change any that are still at their defaults.

Here you will also find links to tools such as phpMyAdmin, which lets you administer MySQL through a nice web interface; Webalizer, for tracking traffic statistics; and a PHP switch script. The PHP switch script toggles the active version of PHP between 4.3.x and 5.0.x, giving a developer the ability to test compatibility between PHP versions and make sure a hosted application will keep working after an upgrade.

Also on this page you will find links to sample applications that make use of the components XAMPP ships with. For example, there is a CD-collection database that uses PHP, MySQL, and a class that generates PDFs.

Overall, it's a nice, easy-to-use package that will quickly get you up and running with a ready-to-use Apache/MySQL/PHP/Perl application and development environment.