January 22, 2004 Edition

By Jorge Castro (mailto:jorge@whiprush.org), Amit Gurdasani (mailto:amit@arslinux.com)

 

Red Carpet

Welcome back to another issue of Linux.Ars. This week we are going to tackle two major applications. The first is for desktop users: Red Carpet. You will probably need a helmet for this section, since we're smashing stereotypes and common misconceptions along the way. The second is for you System Administrator types who want to get your logging under control and have it spit out in nice, easy-to-digest bites. Yummy.

Installing stuff in Linux. We've all been there, especially when we first started with Linux. We have tarballs, RPMs, DEBs, ebuilds, and so on, and so forth. We have half a dozen ways to install an application, and even more Linux users telling us about the "proper" or "preferred" method of doing so.

As a new user to Linux you might find yourself overwhelmed by the sheer magnitude of information (and misinformation). Uncertainty settles in, and the next thing you know, you are recompiling your kernel just to install an MP3 player. It need not be this hard. In fact, the solution we have in mind is so simple and distribution-agnostic that you are going to want it for all your installations.

The solution we speak of is Ximian's Red Carpet tool (http://www.ximian.com/products/redcarpet/). It handles all your installation and upgrading needs through a nice, easy-to-use graphical interface. We originally covered Red Carpet in our forums, but now that we have gotten used to it, we thought it would be nice to let you know.

While Red Carpet is offered by Ximian in commercial form, for hobbyists we recommend the Open Carpet variety. Browsing the download site, we can see that a large number of distributions are supported. We think this is a great idea, since it means you can use the tool on everything from Red Hat 7.3 to SUSE 9.0 to Mandrake. Notably absent are Slackware, Gentoo, and Debian; users of those distros tend to choose them for their particular package management tools in the first place, and probably wouldn't find much value in Red Carpet anyway.

After finding your distribution in the list, you're going to download all the RPMs in the specific directory, and install them with a simple command, as root:

[root@trunks]# rpm -ivh *.rpm

The RPMs should install with no issues. Don't worry, that was the hard part. In your System menu you will now see an icon for Red Carpet, or you can launch it manually with red-carpet. From within this GUI, you will subscribe to the open-carpet service, which will automagically find channels of software specific to your distribution, so there is little chance of making a mistake. We have catalogued (http://episteme.arstechnica.com/eve/ubb.x?a=tpc&s=50009562&f=96509133&m=51300801855) a good deal of the installation details, as well as gathered some collective tips. The thread is Fedora-specific, but the Red Carpet portions work on all supported distributions.

[Image: Redcarpetorig.png]
Finding and installing software becomes easy, and consistent across Linux distributions.

So why do we care about such a tool when solutions like up2date, apt, yum, and yast2 exist? For one thing, we like that it supports multiple distributions. This means that your Red Hat 7.3 box and SUSE 9.0 box can be updated the exact same way. We also like that you do not need to tweak an /etc/apt/sources.list, or /etc/yum.conf. This is extremely useful for new users that would rather spend time downloading software than hunting for mirrors.

Another important piece of functionality is the Red Carpet daemon. You can run this daemon on your servers, for example, and use the graphical client to connect to them remotely for updates. This is useful for servers that do not have X11 installed. And do not worry, command-prompt diehards: it comes with a command-line tool as well.

We have one concern with Red Carpet: it is not a good idea to use it for kernel upgrades. Rather than the more conservative (and rightfully so) approach of installing the new kernel alongside the old one (just in case), Red Carpet upgrades the existing kernel outright, so if the new kernel ends up not working for you, it might be rescue-disc time.

All in all, we really think that new users should take a look at Red Carpet right away. It should solve a good number of the headaches you encounter, letting you spend less time wrestling with packages and more time learning about Linux. We hope that distributions consider shipping Red Carpet with their offerings, making Linux even easier to use.

 

Tools, Tips, and Tweaks

This week, we introduce syslog, the standard *nix system and network logging facility.

Lately, reports of security lapses and vulnerabilities in software, in both the Windows and *nix worlds, have become all too common. It has become increasingly apparent that everyone connected to a network needs to be aware of what is happening on their network and on the Internet. In the Windows world, Microsoft is busy readying Service Pack 2 for Windows XP, which is intended, among other things, to involve the user in the maintenance of security and in decisions pertaining to system and network security.

In the Linux world, we have seen a steady stream of patches and fixes in everything from the OpenSSL libraries to the Linux kernel itself in the past year. It has become increasingly important to keep monitoring system and network activity. One of the more important components of this is keeping and maintaining logs of activity on the system and network.

Additionally, if there is a software or hardware failure and you would like to know about it, or if you are troubleshooting a network service you installed on a computer, logs can be very important. Chances are that the service found that something was wrong, and complained about it in the logs. Keeping and checking these logs can help you keep your *nix system in good shape.

Fortunately, much of the work is already done for you: *nix systems already come with a tool to collect and distribute log messages, syslog. This simple, yet remarkably powerful, logging system is flexible enough to accommodate most needs — everything from simple logging to disk files to network logging (and with the right tools, even logging to a database or line printer) is supported. And it is useful for more than just keeping track of what is happening on your Linux machine: Windows, VMS, Mac OS X, even routers and switches can all send their event information to a single computer on the network for record keeping, which makes managing, backing up, and analyzing the logs so much simpler.

Though often neglected, keeping and analyzing logs is an important administrative task for professional system administrators and Linux desktop users alike, from both security and diagnostic standpoints. The key tool for the job is syslog, a protocol for system and network logging, implemented most notably by sysklogd (http://www.infodrom.org/projects/sysklogd/) and BalaBit Computing's syslog-ng (http://www.balabit.hu/products/syslog_ng/).

syslog was invented at the University of California at Berkeley for the Berkeley Software Distribution (BSD). Since then, it has become a de facto standard for logging, both on UNIX and UNIX-like systems and on networking equipment such as routers and managed switches. The protocol is described in RFC 3164 (http://www.ietf.org/rfc/rfc3164.txt). (While Windows NT has its own event logging system, there are several tools (http://www.loganalysis.org/sections/syslog/windows-to-syslog/) that enable Windows systems to log via the syslog protocol.) Practically every major distribution comes out of the box with a syslog daemon running and logging to files in /var/log for the various services that the computer is running.

How it works

On a Linux system, there is usually a syslog daemon running, configured to collect log requests from local applications (via a UNIX socket) and from other hosts on the network (via UDP datagrams on port 514), and to log them to a disk file, a serial console, a printer, a database, another syslog host, etc., depending on the configuration. For instance, klogd (which comes as part of the sysklogd package) sends kernel messages (pulled from /proc/kmsg, the interface to the kernel's message ring buffer) to syslog to be logged.

The syslog daemon opens a special file (a so-called UNIX-domain socket), variously called /dev/log (primarily Linux) or /var/run/log, and waits for other programs to send it messages using the syslog protocol. Applications and services usually have a logging routine available to them in standard code libraries, which they use to send messages to the syslog daemon. (For instance, the C library usually provides the syslog() function call, and the Perl distribution provides the Sys::Syslog module; most programming languages that offer such a routine base it on the C library's syslog().) The routine's job is to connect to the syslog daemon and send it log messages using the syslog protocol.
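For instance, here is a minimal sketch of programmatic logging from Python, whose standard-library syslog module wraps the C library's syslog() call (the "myservice" tag and the message text are illustrative):

```python
import syslog

# Identify ourselves to the syslog daemon; messages will carry the tag
# "myservice" and the sending process's PID, under the daemon facility.
syslog.openlog("myservice", syslog.LOG_PID, syslog.LOG_DAEMON)

# Send a message with priority warning (i.e. selector daemon.warning).
syslog.syslog(syslog.LOG_WARNING, "disk usage above 90% on /var")

syslog.closelog()
```

Under the hood this connects to the UNIX-domain socket described above; the syslog daemon then dispatches the message according to its configuration.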

Once the syslog daemon receives such a message, it can examine it and decide what to do with it — whether to ignore it, to write it to disk, to a log host, etc. This decision is usually facilitated by a configuration file (/etc/syslog.conf for BSD syslogd and sysklogd; /etc/syslog-ng/syslog-ng.conf for syslog-ng).

syslog messages are accompanied by selectors that identify the kind and urgency of the message, so that messages can be dispatched appropriately. A selector consists of a facility field and a priority field. The facility identifies the kind of sender: kern (kernel messages), user (user-level processes), mail, daemon (system daemons), auth and authpriv (security and authorization), syslog (messages from the syslog daemon itself), lpr (printing), news, uucp, cron, ftp, and local0 through local7 (reserved for site-specific use).

The priority field indicates the urgency of the message, from least to most urgent: debug, info, notice, warning, err, crit, alert, and emerg.

The syslog daemon then consults its configuration file or files and sends the message to a log host (a server accepting syslog event notifications from other network hosts), a specified log file, a fifo, a character device (such as the system console, a virtual terminal, or a printer), all users logged into the system, a database, etc. If the syslog daemon is configured to accept event notifications over the network, incoming UDP datagrams are handled in the same fashion.
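To make the wire format concrete, here is a hypothetical, minimal sender of RFC 3164-style messages over UDP port 514. The function names and the abbreviated facility/priority tables are our own illustration, not part of any standard library:

```python
import socket

# Partial tables for illustration; RFC 3164 defines the full set.
FACILITIES = {"kern": 0, "user": 1, "mail": 2, "daemon": 3, "auth": 4}
PRIORITIES = {"emerg": 0, "alert": 1, "crit": 2, "err": 3,
              "warning": 4, "notice": 5, "info": 6, "debug": 7}

def encode_syslog(facility, priority, tag, message):
    # The PRI part is facility * 8 + priority, wrapped in angle brackets.
    pri = FACILITIES[facility] * 8 + PRIORITIES[priority]
    return ("<%d>%s: %s" % (pri, tag, message)).encode("ascii")

def send_syslog(host, facility, priority, tag, message):
    # One datagram per message; syslog over UDP is fire-and-forget.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(encode_syslog(facility, priority, tag, message), (host, 514))
    sock.close()

# daemon.err encodes as PRI 27 (3 * 8 + 3), e.g.:
# send_syslog("192.168.0.253", "daemon", "err", "myservice", "something broke")
```

A real deployment would simply use the C library or logger(1) instead, but the arithmetic above is all there is to the selector encoding.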

 

Configuring syslog

We will cover configuring the syslog daemon that comes with sysklogd, since that is the most commonly found syslog daemon on computers running Linux. Note that syslog-ng is available for most major distributions, is about as easy to configure, and is a lot more flexible in its logging capabilities.

The configuration file, /etc/syslog.conf, consists of entries that look like this:

# This is a comment.
# Long lines can be broken by placing \ characters before the breaks.
 
facility.priority;facility.priority;\
facility.priority    destination
 
# The following sends kernel messages with crit priority and above to
# /dev/tty12 (typically the 12th virtual console in Linux)
kern.crit    /dev/tty12
 
# The following lines dump daemon logs to /var/log/daemon.log; errors and above are logged
# synchronously, and lower priorities are buffered. The notation that uses = signs is a
# sysklogd extension over the original BSD syslog. The - sign before the filename indicates
# that the messages can be buffered.
 
daemon.err    /var/log/daemon.log
daemon.=debug;daemon.=info;daemon.=notice;\
daemon.=warning    -/var/log/daemon.log
 
# Send all authentication messages to a loghost, 192.168.0.253.
 
auth.*;authpriv.*    @192.168.0.253
 
# Send significant stuff to a fifo called /dev/xconsole.
 
auth.*;authpriv.*;mail.*;lpr.info;\
kern.info;*.notice    |/dev/xconsole

In order to enable receiving logs over a network, invoke syslogd with the -r option. You can usually do this through the initscript that starts syslogd. Do note that while a loghost adds security by making it harder for an intruder to cover his tracks, it must itself be kept very secure.

Applications that cannot use syslog programmatically can instead use the logger tool, which comes with BSD syslog and derivatives, to send their messages to syslog. Alternatively, logger can read messages from, e.g., a fifo:

# mkfifo /var/run/f-prot.log
# logger -f /var/run/f-prot.log -p user.info -t F-PROT &
# /usr/local/bin/f-prot -silent -ai -wrap -report=/var/run/f-prot.log \
  -append -disinf -auto -type -removenew /

Log file rotation — conserving disk space

Often, it is desirable to keep only recent logs, discarding older ones to conserve disk space. The logrotate tool can be used to maintain logs, renaming and discarding logs as necessary. You can configure logrotate on a per-logfile basis, specifying how often logs should be rotated (as when logrotate is run daily through cron), how many previous log files should be kept, whether old logs should be compressed, any commands that should be run at rotate time, whether old logfiles should be sent via email to any account, where old logs should be placed, how big a file can get before it is rotated, and so on. Here's an example configuration:

/var/log/cron.log {
   rotate 4
   weekly
   missingok
   notifempty
   compress
}

This will keep four weeks' worth of logs (rotate 4), rotating weekly (even if logrotate itself is run every night). logrotate will not complain if the log file is missing (missingok) and will not rotate the log if it is empty (notifempty). Finally, it will compress old logs (compress).
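Conceptually, the "rotate 4" renaming scheme amounts to the following rough sketch (logrotate itself does much more, handling compression, scheduling, and pre/post-rotate hooks; the rotate function here is purely illustrative):

```python
import os

def rotate(path, keep=4):
    # Drop the oldest copy, e.g. cron.log.4.
    oldest = "%s.%d" % (path, keep)
    if os.path.exists(oldest):
        os.remove(oldest)
    # Shift the remaining copies up by one: cron.log.3 -> cron.log.4, etc.
    for i in range(keep - 1, 0, -1):
        src = "%s.%d" % (path, i)
        if os.path.exists(src):
            os.rename(src, "%s.%d" % (path, i + 1))
    # The current log becomes .1, and a fresh, empty log takes its place.
    if os.path.exists(path):
        os.rename(path, path + ".1")
        open(path, "w").close()
```

After four weekly rotations, the oldest data simply falls off the end, which is how disk usage stays bounded.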

Analyzing logs

Wading through thousands to millions of log messages can be incredibly tedious (and often unworkable), so there are log analysis tools available to parse and correlate information from log files. LogSentry (http://sourceforge.net/projects/sentrytools) and Swatch (http://swatch.sourceforge.net/) are popular tools for extracting interesting entries from BSD syslog–compatible log files. The Simple Event Correlator (http://simple-evcorr.sourceforge.net/) can correlate log entries pertaining to a variety of system activities to provide an idea of what has happened, applying administrator-defined rules that establish contexts in which further rules can match and trigger commands. Lire (http://logreport.org/lire) is one of the most flexible log analysis tools for Linux: it can parse a large variety of log file formats, compute statistics specific to the service whose logs are being examined, and produce reports in various formats.
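At their core, tools like Swatch boil down to pattern matching over log lines. A toy sketch (the patterns and sample lines here are made up for illustration):

```python
import re

# Patterns an administrator might consider worth flagging (illustrative).
PATTERNS = [re.compile(p) for p in (r"authentication failure",
                                    r"segfault",
                                    r"refused connect")]

def interesting(lines):
    # Keep only the lines that match at least one watched pattern.
    return [line for line in lines
            if any(p.search(line) for p in PATTERNS)]

sample = ["Jan 22 10:00:01 trunks CRON[123]: session opened for user root",
          "Jan 22 10:00:05 trunks sshd[456]: authentication failure; user=root"]
# interesting(sample) keeps only the sshd line
```

The real tools add throttling, notification actions, and (in SEC's case) stateful correlation across multiple lines, but the filtering step looks much like this.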

For a production environment, having log analysis facilities is essential in order to ensure system and network health and security.

 

/dev/random