December 9th, 2003 Edition

By Jorge "whiprush" Castro (mailto:jorge@whiprush.org), Charles "ctkrohn" Krohn (mailto:ctkrohn@lycos.com)

Welcome to this week's Linux.Ars. Today we bring you coverage of Progeny's announcement that it will provide support for Red Hat Linux users, information about the rdiff-backup utility, and a quick overview of the Concurrent Versions System, or CVS. Rdiff-backup is a backup tool that makes incremental backups easy by saving the differences between successive backups, which lets you recover a file as it existed at any point in the past without keeping a separate copy of every version. CVS, for its part, keeps a project's source files synchronized across machines and is designed to be especially useful for developers who need to collaborate on a single project. We'll dive into all three topics, plus dish up our usual bits of Linux news.

 

Progeny to support legacy Red Hat Linux distributions

As you no doubt know by now, Red Hat will not be releasing any more distributions in the Red Hat Linux line, focusing instead on the Red Hat Enterprise Linux line and the community-supported Fedora project. In addition, the company announced that it will stop maintaining Red Hat Linux 8.0 and earlier at the end of the year, and version 9 as of April 30, 2004. While Fedora may be a worthy successor to the Red Hat Linux line, its lack of for-pay support makes it a non-starter for most corporate users, especially small businesses. Clearly, many people would be pleased if a reputable company decided to offer support for older Red Hat products.


Enter Progeny (http://progeny.com/index.html), the one-time Linux distributor now known as the "Linux Platform Company." Progeny was founded by Ian Murdock, the founder of Debian GNU/Linux (http://www.debian.org/). On October 24 of this year, Progeny announced (http://lists.debian.org/debian-devel/2003/debian-devel-200310/msg01880.html) that it would port Red Hat's Anaconda installer to Debian. This caused much commotion at Ars; we covered the release here (http://arstechnica.com/news/posts/1067056799.html). Progeny became a company to watch: not because of Platform Services but because of what it might do for the average Debian user.


Progeny is now stepping into the void left by Red Hat, announcing (http://www.linuxplanet.com/linuxplanet/newss/5137/1/) that it will support the Red Hat Linux 7.x series. Today, on the Progeny Transition Services page (http://progeny.com/products/transition/), the company announced that it will also support Red Hat Linux 8.0 and 9. The price for the support will be US$5 per server per month, or US$2500 a month for an unlimited number of servers. Hopefully these moves will alleviate some of the ire caused by Red Hat's decision to discontinue support for its legacy products.

 

Developer's corner

Managing projects with CVS (Concurrent Versions System)

Whenever you work on a large software project with many other developers, you need a system for making sure that every developer has the proper version of every source file. This is extremely difficult to do without some central synchronization system. Several version control systems exist; CVS (http://www.cvshome.org/) is probably the most commonly used for open source projects. CVS is not difficult to learn, and it does its job well. A knowledge of CVS can save a lot of time even if you are a lone developer, as it provides a mechanism for maintaining different versions of your code and "rolling back" to old versions if necessary. It is even more useful for larger groups: developers can work on the same files concurrently and let CVS merge their changes. In this section, we're going to walk you through importing a simple project into CVS and show you how to do some simple code management.

Before you get started, it is always a good idea to have a reference handy. Our favorite is the excellent Open Source Development with CVS (http://cvsbook.red-bean.com/cvsbook.html). For this tutorial we will have a project named "project", and we'll assume that all of its files live in my home directory under /home/jorge/project. The first thing I need to do is import the project into a CVS repository, but before we do that, we need to prepare CVS itself. The repository is the central location where CVS keeps a project's files; it can live on the local machine or on a remote host. While it is possible to pass the repository location on the command line with every command, it is much easier to put it in the CVSROOT environment variable so that CVS automatically knows where to look. Let's look at my example CVSROOT to see how it works. I'm using the bash shell; users of other shells may have to use different commands to set their environment variables.

jorge@piccolo:~$ export CVSROOT=:ext:jorge@cvs.whiprush.org:/var/lib/cvs
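
One aside: the :ext: method hands the connection to whatever remote shell the CVS_RSH environment variable names, and depending on your CVS version that may default to plain rsh. Pointing it at ssh keeps the session encrypted:

jorge@piccolo:~$ export CVS_RSH=ssh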

In plain English, that CVSROOT value means "use the ext method to talk to cvs.whiprush.org as the user jorge; the repository is in /var/lib/cvs." Many CVS installations keep their repository in /var/lib/cvs, but we still need to spell out the path. The ext method tunnels the connection over a remote shell, ssh in our setup. While pserver is popular (most projects use it), for our purposes we want ssh only, to keep prying eyes away from our industry-shaping "project!" Now that CVSROOT is set, we can import our project into the repository. We do this by moving into the project's directory and typing the import command:

 

jorge@piccolo:~$ cd project
jorge@piccolo:~/project$ cvs import -m "Initial import" project ars start

The -m flag supplies an initial log message that you'll want to provide, project is the name of the module, and "ars" and "start" are the vendor tag and release tag. Don't worry about those last two; save them for your guru days. CVS will spit out a list of everything it imported, and that's it! You've successfully imported your first project into CVS. Now head off to another machine and set CVSROOT there as well. What we want to do next is "check out" the project on this new machine, which is done with the, you guessed it:

jorge@gohan:~$ cvs checkout project

CVS will then check out the code to the new machine, copying all the files from the CVS server into a local directory. Now you can go about your normal work on the project. At the end of the day you'll want to save your work, so run one more command before you finish:

jorge@gohan:~$ cvs commit

This commits your changes to the repository. CVS will then open your text editor, specified in the EDITOR environment variable, and ask for a log entry. This is important, especially if you are working with a group of people. Just like when commenting code, we always want to add descriptive, useful comments. "Refactored function foo to not be broken" is better than "checking in so I can go home." We'll see why logging is important in a second.
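
One catch: commit only sends files that CVS already knows about. If you created brand-new files during the day, schedule them for addition first and then commit them (the filename below is just a placeholder; the -m flag lets you supply the log message on the command line instead of in an editor):

jorge@gohan:~$ cvs add newfile.c
jorge@gohan:~$ cvs commit -m "Add newfile.c to project"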

To complete this cycle we then head back to our original machine, and issue one last command.

jorge@piccolo:~/project$ cvs update

Since the project already exists on piccolo, there is no need to check it out again; an update merely synchronizes our local files with the newer ones in the repository.
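
Here is why those log messages matter. A quick sketch, with a made-up filename and revision numbers: cvs log lists every revision of a file along with its commit message, cvs diff compares two revisions, and cvs update -p prints an old revision to standard output so you can rescue it after a botched change.

jorge@piccolo:~/project$ cvs log parser.c
jorge@piccolo:~/project$ cvs diff -r 1.3 -r 1.4 parser.c
jorge@piccolo:~/project$ cvs update -p -r 1.3 parser.c > parser.c

Committing the rescued file then records it as a new revision, so nothing in the project's history is ever lost.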

This is a rudimentary introduction to CVS; however, we hope that it is complete enough to spark some discussion. One day, when you stupidly mess up a whole bunch of code, you will be glad when the CVS rollback feature saves the day. It is also a decent way to keep track of projects via the web; here is an example from Mozilla.org (http://webtools.mozilla.org/bonsai/cvsblame.cgi?file=mozilla/configure.in&rev=&root=/cvsroot). Heck, we here at Linux.Ars even use CVS to manage our Linux.Ars (http://www.whiprush.org/viewcvs.cgi/linux.ars/volume-13.html) production.

If you dislike the command line, you can find plenty of graphical clients here (http://www.onlamp.com/pub/a/onlamp/2002/11/21/cvs_third_party.html). TortoiseCVS (http://www.tortoisecvs.org/) is our favorite for our Windows-using friends, while CVS is included with the Mac OS X Developer Tools.

 

Cool app of the week

Backups are always an important topic, and many people don't give them the attention they deserve. Rdiff-backup (http://rdiff-backup.stanford.edu/) is a program that makes it easier to back up your files. Not only does it store a full mirror of the backed-up files, it also stores diffs against previous backups in a special subdirectory, so earlier backups remain accessible. Thus, in one package, rdiff-backup provides full data mirroring and incremental backups. In addition to working on local disks and mounted NFS and SMB shares, rdiff-backup works over ssh, which makes it possible to securely back up your files over public networks such as the Internet itself.

While rdiff-backup is a command-line application, it is not at all difficult to use; its options and commands are logical and intuitive. The simplest invocation backs up one local directory to another:

rdiff-backup /path/to/source-directory /path/to/backup-directory

Backing up a local directory to a remote destination:

rdiff-backup /path/to/source-directory user@remote-host::/path/to/backup-directory

One can also back up a remote source to a local destination, or even copy from one remote machine to another, which lets us use rdiff-backup to back up directories on a number of different machines:

rdiff-backup user@remote-host-1::/path/to/source-directory user@remote-host-2::/path/to/backup-directory

Restoring data is just as easy. The simplest way is to copy the backup directory over to the destination.

cp -a /path/to/backup-directory /path/to/destination

Of course, rdiff-backup can also restore the data itself:

rdiff-backup -r now /path/to/backup-directory /path/to/destination

The -r (or --restore-as-of) option allows you to restore not only the latest backup but previous ones as well. For example, restoring a file to the state it was in 7 days ago is simple:

rdiff-backup -r 7D /path/to/backup/file /path/to/destination/file
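
If you are not sure how far back your increments go, rdiff-backup can list what it has stored for a given backup directory (a quick aside; the exact output format may vary between versions):

rdiff-backup --list-increments /path/to/backup-directory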

It also has other important backup features, such as deleting all backups that are older than a given time span or that predate a set number of backup sessions. You can also tell rdiff-backup to exclude certain files, so that things that aren't supposed to be backed up, such as temporary files, never end up in the backup.
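
A brief sketch of both, with the cutoffs and the exclude pattern chosen purely for illustration (check rdiff-backup --help for the details in your version): --remove-older-than deletes increments older than a time span or beyond a number of backup sessions, and --exclude keeps matching files out of the backup.

rdiff-backup --remove-older-than 2W /path/to/backup-directory
rdiff-backup --remove-older-than 20B /path/to/backup-directory
rdiff-backup --exclude '**/*.tmp' /path/to/source-directory /path/to/backup-directory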

The developers of rdiff-backup offer source tarballs as well as Red Hat RPMs, and it is included in many major distributions, including Debian unstable, Gentoo, and Fedora, as well as in the FreeBSD and NetBSD ports collections. ~write-up by windi

 

/dev/random