All-In-One Configuration Tools

As I mentioned previously, I run many sites on my web server. Yesterday I decided to clean up some sites whose owners had neglected or abandoned them. One such site was running the Apache Tomcat Java server, which I did not care to leave running.

Now I, like many users of commercial hosting plans, pay for cPanel/WHM, which includes a myriad of options/configurations/settings for doing almost everything on the server. Back in 2007, I had used the cPanel Addon to install Tomcat. It was an incredibly easy “1-click install”. I never checked, but I just assumed it worked. Similarly, I thought it would be just as easy to uninstall Tomcat. I clicked “Uninstall”, everything seemed to go well, and I didn’t see any immediate problems. Or so I thought …

Last night the Apache webserver failed. I did not realize it until this morning (6 hours later). After some digging I found that Apache could not find some Tomcat/Java module. So much for a proper uninstall. I did not have time to debug the issue, so what did I do? I simply re-installed Tomcat. I just could not afford any more downtime! … I know, I know: Shame on me!

This incident is like many commonly seen in the Linux world: an all-in-one graphical configuration tool can do wonders, but interactions between components can cause all sorts of unforeseen problems. The root problem here is that it is incredibly difficult to know all the intricacies and nuances of administering multiple software systems. Add to that the occasional need to manually edit config files, and you create an unmanageable mess.

Do you remember linuxconf? … Back in the day (pre-2002) Red Hat included a configuration tool called linuxconf, which could manage many system options through a variety of graphical and non-graphical interfaces. While this worked wonders for novices performing simple tasks (mounting disk partitions, adding users, setting network addresses), it caused all sorts of issues with more complex services (web server, mail server, Samba). Unfortunately, at that time there were very few comprehensive tools for configuring complex servers. Users who got burned by linuxconf eventually learned that the only guaranteed way to set things up was to read the man pages and documentation, and then edit the config files manually.

Red Hat did eventually abandon linuxconf with RH8.0. And while many users complained, ultimately it was a smart decision. Software projects cannot be held accountable if some 3rd-party tool mangles their config files. Even more importantly, how can someone be certain the tool made the change they requested without looking at the config output? You can’t.

Sadly even though I expected cPanel to do its job (considering it is not free), I should have been more careful on a live production server. While I’m not saying that every single “all-in-one” tool is a failure, I am saying that trusting any tool without validation is a very poor choice.

SSH Client Configuration

I have a VPS which hosts many websites. Some of those sites permit ssh access for their admins, and I am the admin of several sites myself. Each site has a different username (login/password) for administration. Additionally, I have changed the ssh port to a non-standard number (instead of the default 22) to avoid some script/bot attacks.

All of this makes for very inconvenient ssh usage and plenty of typing errors. For example:

# ssh -p33333 username_site1@site1domain.com
# ssh -p33333 username_site2@site2domain.net

Fortunately ssh provides a client configuration file to make “shortcuts” for things like this.
If you start by reading the ssh_config man page:

# man ssh_config

It will reveal 4 useful options:

  • Host – A “shortcut” name which can be used instead of the full hostname address.
  • Hostname – The real host name which is the actual server to log into.
  • Port – Port number on the host server.
  • User – The username used to log in. Typically ssh will use the current unix username if not specified.

So, using the above example, I created the file ~/.ssh/config:

[mirandam@atlas ~]$ cd .ssh
[mirandam@atlas .ssh]$ touch config

with the following contents:

Host site1
Hostname site1domain.com
Port 33333
User username_site1

Host site2
Hostname site2domain.net
Port 33333
User username_site2

Now I can ssh to either site with a simpler command. These do exactly the same as the previous ssh commands:

# ssh site1
# ssh site2

NOTE: Read the man page carefully. If you see the following error:

Bad owner or permissions on /home/mirandam/.ssh/config

This means you did not properly set the permissions on the config file. To fix:

# chmod 600 ~/.ssh/config

There are many other options in the config file for users with more specific needs (X11 forwarding, timeouts, compression, etc.).
For anyone with multiple ssh accounts on different servers, this is very convenient to implement. Note that this also works for scp and sftp.
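As a sketch of what those extra options can look like (the values here are purely illustrative, not recommendations), per-host settings and global defaults can live in the same file:

```
Host site1
    Hostname site1domain.com
    Port 33333
    User username_site1
    ForwardX11 yes
    Compression yes

# ssh uses the first value it finds for each option, so wildcard
# defaults like these belong at the end of the file.
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3
```

The ordering comment is the important part: a `Host *` block placed first would override the per-host entries below it.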

Command Line DVD Burning

When I built my server, I only used a CD-RW/DVD-ROM combination drive. Whenever I remotely downloaded an ISO using wget or bittorrent, I would have to copy the 2-4GB file(s) from my server to either my desktop or laptop. I finally caved, and bought a DVD-RW drive for my server (even though it will get minimal usage).

(This was all on my CentOS 5 server; I executed these commands entirely remotely.)

After I installed the new drive, I ran dmesg to check how it was detected:

[mirandam@atlas ~]$ dmesg | grep DVD
hda: HP DVD Writer 1140d, ATAPI CD/DVD-ROM drive
hda: ATAPI 12X DVD-ROM DVD-R-RAM CD-R/RW drive, 2048kB Cache, UDMA(66)

The device is /dev/hda; the DVD entries in /dev are symlinked to it:

[mirandam@atlas ~]$ ls -l /dev | grep dvd
lrwxrwxrwx 1 root root         3 Aug 19 19:44 dvd -> hda
lrwxrwxrwx 1 root root         3 Aug 19 19:44 dvd-hda -> hda
lrwxrwxrwx 1 root root         3 Aug 19 19:44 dvdrw -> hda
lrwxrwxrwx 1 root root         3 Aug 19 19:44 dvdrw-hda -> hda
lrwxrwxrwx 1 root root         3 Aug 19 19:44 dvdwriter -> hda
lrwxrwxrwx 1 root root         3 Aug 19 19:44 dvdwriter-hda -> hda

Necessary Software

In order to work with DVD writing, you should have the dvd+rw-tools package installed. It will most likely already be installed if you have Gnome or a CD/DVD app such as K3b. If not, install it via yum:

# yum install dvd+rw-tools
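Before erasing or writing anything, it can be worth checking what the drive and disc report. dvd+rw-tools includes dvd+rw-mediainfo for this; the /dev/dvd path is the device found above, and this sketch just falls back to a message if the tool or disc is unavailable:

```shell
# Query the disc currently in the drive; prints media type, capacity
# and supported write speeds. /dev/dvd is the symlink found above.
if command -v dvd+rw-mediainfo >/dev/null 2>&1; then
    dvd+rw-mediainfo /dev/dvd || echo "no readable disc in /dev/dvd"
else
    echo "dvd+rw-mediainfo not found - install dvd+rw-tools first"
fi
```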

I wanted to erase a DVD+RW.

I had a junk DVD+RW (Memorex) lying around that I wanted to erase quickly. I used the dvd+rw-format command (it took less than 30 seconds):

[mirandam@atlas ~]$ sudo dvd+rw-format -force /dev/dvd
* BD/DVD+RW/-RAM format utility by <appro @fy.chalmers.se>, version 7.0.
* 4.7GB DVD+RW media detected.
* formatting 92.3%

I wanted to burn a downloaded ISO file onto the DVD+RW.

I used the growisofs command. I tried using sudo but it refused, so I logged in directly as root.

[mirandam@atlas ISO]$ su -
Password:
[root@atlas ~]# cd ~mirandam/ISO/

[root@atlas ISO]# growisofs -dvd-compat -Z /dev/dvd=DVD_Image.iso -speed 2
Executing 'builtin_dd if=DVD_Image.iso of=/dev/dvd obs=32k seek=0'
/dev/dvd: restarting DVD+RW format...
/dev/dvd: "Current Write Speed" is 4.1x1352KBps.
   31490048/2530975744 ( 1.2%) @4.0x, remaining 7:56 RBU 100.0% UBU   2.4%
...
...
 2512584704/2530975744 (99.3%) @4.0x, remaining 0:03 RBU 100.0% UBU  99.8%
builtin_dd: 1235840*2KB out @ average 4.0x1352KBps
/dev/dvd: flushing cache
/dev/dvd: writing lead-out

As you can see, it took about 8 minutes (fast!) to finish. Afterwards, I mounted the new DVD to test it (my image was udf; most Linux CD/DVD images are iso9660):

[mirandam@atlas ~]$ sudo mount /dev/dvd /mnt/dvd -t udf

When I got back to the server, my DVD+RW was ready for me.

I plan to always leave a DVD+RW disc in the drive so it can serve as a means for backup. Automating the process may also be a good idea.
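A starting point for that automation might be a small script like this one. It is only a sketch (the backup directory and device paths are assumptions for illustration); growisofs can master the filesystem itself, so no intermediate ISO file is needed:

```shell
#!/bin/sh
# backup-to-dvd.sh - burn a directory straight to the DVD+RW in the drive.
# BACKUP_DIR and DEVICE are illustrative defaults; override via environment.
BACKUP_DIR="${BACKUP_DIR:-/home/mirandam/backup}"
DEVICE="${DEVICE:-/dev/dvd}"

burn_backup() {
    # -Z starts a new session; -R/-J add Rock Ridge and Joliet
    # extensions so filenames survive on Linux and Windows alike.
    growisofs -Z "$DEVICE" -R -J "$BACKUP_DIR"
}

# Only attempt the burn when the device node actually exists,
# e.g. when run from a nightly cron job on the server.
if [ -b "$DEVICE" ]; then
    burn_backup
else
    echo "No DVD writer at $DEVICE; skipping burn."
fi
```

Dropped into cron, a script along these lines would refresh the disc every night without any interaction.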

More Info

http://fy.chalmers.se/~appro/linux/DVD+RW/
http://www.andrews-corner.org/burning.html

Supported Features

Sounds like something Fedora users might relate to…

I hear many of you finally have smooth Flash support, but me and my Intel card are still waiting on a kernel patch somewhere in the pipeline before we can watch Jon Stewart smoothly.

Google Apps for Domains

I spend way too much time and effort tweaking the SpamAssassin settings on my server just to keep my email and spam situation manageable.

Anyways, I’m getting sick of the trouble, so I am trying out Google Apps for Domains. This lets me use my own domain name while using Gmail for email and other Google web-based applications (such as “Docs” and “Calendar”), all for free. It is basically the whole set of Google applications made to work from your own domain. The best part is that it can be configured to work without interfering with your actual website, so you can still run your blog, web page or forum.

There are some significant benefits since Google is managing a lot of the software on their side.

Email
In Gmail I can easily create email address aliases or use “subaddressing” (e.g. mail to user+lists@mydomain.com delivers to user@mydomain.com) without messing with things like cPanel or Exim. This is very useful for mailing lists, among other things.

Calendar
Even though Google Apps was designed for multiple users, it is just as effective for a single user. The Calendar feature can be used online, or it can be made to work with desktop applications like Evolution.

Setup
If you want to use this free service, all you need is a domain name (you don’t necessarily need hosting). I was a bit hesitant to mess with my main server, so I decided to use my unused mjmwired.com, which I have through 1and1. Google does a very good job providing configuration instructions for most of the popular domain name providers. Using 1and1’s config options, I can redirect subdomains such as mail.mjmwired.com directly to the Gmail login for my domain.
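For providers without such guided options, the change mostly boils down to pointing the domain’s MX records at Google’s mail servers. The host names and priorities below are what Google’s setup guide listed when I set this up; treat them as an illustration and check the current docs before copying them:

```
; MX records for Google Apps mail (zone-file style illustration)
mjmwired.com.    IN  MX   1  ASPMX.L.GOOGLE.COM.
mjmwired.com.    IN  MX   5  ALT1.ASPMX.L.GOOGLE.COM.
mjmwired.com.    IN  MX   5  ALT2.ASPMX.L.GOOGLE.COM.
mjmwired.com.    IN  MX  10  ASPMX2.GOOGLEMAIL.COM.
mjmwired.com.    IN  MX  10  ASPMX3.GOOGLEMAIL.COM.
```

The lower the priority number, the more preferred the server; the alternates only matter when the primary is unreachable.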

Google Apps for Domains can be used by individuals or even communities or groups (of up to 50 people) for free. The enterprise options provide even more features (at a cost). If you have ever considered trying it out, it is not too expensive to get a $7 domain name, and the setup takes merely a few hours.

So far I’ve found it quite convenient, and I might migrate further to Google Apps in the future. Even though I too have my reservations about Google’s Privacy issues, this feature is too nice to ignore.

SELinux – the good

There is a brief intro presentation on SELinux for “everyday” users. The 12th slide is titled “SELinux – the good”. It quotes someone by the name of Larry Loeb:

“Let me assure you that this action by the NSA was the crypto-equivalent of the Pope coming down off the balcony in Rome, working the crowd with a few loaves of bread and some fish, and then inviting everyone to come over to his place to watch the soccer game and have a few beers. There are some things that one just never expects to see, and the NSA handing out source code along with details of the security mechanism behind it was right up there on that list.”

I just found that hilarious so I had to pass it on.

I had been disabling SELinux ever since it was introduced in FC2, but as of Fedora 9 I leave it on. For the most part, it now operates pretty transparently.

(Presentation link via James Morris)

Google Chrome on Fedora

Try out Chromium. Courtesy of T ‘spot’ Callaway:

Using your favorite text editor (as root), create chromium.repo in /etc/yum.repos.d/, with the following contents:

[chromium]
name=Chromium Test Packages
baseurl=http://spot.fedorapeople.org/chromium/F$releasever/
enabled=1
gpgcheck=0

Then run (as root):

# yum install chromium

From spot’s blog:

The packages are i386/i586 only (and the i586 chromium is a bit of a lie, it isn’t compiled with the correct optflags yet) because chromium depends on v8, which doesn’t work on 64bit anything (yet). Also, plugins don’t work at the moment and some of the tab functionality doesn’t work right, but as a general web browser, it seems functional enough. (And, it seems to pass the Acid3 test, which isn’t surprising at all, since WebKit does and Chrome uses WebKit.)

Looks interesting!

64-bit Linux

There was some discussion on the fedora-devel list about changing the default architecture for 32-bit Fedora, which would mean that users running 32-bit Fedora on modern CPUs would see some improvement, while older hardware would need to be supported by some secondary means or not at all.

There are some good points in the thread, but the question I found myself asking was: “Why do I still run 32-bit Linux at all?”. For 3 years, all of my computers have been 64-bit hardware.

I thought it was the web, however …

  • Flash: Runs great using 32-bit software in 64-bit Fedora (the native 64-bit plugin is currently in beta).
  • Java: OpenJDK has had a native 64-bit browser plugin for Java for over a year (the official native 64-bit plugin for Sun Java was released almost 6 months ago).
  • I don’t even use things like RealPlayer anymore, and most websites no longer bother to embed video directly.

I often would recommend to people that multimedia had limitations or would require extra work in 64-bit Linux, but all my DVDs, music and collected media work perfectly fine! And if you’ve looked at tutorials for media playback, there is little or no difference in the work required. (FYI: I have not missed anything for NOT having the win32 binary DLLs.)

The only insignificant difference is the (sometimes) 10-15% size increase in downloads and applications when using 64-bit software. However, for the performance gain, the cost in hard disk space or download time is well worth it.

I feel silly for installing 32-bit CentOS on my personal server last year. It is not even used for multimedia or the web. I think I may upgrade it.

I admit I’ve been a luddite for far too long. If your hardware supports it (almost no new hardware is purely 32-bit), then you should be using 64-bit Linux. In your next update or install cycle, skip the i386 and go download the x86_64!
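If you are unsure whether a machine qualifies, the CPU flags will tell you: on x86 hardware, the lm (“long mode”) flag in /proc/cpuinfo means the processor can run 64-bit code, even if a 32-bit kernel is currently installed. A quick check (non-x86 machines report different flags, so treat this as an x86-only sketch):

```shell
# "lm" (long mode) among the CPU flags means 64-bit capable hardware,
# regardless of which kernel is currently running (compare uname -m).
if grep -qw lm /proc/cpuinfo; then
    echo "64-bit capable: consider the x86_64 install media"
else
    echo "32-bit only (or non-x86): stick with i386"
fi
uname -m
```

If the flag is present but `uname -m` still reports i686, you are running a 32-bit kernel on 64-bit hardware.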