Automated FTP Script

I’ve been doing development on some embedded Linux devices that only support ftp and telnet. To automate transferring binaries from my Fedora development host to the target device, I’m using this handy ftp script: pushftp.sh

#!/bin/bash

# arg1 = dest hostname
# arg2 = local file
# arg3 = dest dir

HOST=$1
SRCFILE=$2

USER=root
PASSWD=password
#DESTDIR=/usr/local/data
DESTDIR=$3

ftp -inv $HOST<<ENDFTP
user $USER $PASSWD
cd $DESTDIR
bin
put $SRCFILE
chmod 755 $SRCFILE
bye
ENDFTP

Then I added something like this to my Makefile after cross-compiling:

push:
        sh ~/bin/pushftp.sh $(TEST_BOX) $(APP_BIN) /usr/local/data

So after I run make all, I run make push. I still need to telnet to the host to test/debug the application, but this makes the process much faster (and less error-prone).
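
A possible refinement (plain Make dependency behavior, untested here) is to have the push target depend on the build target, so that a single command builds and uploads:

push: all
        sh ~/bin/pushftp.sh $(TEST_BOX) $(APP_BIN) /usr/local/data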

I’m just noting this for myself, hopefully it’s useful to someone else.

Downside to Enterprise Linux

(Note: For the purpose of this post CentOS is equivalent to RHEL)

It has been about two and a half years since I built a dedicated server and chose CentOS instead of Fedora. Since installing CentOS 5.1 I have used the upgrade process four times with no problems, bringing me to version 5.5 with practically no re-installation, re-configuration or troubleshooting of upgrade issues. This is the goal of enterprise Linux: a long-term stable solution with no major changes, preserving compatibility with every piece of software provided since the release (in this case April 2007). In the time I’ve been on CentOS, I’ve upgraded or replaced Fedora at least 5 times on other machines, each time learning the changes to software, languages, security and many other components.

I’m very pleased with CentOS, knowing that after the next yum update all my software will keep working and that security fixes are still being delivered.

However, the very thing that makes enterprise Linux so great is also what can make it a major pain. Once a main component is locked down, Red Hat will not provide updates to it unless needed for security or stability.

One objective for my server was web development. The state of the web as it stood in 2007, when RHEL 5 was created, is getting close to obsolete. CentOS 5 includes PHP version 5.1.6; however, PHP 5.2, which was released before RHEL 5, has become the de facto baseline requirement for many PHP applications. I was updating some code to use JSON when I realized I would have to deviate from the standard updates to install PHP 5.2 on CentOS. (This wasn’t too bad.)
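
A quick way to see the problem from the shell (json_encode() only ships with PHP as of the 5.2 series, so on a stock CentOS 5 box without an extra JSON extension this returns false):

# php -v
# php -r 'var_dump(function_exists("json_encode"));'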

Another objective was a file server and backups. I’ve been playing with Dropbox (*) as a means of an off-site backup solution. What makes it great is its support for Linux, even text-only Linux, which is what my server runs. However, the first requirement is Python 2.5. CentOS uses 2.4, and you can’t do a major update of Python in a CentOS/Fedora install without breaking many things, since it is a critical component. You can do a parallel install of Python 2.5, but this is a bit annoying to maintain since you end up with two versions of Python installed. (I have yet to get Dropbox working well on my server.)
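
The parallel install is essentially the classic “altinstall” build from source; a sketch only, with the version and prefix here just examples:

# wget http://www.python.org/ftp/python/2.5.4/Python-2.5.4.tgz
# tar xzf Python-2.5.4.tgz
# cd Python-2.5.4
# ./configure --prefix=/usr/local
# make
# make altinstall

The altinstall target installs a separate python2.5 binary (here under /usr/local/bin) and leaves the system /usr/bin/python untouched, which is exactly why you end up maintaining two copies.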

I have also been writing C++ software using Boost. I recently realized the asio library was standard in Boost 1.37 and later, while I was locked to 1.33 in CentOS 5. No big deal, since the boost package was not critical to anything else on my CentOS install and could be easily replaced. So I decided to recompile a newer Fedora boost src.rpm. However, I would see errors like this:

error: unpacking of archive failed on file /home/mirandam/rpmbuild/SOURCES/boost-1.41.0-iostreams-zlib.patch;4c7880e5: cpio: MD5 sum mismatch

The above error is simply because Fedora 12 changed the compression format used inside RPM packages, rendering older versions of RPM unable to unpack the newer packages. I don’t dare meddle with RPM, as it is a core component, so I ended up compiling an older Fedora 11 boost 1.37 src.rpm instead, which did the job.
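
For anyone attempting the same, the rebuild itself is the standard src.rpm procedure (assuming the build dependencies are installed; the file names, architecture and RPMS path will vary with your setup):

# rpmbuild --rebuild boost-1.37.0-*.src.rpm
# rpm -Uvh ~/rpmbuild/RPMS/x86_64/boost-*1.37*.rpm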

Overall I’m still happy with my setup, but I’m gradually spending a great deal of time patching different pieces as my needs evolve. Interestingly, Red Hat recently announced extended support lasting up to 10 years. That seems far too long considering how quickly software moves (although plenty of people still run Windows XP, now 9 years old).

I’m getting a little antsy running 3-year-old software. The good news is that RHEL 6 is in beta, which means the completely free CentOS 6 will follow soon after its release. I’m eagerly waiting for it, because all my issues will be addressed … at least for the time being.

(*) Affiliate link – I highly recommend Dropbox.

All-In-One Configuration Tools

As I mentioned previously, I run many sites on my web server. Yesterday I decided to clean up some sites whose owners had neglected or abandoned them. One such site was running the Apache Tomcat Java server, which I did not care to leave running.

Now I, like many users of commercial hosting plans, pay for cPanel/WHM, which includes a myriad of options, configurations and settings to do almost everything on the server. Back in 2007, I had used the cPanel addon to install Tomcat. It was an incredibly easy “1-click Install”. I never checked, but I just assumed it worked. Similarly, I thought it would be just as easy to uninstall Tomcat. I clicked “Uninstall”, everything seemed to go well, and I didn’t see any immediate problems. Or so I thought …

Last night the Apache web server failed. I did not realize it until this morning (6 hours later). After some digging I found that it was because Apache could not find some Tomcat/Java module. So much for a proper uninstall. I did not have time to debug the issue, so what did I do? I simply re-installed Tomcat. I just could not afford any more downtime! … I know, I know: shame on me!
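
In hindsight, a simple configuration check immediately after the “uninstall” would likely have caught the dangling module reference before Apache ever tried to restart (assuming a standard apachectl is on the path):

# apachectl configtest

Anything still referencing a removed Tomcat connector (a leftover LoadModule line, for example) shows up right away instead of six hours into downtime.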

This incident is like many commonly seen in the Linux world: an all-in-one graphical configuration tool can do wonders, but the interaction between components can cause all sorts of unforeseen problems. The root problem is that it is incredibly difficult to know all the intricacies and nuances of administering multiple software systems. Add to that the occasional need to manually edit config files, and you get an unmanageable mess.

Do you remember linuxconf? … Back in the day (pre-2002) Red Hat included a configuration tool called linuxconf which could manage multiple system options using a variety of graphical and non-graphical interfaces. While it worked wonders for novices performing simple tasks (mounting disk partitions, adding users, setting network addresses), it caused all sorts of issues for more complex services (web server, mail server, Samba). Unfortunately, at that time there were very few comprehensive tools for configuring complex servers. Users who got burned by linuxconf eventually learned that the only guaranteed way to set things up was to read the man pages and documentation, and then edit the config files manually.

Red Hat did eventually abandon linuxconf with RH 8.0, and while many users complained, ultimately it was a smart decision. Software projects cannot be held accountable if some 3rd-party tool mangles their config files. Even more importantly, how can someone be certain the tool made the change they requested without looking at the config output? You can’t.

Sadly even though I expected cPanel to do its job (considering it is not free), I should have been more careful on a live production server. While I’m not saying that every single “all-in-one” tool is a failure, I am saying that trusting any tool without validation is a very poor choice.

SSH Client Configuration

I have a VPS which hosts many websites. Some of those sites allow ssh access for their admins, and I am the admin of several sites myself. Each site has a different username (login/password) for administration. Additionally, I have changed the ssh port to a non-standard number (instead of the default 22) to avoid some script/bot attacks.

All of this makes for very inconvenient ssh usage and plenty of typing errors. For example:

# ssh -p33333 username_site1@site1domain.com
# ssh -p33333 username_site2@site2domain.net

Fortunately ssh provides a client configuration file to make “shortcuts” for things like this.
Start by reading the ssh_config man page:

# man ssh_config

It will reveal 4 useful options:

  • Host – A “shortcut” name which can be used instead of the full hostname address.
  • Hostname – The real host name which is the actual server to log into.
  • Port – Port number on the host server.
  • User – The username used to log in. Typically ssh will use the current unix username if not specified.

So, using the above example, I created the file ~/.ssh/config:

[mirandam@atlas ~]$ cd .ssh
[mirandam@atlas .ssh]$ touch config

with the following contents:

Host site1
Hostname site1domain.com
Port 33333
User username_site1

Host site2
Hostname site2domain.net
Port 33333
User username_site2

Now I can ssh to either site with a simpler command. These do exactly the same as the previous ssh commands:

# ssh site1
# ssh site2

NOTE: Read the man page carefully. If you see the following error:

Bad owner or permissions on /home/mirandam/.ssh/config

This means you did not properly set the permissions on the config file. To fix:

# chmod 600 ~/.ssh/config

There are many other options in the config file for users with more specific needs (X11 forwarding, timeouts, compression, etc.).
For anyone with multiple ssh accounts on different servers, this is very convenient to set up. Note that it also works for scp and sftp.
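
For example, a single Host entry can combine several of these extras; the values below are purely illustrative:

Host site1
Hostname site1domain.com
Port 33333
User username_site1
ForwardX11 yes
Compression yes
ServerAliveInterval 60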

Google Apps for Domains

I spend way too much time and effort tweaking the SpamAssassin settings on my server just to keep my email and spam situation manageable.

Anyway, I’m getting sick of the trouble, so I am trying out Google Apps for Domains. This allows me to use my own domain name while using Gmail for email and other Google web-based applications (such as “Docs” and “Calendar”), all for free. It is basically the whole set of Google applications made to work from your own domain. The best part is that it can be configured without interfering with your actual website, so you can still run your blog, web page or forum.

There are some significant benefits since Google is managing a lot of the software on their side.

Email
In Gmail I can easily create email address aliases or use “subaddressing” (for example, mail sent to user+lists@example.com is delivered to the user@example.com inbox) without messing with things like cPanel or Exim. This is very useful for mailing lists, among other things.

Calendar
Even though Google Apps was designed for multiple users, it is just as effective for a single user. The Calendar feature can be used online, or it can be made to work with desktop applications like Evolution.

Setup
If you want to use this free service, all you need is a domain name (you don’t necessarily need hosting). I was a bit hesitant to mess with my main server, so I decided to use my unused mjmwired.com, which I have through 1and1. Google does a very good job of providing configuration instructions for many of the most popular domain name providers. Using 1and1’s configuration options, I can point subdomains such as mail.mjmwired.com directly to the Gmail login for my domain.
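
For the record, the changes boil down to a handful of DNS entries. The exact hosts and priorities come from Google’s setup instructions, so treat these as illustrative only:

mail.mjmwired.com.   CNAME   ghs.google.com.
mjmwired.com.        MX 10   ASPMX.L.GOOGLE.COM.
mjmwired.com.        MX 20   ALT1.ASPMX.L.GOOGLE.COM.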

Google Apps for Domains can be used by individuals, or even by communities or groups of up to 50 people, for free. The enterprise options provide even more features (at a cost). If you have ever considered trying it out, it is not expensive to get a $7 domain name, and the setup takes only a few hours.

So far I’ve found it quite convenient, and I might migrate further into Google Apps in the future. Even though I too have my reservations about Google and privacy, this service is too nice to ignore.

CentOS 5.3 Released

For those who might not be familiar with enterprise Linux distributions, CentOS is a rebranded free version of Red Hat Enterprise Linux (RHEL). For enterprise usage, Red Hat supports each release for 7 years while carefully updating packages for backwards compatibility. Each “point release” (5.1, 5.2, …) is effectively a service pack update. RHEL 5.3 was released at the end of January.

Typically it takes a few weeks for the CentOS team to repackage, build and distribute the RHEL sources as a CentOS release. Last night the CentOS 5.3 release was announced. The seemingly long delay was due primarily to some personal issues within the CentOS team.

I have been running a personal server on CentOS for 1 year now and I could not be more pleased with the results. I plan to update my server tonight when I am at the console. The following are some tips I’ve read online for a smooth (and fast) upgrade:

# yum update glibc

The glibc update comes first because of a known issue with the RHEL 5.3 update.

After that, I would generally do the following, which updates yum and rpm themselves first in order to take advantage of any improvements in the newer versions:

# yum update yum rpm
# yum clean all
# yum -y update
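
Once everything finishes (and after a reboot if a new kernel came down), it is easy to confirm the release level:

# cat /etc/redhat-release
CentOS release 5.3 (Final)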

Even though past updates have been flawless for me, please do make proper backups, and read the Release Notes for more information.

Wikipedia Migrates from Fedora to Ubuntu

The admins running Wikipedia have almost finished migrating their servers from a mix of Red Hat and Fedora to Ubuntu. The primary reasons behind the switch, according to Brion Vibber (Wikimedia CTO), were personal preference, Ubuntu’s availability on the desktop, and better support/stability compared to Fedora. For servers, one might think that an enterprise option like RHEL or CentOS would make a better choice; however, both of these lack the appeal of Ubuntu and its flexibility in support.

Regardless of the reasons for the switch, this is another opportunity for people to question Fedora’s fast-moving development pace (i.e. “bleeding edge”). Fedora users know that Fedora requires constant updating/upgrading, and Fedora developers are obviously quite accustomed to this and welcome it. An interesting thread on the Fedora development mailing list raised this topic and spawned a great deal of discussion. Some users/developers think that if Fedora provided an LTS (long-term support) stable release, then perhaps situations like Wikipedia’s could have been avoided. Jesse Keating, Fedora Release Engineer, chimed in with a very well-worded point:

Given the amount of churn we allow maintainers to introduce into our “stable” releases, I highly doubt Fedora would be suitable for any situation where a “LTS” was desired. There is just too much major version upgrading, behavioral changes, massive amounts of updates, rapidly invalid documentation, and high chance of regression in the “stable” updates. We should address *that* problem before ever thinking about extending the life.

Even if Fedora could address that problem, big organizations most likely won’t change their opinions. But if those issues were addressed, many users probably wouldn’t be migrating away, and more importantly they would simply have a much better operating system!

(As a personal point, I no longer use Fedora as a server. I recommend CentOS.)

Local Server Hardware

After identifying exactly what my local Linux server requirements were, I decided to take a good look at my hardware options. From the start, I wanted to (1) save money, (2) save electricity/power and (3) minimize noise.

To address these goals, I made several decisions. First of all, I did not need any peripherals; the server could be entirely “headless” (i.e. no monitor, no keyboard, no mouse, etc.). The primary things of value are disk storage, memory, CPU and network. Hence the following are my requirements:

  • Processor – CPU speed should be about 1GHz, preferably from a low-power processor (ex: Geode-NX, Pentium-M, etc.). I prefer a CPU without a fan that can be cooled with the heat-sink alone. Although I do not plan to do computationally critical tasks, in the past I have seen certain processors (Via) have difficulty with SSH (something that will be critical to performance for me).
  • Memory – Minimum memory: 256MB. From my experience using Fedora as a server and building other minimalistic Linux machines, I know that all my tasks/processes should be fine with about 150-200MB of memory after some tweaking. For safety, about 512MB will be perfect.
  • Disk – Minimum storage space: 80GB. All of my critical documents, development, music (mp3’s, etc.) and server backups could easily fit into 20-30GB, and I think double that amount should be adequate. Realistically, though, since I would most likely want a SATA drive for performance, the best prices are on 160GB drives.
  • Video – Don’t care. On board. The most I would need to do is run a remote X-server session, which does not benefit from better video hardware.
  • Sound – Don’t care. On board. Would never be used.
  • Network – Standard networking is now 100Mbit, however it would be nice to have 1Gbit. Although my desktop and laptop are both 1Gbit, I do NOT have a gigabit switch.
  • DVD – For installation, a DVD drive would be needed, but once complete, everything would be done over the network. However, there is a practical advantage to having a DVD-RW. Even if the hard drive was very large, a 4GB DVD backup could still be useful for incremental long term backups.

What did I end up getting?

  • Motherboard/CPU – Intel Celeron 220 1.2GHz. I found a mini-ITX CPU/board combination. My primary reasons for the purchase were (1) price – less than $80 USD, (2) no noise – fanless, and (3) power consumption of only 27W. The other features of the board were SATA, onboard video/sound, Ethernet and USB 2.0.
  • Memory – 1GB 533MHz. I did not need 1GB, but the price was only $25 USD, and since the motherboard had only one slot supporting a maximum of 1GB, I decided it was best to just max it out now.
  • Storage – Seagate Barracuda 7200.10 SATA 250GB. I did not need the space either, but the price was only $70 USD, which for the features (16MB cache, SATA 3.0Gb/s) was a good deal. I currently recommend Seagate Barracuda drives; they offer great performance and reliability.
  • Case – Antec NSK MicroATX Cube (350W). I thought it would be best to minimize the space the system would take. Also this series of cases is known to be absolutely silent as well as power efficient. This was the most expensive part: $100 USD.
  • DVD – I recycled an old DVD-ROM I was not using.

Total spent: ~$280 USD. This was well within my budget of $300-400.

There were some things I learned and I did run into some issues. I should first mention that everything did “work” correctly.

  • Motherboard SATA – I realized later that the motherboard only supports 1.5Gb/s SATA. So my drive is not operating as fast as it could, but this is acceptable since the majority of my file transfers will be fairly small and standard SATA performance is very good.
  • Motherboard Video – I knew that onboard video is pretty low performance (in this case SiS Chipset). However for whatever reason the VGA-out on this board was especially noisy. While this would be unacceptable for normal use, I never intended to have a monitor on this machine.
  • Motherboard BIOS – This board has an Intel BIOS. I am most familiar with Asus motherboards, whose BIOS setup has an option to set the startup time manually (fixed, daily, etc.); the nvram-wakeup utility uses that setting to wake up the computer. However, this board does NOT have that option. I was hoping the machine would sleep during inactivity and wake automatically. I have not yet decided how to work around this other than manually starting it each morning (see the sketch after this list for one untested idea).
  • Celeron CPU – While this CPU met my needs perfectly, I was a little disappointed that it does not power-throttle (i.e. reduce clock speed to save power). The ACPI information is a little bit confusing; however, given that it is low power anyway and it does support 64-bit, this is acceptable.
  • Case – Any “smaller” case typically has problems. Antec is usually known for quality; however, I found my case incredibly noisy because the power supply fan rattled too much. I was thinking I would have to RMA it to Newegg, but luckily I eliminated the noise by taking everything apart and re-tightening every single screw. Very frustrating, but it worked.
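
Regarding the BIOS wake-up limitation above: one workaround I have not tried is to skip the BIOS entirely and program the kernel’s RTC alarm before shutting down. This is only a sketch and assumes the board’s RTC and the kernel support the wakealarm interface, which I have not verified:

# echo 0 > /sys/class/rtc/rtc0/wakealarm
# echo $(date -d 'tomorrow 07:00' +%s) > /sys/class/rtc/rtc0/wakealarm
# shutdown -h now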

Overall I am happy with the purchase. I did end up saving quite a bit, but I can’t help thinking that perhaps spending a little bit more would have allowed for some slight benefits.

I guess I was a little bit wordy, but I wanted to explain very clearly my thought process in assembling a computer with a very specific purpose.

More on this after the software is up and running!