Local Server Hardware

After identifying exactly what my local Linux server requirements were, I decided to take a good look at my hardware options. From the start, I wanted to (1) save money, (2) save electricity/power and (3) minimize noise.

To address these goals, I made several decisions. First of all, I did not need any peripherals. The server could be entirely “headless” (i.e. no monitor, no keyboard, no mouse, etc.). The primary things of value are disk storage, memory, CPU and network. Hence the following are my requirements:

  • Processor – CPU speed should be about 1GHz, preferably a low power processor (e.g. Geode NX, Pentium M, etc.). I prefer a CPU without a fan, one that can be cooled with a heat-sink alone. Although I do not plan to run computationally intensive tasks, in the past I have seen certain processors (VIA) struggle with SSH (something that will be critical to performance for me).
  • Memory – Minimum memory: 256MB. From my experience using Fedora as a server and building other minimalistic Linux machines, I know that all my tasks/processes should be fine with about 150-200MB of memory after some tweaking. For safety, about 512MB will be perfect.
  • Disk – Minimum storage space: 80GB. All of my critical documents, development, music (mp3’s, etc.) and server backups could easily fit into 20-30GB. I think double that amount should be adequate. However, realistically, since I would most likely want a SATA drive for performance, most drives at the best price point are 160GB.
  • Video – Don’t care. On board. The most I would need to do is run a remote X-server session, which does not benefit from better video hardware.
  • Sound – Don’t care. On board. Would never be used.
  • Network – Standard networking is now 100Mbit, but 1Gbit would be nice to have. Although my desktop and laptop are both 1Gbit, I do NOT have a gigabit switch.
  • DVD – For installation, a DVD drive would be needed, but once complete, everything would be done over the network. However, there is a practical advantage to having a DVD-RW: even if the hard drive is very large, a 4GB DVD backup could still be useful for incremental long-term backups.

What did I end up getting?

  • Motherboard/CPU – Intel Celeron 220 1.2GHz. I found a mini-ITX CPU/board combination. My primary reasons for purchase were (1) price – less than $80 USD, (2) no noise – fanless, (3) power consumption of only 27W. The other features of the board were SATA, onboard video/sound, Ethernet and USB 2.0.
  • Memory – 1GB 533MHz. I did not need 1GB, but the price was only $25 USD, and since the motherboard had only one slot supporting a maximum of 1GB, I decided it was best to just max it out now.
  • Storage – Seagate Barracuda 7200.10 SATA 250GB. I did not need the space either, but the price was only $70 USD, which for the features (16MB cache, SATA 3.0Gb/s) was a good deal. I currently recommend Seagate Barracuda drives; they offer great performance and reliability.
  • Case – Antec NSK MicroATX Cube (350W). I thought it would be best to minimize the space the system would take. Also this series of cases is known to be absolutely silent as well as power efficient. This was the most expensive part: $100 USD.
  • DVD – I recycled an old DVD-ROM I was not using.

Total spent: ~$280 USD. This was well within my budget of $300-400.

There were some things I learned and I did run into some issues. I should first mention that everything did “work” correctly.

  • Motherboard SATA – I realized later that the motherboard only supports 1.5Gb/s SATA. So my drive is not operating as fast as it could, but this is acceptable since the majority of my file transfers will be somewhat smaller and standard SATA performance is very good.
  • Motherboard Video – I knew that onboard video is pretty low performance (in this case SiS Chipset). However for whatever reason the VGA-out on this board was especially noisy. While this would be unacceptable for normal use, I never intended to have a monitor on this machine.
  • Motherboard BIOS – This board has an Intel BIOS. I am most familiar with Asus motherboards, whose BIOS setup has an option to manually set a startup time (fixed, daily, etc.). The nvram-wakeup utility uses that setting to wake up the computer. However this board does NOT have that option. I was hoping the machine would sleep during inactivity and wake automatically. I have not yet decided how to work around this other than starting it manually each morning.
  • Celeron CPU – While this CPU meets my needs perfectly, I was a little disappointed that it does not power-throttle (i.e. reduce clock speed to save power). The ACPI information is a little confusing, however given that it is low power anyway and supports 64-bit, this is acceptable.
  • Case – Any “smaller” case typically has problems. Antec is usually known for their quality, however I found my case incredibly noisy. The power supply fan rattled too much. I was thinking I would have to RMA to Newegg, but luckily I eliminated the noise by taking everything apart and re-tightening every single screw. Very frustrating but it worked.
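On the BIOS wake-up limitation above: one workaround I may eventually try is the kernel's ACPI alarm interface. The sketch below is untested on this board and assumes the kernel exposes /proc/acpi/alarm, which many kernels do not; the date is an arbitrary example.

```shell
# Untested sketch: program the RTC to wake the machine at 6:00 AM,
# then power down (requires root; /proc/acpi/alarm is not universal)
echo "2008-04-01 06:00:00" > /proc/acpi/alarm
shutdown -h now
```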

Overall I am happy with the purchase. I did end up saving quite a bit, but I can’t help thinking that perhaps spending a little bit more would have allowed for some slight benefits.

I guess I was a little bit wordy, but I wanted to explain very clearly my thought process in assembling a computer with a very specific purpose.

More on this after the software is up and running!

Local Server Requirements

In response to my previous post, I started defining a set of requirements for a separate local server. By separate, I mean specifically a physically different machine than my desktop. Previously I had my desktop PC running Linux (Fedora) with all my desktop applications, as well as several server applications in the background.

The following is a list of ideas, requirements and plans I hope to achieve with this server.

1. Provide file access locally (on the LAN) and remotely (over the internet).
Samba is needed for file access from Windows machines, and SSH provides remote file access via scp and sftp. I considered additionally using NFS for Linux file access, however Samba works efficiently both Linux-to-Linux and Linux-to-Windows, so running multiple file services seemed redundant.
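As a rough sketch, a Samba share for this could look like the fragment below; the share name, path and group are hypothetical examples, not my actual configuration.

```ini
# /etc/samba/smb.conf (fragment) -- hypothetical share definition
[global]
   workgroup = WORKGROUP
   security = user

[files]
   path = /srv/files
   valid users = @users
   read only = no
   browseable = yes
```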

2. Provide backup to local data (multiple Windows and Linux machines) and for remote servers (web servers).
While this may seem similar to the previous requirement, it adds the need for services like time-based schedulers and remote access tools like rsync. This may affect how and when this local server should be started.

3. Provide a web development environment.
This can easily be summarized as L.A.M.P. development, which stands for “Linux, Apache, MySQL and PHP/Perl/Python”. I need a webserver (Apache), a database server (MySQL) and at least one server-side scripting language (PHP). Since I develop multiple websites, this gives me both a staging area for new ideas and a place to actively work on “mirrored” versions of live sites. The only concern here is that it is very difficult to duplicate exactly the software versions running on commercial web hosts. Issues like PHP backwards compatibility complicate this planning; more on that later.
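For the mirrored sites, name-based virtual hosts keep each staging copy separate. A sketch of an httpd.conf fragment follows; the hostname and paths are invented for illustration.

```apache
# Hypothetical staging vhost for one mirrored site
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName staging.example.lan
    DocumentRoot /var/www/staging/example
    ErrorLog logs/staging-example-error_log
</VirtualHost>
```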

4. Provide version control and change management.
While many will recognize this as using a version control system like CVS or SVN, the truth is that using such tools becomes increasingly difficult when you dual boot or develop on different machines. Having a dedicated server makes this functionality much more practical.
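The server-side workflow is simple to sketch with SVN; the repository and project paths below are hypothetical.

```shell
# On the server: create a repository and import an existing project
svnadmin create /srv/svn
svn import ~/mysite file:///srv/svn/mysite/trunk -m "initial import"

# From any client (desktop, laptop, either half of a dual boot):
svn checkout svn+ssh://server/srv/svn/mysite/trunk mysite
```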

Outside of some miscellaneous odds and ends, the above is exactly what I want. Note that I did not add things like a DVR or HTPC. Basically, all my server should be is Data, Development and Dependability; anything else would be a waste of time/resources for me.

Going forward, I can use the specific details of my requirements to pick the proper hardware to meet my needs.

Fedora Makes a Terrible Server

for me.

I am finally giving up on Fedora as a server. I find it just too unreliable. I have been using Fedora since FC1 (and been on Redhat since RH6.0), but for the most part I only used it as a desktop operating system.

When I was using FC3, I found it very helpful to mirror my website(s) on my local machine. This worked great, however with each new Fedora release I found more things breaking in my scripts and setup. For a while I put up with the struggle by doing things like re-compiling PHP4, but even that became quite tiring. I concluded that Fedora simply moved too fast for my server development needs. I cannot fault Fedora; rather, it did not meet my needs. However I still used it.

As recently as FC6, I found the need to access my computer remotely. I started tweaking my desktop for some security and making it more “remote-access” friendly. This did work for a while. All I was doing was using SSH and working on my Apache mirrors. My frustration began with the security. I made a long, hard attempt at securing the system with SELinux, however for multiple reasons I abandoned it.

Later when moving to Fedora 7 I was pleasantly surprised with the performance. The major problem arose when I updated my 2.6.21 kernel to 2.6.23. Some nasty bug was killing my SSH performance, basically leaving me dead in the water. I resigned myself to falling back on the older kernel, however soon enough I came to install Fedora 8, which also had the bug. For the most part I didn’t use Fedora 8 for 2 months since I was out of the country. However when I finally did, I got so frustrated that I ended up installing the old F7 kernel on F8. Of course I broke sound, PulseAudio, ACPI and the X-server, but I REALLY needed SSH to work. While this again is not Fedora’s fault, I find the following comment from the kernel mailing list pretty telling: “Please note that 2.6.23 kernel has a lot of bugs and we don’t recommend using it…” – Regardless, it WAS a kernel that Fedora shipped.

I personally do not think that a given Fedora release is very stable and/or reliable. Until I started expanding my usage I really did not mind, however now I am finding it more difficult to handle. While some people may suggest I upgrade less frequently, that’s fine and all, but it does NOT change the quality of a release. Furthermore, on my desktop I like to have a new release. … And some people may tell me that I should have been more pro-active and filed bug reports and other information. Well, I do agree with that, but unfortunately I just did not have the time.

So what’s the solution? Simple: don’t use Fedora as a server. For myself, I bought a brand new computer. I built a low powered file/web server and put CentOS 5.1 on it. In the next few weeks I will migrate all my development onto this machine. Hopefully I won’t need to upgrade it for a very long time.

Will I stop using Fedora? No. It will still be on my desktop. However I will have more realistic expectations from software considered totally “free”.

Sorry for the rant.

Should I Migrate to PHP5?

I noted some sites started pushing to PHP5 after the announcement last year that PHP4 would be EOL (end-of-life) in 2007. In truth I understand that there is no longer a compelling reason to remain with PHP4. The biggest obstacle was older software that did not support PHP5 (since version 5 is incompatible with version 4 in some respects). However there is no reason why most of that software cannot be updated, and if not, I am pretty sure that some alternate version-5-compatible software exists. I also read some claims that in simpler configurations PHP4 can be faster and use less memory than PHP5. I don’t know if that’s true or not. Finally the biggest problem: most web hosting providers are content with PHP4 meeting all their needs and have no reason to upgrade. I agree with this in some ways.

My biggest problems with PHP in general are its very poor track record for security, its flaws in design/configuration and finally its tendency to break software on updates. In some ways it’s just a nuisance to maintain!!! PHP4 will be supported for critical security issues until 2008-08-08 by the PHP developers. However being open source, there is no reason why someone else could not support it after that. Red Hat Enterprise Linux still supports a PHP 4.3.9 package. I was supporting a 4.3.11 package for Fedora, which I last updated in 2007 for Fedora 7. However I only installed it on my personal web server and a development box. I guess it would be really easy to finally abandon supporting old packages and just move to version 5, however I don’t know the effect it may have on my public server with a dozen or so websites.

For now I will set a deadline for myself to migrate to PHP5 by the August deadline. However for the time being, if people would find it useful I am considering repackaging the RHEL PHP 4.3.9 for Fedora 8. My 4.3.11 package is greatly out of date. On my development server, I’ll just go ahead and install the PHP5.2 included in Fedora 8. That will be my testing ground for my server updates coming soon.

Realistically in the long run I should just slowly stop using PHP altogether, given that PHP6 will be another mess very soon. Perhaps I will look into Python or J2EE options, not sure yet.

Daylight Savings Time Change RedHat 8.0

In the past I’ve never actually changed the time settings on my computer; usually when booting into Linux, syncing to an NTP (Network Time Protocol) server does the trick. However the local operating system (whether Linux or Windows) usually retains timezone settings in some way. I do not know if the RedHat/Fedora method is consistent with other Linux distributions. My personal desktop runs Fedora, Ubuntu, Windows 2000 and XP – all rather modern software with updates – so I wasn’t the least bit worried. However I seem to have forgotten my PVR (Personal Video Recorder) computer.

In 2004 I built a home theater type PC to play and record digital media (DivX, MP4, MPEG2, MP3, etc.) and set it up with my television and my amplifier. I had made the original draft of the idea in 2003, and even though RedHat 9.0 was available I had built my design on RedHat 8.0. So essentially I forgot about the DST change, until today, when I found out some TV shows were all 1 hour off.

I really did not do much investigation on how to fix this. My first thought was that I needed to update the NTP rpm and that would fix it. So I foolishly uninstalled the previous RPM, pulled a RHEL (Red Hat Enterprise Linux) source rpm and installed it. That’s when it occurred to me that it had nothing to do with NTP. I knew that NTP uses UTC (Coordinated Universal Time), but I wasn’t thinking. A quick look on the web told me that timezone data in RedHat is handled directly by glibc. How nice, one of the core parts of the operating system. I wasn’t in the mood to do that much updating. So I followed the instructions provided here. Basically all I needed to do was replace the timezone data files (tzdata) and restart the NTP daemon.
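Incidentally, a quick way to check whether a system's timezone database has the new 2007 rules (assuming GNU date is available; this is a sanity check, not the fix itself): March 12, 2007 falls after the new DST start date but before the old one.

```shell
# With updated tzdata this prints -0400 (EDT already in effect);
# a pre-2007 database would print -0500 (EST)
TZ=America/New_York date -d "2007-03-12 12:00" +%z
```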

Worked for me.
Good thing I’m not a server administrator.

Mar 17, 2007: Looks like Jason had the same issue on his Myth box. :-)

Virtualization and Emulation Choices in Linux

I have a decent computer with a large hard drive. My initial intent was to boot multiple different operating systems (as can be seen by the 6+ EXT3 partitions). However the latest technology hype is the need for virtualization or emulation. In effect, this would allow loading one operating system inside another without any reboot.

I use the term “hype” because the consumer demand (i.e. from non-admins, non-developers, people with real lives) is very low. My personal issue, shared by Linux users in general, is often the need to boot into Windows for a single website, or, for developers, to test a webpage. Additionally some users prefer to try a new Linux without having to lose their primary system. Not everyone has a convoluted partition scheme like I do.

So I did some hunting (well not much). Being the frugal fedora friendly fellow that I am, I wanted a general purpose low cost solution. The choices I have are VMware Server, Xen and QEMU.

VMware Server

VMware Server is professional, non-open-source software. It provides virtualization, so the client operating system running inside the server can see the resources of the host directly. This is good.

The good:

  • Full virtualization
  • Free download
  • Provides prepackaged RPM’s
  • Can run Windows inside Linux

The bad:

  • Registration key requires full address, email, phone and other information
  • Not open source, EULA, etc.
  • Known to have problems with system updates


Xen

Xen is a fully open source package for unix-based operating systems. It provides virtualization with support from the kernel.

The good:

  • Supported directly through Fedora
  • Kernel support drastically improves performance

The bad:

  • Requires separate kernel
  • ACPI does not work properly in Xen-enabled kernels, hence no CPU frequency scaling or suspend. (very bad)
  • Some drivers have problems running inside a Xen client operating system.
  • Cannot run Windows on all hardware


QEMU

QEMU is an open source CPU emulator. Emulators have to re-interpret every instruction from the client operating system, hence the significant speed loss. It supports various CPUs including x86, x86-64, ARM, PPC, etc. It has a kernel accelerator module which helps performance.

The good:

  • Supported directly through Fedora Extras
  • Can run Windows inside Linux
  • Can run 2 completely separate architectures together

The bad:

  • Very very slow
  • Kernel accelerator is proprietary with a restrictive license

Thoughts and Concerns

I have not yet tested VMware Server because I personally think that the information collected is not worth it. I would rather pay a nominal fee ($10-20) for a key. I have tested VMware Player (no registration required), which can load virtual machines created in the Server. I found it incredibly slow, with some non-trivial setup steps.

I do not plan on bothering with Xen due to the loss of ACPI functionality. I don’t think I want my computer hardware limited just so I can load another operating system with less memory, running slower, which is already limited due to the virtualization. Some newer CPUs from Intel and AMD reportedly support features that will allow Windows to run in Linux under Xen. Of course, that will require new hardware. Additionally Xen still has quite a few bugs. Perhaps once it is running smoothly and the hardware becomes standard I will switch to that solution.

So far, I am using QEMU, which seems to work well enough. I have tested Windows 98, NT4, 2000 and Fedora Core 3, all with adequate performance. I even once tested FC5-64bit inside of FC5-32bit (a 3 hour installation and 30 min boot is NOT worth the time) — (there is a Youtube video of XP loading on the PlayStation 3 using QEMU). Even though QEMU can be horribly slow, it serves its purpose. I have seen IE6 in an emulated Windows 2000 load pages faster than Firefox 1.5 native in FC6!!!
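For reference, a typical QEMU session of the kind I run looks roughly like the following; the image name, disk size and memory amount are arbitrary examples.

```shell
# Create a 4GB disk image, then boot from an installer CD with 256MB RAM
qemu-img create win2k.img 4G
qemu -hda win2k.img -cdrom win2k.iso -m 256 -boot d

# Afterwards, boot the installed system from disk
qemu -hda win2k.img -m 256
```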

I have a working solution for now, and I am keeping watch on the emulation and virtualization technologies hitting the market. Not having to reboot to test something would be wonderful. My ultimate configuration would be a 64-bit host Linux with a virtualized 32-bit Windows XP and a virtualized 32-bit Linux. … The time will come eventually.

PHP4 on Fedora Core 5 x86_64

I do not know how many people require PHP4 on Fedora Core 5. However since I find that I use it, I am providing PHP4 binary RPMs.

Since I made the files available, I have received some complaints: primarily a compile failure on the x86_64 architecture and a compile failure on PPC. I have no means to test PPC, however I have tested x86_64 successfully.

For the x86_64 architecture, I have tested the src.rpm against both the default FC5 rpm’s and the latest updated rpm’s (as of 08 July 2006); both work without problem.


New Hardware: 64 Bit

AMD released their first x86-based 64-bit processor over 3 years ago. While the original processors, the Opterons, were for servers, the desktop variants, the Athlon 64s, soon followed. Due to the open nature of the Linux kernel, distributions of Linux supporting the 64-bit architecture were readily (and freely) available before Windows.

In the past year, the Athlon 64 has made its way into laptops, and more users are finding 64-bit versions of their favorite Linux distributions satisfactory for their needs. Hence I finally upgraded to 64-bit hardware. My delay in upgrading was primarily due to the common problems faced by early adopters; Linux users are often hurt much worse than Windows users in this arena.

In any event, the original hardware platform for the Athlon 64 included features such as AGP 8X video, Serial ATA (SATA) and Gigabit Ethernet, all of which are no longer considered “new”. The advent of dual core processors, PCI Express and DDR2 memory has further lowered the cost of the older hardware.

I swapped out my previous motherboard (Asus A7V8X-X) for an Asus A8V, and the previous Athlon XP 2500+ CPU (1.8GHz) was replaced with an Athlon 64 3800+ CPU (2.4GHz). This decision allowed me to reuse the 1 gigabyte of dual channel DDR 400MHz RAM and an NVIDIA GeForce FX AGP video card.

Linux booted up perfectly fine on the first try – all drivers were properly detected and absolutely no manual configuration changes were required. However for Fedora Core I may need to revisit which services are running. (Windows, on the other hand, required multiple reboots with uninstalls and re-installs of driver packages.)

I plan to revisit most of my guides and instructions in the next few weeks to address any differences between 32 bit and 64 bit Linux.