Pitfalls of Installing Everything

The purpose of this article is to explain the potential problems with installing every package included in a Linux distribution. For the most part this is a bad practice, and it does not help anyone become more proficient in Linux, whether a seasoned professional or a newcomer (i.e. “newbie”). I hope this helps educate people on the subject.

There are some abundance arguments that are commonly used and overstated, specifically that disk space, memory and bandwidth are all “cheap”. None of these is always true; in many developing countries, all three are anything but cheap.

There are some minimal advantages to installing everything. There will not be any dependency issues among the packages included in the distribution, and every application is immediately available to try and test. Other advantages are possible, but these are the most relevant.

The problems I see are as follows:

  • Most of the software will never be used. Many of these applications are aimed at experienced users, who know how to install them even when they are not part of the default install. Examples: most newbies do not use ‘vi’ or ‘emacs’, and most devel packages are only needed for compiling software.
  • Every piece of software, whether used or not, must be maintained if it can be accessed by multiple users, remotely or locally. A typical problem is a security update or a bug that you would not normally encounter with the default settings.
  • Updates take longer and consume more resources. Every time a system-wide update is done (e.g. yum update), it has to download updates for every single package on the system. Even if you do not pay for your bandwidth, it still costs the provider something and could be serving someone else who needs it more.
  • (For newcomers) You really do not learn anything. It is worth understanding how software dependencies work and learning how to install software when you actually need it. Needs change and are not the same for everyone.
  • There is more immediate drain on local resources. Most distributions package enough software to run as a server, a desktop, or both. It does not make sense to run multiple server applications on a desktop machine. Furthermore, distributions package some software with the expectation that not all of it will run at the same time, i.e. the person installing should know what they are doing. Many services and daemons also perform redundant tasks; multiple FTP servers, for example, are not typically required or recommended (see the short sketch after this list).
  • Although rare, some distributions may include conflicting versions of a package with the intention that the user selects only one. This is typical of a distribution providing a newer, less established version alongside a widely used one. An example I have seen in the past is (SuSE?) shipping both Apache 1.x and Apache 2.x.
  • There are hardware-specific options that should not be on every machine and that require extra steps to update. In the case of Fedora Core, some kernel packages (which only a small population require) are not updated at the same frequency as more common packages. This has led to some confusion and difficulty.
  • An additional note for Fedora Core users: Fedora Core has always been a “bleeding edge” distribution, which basically means it typically ships the absolute latest (and sometimes not adequately tested) software versions. There will also always be some included software that may not make it into the next version or update.
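For anyone curious what trimming actually looks like in practice, here is a minimal sketch of checking what is installed and removing a service you do not need; the package name vsftpd is only an example, so substitute whatever your system actually has.

[root]# rpm -qa | wc -l          # count every installed package
[root]# rpm -qa | grep -i ftp    # look for FTP servers you may never use
[root]# yum remove vsftpd        # remove one of them (vsftpd is just an example)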

Given these points, it is still entirely up to the end user what software to install and use. However, it is very unlikely that anyone could use every single included application. It is better to choose less rather than more and to install as needed. Above all, it is best to understand why something is needed rather than assume that more unknown software is somehow beneficial.

PHP4 RPMs for Fedora Core 4

EDIT (Dec 19, 2005):
I have written a formal guide on PHP4 on FC4.

As a followup to my previous post about PHP4 on FC4, I decided to abandon PHP5 altogether. I spent some time trying to get the PHP4 src.rpm from FC3 to compile correctly on FC4. As it turned out, neither the GCC4 nor the GCC3.2 included in FC4 would compile everything properly, so I decided to try GCC3.4 (which I installed from source long ago when first tinkering with FC4).

Anyway, it worked. I have 15 RPM files, which I do not think I will upload unless someone really wants them. I only did this to see whether there is an advantage to using my own compiled RPMs instead of the FC3 RPMs. Right now I do not think there is.

Recommended Method:
In the end, if you force-uninstall all of the PHP5 RPMs in FC4 and then install the FC3 PHP4 RPMs, it does work.

[root]# rpm -e php-imap php-ldap php-mysql php-pear php

Install any PHP4 RPM you want from the Fedora Core 3 updates. Make sure to install the php and php-pear RPMs together.
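As a rough sketch of that install step (the exact file names depend on which FC3 update versions you download, so treat these as placeholders):

[root]# rpm -ivh php-4.3.*.rpm php-pear-4.3.*.rpm    # install php and php-pear in a single transaction

Any additional modules (php-mysql, php-imap, etc.) can go on the same command line so rpm resolves them together.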

Quake 4 for Linux

Linux gaming is, for the most part, non-existent. There are a handful of games, but for all the bells and whistles the gaming industry puts forth, not much of it makes it to Linux. id Software (whose games Activision publishes) is a good exception; I have followed their engines since Quake II in the late 90s. With the release of Quake III, a fully native Linux version became available: a full-featured, commercially released FPS (first person shooter) that really showed off the potential of Linux gaming. id Software designs its engines so that ports to other operating systems (Linux, MacOS, Nintendo, etc.) are much easier, which is in their best interest money-wise: licensing! Many games, Return to Castle Wolfenstein, Elite Force and Doom 3 to name a few, have been released this way.

Quake 4 was released only two days ago (Oct 18) and the Linux installer is already available. The best thing about this deployment method is that if you buy the Windows version, the Linux installer is available for download and uses the data files from your Quake 4 Windows CDs. Basically two for the price of one. id Software and Activision really get a thumbs up for open-minded design.

Now if only the rest of the gaming industry would follow suit. But with an incredibly small Linux gaming market, relatively high development costs and minimal returns, I seriously doubt Linux gaming will improve anywhere in the near future.

Canon S500 in Fedora Core

I had written a really simple camera mini-guide on how I use my Canon S500 digital camera in Fedora Core 3. The other night I decided to update it and make sure everything still works in Fedora Core 4, and it did.

Guides like these, to me, are almost unnecessary. I would tell someone: just make sure you have Gnome, gPhoto and USB set up, and your camera will “automagically” work. But that advice, on its own, does not seem very tangible. In the Windows world, people are reassured by the fact that their hardware or peripheral comes with an installation CD. That seldom happens in the Linux world. People who are unsure whether hardware works correctly with Linux need some sort of proof that there is Linux support, and that proof almost always comes from the community rather than the manufacturer. Personally, I research Linux compatibility for all of my hardware before purchasing. Guides like the one above, I hope, make someone feel more certain about their purchase.
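For what it is worth, the same thing can be done from a terminal without Gnome at all; this is only a minimal sketch, and it assumes the gphoto2 command-line tool is installed:

$ gphoto2 --auto-detect      # confirm the camera is visible on the USB bus
$ gphoto2 --get-all-files    # download every photo into the current directory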

MPlayer from CVS in FC4

I’ve been following the improvements made by the MPlayer development team through their mailing list. When Fedora Core 4 came out there were some (I think many) issues with its choice of GCC4 (the GNU compiler); many applications, MPlayer among them, did not yet build with it. There were patches from other groups, but the MPlayer team did not officially support them. As usual I compiled from source, but using GCC3.2, and I never had any problems.

A week ago I pulled a CVS snapshot through their website and decided to compile and test a development version to see what is new. I installed it alongside my current MPlayer v1.0pre7 (I used --prefix=/opt/mplayercvs/ during the configure step). It all seemed to work perfectly with GCC4. Some things I noticed were better support for some media formats and full support for many output plugins that did not work with the GCC3.2 workaround. They have also finally ported the GUI to GTK2; the forever-old (and still very poor) GUI used GTK1.2, which has now been deprecated. Although I do not see any new features in the GUI, it is nice to finally have a consistent GTK/Gnome interface – fonts, themes and all.
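For anyone wanting to try the same thing, the build is the usual configure/make routine. A minimal sketch; the prefix is the only option I actually used, and any extra configure flags are up to you:

$ ./configure --prefix=/opt/mplayercvs/    # keep the CVS build separate from v1.0pre7
$ make
[root]# make install                       # installs under /opt/mplayercvs/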

Basically, good progress, but not recommended for average users. I am looking forward to the new release, even though lately Xine has seemed like the better alternative.

Network Storage Device

As many have emphasized, Linux is exceptional in the server market. Many consumers, however, do not realize that servers can be very small scale, down to embedded devices. I have been working on a Linksys Network Storage Device for my home LAN. The device is basically a Linux server running on a 133MHz ARM processor with 32MB of RAM, but it comes with a 100Mbit network port and two individual USB 2.0 ports. You can plug hard drives or media devices into the USB ports, and once you add it to your network you have a shared hard drive. It is a low-power device, a very good alternative to building a full server to do the same work, and much more affordable.

The folks at nslu2-linux.org have put together many extra software packages and guides on how to do much more than just file serving. Some examples are a web server, software revision control (SVN), a TV media recorder, an iTunes music server, a Windows domain controller, and the list goes on. My intended functions right now are a LAN DNS server, a basic web server, revision control and NTP (network time server) on top of file serving. My idea is to consolidate some functions from my computer and other devices on my network into one primary (low-power) machine.
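For the record, extra software on these firmwares goes on through the ipkg package manager; the sketch below is an assumption on my part, and the package names (dnsmasq for LAN DNS, ntpclient for time) depend on which feed you use:

[root]# ipkg update              # refresh the package feeds
[root]# ipkg install dnsmasq     # small DNS server for the LAN (package name assumed)
[root]# ipkg install ntpclient   # keep the clock in sync (package name assumed)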

It’s not up and running yet. I found information on overclocking the CPU to 266MHz and soldering in an RS-232 serial port so I can see boot messages and get a command prompt. Lots to play with – more information when it’s all hooked up.

Fedora Help Forums

Quite possibly the most useful Linux and Fedora forums on the internet are LinuxQuestions.org and FedoraForum. I prefer the first, since it has been around much longer and has a great deal more content on Linux in general (not just Fedora). However, the second has been dubbed the “official support forum” for Fedora, so I expect it has a good number of helpful people as well. Of course, my main gripe with forums in general is having to continuously check threads for updates, make sure I ask the “right questions” and deal with arrogant admins. Oh well.

Anyway, as my first foray into the world of aggregation and RSS, I have set up a simple Fedora Help Page which indexes new posts from the forums above. I set it up mostly as an experiment, but I found myself using it. I might add other sites, but for now I hope someone finds it useful.

Virtual Private Servers

One of Linux’s many strengths is its suitability for web hosting. The Apache web server on Linux, together with various open source applications, can provide a cheap solution for most hosting needs.

The most commonly used hosting option is virtual hosting through Apache. With a simple setup, hundreds of unique websites can be run on a single server machine. For about $100 (US) a year you can get a good set of features from most providers; however, most providers limit your options (minimal email, limited databases, no Java application server, etc.).
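For reference, Apache’s name-based virtual hosting really is only a few lines per site; a minimal sketch of httpd.conf, with the domain names and paths as placeholders:

# one IP and port shared by every site
NameVirtualHost *:80

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/example-com
</VirtualHost>

<VirtualHost *:80>
    ServerName www.example.org
    DocumentRoot /var/www/example-org
</VirtualHost>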

Until recently, the next best solution was dedicated hosting. This requires renting or owning a specific server machine and managing it yourself. Multiple virtual websites can be hosted, and depending on the hardware it can run other services as well. However, the cost is significantly higher: most providers charge at least $50 per month for basic hardware and features, and it is fairly typical to see prices of $100-200 (plus fees) per month for competitive features.

The technology has been around for quite some time, but Virtual Private Servers (VPS) have recently become more popular. A VPS setup runs multiple isolated instances of Linux and Apache on the same machine. Each VPS gets a share of CPU, disk space and so on, and each can host whatever it wants without the need to maintain server hardware. When one VPS needs to be rebooted, the whole machine does not reboot and bring down the other VPSes on it; it is more of a software reset than a hardware reboot. Software such as Virtuozzo is becoming a popular product with many providers, and you can find VPS hosting plans for $20-40 per month.
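To make the “software reset” point concrete: on a Virtuozzo/OpenVZ-style host each VPS is just a numbered container, and the provider can bounce one without touching the rest. A minimal sketch, with a made-up container ID:

[root]# vzctl restart 101    # restarts only VPS 101; every other container keeps running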

Once I hear some good, reliable reviews of VPS services, I plan to migrate to that option. I would welcome any comments on how well these services work.