This is a generic description of how to set up a Tiny Core based web server. It has been updated a number of times over the years as I've chopped and changed between various thin clients and upgraded the software.
Previous incarnations of the web server (which have all performed flawlessly) have been:
Currently I've settled on a Wyse C90LEW for the hardware, partly because it is fitted with a wireless network card. Tiny Core is at 7.2 at the time of writing.
My aim was to set up a personal local web server on cheap low-powered hardware to be used for local testing of web sites prior to uploading any changes to the actual hosts on the internet. Obviously the Web Server could be used as a more public server, either on a local intranet or the internet, but, in those cases, you will need to take extra steps to ensure that your server is adequately secured against attack. My small home network is a benign environment with no external access, and only my family can access the LAN. So I've gone for simplicity in how certain aspects are configured. YOU HAVE BEEN WARNED.
I use this to test the web site before uploading any changes I've made to the live web site so it is important that my test environment is close to that of the hosting company. I don't want to end up trying to debug web site problems on the live system - something I've (almost) managed to avoid to date.
The last time I went through this exercise I thought I'd see if I should change the Web Server software I was using.
Earlier incarnations of my test setup had been running Apache 2.2.21. Perhaps I should install Apache 2.4.9 which was the most up-to-date build offered in the Tiny Core repository? After I'd installed and configured it I found that my web site was significantly broken. This made me look further into what was going on...
The reason for the problem turned out to be that, with version 2.4, Apache had changed the syntax of the expr evaluation used in Server Side Includes (SSI). I had three ways out of this:
Before I went any further I decided to check exactly what my hosting companies were using these days. browserspy provided a quick and easy way of doing this. At that time for this site it turned out to be nginx/1.4.6 (Ubuntu), but now it is reported as Apache/2.2.3 (CloudLinux). The other site reported nginx but now reports as unknown. So at that time both were using nginx (engine x), though with no indication of the version number in the second case. A quick check of the Tiny Core repository showed that nginx v1.7.2 was available, so maybe a switch from Apache to nginx would be appropriate?
It only took a few minutes to download, install and configure nginx. Once again it turned out I had an issue - but not nearly as bad as before. This time I tracked it down to the SSI <!--#if expression. According to the nginx manual 'Only one level of nesting is currently supported'. On checking my code I found the section that generated the header and left-hand menu was wrapped in an 'if' clause so that it could be omitted to produce a more printer friendly page. Removing that 'if' clause (so dropping that feature) solved the problem locally but didn't answer the question as to why it worked on the live sites.
At the end of all of this I decided to stick with my current use of Apache 2.2.21 until such time as I found I had a problem with my code on the live web sites.
When I started on this exercise in the dim and distant past a basic assumption I made was that we'd have limited hardware resources on hand. That's not so much the case these days as today's cheap thin clients are of a significantly higher spec, but there's no harm in remembering that some thin clients come with a fixed amount of RAM - typically 64MB or 128MB - so we may need to take an approach that isn't profligate with memory.
My previous investigations have demonstrated that virtually all thin client hardware is suitable for this task. My first web server ran on a 300MHz Geode GX1 based system. For a while I did have a trial system running on a Neoware CA5, which has a 200MHz SiS processor - once again perfectly adequate for the task.
The system will be based on flash memory (Compact Flash, DOM, pen drive, whatever) and not a hard drive. This primarily means we'll need to make sure that any applications aren't writing unnecessary log files to the flash memory - or to the RAM disk come to that, as we may be short of RAM.
In terms of storage I find that Tiny Core and the various Apps take up 80MB of the flash memory. My website (as of March 2017) has about 870 pages (550 in the 'thin' section) and takes up a further 340MB. So we're looking at a minimum of, say, 512MB for the flash memory.
My main desktop machine runs Windows (now Windows 10). The local web server we're building here sits on my local LAN and runs headless (no screen or keyboard). I need access to the files from my Windows desktop so the drive they sit on needs to be visible on the LAN. Running samba on the web server provides this visibility. I occasionally need to log on to the server in order to run various tools and carry out odd bits of maintenance. For example I have a perl script that will report any differences between the live server and my mirror. That way I'm sure all my latest changes have been uploaded. In order to be able to connect to it I run dropbear - an SSH server.
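By way of illustration, the samba side boils down to a few lines of smb.conf. This is a minimal sketch only - the share name, path and user here are assumptions for illustration, not my actual settings:

```
[global]
   workgroup = WORKGROUP
   security = user

[www]
   comment = Web site mirror
   path = /home/tc/www
   read only = no
   valid users = tc
```

With something like that in place the files appear from Windows as \\servername\www.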
Being around at the birth of microcomputers, and having spent my early years fitting programs into 256 byte and 512 byte EPROMs and working with maybe 8K of RAM, I hate bloat. As a result my way of developing/maintaining my web pages is to edit them by hand, as that gives me full control of the file content. (Have you ever looked at the amount of extra baggage that most web site building software produces?) For editing the pages I use a very nice syntax-highlighting editor (EditPlus) on my Windows desktop. I also use Total Validator to check the HTML syntax.
When things are ready I use FileZilla on my Windows desktop to copy the updated website files from the shared drive to the live website.
To log on to the web server from my desktop I use PuTTY, an SSH client.
The hosting services I use all used to use Apache as the web server. At one time they switched to nginx (see above) and at least one is back to Apache. However, as noted above, I've found my sites to be 100% compatible with Apache 2.2.21 so I'm sticking with that.
My development approach - shared drive, remote logon, perl scripts - effectively determines what additional software we need to put on the server. The short list is: apache, samba, dropbear and perl.
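On the Tiny Core side that short list translates into a one-line install with tce-load. This is a sketch only - it has to be run on the Tiny Core box itself, and the exact extension names vary between repository versions, so check the Apps browser first:

```shell
# Fetch (-w) each extension from the repository and install (-i) it.
# The extension names here are assumptions - check the current repo.
tce-load -wi apache2 samba dropbear perl
```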
Left to its own devices Tiny Core is RAM based - the "core" utilities are packed into the core.gz file which is then unpacked into RAM when the system boots and forms our primary filing system. After that Tiny Core searches any attached storage for the directory tce. This directory holds optional extensions to the operating system such as the graphical desktop and anything else we care to add. These days the default approach is that these extensions, unlike the core.gz file, remain on the storage medium. The extension files are actually compressed filing systems, and Tiny Core mounts these individually under /tmp/tcloop/[application_name] and then merges them with the main filing system.
For example on a system I have here if we look at the /tmp/tcloop/ directory we find...
root@box:~# ls /tmp/tcloop
apache2    apr-util    expat2    ncurses           openssl-0.9.8    readline
apr        dropbear    kmaps     ncurses-common    pcre
If we look into /tmp/tcloop/dropbear and see what's there in the way of files...
tc@snapper:/$ find /tmp/tcloop/dropbear -type f -print
/tmp/tcloop/dropbear/etc/dropbear/banner
/tmp/tcloop/dropbear/usr/local/etc/init.d/dropbear
/tmp/tcloop/dropbear/usr/sbin/dropbearmulti
tc@snapper:/$
We see there are three files, one of which is usr/sbin/dropbearmulti. So if we look in /usr/sbin...
tc@snapper:/$ ls /usr/sbin/
cache-clear           chroot           fbset     mklost+found    rebuildfstab    udhcpd
cd_dvd_symlinks.sh    crond            fstype    nbd-client      taskset         visudo
chpasswd              dropbearmulti    inetd     rdate           tftpd
tc@snapper:/$
...we find it there as well. The operating system has merged the application's directory tree with that of the main RAM-based system, i.e. everything from /tmp/tcloop/[application name]/ has been merged with the system /.
These mounted extension files are actually read-only, which could give problems when we edit an application's configuration file (eg /usr/local/apache2/conf/httpd.conf). The system copes with this by saving the changed file into the RAM-based file system and, any time this file is accessed, the file system returns the latest version from RAM rather than the original from the extension. However when we power off, that changed file - and all the work we put into getting the configuration right - will be lost unless we save it somewhere first. We want any added files to persist across reboots.
This is where Tiny Core's /opt/.filetool.lst file comes into play. This contains a list of all (potentially changed) RAM-based files or directories that we want to persist across reboots (which logically has to include itself). Tiny Core handles this with a script (filetool.sh). When run with the -b option it packages up all these files into a single file (mydata.tgz) that it saves in the tce directory. When run with the -r option it does the reverse and restores the saved files. The restore happens automatically as the system boots. Backing up requires you either to shut down the system via the GUI, or to run it manually at any time. The way I use this system, I only need to create/update the mydata.tgz file if I make any changes to the configuration files, so I use the manual approach - edit the files and then type: filetool.sh -b.
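As a concrete illustration (the path is an example of the sort of entry involved, not my complete list), persisting an edited Apache config looks like this - note that .filetool.lst entries are written without a leading slash:

```shell
# Add the changed config file to the persistence list
echo "usr/local/apache2/conf/httpd.conf" >> /opt/.filetool.lst
# Package everything listed into tce/mydata.tgz
filetool.sh -b
# filetool.sh -r does the reverse (and runs automatically at boot)
```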
By default the filetool.sh script includes the /home directory when backing things up. We don't really want this as this will include the large amounts of data of our website mirrors. Luckily Tiny Core supports a couple of command line parameters that lets us tell it that the /home directory and the /opt directory are to be on storage media rather than being RAM based. The boot parameters are: opt=sda1 and home=sda1. Obviously the storage device (sda1 here) has to match our particular setup.
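How those parameters get passed depends on your boot loader. With extlinux, for example, the kernel append line might end up looking something like this (the label and file locations are assumptions matching the sda1 example above):

```
LABEL core
  KERNEL /boot/vmlinuz
  APPEND initrd=/boot/core.gz quiet opt=sda1 home=sda1
```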
So, when the system starts up, we lose memory to the kernel, the expanded core.gz filing system, any files included in mydata.tgz and then to the applications that we run. If memory is really tight then the only options open are to remaster the core.gz file to strip out anything that isn't used (eg drivers for hardware you don't have), or to go for a more conventional disk-based architecture. With my current setup 512MB of RAM seems overkill, as checking the system I find:
tc@snapper:~$ cat /proc/meminfo
MemTotal:       496148 kB
MemFree:        414224 kB
.....
For now we'll accept the default installation mode of Tiny Core, which is called frugal. This operates as described above, which does mean that we sacrifice some memory to the RAM disk.
Our aim is to install Tiny Core along with apache, samba, dropbear and perl onto some (maybe) memory-limited hardware, but we won't be trying to achieve the lowest possible memory footprint. (To do that we'd need to take the extra step of a full disk install rather than the frugal approach.)
We'll also assume that the 'disk' is flash memory of some kind and so we need to make sure that we don't have things like log files being continually written to disk.
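For Apache that means keeping its logs off the disk. A minimal httpd.conf sketch using the standard Apache 2.2 ErrorLog/CustomLog directives - discarding the logs entirely is my simplification here, and you may prefer to point them at a RAM-backed path such as /tmp instead:

```
# Keep logs off the flash disk: discard them entirely...
ErrorLog /dev/null
CustomLog /dev/null common
# ...or send them to the RAM-based filing system instead, eg:
# ErrorLog /tmp/apache_error_log
```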
Anything written to /opt and /home directory trees will be written to the flash disk and so are persistent. Any files created elsewhere are volatile and need to be added to the file /opt/.filetool.lst if we want them to persist across reboots.
Any comments? email me. Last update March 2017