I got an error during buildworld in s_fma.c that caused an error code 1 stop on an AMD64 build on a quad EM64T Xeon Cranford-based IBM x366.
The compile flags I used were:
CFLAGS= -O2 -fno-strict-aliasing -pipe -funroll-loops -ffast-math
/usr/src/lib/msun/src/s_fma.c:187: error: unrecognizable insn:
(insn 439 438 440 54 (parallel [
(set (reg:SI 168)
(fix:SI (reg:XF 171)))
(clobber (reg:CC 17 flags))
]) -1 (nil)
Change the flags to:
CFLAGS= -O2 -fno-strict-aliasing -pipe -funroll-loops -Wall
With -ffast-math dropped, buildworld runs to completion. -O2 sets most of the go-fast options anyway, and AMD64 builds turn on more by default. The x366 runs nice and fast.
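For reference, a minimal sketch of the working setup, assuming the flags are set in /etc/make.conf and you're building from the stock /usr/src tree:

# /etc/make.conf -- -ffast-math dropped, since it triggered the insn error in s_fma.c
CFLAGS= -O2 -fno-strict-aliasing -pipe -funroll-loops -Wall

# then rebuild
cd /usr/src && make buildworld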
Today I’m updating an IBM 366-8863 to be a new home server, because not having a quad 64-bit Xeon box with 24G of RAM and 6 x 72G SAS RAID 10 in your house would be like watching TV on a black and white CRT or something… and it was $350 on eBay, so who could resist? It will replace the old 5500 M20 and save 3U in the rack and probably a lot of power for a decent NAS box.
Unlike the 335, the 366 does not have a floppy drive. There’s actually room in the case right behind the IBM logo, next to the lightpath diagnostics and above the optical drive… maybe I should get out my Dremel and start looking for a 266MHz 64-bit PCI-X floppy controller.
The 366 is supported by the IBM Bootable Media Creator, which is a new thing for me. This tool gathers all of the most recent firmware updates for the servers you specify (or all supported ones) and creates a single bootable disk (the 335 is not supported). The tool found 23 updates for the 8863, though the versions are not all the same as you get doing the one-by-one download (there’s an option to select manually, but the integrated one-click approach is much easier).
All you do is download the creator tool for the OS of your choice, execute it, specify the systems you want to support, and let it gather the updates and build the disk; it will even burn the disk for you. Once the update disk is burned, you simply boot from it into a GUI (which supports a normal mouse and keyboard), and a few restarts later you have a fully patched machine.
The only thing left is to use the latest ServeRAID disk to update your ServeRAID configuration.
Nice job IBM! This sort of thing is why I like IBM machines. Plus they’re black. And they have the built-in KVM/console controller over IP (Remote Supervisor II).
The first step is updating the machine:
- BIOS to 1.15: download the flash image, which writes itself to a floppy; boot with that floppy and flash the BIOS. I had to go through a bunch of 1990s-era software disks until I found a few floppies that would format without errors. This also updates the LSI 1030 disk controller.
- Internal Diagnostics to 1.07: these are disk images (.img); diskcopy didn’t seem to do the right thing on my XP box, so I used diskwriter 0.9 to create the disks. You boot off the BIOS update disk, then select the update diagnostics option.
- Configure the disks with ServeRAID. I didn’t flash the BIOS on the controller, but I did reformat the disks and set them up as RAID 1.
- Update the System Management Processor to 1.06. This is a self-booting floppy.
- Update the Broadcom NetXtreme NICs to 209h. This is a self-booting floppy that creates a RAM disk then runs the update. The command for the 335 is
This gets the core hardware up to date. You might also want to flash the firmware in the disks, though I did not as my box is loaded with unsupported disks. Plus 36GB SCSI disks aren’t exactly going through a lot of teething pains these days.
Then I installed pfSense from the LiveCD (verify the hash). This is pretty effortless. The only important bit is setting up the NICs: on the 335 under FreeBSD, bge0 is the lower port and bge1 is the upper port.
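If you want to confirm which physical port maps to which device name on your own box, standard FreeBSD tools will tell you; a quick sketch, assuming the onboard Broadcoms show up as bge0/bge1:

# list the Broadcom NICs the kernel found
pciconf -lv | grep -B3 -i broadcom
# plug a cable into one port and see which interface reports an active status
ifconfig bge0
ifconfig bge1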
At a later date I will install a 73P9265 Remote Supervisor II adapter, but the cable I have (73P9312) is for newer boxes. The 335 needs the 02R1661; oddly, it is cheaper to buy the cable with a card than the cable alone. The adapter will probably need its firmware flashed, but it is a nice tool with remote KVM and a lot of other slick features.
Getting timely search engine coverage of a site means people can find things soon after you change or post them.
Most search engines will crawl linked pages every few days or so, following external links or manual URL submissions, but they won’t find unlinked pages or anything behind broken links, and the ranking and efficiency of the crawl are likely to be suboptimal compared to a site that is indexed for easy searching using a sitemap.
There are three basic steps to having a page optimally indexed:
- Generating a Sitemap
- Creating an appropriate robots.txt file
- Informing search engines of the site’s existence
It seems like the world has settled on sitemaps for making search engines’ lives easier. There is no indication that a sitemap actually improves rank or search rate, but it seems likely that it does, or that it will soon. The format was created by Google and is supported by Google, Yahoo, Ask, and IBM, at least. The reference is at sitemaps.org.
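For reference, a generated sitemap is just XML in the sitemaps.org format; a single-entry sketch (the URL and date are made up):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.your-site.com/</loc>
    <lastmod>2007-01-01</lastmod>
  </url>
</urlset>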
Google has created a Python script to generate a sitemap through a number of methods: walking the HTML path, walking the directory structure, parsing Apache-standard access logs, parsing external files, or direct entry. It seems to me that walking the server-side directory structure is the easiest, most accurate method. The script itself is on SourceForge. The directions are good, but if you’re only using the directory structure, the config.xml file can be edited down to something like:
<?xml version="1.0" encoding="UTF-8"?>
<site base_url="http://www.your-site.com/"
      store_into="/www/data-dist/your_site_directory/sitemap.xml.gz"
      verbose="1">
  <url href="http://www.your-site.com/" />
  <directory path="/www/data-dist/your_site_directory"
             url="http://www.your-site.com/"
             default_file="index.html" />
</site>
Note that this will index every file on the site, which can make for a large sitemap. If you use your site for media files or file transfer, you might not want to index every part of it; in that case you can use filters to block the indexing of parts of the site or of certain file types. If you only want to index web files you might insert the following:
<filter action="pass" type="wildcard" pattern="*.htm" />
<filter action="pass" type="wildcard" pattern="*.html" />
<filter action="pass" type="wildcard" pattern="*.php" />
<filter action="drop" type="wildcard" pattern="*" />
Running the script with
python sitemap_gen.py --config=config.xml
will generate the sitemap.xml.gz file and put it in the right place. If the uncompressed file size is over 10MB, you’ll need to pare down the files listed. This can happen if your filters are more inclusive than the ones I’ve given, particularly if you have large photo or media directories and index all the media and thumbnail files.
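If big media directories are the culprit, you can drop them with the same filter mechanism; the patterns below are just guesses at what you might want to exclude, and (if I remember the script’s behavior right) the first matching filter wins, so drops should come before any catch-all pass:

<filter action="drop" type="wildcard" pattern="*/thumbnails/*" />
<filter action="drop" type="wildcard" pattern="*.jpg" />
<filter action="drop" type="wildcard" pattern="*.png" />
<filter action="drop" type="wildcard" pattern="*.mp3" />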
The sitemap will tend to get out of date. If you want to update it regularly, there are a few options. One is to use a WordPress sitemap generator (if that’s what you’re running and indexing), which does the right thing and builds the sitemap from data available to WordPress but not to the file system (a good thing). Another is to add a cron entry to regenerate the sitemap on a schedule; for example
3 3 * * * root python /path_to/sitemap_gen.py --config=/path_to/config.xml
will update the sitemap daily.
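To keep an eye on the 10MB uncompressed limit mentioned above, a quick check after the nightly run (the path matches the store_into setting in the config.xml above):

# uncompressed size of the generated sitemap; should stay under 10,485,760 bytes
zcat /www/data-dist/your_site_directory/sitemap.xml.gz | wc -c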
The robots.txt file can be used to exclude certain search engines, for example MSN if you don’t like Microsoft for some reason and are willing to sacrifice traffic to make a point; it also points search engines to your sitemap file. There’s kind of a cool tool here that generates a robots.txt file for you, but a simple one might look like:
User-agent: MSNBot   # agent I don't like for some reason
Disallow: /          # path it isn't allowed to traverse

User-agent: *        # for everything else
Disallow:            # nothing is disallowed...
Disallow: /cgi-bin/  # ...except this directory, which nobody should index

Sitemap: http://www.my_site.com/sitemap.xml.gz   # where my sitemap is
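It’s worth checking that the file is actually served from the site root, since that’s the only place crawlers look for it; on a FreeBSD box, something like:

# robots.txt has to live at the top of the site, not in a subdirectory
fetch -o - http://www.my_site.com/robots.txt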
Telling the world
Search engines are supposed to do the work; that’s their job. They should eventually find your robots.txt file, read the sitemap, and parse your site without any further assistance. But to expedite the process and possibly enhance search results, there are submission tools at Yahoo, Ask, and particularly Google that generally allow you to add meta information.
Ask.com allows you to submit your sitemap via URL (and that seems to be all they do).
Yahoo has some site submission tools and supports site authentication, which means putting a random string in a file they can find to prove you have write-access to the server. Their tools are at
with submissions at
where you can submit sites and feeds. I usually use file authentication, which means creating a file named with one random string (y_key_random_string.html) and containing another random string as its only contents. They authenticate within 24 hours.
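Creating the verification file is a one-liner on the server; the filename and contents below are placeholders for the random strings Yahoo actually gives you, and the path assumes the same document root as the sitemap config above:

echo "random-string-from-yahoo" > /www/data-dist/your_site_directory/y_key_random_string.html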
It isn’t entirely clear whether submitting a feed also adds the site, but it looks like it does. If you don’t have a feed, you may not need to authenticate the site for submission.
Google has a lot of webmaster tools at
is all you need to get the verification file up. You submit the sitemap’s URL directly.
Google also offers blog tools at