Sunday, November 28, 2010

It's alive

Well, the toaster-oven computer came about a lot faster than I figured.  Tracy worked all day yesterday getting the case prepared: cutting a hole in the back for the motherboard connectors, mounting the power supply on the back, and attaching the LEDs to the front cover.  The red hard-drive light fit nicely in the old 'ON' light spot, and we drilled a new hole for the green power LED.  The rough edges on the back were sealed with weather stripping, and the motherboard was mounted to the oven grate with a series of rubber feet attached to the bottom of the motherboard.

After everything was put together, we flipped on the power, and it came alive.  We moved it into the office so it would have a direct connection to the internet during the install.  We tried installing a few times, but each time the install crashed with I/O errors.  Eventually we just assumed it was the CD drive, which is probably about 10 years old.  The install got far enough to allow us to boot, and we did an update from there.  It was getting pretty hot inside the case (it's an oven, it's supposed to retain heat), so we put a northbridge fan over the southbridge (the northbridge already had a heat sink, so it would have been hard to install the fan on it).  That helped get the heat moving out of the case.

We removed the CD drive and put everything in place, and rebooted again, but this time the HD wasn't recognized.  We tried another HD, and it wasn't recognized either.  Just to make sure it wasn't the IDE controllers or the southbridge, we hooked up the CD drive to the IDE, and that worked.  It seemed like we had fried both hard drives, and now nothing worked.  I tested the hard drives with my external IDE-USB adapter, and they weren't recognized there either.  It was close to midnight, so we decided to wrap it up.

The next morning, I got up and looked at the hard drives to see if there were any instructions on the label that we hadn't followed.  The jumper was set to master, so I had figured that was all that was needed to get it recognized.  Looking at the label on the hard drive, I noticed that no jumper was the preferred setting for a single-drive configuration, so I tried again, hooking it up to my IDE/USB connector.  This time it worked.  So we wasted no time getting it back inside our little monster, and finally we got it to boot up.

The next problem was the wireless network dongle.  We tried using one we had from our Samsung Blu-ray player, but it wasn't recognized.  I did some research and found that it had a Ralink 3572 chip, and that there was a driver from Ralink; it just had to be compiled and installed as a kernel module.  Unfortunately, the compile failed, as there were no kernel headers to compile against.  Apparently the update had failed while downloading the kernel headers, so the machine was running linux 2.6.32-23 but only had the headers for linux 2.6.32-22.  And without a network, we couldn't fix the kernel source very easily.  I tried tarring the headers up from pearl and transferring them via a thumb drive, but it's still not working.  It might simply be easier to get a new wireless dongle than to fight with a half-updated system to get this driver compiled.
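Since the problem is a mismatch between the running kernel and the installed headers, it's easy to check directly.  A small sketch (on Debian/Ubuntu the module build system looks for headers through the /lib/modules/&lt;version&gt;/build symlink):

```shell
#!/bin/sh
# An out-of-tree module (like the Ralink rt3572 driver) has to be built
# against headers that match the *running* kernel, so check for a mismatch.
running="$(uname -r)"
echo "running kernel: $running"
if [ -d "/lib/modules/$running/build" ]; then
    echo "matching headers: present"
else
    echo "matching headers: MISSING"
    echo "fix (once the network is back): sudo apt-get install linux-headers-$running"
fi
```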

As promised, here are some pictures:


Friday, November 26, 2010

Building a kitchen computer

Tracy had the idea to build a computer using an old toaster oven as a case.  So we dug through my pile of junk to see what parts we had, and what parts were missing.  We found an old motherboard that I had used in a previous incarnation of my main server, 'servy'.  I suspected that the CPU had died, because the fan wasn't well attached and fell off at some point.  It still had 512M of DDR RAM in it, so the only thing we needed to bring it back to life was a CPU.  The problem was that it was about 6 years old, which is ancient history in the computer world, and no one sold the type of CPU it needed.

The motherboard is a PC-Chips M863G, which has a Socket A (type 462) slot for the CPU, which means an old AMD Athlon/Duron chip.  I found several options on eBay for buying cheap used ones that were pulled from recycled computers.  Then after some more research, I found RE-PC, a local PC recycler with a store in Seattle.  Since our tradition for several years has been to go up to Seattle for Black Friday, we figured we'd stop in and have a look.

The RE-PC store was a computer hobbyist's paradise, filled with tubs of spare parts and old cases.  I probably could have spent several hours in there just browsing through all the bins.  They also have a computer museum, with all of my childhood computers.  One other thing we needed for the toaster-oven computer was a power supply, as the case wouldn't fit the normal-size power supply that I had on hand.  Tracy got to dig through a bin of power supplies looking for just the right fit, and I went to ask about CPUs.  They had a display case for the CPUs, and a box of about a dozen Athlons of various speeds.  I picked the cheapest one at $10, and Tracy found a slim Dell power supply in the as-is pile for $3.  So the total spent so far is under 20 bucks.  Everything else came out of my spare parts pile.

So we put the parts on a small table, plugged all the cables in, and turned on the power.  Nothing.  That's when I noticed that I had forgotten to plug in the power cable.  Duh.  With the power cable plugged in, everything whirred to life, and we got a BIOS screen.

Next we wanted to try running some graphics on it.  I was unsure whether, at 512M, it would run a full desktop.  We plugged in a CD drive, turned it back on, and set it to boot off the CD.  I couldn't find a desktop Ubuntu disk, but I had a bootable disk from a recent copy of Linux Developer.  So we tried that, but we never could get the CD drive to read the disk.  I knew the disk was good, as I had installed its distribution on Art in a virtual machine a couple weeks ago.  I was ready to say the drive was bad and give up for now.  That's when I noticed that the disk was a DVD.  Double duh!  So I dug up an Ubuntu desktop CD from my pile and tried that.  We got a little further, except that it was a 64-bit version.  I finally had to download the Netbook Remix from Ubuntu (I figured the Netbook Remix would live most comfortably inside the limited space) and burn a new CD.  With that CD, we were finally able to boot to a desktop, and after hooking up a network cable, we could browse the internet.  The performance was surprisingly good.

We still need a wireless network dongle, and we need to modify the toaster-oven case to expose the motherboard connectors and mount the power supply.  I'm reminded of many years back, when I built my first Frankenstein computer from the case of an old computer and a new motherboard, and had to use tin snips to cut out the back so the connectors were all exposed.  Hopefully in a couple of weeks we'll have the kitchen media center up and running, and I'll post a picture of it then.

Sunday, November 21, 2010

Safety first

After last week's wind storm took down my servers, we decided to invest in a UPS to keep the servers up even when the lights are flickering.  So this morning I hooked up an APC UPS to the two servers, servy and phantom, and the network switch between them.  But, as usual, nothing is ever easy: I had some trouble getting the computers to communicate with the UPS.

The basic idea is that when the UPS batteries get below a certain percent, the servers will do a clean shutdown.  But for that to work, the computers have to be able to communicate with the UPS.  The UPS only comes with Windows software, which, of course, raises the question: why would anyone want to protect a Windows computer?  I guess they figure that anyone smart enough to use Linux is smart enough to download and install the software themselves.

The first step was getting the software.  After some research, I discovered a product called NUT, which purported to do what I wanted.  So I installed first and asked questions later.  Once I installed it and discovered it wasn't plug-and-play, I went in search of documentation.  That's when I discovered that the official NUT site only had documentation for 2.2, while the official Ubuntu repository download was 2.4.  Following the instructions got me nowhere, except to a bunch of error messages saying 'this option no longer supported', with no indication of what replaced it.

On to plan B.  A little more research turned up apcupsd, which seemed semi-official (though not supported by Ubuntu).  Configuration of that was a snap, and things seemed to be going well, except that after starting it, it silently failed.  A quick tail of /var/log/syslog showed a nasty-looking error message: 'apcupsd fatal error in linux-usb.c at line 609'.  A google of this message indicated that other people were getting it, but none of their solutions worked for me.  The last thing I tried was to mess around with /etc/fstab to create a /proc/bus/usb, as apcupsd apparently used that as an interface for connecting to USB devices, and my system didn't have one.  Once I added it to /etc/fstab, I didn't know how to get the fstab reloaded, so I rebooted my computer.  That caused an error message, because it didn't like the new entry in /etc/fstab when it tried to load it.  So I took out the new entry.  I was looking in syslog to see what might be the matter with the line in fstab when I noticed that apcupsd was now running.  Running apcaccess gave a screen dump of the status of the UPS, so I guess rebooting the machine with the UPS USB cable plugged in caused everything to work.
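For anyone following along, the apcupsd configuration for a USB-connected APC unit is only a few lines.  A sketch, with example thresholds rather than the values I actually used:

```
# /etc/apcupsd/apcupsd.conf (excerpt)
UPSCABLE usb
UPSTYPE usb
DEVICE              # leave blank for USB; apcupsd finds the unit itself
BATTERYLEVEL 10     # begin shutdown when charge drops below 10%
MINUTES 5           # ...or when under 5 minutes of runtime remain
```

On Ubuntu you also have to set ISCONFIGURED=yes in /etc/default/apcupsd before the init script will start the daemon.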

Still to do: I have to get apcupsd to talk over the network.  I would like a GUI tool that will show the status at a glance on one of my workstations, and I also need to get both of the computers hooked up to it to shut down when the batteries get low.  But that is for next weekend.  And then the big test... unplugging it from the wall, seeing how long they stay up, and making sure that they shut down correctly.
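The network side of apcupsd is its built-in Network Information Server, which remote status tools poll.  Enabling it is a two-line change; a sketch:

```
# /etc/apcupsd/apcupsd.conf (excerpt) -- let other machines poll the status
NETSERVER on
NISIP 0.0.0.0       # listen on all interfaces (the default is localhost only)
NISPORT 3551
```

A workstation can then run 'apcaccess status servy:3551' to get the same screen dump remotely.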

On another topic... I was looking around for set-top internet boxes that might allow me to cut my umbilical cord with the cable company.  While searching for one to buy, I came across a blog that set up an old Windows laptop with Hulu Desktop and connected it to the TV instead of a brand-name set-top box.  With the set-top boxes costing more than $300, that seems an economical solution.  Plus, I don't like the idea of being locked in to whatever software the set-top box deems I should run.  I like being able to install whatever software is available, especially in the dynamic first days of internet TV, when everything is bound to change monthly if not weekly.  So I may see what kind of computer I can scrape together to run one of these things.  I tried Hulu Desktop with my downstairs Ubuntu machine, and it worked like a charm.  Plus, I plugged in the remote that used to work with my HP media center PC, and that worked too.  So I'm already counting the money I can save canceling cable.  Maybe next week, after I try out Hulu Desktop and other media options for a week.

Sunday, November 14, 2010

At long last, success

Pal has been moved, kitchen sink and all, to Phantom.  I'm writing this from my new Ubuntu desktop 'Art', which has inhabited Pal's old body.  The transfer went smoothly; I had to change real Pal's IP address temporarily so that I could use it as a virtual machine console for virtual Pal without sucking down new mail from servy.  The issue I had last time, with virtual Pal's IP address being wrongly assigned, was that when creating the virtual machine, it created a new virtual ethernet device called eth1 and configured it from the DHCP server.  Once I had virtual Pal running, I simply had to adjust the network settings to use the original IP address, and now virtual Pal is doing everything that real Pal did.
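For reference, pinning virtual Pal's address came down to giving the new eth1 device a static stanza in /etc/network/interfaces; something like this, with placeholder addresses rather than the real LAN values:

```
auto eth1
iface eth1 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
```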

I rebooted Phantom, just to make sure that everything comes up correctly, and turned off real Pal.  After checking that all of Pal's services were running inside of Phantom (with a little bit of disk swapping), I breathed a sigh of relief and started in on creating my new desktop computer: Art.

I got a couple of gigabit network cards yesterday as I try to upgrade my whole network.  Unfortunately, when I opened up Pal's old body to stick in the card, I discovered that it didn't have any PCI slots.  So I guess I'm stuck with the on-board 100Mb NIC.  I'm half toying with the idea of getting another bare-bones box to act as my backup virtual host/cloud hub.  But that's probably another couple paychecks away.  I put the computer back together, and installed Art as an Ubuntu 10.10 desktop.

Next I'm going to install virt-manager on Art, so that I can access virtual Pal via a virtual console.  I may try to migrate Cloudy over to Art as well, to take some of the load off of Phantom until I can get more memory.  And at some point I will need to stick this gigabit network card in Phantom, and reconfigure Servy to use his faster NIC (I don't know why I set him up on the slower one), then see how fast I can transfer files to see if it truly is gigabit.  Having two virtual hosts and transferring entire virtual guests from one to the other will definitely need the speed.

I'll take a break from this in the afternoon, when I go up to Seattle to hear Tchaikovsky's first piano concerto.  Then tonight, some power leveling in WoW in anticipation of Cataclysm, which comes out in December.  Probably no more tinkering with the network until next weekend.

Tuesday, November 9, 2010

Even in the future nothing works

It's one of my favorite quotes from the movie Spaceballs, because I live it every day.  Today I spent half a day just getting a basic GWT 2.1.0 build working.  GWT 2.1.0 just came out, and I wanted to try it out in a virgin project.  There has always been a lag between GWT and the maven plugins that allow you to build GWT projects in maven.  The GWT team evidently hasn't moved to maven yet, so the maven GWT plugin team seems to always be playing catch-up, and the old version of the GWT maven plugin didn't work with GWT 2.1.0.  The GWT team is still insistent on using Eclipse as its main development platform, but I have yet to overcome my natural preference for NetBeans over Eclipse, and so I value maven as an IDE-independent project structure.

Last Friday, things were remedied with the 2.1.0 version of the GWT maven plugin.  From now on, the version of the plugin will be tied to the version of the SDK.  Looking on the GWT maven plugin page, I found this description of creating a new GWT project.  True to the title of this blog, it didn't work.  It was nice enough to quietly revert to the last version of the plugin, and create a GWT 2.0 project with the 1.2 GWT maven plugin.  I was surprised when my project that was supposed to use the latest code was using old code, until I looked back at the command line and found the single line about not being able to find the archetype and reverting to the old version.  Some googling on the error message led me nowhere; I suppose I'm the first person trying to use the archetype feature of the released plugin.

I was able to change the versions in the maven pom.xml to the latest and get a good build.  Now to try out the new MVP model included in recent GWT.
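The version change itself is just a couple of spots in the pom; roughly this (a sketch using the Codehaus plugin coordinates, with the versions mentioned in this post):

```
<properties>
  <gwt.version>2.1.0</gwt.version>
</properties>
...
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>gwt-maven-plugin</artifactId>
  <version>2.1.0</version>
</plugin>
```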

I suppose this all adds to my recent post about the challenges facing developers.  The trick is being able to find the appropriate code to reuse, and being able to overcome any assumptions made by the reusable code's developer.  At work, I'm often on the other side of the coin, producing code intended to be reused, while other developers pull out their hair trying to match my assumptions with their own.

Monday, November 8, 2010

Journey of Discovery

In my last post, I was back at the drawing board on moving mail from pal to posty, due to a discrepancy in the Maildir layout between Courier IMAP and Dovecot IMAP.  My solution then was to install Courier IMAP on posty and try again.  To make a long story short, that didn't work.  The folders were now present in the webmail client, but the receive times were all exactly the same.  I could have said 'good enough', but I had no clue why the receive times were all the same, and I had a sneaking suspicion that there might be other issues lurking.  So once again, back to the drawing board.

My next idea was to try something that I've been wanting to try regardless: taking a physical machine and virtualizing it.  So I did a little research on exactly how it could be done.  Along the way I also discovered a blunder I had made last week, when I set up all my servers to run ntp, the network time daemon that periodically checks the time against some network time server and adjusts the computer's clock.  It's obvious to me now that virtual machines should not run ntp, as they would all be trying to adjust the same clock; only the host OS should run ntp.  So I removed ntp from all the virtual servers.

On to virtualizing a physical machine.  The idea is to boot a live CD on the computer to be virtualized, then make an image of the entire hard drive and transfer it to the host OS.  I found some instructions that were in German here, and used Google translation to get the gist of it.  The instructions needed some interpretation, but the basic process is to use the 'dd' command, piped through gzip, and sent to netcat to make a compressed image and transfer it across the network.  The commands presented on the page didn't work exactly, so I had to search through some man pages to find the exact syntax.
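The pipeline ends up looking something like the sketch below (host names from this setup; the device path, port, and netcat flags vary by system, so treat the commented commands as an outline rather than the exact syntax):

```shell
#!/bin/sh
# On the receiving host (phantom), listen and write the compressed image:
#   nc -l -p 1234 > pal.img.gz        # some netcat builds want 'nc -l 1234'
# On the machine booted from the live CD (pal), read the whole disk,
# compress it, and stream it over the network:
#   dd if=/dev/sda bs=1M | gzip -c | nc phantom 1234
#
# The same dd | gzip pipeline, demonstrated on an ordinary file so it can
# be run without touching a real disk:
printf 'pretend this is a disk\n' > /tmp/fakedisk
dd if=/tmp/fakedisk bs=512 2>/dev/null | gzip -c > /tmp/fakedisk.img.gz
gunzip -c /tmp/fakedisk.img.gz > /tmp/fakedisk.restored
cmp -s /tmp/fakedisk /tmp/fakedisk.restored && echo "round-trip OK"
```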

It took about 3 hours to transfer the image from pal to phantom, and another 2 hours to make a backup copy before I tried to install it as a virtual machine.  When that was finally done, I did some more research on exactly how to start a live image as a virtual machine.  The German instructions used a -snapshot option for virt-install, but my virt-install didn't have one.  Looking through the virt-install man page, I found the --import option, which boots from an existing image rather than trying to perform an install.  I ran that and it appeared to work, but I could not ping virtual pal.  Unfortunately, the only computer that is set up to view a virtual console is real pal, and I didn't like the idea of both pals running at the same time on the same network, as servy might direct any queued-up mail to either one.
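The import invocation ends up looking something along these lines (the name, memory size, disk path, and bridge name here are assumptions standing in for this setup's real values, not the exact command):

```
virt-install --name pal --ram 512 \
  --disk path=/var/lib/libvirt/images/pal.img \
  --import \
  --network bridge=br0
```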

Eventually I decided that since virtual pal wasn't responding, it was safe to start up real pal and use the virtual manager to view the console.  Viewing the console, I could see that it was running, and a prompt was up saying the display parameters had changed, asking what to do about it.  After resetting the default video parameters, I had virtual pal running in a window on real pal, which was mildly mind-boggling.  I discovered that virtual pal, for one reason or another, was assigned a dynamic IP address rather than its real one.  Perhaps it wasn't going through the bridged network that was set up on phantom, or perhaps network manager had somehow screwed things up (it has done that in the past and overridden the values in /etc/network/interfaces).  Either way, by now real pal had been spooled all the email servy had been saving up for him, so the image on phantom was out of date.  I would have to repeat this several-hour process next weekend.  But at least I know it works, and the basic process to use.

Saturday, November 6, 2010

The big move

Today my goal is to move all of the email services off of pal and onto cloudy/posty.  Cloudy will host the email webapps, and posty will be the IMAP server.  Servy is still the main email gateway and SMTP server, since he's the only computer in the DMZ (directly connected to the internet).

Last weekend I installed SquirrelMail and ox6 onto cloudy and set up posty as the IMAP server and final destination for email on the LAN.  Today, I need to:

  1. Configure ox6 with the real context and users
  2. Disable mail delivery on pal and posty so no new mail arrives mid-move
  3. Move the Maildir directories from pal to posty
  4. Change the mail gateway on servy to point to posty as the final delivery point for kamradtfamily.net
  5. Change apache on servy to point to cloudy for the email webapps.
  6. Re-enable the mail on posty.

Configuring ox6 with a real context and users is done through a CLI, and I've read through several install guides to get a feel for exactly what's needed.  First of all, I deleted the test context I created last weekend.  Then I created the main context and users to match what is on Pal.  (I'm sure I could have imported the database from pal, but I wanted to go through the motions so I knew what I was doing.)  Then I could test it all out via a web browser pointed at cloudy.  The login screen came up, but the login failed.  Time to debug.

The ox6 logs said it couldn't log on to IMAP, so I looked in posty's mail log and found this from Wednesday: 'Fatal: Time just moved backwards by 11 seconds. This might cause a lot of problems, so I'll just kill myself now.'  So, note to self: set up an IMAP ping in nagios, which I use to monitor the network.  That cleared up one error, but now I had another error in the ox6 log: HTTP_JSON failed.  I had this all working; what could be different?  Apache's log showed access to /ajax/login, all with OK status, and no errors in the error logs.  All /ajax requests are forwarded to tomcat via port 8009.  All services seemed to be up and running, but still no login.  There must have been something different about how I created the context/users, since that's all that changed between now and the previous setup.  So I found the exact same install guide that I used previously here, and lo and behold, there were two options that I had missed when creating the context according to the official install guide.  Deleting the context once again and creating it with the -L defaultcontext and --access-combination-name=all options now allows me to log on.
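For the record, the create-context command ends up shaped roughly like this (a sketch: the paths, names, and passwords are placeholders; only -L defaultcontext and --access-combination-name=all are the specifics from this debugging session):

```
/opt/open-xchange/sbin/createcontext -A oxadminmaster -P <masterpass> \
  -c 1 -u oxadmin -d "Context Admin" -p <adminpass> \
  -q 1024 -L defaultcontext --access-combination-name=all
```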

Time to shut down the email on pal and posty and transfer the Maildir directories.  The Maildir directories hold your mailbox, and any folders that you have created or that have been created for you.  Transferring them from one machine to another should be a simple scp, but we shall see...
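For the transfer itself, tar piped over ssh is a robust alternative to scp, since it preserves permissions, timestamps, and the dot-prefixed folder directories in one pass.  A sketch (host and home directory are placeholders):

```shell
#!/bin/sh
# The real transfer would look something like:
#   tar -C /home/user -cf - Maildir | ssh posty 'tar -C /home/user -xf -'
# The same tar pipeline, demonstrated locally with a throwaway Maildir
# (note the dot-prefixed .Sent folder, which must survive the copy):
mkdir -p /tmp/mdsrc/Maildir/new /tmp/mdsrc/Maildir/cur /tmp/mdsrc/Maildir/tmp
mkdir -p /tmp/mdsrc/Maildir/.Sent/cur
printf 'Subject: test\n\nhello\n' > /tmp/mdsrc/Maildir/new/1290000000.msg
mkdir -p /tmp/mddst
tar -C /tmp/mdsrc -cf - Maildir | tar -C /tmp/mddst -xf -
test -d /tmp/mddst/Maildir/.Sent/cur && echo "folders preserved"
```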

So much for simple; on to plan B.  Pal has Courier IMAP and posty has Dovecot, and apparently they have different Maildir folder structures.  I could see all the mail in the INBOX, but no folders.  I will probably have to install Courier IMAP on posty, but for now I'm going to restore the mail on pal, take a break, and think about it.

Wednesday, November 3, 2010

Code Reuse

Code reuse has been a buzzword for quite some time now, but it has never really been used as extensively as it should be.  At my work, I've been trying to get some level of code reuse for the past 15 years, but was often stymied by the "if it ain't broke don't fix it" attitudes of copy-and-pasters.  They saw code reuse as introducing a dangerous level of complexity by exposing them to code that someone else had written, which could possibly be changed on a whim, or not changed to meet the needs of the reuser.  And they're right.  There was no clear path to the 'land of milk and honey' that code reuse promised.  Our CM systems and administrators went cross-eyed when I tried to set up some jar files that might be used by more than one project in a single place.  Eventually I turned my back, the jar files were duplicated for each project, and the administrators gave a sigh of relief that such an abomination had been corrected.  There was no simple way of saying, "here's some code that's used in two separate projects, but should be maintained in one place", so the dream of code reuse went unfulfilled.

Then, around five years ago, the maven project was started.  As far as I can tell, it started as an attempt to formalize a development process that used to be done by ant.  It took the strategy of convention over configuration; that is, if you accept the convention, you don't need to configure.  But what made maven an indispensable tool for me is that it froze in place development processes that encourage reuse.  Now there was a standard way of sharing compiled code and tracking it with versions.  There was a standard way of documenting a project.  And there was a standard way to develop, test, and release code.  The best part is that you don't need to jump through hoops to use the standard; you have to jump through hoops not to.

I won't say that all is easy now, or that there aren't still issues that make code development real work.  But it takes code development to a new level, with fresh challenges.  I suppose the biggest challenge facing developers now isn't how to reuse code, as maven solves that; it's how to write code that's reusable.  And really, not that many developers face that issue, as most development is done close to the end product; that is, it solves specific problems for specific users, and therefore can't be reused.  But as the amount of reusable code piles up, each developer has to spend less and less time re-inventing the wheel, and more time solving specific problems.

So now that we have a path to real reuse, I thought it would be good to start writing some code that might be reused.  I could have a library of useful bits of code that I could put together for my bigger projects.  In order to do that, some initial footwork was involved in setting up a few pieces.

First and foremost, behind every real development effort is version control.  I need a repository that will hold all my source code and keep track of it.  My version control of choice was Subversion (SVN), because it had a cute name and it was well tied in with the rest of the tools in use.  I won't say that it's the best version control out there; I know that many people prefer git or CVS or others.  But I had to decide on one, and SVN was well supported by tools and very stable.  Considering it's really the core of this system, stability is very important.

Now that I had a repository for code, I needed a repository for compiled code (jar files, in Java lingo).  One of the things that maven formalizes is a file-system-based repository for compiled code.  You tell maven where it is, and it knows how to get the compiled code needed for builds, and to put the compiled output of your builds there.  But in order to have more than one development computer and a single compiled-code repository, you need a repository manager that will allow access to the repository over HTTP.  I had been using archiva, but I just switched to nexus as I moved the repository from one server to another.  They both seem to function very well, but nexus has a better UI and works well out of the box.  The only issue I have with it is that it seems to have some trouble with my apache reverse proxy, which I haven't been able to solve yet.  If I can't solve it, and it starts to get in the way, I may revert to archiva.  Since the actual repository file-system layout is dictated by maven, it should be simple to switch back should I need to.

One final piece that needs setting up is continuous integration.  With code reuse, CI becomes very important, making sure that all the pieces of reusable code work in all the places the code is reused.  It works by compiling and testing code on a central server as soon as it's checked in.  If it is set up to compile all of your projects at least nightly, it will catch any place where a change in one project affects some other project down the reuse chain.  For CI I use hudson, and have it installed alongside nexus.  When hudson compiles the latest code, it sticks it into nexus, where any other project that's using it can see it.

So now that I have this whole setup, I need to start writing some reusable code.  Now my biggest problem is that every time I think of something that I could write as a reusable component, it seems that someone else has already written it.  So my problem shifts away from writing code to figuring out how best to reuse others' code.  That is the next big challenge for all developers: how best to find and reuse code that's already written, and how to survive the inevitable two-steps-forward, one-step-backwards refinement (which version should I use?) of such code.

Tuesday, November 2, 2010

Enabling development

Two things I have on pal for development are a maven repository and hudson continuous integration.  They have actually been broken for a while, as installing ox6 stepped on the tomcat configuration that was running these services.  I just noticed it today, as I was installing eclipse on pal to try it out; once I downloaded the eclipse maven plugin, it tried to index my mirror repository and told me it wasn't working.

After figuring out what had happened to tomcat on pal, I figured it was time to just move the maven repository and CI server to cloudy, rather than try to fix them on pal.  I also realized I'd have the same conflict on cloudy with ox6's use of tomcat, so I decided to run them stand-alone on separate ports.  And one more change: my maven repository on pal ran on archiva, but I wanted to try out nexus from the good folks at Sonatype, who created maven in the first place.

The nexus install was pretty much a snap: just untar the installation file in place.  I changed the default install location from /usr/local to /usr/share to match all the other installs.  I started it up, and I could browse to it.  The only configuration was to change the admin password, index the proxied repositories, and change the deployment password.  The default configuration had the standard repository setup for third-party, release, and snapshots, and the standard set of external repositories.  After editing my settings.xml to point to nexus, eclipse could index my repositories.  The only thing left was to move my released code to the new release repository on nexus.  The directory structure is identical, so a quick scp moved the whole thing.  Actually, it wasn't that quick, so while it was transferring, I went out to get my flu shot.  I still need to set up my settings.xml to allow deployment from my CI, but I will do that later, when I actually need to do a release.  One step that I'll put off until I need to restart is to create an /etc/init.d script so that it starts automatically.
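The settings.xml change amounts to pointing a mirror at nexus's 'public' repository group; roughly this (the host and port are this setup's, the rest is the standard nexus mirror stanza):

```
<settings>
  <mirrors>
    <mirror>
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://cloudy:8081/nexus/content/groups/public</url>
    </mirror>
  </mirrors>
</settings>
```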

For hudson, I followed some instructions that allowed me to install it as a normal Ubuntu package.  After the install, it said that port 8080 was in use and to find another port.  I figured out that the configuration is in /etc/default/hudson, changed the port to 8082 (8081 is used by nexus), and restarted it with the /etc/init.d/hudson script.

The last step is to expose it via my reverse proxy on servy.  Nexus was easy enough, but hudson proved a little more difficult.  It turns out (and I may have known this before, but forgot) that a reverse proxy must have the same path in the URL both inside and outside the firewall.  Since the default for stand-alone hudson is to have no path, I had to add the parameter --prefix=/hudson.  Then it worked from outside the firewall (I use my corporate VPN to check connections from outside the firewall).
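Put together, the proxy side on servy amounts to a few lines of apache configuration like the following sketch (hosts and ports from this setup; the hudson prefix goes into /etc/default/hudson as a startup argument):

```
# In servy's apache site config (mod_proxy must be enabled):
ProxyPass        /nexus  http://cloudy:8081/nexus
ProxyPassReverse /nexus  http://cloudy:8081/nexus
ProxyPass        /hudson http://cloudy:8082/hudson
ProxyPassReverse /hudson http://cloudy:8082/hudson

# And in /etc/default/hudson, add --prefix=/hudson to the startup arguments
# so the inside path matches the outside path.
```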

One more step will be to recreate the jobs on cloudy's hudson.  For now I'll leave it alone; I just got a new GeForce 9500 GT for my birthday from Tracy :) which I want to try out on WoW.