Sunday, December 12, 2010

Talking to my toaster

I want my kitchen toaster/computer to be wireless, so it can be a media client.  My ultimate goal is to have my new 'mediamonster' computer be a media server that can send music/video to the various computers in the house.  But first I have to get the toaster talking to the rest of the network.  I had tried a couple of wireless dongles: one that came with my Samsung Blu-ray player, and another that we purchased at a PC recycle place here in Tacoma (I can't find a URL, I'll post one later if I can find it).  Neither worked out of the box.

First, some background.  I haven't posted since November, but I've been busy with some new projects that I should fill everyone in on.  One of them is my new mediamonster, which is another barebones computer from Tiger Direct.  This one has a fancy case that makes it look more like a stereo than a computer, so it fits nicely in my media center.  I read a blog post about a month ago by someone who created a media server from spare parts.  That makes a lot of sense: if you do it correctly, you don't need a lot of fancy hardware to build a good server, and it has surprisingly minimal requirements as far as RAM and CPU speed.  Why pay a couple hundred dollars for one from the store when you can get the same thing for pennies and a little sweat equity, and get the pride of saying you made it yourself?  OK, so I didn't pay just pennies: my last spare motherboard was used on the toaster, so I had to buy one, and I bought the fancy case so that I didn't have to hide it behind my TV.  Also I bought a cool wireless keyboard so that I could still couch surf with it.

Now the sweat equity part of the mediamonster.  There were a few issues getting the whole thing together, and a little more learning about how hardware has changed since I last dabbled seriously in computer building.  First off, the case didn't come with a power supply, but I had a spare power supply I figured I could use.  Wrong!  The new motherboard has a 24-pin ATX power connector, and my spare power supply came with a 20-pin connector.  So off to Best Buy to spend a fortune on a new power supply and an IDE ribbon cable, which I also needed.  They had power supplies for sixty-some dollars, but they didn't have any IDE ribbon cables, so we figured Radio Shack might have both for cheaper.  Fortunately for us, the Best Buy salesman was kind enough to point us to 'The Green PC' here in Tacoma.  We went searching for the place, and found it in a dark back street off South Tacoma Way.  It's a PC recycling place that also sells bottom-of-the-line cheap hardware.  We got the ribbon cable, the power supply, a used TV tuner card, and a wireless dongle that said it was Linux compatible on the packaging.

Back home, we found more issues.  The case is a specially designed, fold-up case with the power supply and disk drives on the top half, and the motherboard on the bottom.  But the power supply cables didn't reach the motherboard, even with the case closed.  The wires from the power supply came out on the opposite side from the connector plugs on the motherboard, and the distance was just long enough that it was clear it wouldn't work.  I had to flip the power supply upside down, which meant the fan would blow down into the lower half.  To do that I had to remove the fan grill, which stuck out.  The screws that held the grill in place also held the fan in place, so it was a bit of a struggle getting the fan back against the side of the power supply so I could re-attach it.  That all done, the cables just barely reached the motherboard.

I had some issues installing software for it.  It's hooked up to our Samsung LCD TV, and it takes some time for the picture to show on the screen, so I couldn't see the BIOS screen to set it to boot off the CD.  I had to guess and just hit Delete over and over while booting up, and I finally got to the BIOS screen.  After that, installation went without a problem.  Now we have the mediamonster hooked up to the TV, and we can watch Hulu through the Hulu Desktop, as well as stream music and video from our MyBook NAS.  No luck on the TV tuner card; it wasn't recognized by the standard set of drivers, so I will have to cobble together my own somehow.

Back to the toaster computer: I had the same trouble with the wireless dongles, they weren't recognized by the standard set of drivers.  Both dongles had drivers available, but in Linux tradition, it wasn't just a simple file you could plug in, but source code that you had to build, and pray that it would compile without error.  Of course, neither did.  I looked around some more and found some how-to documentation that walked me through some of the troubleshooting steps.  To make the story short, I discovered that the wireless dongle that we purchased actually did have the drivers installed, but there were error messages in the syslog about not being able to load the firmware.  I'm not exactly sure what the story is with the firmware, and why it has to be separate from the driver.  Anyways, these wireless cards need to have firmware loaded into them in order to work, and the firmware is located in /lib/firmware.  Doing some investigation in the Ubuntu forums, I found that the drivers installed in Ubuntu are looking for the firmware in the wrong place.  For this dongle they are looking in /lib/firmware/RTL8192SU, but there is only a /lib/firmware/RTL8192SE.  So I made a copy of RTL8192SE and called it RTL8192SU, rebooted, and the wireless dongle started blinking a happy green light.
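The fix boils down to copying the firmware directory under the name the driver actually asks for.  Here's a minimal sketch of the idea; the firmware root is parameterized for illustration (on a real Ubuntu box it's /lib/firmware, and you'd need root):

```shell
# clone_firmware SRC DST ROOT: copy a firmware directory under the
# alternate name the driver looks for, if it isn't already there.
clone_firmware() {
    src="$3/$1"; dst="$3/$2"
    if [ -d "$src" ] && [ ! -d "$dst" ]; then
        cp -a "$src" "$dst"
    fi
}
# On the toaster this amounted to (as root):
#   clone_firmware RTL8192SE RTL8192SU /lib/firmware
```

After the copy, a reboot (or reloading the driver module) lets the kernel find and upload the firmware.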

Now I could see the list of wireless networks in the area, and our two were listed.  So I connected to our main wireless network, typed in the password, and it made its little connecting animation.  After about a minute it re-prompted for the password, and repeated this cycle.  Again, I'll cut to the chase.  Apparently when the package says wireless-N it means it.  It was not able to connect with my wireless network because it was a B/G network, and the dongle wasn't backwards compatible.  I can't say that for certain, as I couldn't tell exactly what it was that the dongle didn't like, but I set it to talk to my secondary wireless network, which is N, and it was much happier (I did have to set the security protocol to WPA only rather than WPA/WPA2).  It's a long story as to why my primary network is a G network and the secondary network is an N, but I'll probably revisit that whole setup soon anyways.

So now I can talk to my toaster, and it can stream music and video.  One issue still remains: it has occasional freezes, which is very unusual for a Linux system.  All of my other Linux systems can run continuously for months without rebooting.  To freeze like that almost certainly points to either a driver issue or a hardware issue.  I'm assuming this network driver wasn't thoroughly tested by the Ubuntu people, or the firmware wouldn't have needed to be monkeyed with.  But equally likely, it's the fact that it's running on a motherboard strapped to a toaster grill.  But for now, I'm going to relax and stream my favorite radio station, KING FM.

One last thing, our latest visit to the Green PC landed us a cool looking 8 port gigabit network switch.  Our project yesterday before the concert was to re-wire the downstairs for gigabit networking.  We found a spot for the switch in the utility room, put a few holes in the wall, and reran the network cables. Now the whole house is wired gigabit.

Upcoming...I'm determined to get the TV tuner card in the mediamonster working, I'll have to look into troubleshooting capture device drivers next.  So stay tuned for the latest in my endless litany of frustration as I struggle with making my universe work.

Sunday, November 28, 2010

It's alive

Well, the toaster-oven computer came about a lot faster than I figured.  Tracy worked all day yesterday getting the case prepared: cutting a hole in the back for the motherboard connectors, mounting the power supply on the back, and attaching the LEDs to the front cover.  The red HDD light fit nicely in the old 'ON' light spot, and we drilled a new hole for the green power LED.  The rough edges on the back were sealed with weather stripping, and the motherboard was mounted to the oven grate with a series of rubber feet attached to the bottom of the motherboard.

After everything was put together, we flipped on the power, and it came alive.  We moved it into the office so it would have a direct connection to the internet during install.  We installed a few times, but each time the install crashed with I/O errors.  Eventually we just assumed it was the CD drive, which is probably about 10 years old.  The install got far enough to allow us to boot, and we did an update from there.  It was getting pretty hot inside the case (it's an oven, it's supposed to retain heat), so we put a northbridge fan over the southbridge (the northbridge already had a heat sink, so it would have been hard to install the fan on it).  That helped to get the heat moving out of the case.

We removed the CD drive and put everything in place, and rebooted again, but this time the HD wasn't recognized.  We tried another HD, and it wasn't recognized either.  Just to make sure it wasn't the IDE controllers or southbridge, we hooked up the CD to the IDE, and that worked.  It seemed like we had fried both hard drives, and now nothing worked.  I tested the hard drives with my external IDE-USB connector, and neither was recognized.  It was close to midnight, so we decided to wrap it up.

The next morning, I got up and was looking at the hard drives to see if there were any instructions on the label that we hadn't followed.  The jumper was set to master, so I had figured that was all that was needed to get it recognized.  Looking at the label on the hard drive, I noticed that no jumper was the preferred setting for a single-drive configuration, so I tried it again, hooking it up to my IDE/USB connector.  This time it worked.  So we wasted no time getting it back inside our little monster, and finally we got it to boot up.

The next problem was the wireless network dongle.  We tried using one we had from our Samsung Blu-ray player, but it wasn't recognized.  I did some research and found that it had a Ralink 3572 chip, and there was a driver from Ralink that just had to be compiled and installed as a kernel module.  Unfortunately, the compile failed, as there were no kernel headers to compile against.  Apparently the update failed when downloading the kernel headers, and now it was running Linux 2.6.32-23, but only had the headers for 2.6.32-22.  And without a network, we couldn't fix the kernel source very easily.  I tried tarring it up from pearl, and transferring it via a thumb drive, but it's still not working.  It might simply be easier to get a new wireless dongle than to fight with a half-updated system to get this driver compiled.
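The mismatch is easy to spot once you know to look: compare the running kernel release against the header trees installed under /usr/src.  A small sketch, with the directory parameterized so it can be checked anywhere (on the real machine you'd pass /usr/src):

```shell
# headers_match RELEASE DIR: true if a linux-headers tree matching the
# given kernel release exists under DIR.
headers_match() {
    [ -d "$2/linux-headers-$1" ]
}
# Real-world check on Ubuntu, once the network is back:
#   headers_match "$(uname -r)" /usr/src \
#       || sudo apt-get install "linux-headers-$(uname -r)"
```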

As promised, here are some pictures:


Friday, November 26, 2010

Building a kitchen computer

Tracy had the idea to build a computer using an old toaster oven as a case.  So we dug through my pile of junk to see what parts we had, and what parts were missing.  We found an old motherboard that I had in a previous incarnation of my main server, 'servy'.  I suspected that the CPU had died, because the fan wasn't well attached, and fell off at some point.  It still had 512M of DDR RAM in it, so the only thing we needed to bring it back to life was a CPU.  The problem was that it was about 6 years old, which is ancient history in the computer world, and no one sold the type of CPU it needed.

The motherboard is a PC-Chips M863G, which has a Socket A (socket 462) slot for the CPU, which means an old AMD Athlon/Duron chip.  I found several options on eBay for buying cheap used ones that were pulled from recycled computers.  Then after some more research, I found RE-PC, which is a local PC recycler with a store in Seattle.  Since our tradition for several years has been to go up to Seattle for Black Friday, we figured we'd stop in and have a look.

The RE-PC store was a computer hobbyist's paradise, filled with tubs of spare parts and old cases.  I probably could have spent several hours in there just browsing through all the bins.  They also have a computer museum, with all of my childhood computers.  One other thing we needed for the toaster oven computer was a power supply, as the case wouldn't fit the normal-size power supply that I had on hand.  Tracy got to dig through a bin of power supplies looking for just the right fit, and I went to ask about CPUs.  They had a display case for the CPUs, and a box of about a dozen Athlons of various speeds.  I picked the cheapest one at $10, and Tracy found a slim Dell power supply in the as-is pile for $3.  So the total spent so far is under 20 bucks.  Everything else came out of my spare parts pile.

So we put the parts on a small table, plugged all the cables in, and turned on the power.  Nothing.  That's when I noticed that I had forgotten to plug in the power cable.  Duh.  Power cable plugged in, and now everything whirred to life, and we got a BIOS screen.  Next we wanted to try running some graphics on it.  I was unsure whether at 512M it would run a full desktop.  We plugged in a CD drive, turned it back on, and set it to boot off the CD.  I couldn't find a desktop Ubuntu disk, but I got a bootable disk in a recent copy of a Linux developer magazine.  So we tried that, but never could get the CD drive to read the disk.  I knew the disk was good, as I had installed its distribution on Art in a virtual machine a couple weeks ago.  I was ready to say the drive was bad and give up for now.  That's when I noticed that the disk was a DVD.  Double duh!  So I dug up an Ubuntu desktop CD from my pile, and tried that.  We got a little further, except that it was a 64-bit version.  I finally had to download the Netbook Remix distribution from Ubuntu (I figured the Netbook Remix would live most comfortably inside the limited space) and burn a new CD.  With that CD, we were finally able to boot to a desktop, and after hooking up a network cable, we could browse the internet.  The performance was surprisingly good.

We still need a wireless network dongle, and we need to modify the toaster oven case to expose the motherboard connectors, and mount the power supply.  I'm reminded of many years back when I built my first Frankenstein computer from the case of an old computer and a new motherboard, and had to use tin snips to cut out the back so the connectors were all exposed.  Hopefully in a couple of weeks, we'll have the kitchen media center up and running, and I'll post a picture of it then.

Sunday, November 21, 2010

Safety first

After last week's wind storm took down my servers, we decided to invest in a UPS to keep the servers up even when the lights are flickering.  So this morning I hooked up an APC UPS to the two servers, servy and phantom, and the network switch between them.  But, as usual, nothing is ever easy: I had some trouble getting the computers to communicate with the UPS.

The basic idea is that when the UPS batteries get below a certain percent, the servers will do a clean shutdown.  But for that to work, the computers have to be able to communicate with the UPS.  The UPS only comes with Windows software, which, of course, raises the question: why would anyone want to protect a Windows computer?  I guess they figure that anyone smart enough to use Linux is smart enough to download and install the software themselves.

So the first step was getting software.  After some research, I discovered a product called NUT, which purported to do what I wanted.  So I installed first and asked questions later.  Once I had installed it and discovered it wasn't plug-and-play, I went in search of documentation.  That's when I discovered that the official NUT site only had documentation for 2.2, while the official Ubuntu repository download was 2.4.  Following the instructions got me nowhere, except to a bunch of error messages saying 'this option no longer supported', with no indication of what replaced it.

Plan B: I did a little more research and discovered apcupsd, which seemed semi-official (though not supported by Ubuntu).  Configuration of that was a snap, and things seemed to be going well, except that after starting it, it silently failed.  A quick tail of /var/log/syslog showed a nasty-looking error message: 'apcupsd FATAL ERROR in linux-usb.c at line 609'.  A Google of this message indicated that other people were getting it, but none of their solutions worked for me.  The last thing I tried was to mess around with /etc/fstab to create a /proc/bus/usb, as apcupsd apparently used that as an interface for connecting to USB devices, and my system didn't have one.  Once I added it into /etc/fstab, I didn't know how to get the fstab reloaded, so I rebooted my computer.  That caused an error message, because it didn't like the entry in /etc/fstab when it tried to load it.  So I took out the new entry.  I was looking in syslog to see what might be the matter with the line in fstab when I noticed that apcupsd was now running.  Running apcaccess gave a screen dump of the status of the UPS, so I guess rebooting the machine with the UPS USB cable plugged in caused everything to work.
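For reference, the USB side of /etc/apcupsd/apcupsd.conf really is minimal.  These are the standard keys for a USB-connected APC unit, with DEVICE left blank so the daemon scans for the UPS itself (check your model's documentation before trusting this):

```
UPSCABLE usb
UPSTYPE usb
DEVICE
```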

Still to do: I have to get apcupsd to talk over the network.  I would like to have a GUI tool that will show the status at a glance on one of my workstations, and I also need to get both of the computers hooked up to it to shut off when the batteries get low.  But that is next weekend.  And then the big test...unplugging it from the wall, seeing how long they stay up, and making sure that they shut down correctly.

On another topic...I was looking around for set-top internet boxes that might allow me to cut my umbilical cord with the cable company.  While searching for one to buy, I came across a blog that set up an old Windows laptop with Hulu Desktop and connected it to the TV instead of a brand-name set-top box.  And with the set-top boxes costing more than $300, that seems an economical solution.  Plus I don't like the idea of being locked in to whatever software the set-top box deems I should run.  I like being able to install whatever software is available, especially in the dynamic first days of internet TV, when everything is bound to change monthly if not weekly.  So I may see what kind of computer I can scrape together to run one of these things.  I tried the Hulu Desktop with my downstairs Ubuntu box, and it worked like a charm.  Plus I plugged in the remote that used to work with my HP media center PC, and that worked too.  So I'm already counting the money I can save canceling cable.  Maybe next week, after I use Hulu Desktop and other media options for a week to try it out.

Sunday, November 14, 2010

At long last, success

Pal has been moved, kitchen sink and all, to Phantom.  I'm writing this from my new Ubuntu desktop 'Art', which has inhabited Pal's old body.  The transfer went smoothly; I had to change real Pal's IP address temporarily so that I could use it as a virtual machine console for virtual Pal without sucking down new mail from servy.  The issue I had last time with virtual Pal's IP address being assigned was that, when creating the virtual machine, it created a new virtual ethernet device called eth1 and got its address from the DHCP server.  Once I had virtual Pal running, I simply had to adjust the network settings to use the original IP address, and now virtual Pal is doing everything that real Pal did.
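Adjusting the network settings amounts to giving the new eth1 a static stanza in /etc/network/interfaces.  The addresses below are placeholders, not my real ones:

```
auto eth1
iface eth1 inet static
    address 192.168.1.25
    netmask 255.255.255.0
    gateway 192.168.1.1
```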

I rebooted Phantom, just to make sure that everything came up correctly, and turned off real Pal.  After checking that all of Pal's services were running inside of Phantom (with a little bit of disk swapping), I breathed a sigh of relief, and started in on creating my new desktop computer: Art.

I got a couple of gigabit network cards yesterday as I try to upgrade my whole network.  Unfortunately, when I opened up Pal's old body to stick in the card, I discovered that it didn't have any PCI slots.  So I guess I'm stuck with the on-board 100Mb NIC.  I'm half toying with the idea of getting another barebones box to act as my backup virtual host/cloud hub.  But that's probably another couple paychecks away.  I put the computer back together, and installed Art as an Ubuntu 10.10 desktop.

Next I'm going to install virt-manager on Art, so that I can access virtual Pal via a virtual console.  I may try to migrate Cloudy over to Art as well, to take some of the load off of Phantom until I can get more memory.  And at some point I will need to stick this gigabit network card in Phantom, and reconfigure Servy to use his faster NIC (I don't know why I set him up on the slower one), then see how fast I can transfer files, to see if it truly is gigabit.  Having two virtual hosts and transferring entire virtual guests from one to the other will definitely need the speed.

I'll take a break from this in the afternoon, when I go up to Seattle to hear Tchaikovsky's First Piano Concerto.  Then tonight, some power leveling on WoW in anticipation of Cataclysm, which comes out in December.  Probably no more tinkering with the network until next weekend.

Tuesday, November 9, 2010

Even in the future nothing works

One of my favorite quotes from the movie Spaceballs, because I live it every day.  Today I spent half a day just getting a basic GWT 2.1.0 build working.  GWT 2.1.0 just came out, and I wanted to try it out in a virgin project.  There has always been a lag between GWT and the maven plugins that allow you to build GWT projects in maven.  The GWT team evidently hasn't moved to maven yet, so the maven GWT plugin team seems to always be playing catch-up, and the old version of the GWT maven plugin didn't work with GWT 2.1.0.  The GWT team is still insistent on using Eclipse as its main development platform, but I have yet to overcome my natural preference for NetBeans over Eclipse, and so I value maven as an IDE-independent project structure.

Last Friday, things were remedied with the 2.1.0 version of the GWT maven plugin.  Now the version of the plugin will be tied to the version of the SDK.  Looking on the GWT maven plugin page, I found this description of creating a new GWT project.  True to the title of this blog, it didn't work.  It was nice enough to quietly revert to the last version of the plugin, and create a GWT 2.0 project with the 1.2 GWT maven plugin.  I was surprised when my project that was supposed to use the latest code was using old code, until I looked back on the command line and found the single line about not being able to find the archetype, and reverting to the old version.  Some googling on the error message led me nowhere; I suppose I'm the first person trying to use the archetype feature of the released plugin.

I was able to change the versions in the maven pom.xml to the latest and get a good build.  Now to try out the new MVP model included in recent GWT.
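The pom.xml change amounts to pinning the plugin and the SDK to the same version.  This is a sketch with illustrative coordinates (the gwt-maven-plugin lives under org.codehaus.mojo; double-check the version numbers against the plugin's site):

```xml
<properties>
  <gwt.version>2.1.0</gwt.version>
</properties>
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>gwt-maven-plugin</artifactId>
      <version>${gwt.version}</version>
    </plugin>
  </plugins>
</build>
```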

I suppose this all adds to my recent post about the challenges facing developers.  The trick is being able to find the appropriate code to reuse, and being able to overcome any assumptions made by the reusable code developer to match my own.  At work, I'm often on the other side of the coin, producing code intended to be reused, and other developers pulling out their hair trying to match my assumptions with their own.

Monday, November 8, 2010

Journey of Discovery

In my last post, I was back to the drawing board on moving mail from pal to posty, due to a discrepancy in the Maildir between courier IMAP and dovecot IMAP.  My solution then was to install courier IMAP on posty and try again.  To make a long story short, that didn't work.  The folders were now present in the email web client, but the receive times were all exactly the same.  I could have said good enough, but I had no clue why the receive times were all the same, and I had a sneaking suspicion that there might be other issues lurking.  So once again, back to the drawing board.

My next idea was to try something that I've been wanting to try regardless: taking a physical machine and virtualizing it.  So I did a little research on exactly how it could be done.  Along the way I also discovered a blunder I had made last week, when I set up all my servers to run ntp, the network time daemon that periodically checks the time from some network time server and adjusts the computer's clock.  It's obvious to me now that virtual machines should not have ntp running, as they will all be trying to adjust the same clock; only the host OS should run ntp.  So I removed ntp from all the virtual servers.

On to virtualizing a physical machine.  The idea is to boot a live CD on the computer to be virtualized, then make an image of the entire hard drive and transfer it to the host OS.  I found some instructions that were in German here, and used Google translation to get the gist of it.  The instructions needed some interpretation, but the basic process is to use the 'dd' command, piped through gzip and sent to netcat, to make a compressed image and transfer it across the network.  The commands presented on the page didn't work exactly, so I had to search through some man pages to find the exact syntax.
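The pipeline I ended up with looks roughly like this (device names, host, and port are placeholders).  The dd-through-gzip half is wrapped in a function here so it can be exercised on an ordinary file; the netcat halves are shown as comments since they need two machines:

```shell
# image_disk SRC OUT: write a gzip-compressed raw image of SRC to OUT.
image_disk() {
    dd if="$1" bs=64k 2>/dev/null | gzip -c > "$2"
}
# On the machine being imaged (booted from a live CD), stream it out instead:
#   dd if=/dev/sda bs=64k | gzip -c | nc phantom 9000
# On the receiving host, catch it with:
#   nc -l -p 9000 > pal.img.gz
```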

It took about 3 hours to transfer the image from pal to phantom, and another 2 hours to make a backup copy before I tried to install it as a virtual machine.  When that was finally done, I did some more research on exactly how to start a live image as a virtual machine.  The German instructions used a -snapshot option for virt-install, but my virt-install didn't have one.  Looking through the virt-install man page, I found the --import option, which boots from an existing image rather than trying to perform an install.  I ran that and it appeared to work, but I could not ping the virtual pal.  Unfortunately, the only computer that is set up to view a virtual console is real pal, and I didn't like the idea of both pals running at the same time on the same network, as servy might direct any mail that's queued up to either.
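For the record, the invocation was along these lines; the names and paths are my assumptions, and the helper just assembles the command line so it can be inspected before running it on the host:

```shell
# import_vm NAME RAM_MB IMAGE: build a virt-install command that boots an
# existing disk image instead of running an installer (the --import option).
import_vm() {
    echo "virt-install --name $1 --ram $2 --disk path=$3 --import"
}
# On phantom, something like:
#   eval "$(import_vm pal 512 /var/lib/libvirt/images/pal.img)"
```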

Eventually I decided that virtual pal wasn't going to respond, so it was safe to start up real pal and use virt-manager to view the console.  Viewing the console, I could see that it was running, and a prompt was up saying the display parameters had changed, and asking what to do about it.  After resetting the default video parameters, I had virtual pal running in a window from real pal, which was mildly mind-boggling.  I discovered that virtual pal, for one reason or another, was assigned a dynamic IP address rather than its real one.  Perhaps it wasn't going through the bridged network that was set up on phantom, or perhaps network manager had somehow screwed up (it has done that in the past and overridden the values in /etc/network/interfaces).  Either way, by now real pal had been sent all the email servy had been saving up for him, so the image on phantom was out of date.  I will have to repeat this several-hour process next weekend.  But at least I know it works, and the basic process to use.

Saturday, November 6, 2010

The big move

Today my goal is to move all of the email services off of pal and onto cloudy/posty.  Cloudy will host the email webapps, and posty will be the IMAP server.  Servy is still the main email gateway and SMTP server, since he's the only computer in the DMZ (directly connected to the internet).

Last weekend I installed SquirrelMail and ox6 onto cloudy, and set up posty as the IMAP server and final destination for email on the LAN.  Today, I need to:

  1. Configure ox6 with the real context and users
  2. Disable mail delivery on pal and posty so no new mail is delivered
  3. Move the Maildir directories from pal to posty
  4. Change the mail gateway on servy to point to posty as the final delivery point for kamradtfamily.net
  5. Change apache on servy to point to cloudy for the email webapps.
  6. Re-enable the mail on posty.

Configuring ox6 with a real context and users is done through a CLI, and I've read through several install guides to get a feel for exactly what's needed.  First of all, I deleted the test context I created last weekend.  Then I created the main context and users to match what is on Pal.  (I'm sure I could have imported the database from pal, but I wanted to go through the motions, so I knew what I was doing.)  Then I could test it all out via a web browser pointed at cloudy.  The login screen came up, but the login failed.

Time to debug.  The ox6 logs say they couldn't log on to IMAP, so I look in posty's mail log, and find this from Wednesday: 'Fatal: Time just moved backwards by 11 seconds. This might cause a lot of problems, so I'll just kill myself now.'  So note to self: set up an IMAP ping in nagios, which I use to monitor the network.  That clears up one error, but now I have another error in the ox6 log: HTTP_JSON failed.  I had this all working, what could be different?  Apache's log shows access to /ajax/login, all with OK status, and no errors in the error logs.  All /ajax requests are forwarded to tomcat via port 8009.  All services seem to be up and running now, but still no login.  There must be something different about how I created the context/users, since that is all that is different between now and the previous setup.  So I found the exact same install guide that I used previously here, and lo and behold, there were two options that I had missed when creating the context according to the official install guide.  Deleting the context once again and creating it with the -L defaultcontext and --access-combination-name=all options now allows me to log on.
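For anyone retracing this, the shape of the createcontext invocation is roughly the following.  The IDs, names, and passwords are placeholders, and the exact flags should be checked against the OX install guide; the two options that bit me are the last two:

```
/opt/open-xchange/sbin/createcontext -A oxadminmaster -P <master_pw> \
    -c 1 -u oxadmin -d "Context Admin" -g Context -s Admin -p <admin_pw> \
    -e oxadmin@example.com -q 1024 \
    -L defaultcontext --access-combination-name=all
```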

Time to shut down the email on pal and posty and transfer the Maildir directories.  The Maildir directories hold your mailbox, and any folders that you have created or that have been created for you.  Transferring them from one machine to another should be a simple scp, but we shall see...
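The copy itself is the easy half.  Here's a sketch, with the actual remote transfer left as a comment (the user and host names are placeholders) and a locally testable helper for the preserving-copy part:

```shell
# copy_maildir SRC DESTDIR: copy a Maildir preserving modes and timestamps,
# which matters because IMAP servers key message state off the files.
copy_maildir() {
    cp -a "$1" "$2/"
}
# Across machines it would be something like:
#   scp -rp /home/alice/Maildir posty:/home/alice/
```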

So much for simple; on to plan B.  Pal has courier IMAP and posty has dovecot, and apparently they have different Maildir folder structures.  I could see all mail in the INBOX, but no folders.  I will probably have to install courier IMAP on posty, but for now I'm going to restore the mail on pal, take a break, and think about it.

Wednesday, November 3, 2010

Code Reuse

Code reuse has been a buzzword for quite some time now, but it has never really been used as extensively as it should be.  At my work, I've been trying to get some level of code reuse for the past 15 years, but was often stymied by the "if it ain't broke don't fix it" attitudes of copy-and-pasters.  They saw code reuse as introducing a dangerous level of complexity by exposing them to code that someone else had written, which could possibly be changed on a whim, or not changed to meet the needs of the reuser.  And they're right.  There was no clear path to the 'land of milk and honey' that code reuse promised.  Our CM systems and administrators went cross-eyed when I tried to set up some jar files that might be used by more than one project in a single place.  Eventually I turned my back, the jar files were duplicated for each project, and the administrators gave a sigh of relief that such an abomination was corrected.  There was no simple way of saying, "here's some code that's used in two separate projects, but should be maintained in one place", so the dream of code reuse was unfulfilled.

Then, around five years ago, the maven project was started.  As far as I can tell, it started as an attempt to formalize a development process that used to be done by ant.  It took the strategy of convention over configuration; that is, if you accept the convention, you don't need to configure.  But what happened that made maven an indispensable tool for me now is that it froze in place a development process that encourages reuse.  Now there was a standard way of sharing compiled code, and tracking it with versions.  There was a standard way of documenting a project.  And there was a standard way to develop, test, and release code.  The best part is that you don't need to jump through hoops to use the standard; you have to jump through hoops not to.

I won't say that all is easy now, or that there aren't still issues that make code development real work.  But it takes code development to a new level, with fresh challenges.  I suppose the biggest challenge facing developers now isn't how to reuse code, as maven solves that; it's how to write code that's reusable.  And really, not that many developers face that issue, as most development is done close to the end product, that is, it solves specific problems for specific users, and therefore can't be reused.  But as the amount of reusable code piles up, developers have to spend less and less of their time reinventing the wheel, and more time solving specific problems.

So now that we have a path to real reuse, I thought it would be good to start writing some code that might be reused.  I could have a library of useful bits of code that I could put together for my bigger projects.  In order to do that, some initial footwork was involved in setting up a few pieces.

First and foremost behind every real development effort is version control.  I need a repository that will hold all my source code and keep track of it.  My repository of choice for this was Subversion (SVN) because it had a cute name, and it was well tied in with all of the rest of the tools in use.  I won't say that it's the best version control out there, I know that many people prefer git or cvs or others.  But I had to decide on one, and SVN was well supported by tools, and very stable.  Considering it's really the core of this system, stability is very important.

Now that I had a repository for source code, I needed a repository for compiled code (jar files in java-lingo).  One of the things that maven formalizes is a file-system based repository for compiled code.  You tell maven where it is, and it knows how to get the compiled code needed for builds, and put the compiled output of your builds there.  But in order to have more than one development computer sharing a single compiled-code repository, you need a repository manager that allows access to the repository through HTTP.  I had been using archiva, but I just switched to nexus as I moved the repository from one server to another.  They both function very well, but nexus has a better UI and works well out of the box.  The only issue I have with it is that it seems to have some trouble with my apache reverse proxy, which I haven't been able to solve yet.  If I can't solve it, and it starts to get in the way, I may revert to archiva.  Since the actual repository file-system layout is dictated by maven, it should be simple to switch back should I need to.
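Pointing all my development machines at the repository manager is one stanza in ~/.m2/settings.xml.  This is a sketch; the host name and port match what I describe elsewhere in these posts, but the exact values on your network would differ:

```xml
<!-- Sketch: route every repository request through the local nexus
     instance instead of going straight to the internet. -->
<settings>
  <mirrors>
    <mirror>
      <id>nexus</id>
      <mirrorOf>*</mirrorOf>
      <url>http://cloudy:8081/nexus/content/groups/public</url>
    </mirror>
  </mirrors>
</settings>
```

With mirrorOf set to *, nexus proxies and caches the external repositories too, so every machine in the house builds from the same cache.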

One final piece that needs setting up is continuous integration.  With code re-use, CI becomes very important, making sure that all the pieces of re-usable code work with all the places the code is re-used.  It works by compiling and testing code on a central server as soon as it's checked in.  If it is set up to compile all of your projects at least nightly, it will catch any places where a change in one project affects some other project down the re-use chain.  For CI I use hudson, and have it installed alongside nexus.  When hudson compiles the latest code, it sticks it into nexus, where any other project that's using it can see it.

So now that I have this whole setup, I need to start writing some re-usable code.  Now my biggest problem is that every time I think of something I could write as a re-usable component, it seems someone else has already written it.  So my problem shifts away from writing code to figuring out how to best re-use others' code.  That is the next big challenge for all developers: how best to find and reuse code that's already written, and how to survive the inevitable two steps forward, one step backwards refinement (which version should I use?) of such code.

Tuesday, November 2, 2010

Enabling development

Two things I have on pal for development are a maven repository and hudson continuous integration.  They have actually been broken for a while, as installing ox6 stepped on the tomcat configuration that was running these services.  I just noticed it today: I was installing eclipse on pal to try it out, and once I downloaded the eclipse maven plugin, it tried to index my mirror repository and told me it wasn't working.

After figuring out what had happened to tomcat on pal, I figured it was time just to move the maven repository and CI server to cloudy, rather than try to fix it on pal.  I also realized I'd have the same conflict on cloudy with ox6's use of tomcat, so I decided to run them stand-alone on separate ports.  And one more change: my maven repository on pal was via archiva, but I wanted to try out nexus from the good folks at sonatype who created maven in the first place.

The nexus install was pretty much a snap.  Just untar the installation file in place.  I changed the default install location from /usr/local to /usr/share to match all the other installs.  I started it up, and I could browse to it.  The only configuration was to change the admin password, index the proxied repositories, and change the deployment password.  The default configuration had the standard repository setup for third-party, release, and snapshots, plus the standard set of external repositories.  After editing my settings.xml to point to nexus, eclipse could index my repositories.  The only thing left was to move my released code to the new release repository on nexus.  The directory structure is identical, so a quick scp moved the whole thing.  Actually it wasn't that quick, so while that was transferring, I went out to get my flu shot.  I still need to set up my settings.xml to allow deployment from my CI, but I will do that later when I actually need to do a release.  One step that I'll put off until I need to restart is to create an /etc/init.d script so that it starts automatically.

For hudson I followed some instructions that let me install it as a normal ubuntu package.  After the install, it complained that port 8080 was in use and that I should find another port.  I figured out that the configuration is in /etc/default/hudson, changed the port to 8082 (8081 is used by nexus), and restarted it with the /etc/init.d/hudson script.
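The change itself is a one-liner.  As a sketch (the variable name is my recollection of the packaged /etc/default/hudson; check your own copy):

```sh
# /etc/default/hudson (excerpt) -- move hudson off 8080,
# since nexus already took 8081 on this host.
HTTP_PORT=8082
```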

The last step was to expose it via my reverse proxy on servy.  Nexus was easy enough, but hudson proved a little more difficult.  It turns out (and I may have known this before, but forgot) that a reverse proxy must have the same path in the URL both inside and outside the firewall.  Since the default for stand-alone hudson is to have no path, I had to add a parameter, --prefix=/hudson.  Then it worked from outside the firewall (I use my corporate VPN to check connections from outside the firewall).
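On the apache side, the stanza on servy would look something like this (a sketch assuming mod_proxy is enabled; host and port follow the posts above):

```apache
# The inside path /hudson must match the --prefix given to hudson itself,
# so URLs generated by hudson survive the round trip through the proxy.
ProxyPass        /hudson http://cloudy:8082/hudson
ProxyPassReverse /hudson http://cloudy:8082/hudson
```

If the inside and outside paths differ, hudson emits links for a path the proxy doesn't serve, which is exactly the breakage described above.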

One more step will be to recreate the jobs on cloudy's hudson.  For now I'll leave it alone; I just got a new GeForce 9500 GT for my birthday from Tracy :) which I want to try out on WoW.

Sunday, October 31, 2010

Mail call

Yesterday and today were spent trying to create a new mail server as a virtual machine on phantom.  Creating a virtual machine has become pretty routine now; this would be my sixth one (with two surviving).  The OS was the standard Ubuntu 10.10 server I used for the others, except this time I didn't install any packages beyond the base system.  That led to my first surprise: the base system doesn't include an SSH server, so I couldn't connect via a terminal.  That was easy enough to fix once I figured out why I couldn't log on; I reopened the console from the virtual machine manager on pal and installed the ssh server.

I've never had much luck installing mail servers; they're just too complicated, and not something everyone installs, so the documentation is pretty thin.  This was no exception, as I spent much of Saturday trouble-shooting the mail system.  I have a pretty unusual setup as well, which doesn't help things.  Basically, I have an email gateway on servy that receives mail from the internet and acts as an SMTP relay for my internal network.  Mail that servy receives from the internet is forwarded on to pal, which acts as the main mail system, accessible from within the network via IMAP.  Pal also hosts two webmail apps, squirrel mail (webmail for nuts) and Open Exchange Community Edition.

The goal for the weekend was to transfer the whole mail system off of pal and onto the new mail server 'posty'.  Ubuntu 10.10 comes with postfix and dovecot for the SMTP and IMAP servers.  The standard installation was filled with typing, and simply didn't work.  Perhaps I mistyped something, missed a step, or made some other mistake.  In the end I focused on getting the SMTP server postfix working, and finally managed it by copying the configuration files from pal and doing a little editing.
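The edited lines would look something like this.  This is a sketch, not the actual files; the host names follow the posts, but the exact values are assumptions:

```
# /etc/postfix/main.cf (excerpt, illustrative)
myhostname    = posty.berkeley.local
mydestination = posty.berkeley.local, berkeley.local, localhost
relayhost     = [servy.berkeley.local]
home_mailbox  = Maildir/
```

mydestination is the list of domains postfix will accept final delivery for, relayhost is where everything else gets handed off (the gateway on servy), and home_mailbox tells it to deliver into a Maildir under each user's home directory.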

SMTP and IMAP use a per-user mailbox, so posty needs to have home directories for each mail user.  Currently the only people using the kamradtfamily.net email are me and my mom (and mom probably uses it because no one has told her she could just get a free gmail or yahoo mail account).  Last weekend's project was to set up LDAP for authentication, so that was already available.  I still had to set up posty to authenticate via LDAP.  That was pretty straightforward using the instructions from my last post, when I set up pearl to be LDAP authenticated.  Once that was done, and I could log on as any of the users in my LDAP directory, I had to create home directories by copying the /etc/skel directory to the /home directory with the user's name.  That is where the mailbox will go (in the form of a Maildir directory).
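As a sketch of that step (not the literal commands run on posty; the real paths would be /etc/skel and /home/randysr, replaced here with temp directories so nothing on the system is touched):

```shell
# Create a home directory from a skeleton and give it a Maildir.
SKEL=$(mktemp -d)          # stand-in for /etc/skel
HOMES=$(mktemp -d)         # stand-in for /home
touch "$SKEL/.profile"     # stand-in for the real skeleton files
cp -r "$SKEL" "$HOMES/randysr"                  # /etc/skel -> /home/randysr
mkdir -p "$HOMES/randysr/Maildir/cur" \
         "$HOMES/randysr/Maildir/new" \
         "$HOMES/randysr/Maildir/tmp"           # where postfix will deliver
ls -a "$HOMES/randysr"
```

On the real system the new directory would also need a chown to the mail user, which is omitted here.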

I ended Saturday's fun by finally getting SMTP on posty to deliver an email to a mailbox, and looking at the delivered mail with cat.  In order to simulate the actual routing, I updated the berkeley.local DNS on namer with an MX line, which says to which server mail addressed to whatever@berkeley.local should be delivered.  Then I set up postfix on posty with berkeley.local as one of the domains it is authorized to deliver.  Then I sent an email from pal via squirrel mail to randysr@berkeley.local, and it was routed to servy, which relayed it back to posty as directed by the local DNS server, which accepted it and stuck it in my new mailbox.  Here's the full header (with email addresses slightly altered):


Return-Path: <randysr(AT)kamradtfamily.net>
X-Original-To: randysr(AT)berkeley.local
Delivered-To: randysr(AT)berkeley.local
Received: from localhost (localhost [127.0.0.1])
     by posty (Postfix) with ESMTP id EA1AB2C094E
     for <randysr(AT)berkeley.local>; Sat, 30 Oct 2010 17:22:13 -0700 (PDT)
X-Virus-Scanned: Debian amavisd-new at posty.berkeley.local
Received: from posty ([127.0.0.1])
     by localhost (posty.berkeley.local [127.0.0.1]) (amavisd-new, port 10024)
     with ESMTP id cNbHn6HkjuK1 for <randysr(AT)berkeley.local>;
     Sat, 30 Oct 2010 17:21:30 -0700 (PDT)
Received: from servy (servy.berkeley.local [192.168.1.1])
     by posty (Postfix) with ESMTPS id 902472C094B
     for <randysr(AT)berkeley.local>; Sat, 30 Oct 2010 17:21:29 -0700 (PDT)
Received: from Pal (pal.berkeley.local [192.168.1.3])
     by servy (Postfix) with ESMTP id 21AF21FDBB
     for <randysr(AT)berkeley.local>; Sat, 30 Oct 2010 17:22:06 -0700 (PDT)
Received: from localhost (localhost [127.0.0.1])
     by Pal (Postfix) with ESMTP id F07D2808002
     for <randysr(AT)berkeley.local>; Sat, 30 Oct 2010 17:21:22 -0700 (PDT)
X-Virus-Scanned: Debian amavisd-new at kamradtfamily.net
Received: from Pal ([127.0.0.1])
     by localhost (mail.kamradtfamily.net [127.0.0.1]) (amavisd-new, port 10024)
     with ESMTP id gqpQi8C6sAVi for <randysr(AT)berkeley.local>;
     Sat, 30 Oct 2010 17:21:20 -0700 (PDT)
Received: from [192.168.1.3] (localhost [127.0.0.1])
     by Pal (Postfix) with ESMTP id A107F808001
     for <randysr(AT)berkeley.local>; Sat, 30 Oct 2010 17:21:20 -0700 (PDT)
Received: from 192.168.1.1 (proxying for 131.191.87.73)
     (SquirrelMail authenticated user randysr)
     by 192.168.1.3 with HTTP;
     Sat, 30 Oct 2010 17:21:20 -0700
Message-ID: <0e326e30d8386c0c9df3d6895539367d.squirrel@192.168.1.3>
Date: Sat, 30 Oct 2010 17:21:20 -0700
Subject: test
From: randysr(AT)kamradtfamily.net
To: randysr(AT)berkeley.local
User-Agent: SquirrelMail/1.4.20
MIME-Version: 1.0
Content-Type: text/plain;charset=iso-8859-1
Content-Transfer-Encoding: 8bit
X-Priority: 3 (Normal)
Importance: Normal
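For reference, the MX line added to the berkeley.local zone on namer would look something like this (a sketch; the priority value and exact zone-file formatting are assumptions):

```
; Mail for anyone@berkeley.local should be delivered to posty.
berkeley.local.    IN  MX  10  posty.berkeley.local.
```

The number is a preference; with only one mail host it's arbitrary, but a lower number wins if a backup MX is ever added.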

So that was about 10 hours just to get SMTP working.  Time for a break with a few hours playing WoW.

Sunday I was still hopeful that I could get everything transferred to posty.  I started by installing a basic dovecot system, after uninstalling the one I screwed up yesterday.  I just needed an IMAP server so my webapps could access the mailboxes through a standard protocol.  I'm not planning on exposing IMAP to the internet, so I wasn't too worried about security, since it would all be inside the firewall.  I discovered that apt-get, the standard command that installs and removes packages on ubuntu systems, doesn't remove the configuration by default on a remove command.  So after re-installing, my configuration was still hosed.  Finally I found the --purge option for apt-get that removes the configuration too (sort of a 'terminate with extreme prejudice' command for packages), and started with a fresh installation.  Soon I had the evolution mail client on pal reading the IMAP server on posty.

Next step: the mail webapps, which would need to be installed on the new LAMP server, cloudy.  Currently I use squirrelmail and open-exchange 6 as webmail on pal.  I installed squirrel mail, and after I found the installation directory in /usr/share/squirrelmail instead of /usr/local/squirrelmail like the instructions said, I had it hooked up to apache and able to serve up mail.  Open exchange is a little more complex; it has several layers that need to be set up, all via the command line, so a fair amount of typing.  I'm sure the commercial version is easier to set up, but I'm way too cheap to shell out money for it (and if I had to, I'm sure I'd just switch back to squirrel mail).  The only thing is that after all this configuration, I didn't have the heart to tackle getting open exchange talking with LDAP, so I still need to configure users manually.  That's a project for another weekend.

After getting the webapps installed and running at a basic level, I decided to take a break and go visit the tiger cubs at the Point Defiance Zoo.  I got there just in time for the feeding of Java, one of the big male tigers, who got a huge beef shank for lunch, which the wimpy zoo keeper could barely hurl across the moat.

Still to do next weekend: the final move of mail from pal to posty.  I will have to move both mailboxes from pal to posty so the mailboxes are unchanged, export the contacts in ox6 over to posty, set up all users in ox6 on posty, and finally set servy to forward all mail to posty, and all webmail to cloudy.

Saturday, October 30, 2010

The recovery

First some pictures.  Here's my new server 'phantom':
And my old server 'pal':

And, as promised, 'murp' my new tomato frog:

Isn't he cute?

So on with the recovery experience.  First some lessons learned.  When ubuntu says a user name is reserved, it means it.  So I switched all my 'admin' users to 'super'.  Second lesson: allocate more memory for the LAMP host, as it also hosts apache tomcat, which is a big memory hog.  DNS, DHCP, and LDAP all live very nicely in 256M, but tomcat pretty much requires 1G.  Considering that I only have 2G of memory to start with, things are going to be tight until I can upgrade that.  I've been looking at DDR2 memory, but it's out of my budget to upgrade him right now, and I don't want to upgrade until I can get him up to at least 8G (yes, it's a 64-bit OS).

The reinstalls of phantom and the two virtual machines went without a hitch.  I set up SSH and certificates for the three systems as before.  My CA was set up on servy, so that didn't need to change, but I had to learn how to revoke certificates in order to reissue new ones.  The CA keeps an index of the certificates issued, and a copy of the certificates.  Detailed instructions can be found here for revoking and renewing certificates.

I was able to reconfigure DNS easily, because I had made servy a slave DNS server for my network; it only required copying the cached files from servy back to namer and doing a little editing.  DHCP was a breeze to configure, as usual.  LDAP once again provided problems, and in fact the exact same issue as on the last install.  The instructions on the ubuntu website don't clearly demarcate what's boilerplate and what needs to be changed, nor, among the elements that need to be changed, the relationship between them.  In my case, I set up a basic server at dc=kamradtfamily,dc=net to match my domain (in hindsight I should have made it match my local network, berkeley.local).  Then it asked for dc again, and I wasn't sure what for, so I just put in kfn, my short name for kamradtfamily.net.  It turns out it needs to match the first dc component of the server name, 'kamradtfamily'.
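In other words, the base DN is just the domain split into dc= components, and that second prompt wants the first of those components back, not a new name.  A sketch of the resulting base entry (standard OpenLDAP form; the o attribute value is an assumption):

```
# kamradtfamily.net  ->  dn: dc=kamradtfamily,dc=net
dn: dc=kamradtfamily,dc=net
objectClass: dcObject
objectClass: organization
dc: kamradtfamily
o: kamradtfamily.net
```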

My first attempt to use LDAP was as an authentication server.  My hope is to be able to log on to any of my computers with my normal user name and be authenticated via LDAP.  So my first victim was pearl, my little ubuntu netbook.  She's just a cloud access computer, so I'm not afraid to brick her.  Instructions for LDAP authentication can be found here.  Soon I was able to su randysr on pearl.  The only issue is that now she has trouble on the main logon screen.  I figured that I had bricked pearl, until I discovered that the 'other' option on the logon screen still lets me log on.  Also, some scary-looking messages come up when restarting.  I still have to figure that out, but I suspect it has something to do with the wireless network being unavailable prior to logon.  Another chicken-and-egg problem to make my head spin.

My second attempt to use LDAP was for web authentication.  Fortunately my main web server is on servy, which acts as a reverse proxy for all the internal sites that I want exposed to the internet.  His configuration wasn't lost, so it was just a matter of setting up a groupOfUniqueNames in LDAP.

Lastly I installed Nagios on cloudy to be able to monitor my network.  Instructions can be found here.

My next project is to move the main mail system from pal to a new mail virtual machine.

Saturday, October 23, 2010

The disaster

On to my next virtual machine.  This one would be 'namer' who hands out and checks names and identities.  It would have three services, DNS, DHCP, and LDAP.  The DNS server would give names to all the static IP computers on the local network 'berkeley.local'.  DHCP would hand out dynamic IP addresses to the rest of the computers and devices.  And LDAP would register people and their authorizations within the network.

DNS and DHCP were pretty simple to install; the instructions here and here sufficed.  I set my main gateway computer 'servy' to be a secondary DNS and to automatically get the local DNS configuration from namer.  That proved to be a fortuitous decision.  LDAP proved to be a bit of a bear as usual, resisting every effort to configure it.  Most of the problem stemmed from my own typos and confusing error messages.  I'm the first to admit my own fallibility, but it would be nice if the software could point out my mistakes in a more direct manner.  The instructions were taken from the usual source, but I had to split apart the frontend directory configuration to find where the error was.  My mistake was not replacing some of the example terms with my own terms, as the distinction between what is template and what is example was not perfectly clear.  Eventually I had the LDAP server running and recognizing a few people.

In addition to having LDAP authenticate people on the various machines, I still wanted a single admin user defined on each machine.  Normally the admin user is randysr, but I wanted that defined in LDAP so it could be used by other applications.  So I started to create an admin user on each of my linux systems, synchronize their passwords, and use SSH keys to ease moving between machines as the admin user.  The admin user is already defined by ubuntu, but with no password and no home directory, so I had to set those up manually.  At the same time I wanted to create a certificate authority (CA) on servy, and create certificates for each of my servers.  That was pretty easy, but a lot of typing.  Instructions for SSH keys are here, and instructions for the certificates here.  I had a little trouble with the SSH keys, as the instructions didn't work on some of the machines, which had a different public key file than the one created by the ssh-keygen command, so I had to look through the man page for the ssh-copy-id command to find out how to direct it to the correct public key file.
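The key setup boils down to two commands.  This sketch generates a throwaway key in a temp directory and prints (rather than runs) the copy step, since the target host name is an assumption; ssh-copy-id's -i flag is what points it at a specific public key when the default name doesn't match:

```shell
# Generate an RSA key pair with no passphrase (demo only -- use a
# passphrase on a real admin key).
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$KEYDIR/id_rsa" -q
# On the real network the next step would be (shown, not executed here):
echo "ssh-copy-id -i $KEYDIR/id_rsa.pub super@namer.berkeley.local"
```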

My first use of the LDAP authentication was to secure various pages of my website.  Reading through the section on Apache integration, I was able to configure Apache to read LDAP, but now I needed to configure LDAP with a specific group of authorized users for Apache.  The documentation showed how to do that with a web-config tool, phpLDAPadmin.  After installing that I was able to create the group without too much trouble.  Now my webpages were secured by LDAP.
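The entry created through phpLDAPadmin would look something like this in LDIF form (a sketch; the group name, ou values, and member DN are hypothetical):

```
# A group Apache's LDAP authorization can require membership in.
dn: cn=webusers,ou=groups,dc=kamradtfamily,dc=net
objectClass: groupOfUniqueNames
cn: webusers
uniqueMember: uid=randysr,ou=people,dc=kamradtfamily,dc=net
```

Apache then only needs a rule requiring membership in that group's DN, so adding or removing a person from the website is just an edit to the uniqueMember list.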

I added a few webapps to the LAMP server.  From the ubuntu server guide, I installed phpmyadmin, moinmoin, and mediawiki.  The mediawiki looked nicer than moinmoin, so I decided to expose that and make my root URL redirect there as a nice home page.

Now my big mistake: I decided that on the new virtual machines and the virtual machine host, the randysr user was not necessary, so I removed it, and removed the home directories.  Not long after, I discovered that the img files being used as virtual hard drives for the two virtual machines were in the home directory I had just erased, and the virtual machines wobbled and fell down.  After a few minutes of trying to salvage what I could from the virtual machines, I decided it best just to start from scratch and go forward again.

But not before I took some time to relax and reflect on everything I had done so far.  I had installed and configured these systems in three days (two partial days; Friday most of my time was spent with my mom in Seattle at the art museum and the symphony, and Saturday most of my time was spent at the Washington state history museum).  Now it was Sunday and I had to relax a little.  To help relax, I went out and got a terrarium and a little tomato frog, 'murp', to go in it.  I'll post a picture in my next post.

Thursday, October 21, 2010

The gory details

So I'm sure that by now all my millions of readers are waiting with bated breath for the details of my new monstrosity. Start off with one of these: 'Biostar MCP6PB M2+ Triple-Core Barebones Kit - Biostar MCP6PB M2+ Motherboard, AMD Phenom X3 8250e, Ultra 2GB DDR2 667MHz RAM, Seagate 1TB LP HDD, PowerUp Black ATX-Mid Tower Case' from tigerdirect.com for $220 (cpu fan extra). Add in a few scraped knuckles putting it together, and a few skipped heartbeats when it wouldn't start up (until I seated the power supply connector better), and voila! I had a fully functioning computer with a BIOS configuration screen.

Next, time to put on some software. Thanks to the wonderful people at Ubuntu I can download a great CD installable OS for the price of the patience to wait for the download and burn a CD. Installation went without a hitch, and soon I had a command prompt from the newest member of my computer stable 'phantom'.

So what did I spend the great fortune and vast amount of time for? The buzz words virtualization and cloud computing keep popping up, so I had to make sure I wasn't missing out on anything important. I know that Ubuntu 10 server edition had both virtualization and cloud computing built in, but I didn't have a spare computer that was capable of running these new advanced features. So I went on a search for a bare-bones kit, and found a nice multi-core computer kit for less than $250, that had a nice fat 1T hard drive to boot (which I may use in a virtual NAS appliance).

My goal here is to pull all the 'mission-critical' services off of my current server 'pal'. That would include mail and my development build machine/server.  Once I put these servers into a 'virtual appliance', I can move them between pal and phantom as needed.  I can also upgrade or reconfigure these servers as needed, and if (when) something goes wrong, I can restore the old virtual machine in a matter of minutes.

To configure a virtual machine, I followed these instructions and pretty soon had my first virtual machine, 'cloudy'.  The same Ubuntu 10 server install disk was used to install the system onto cloudy.  Now the first problem: how to connect to the terminal.  The virtual terminal packages all required a GUI, which I didn't install on phantom.  So I installed the virtual manager package on pal, and after struggling with the command line format, was finally able to connect to phantom, and then open a virtual terminal to cloudy.  Cloudy had the familiar ubuntu install screen, and I ran through the prompts to install a LAMP server.  Once it was installed, I got a command prompt, from which I set up a static IP address.  From there I simply used ssh to do the rest of the configuration.

In my next post I will describe installing my second virtual machine and the disaster with the *.img files.

Tuesday, October 19, 2010

My new tech blog

This is a blog that I will use to chronicle my technological frustrations and triumphs. People that dislike technical stuff can visit my more creative blog rainyday recompose

I just got a new Frankenstein computer with a phenom x3 processor, a couple gigs of ram, and a terabyte hard disk. My plan for this computer is to try out virtualization with Ubuntu 10.

The computer itself was delivered on Thursday, but I had to wait until Friday to finish assembling it, as the cpu fan was delivered separately. By Saturday I had the base system installed and two virtual machines running. However, I accidentally deleted the image files it was using for virtual hard drives, so I started over again on Sunday. Lesson learned: *.img isn't just some silly pictures to be removed.

My thinking for the virtualization is to create several 'virtual appliances'. The first two that I created are a web server and an internal DNS/DHCP host. I have in mind a few more: a NAS server, and a few desktop hosts to play with different desktop styles and remixes. Sunday I got the first two virtual machines working again, but I still have some tweaking to do. My plan is to move all the important stuff off my current internal server 'pal' so that I can reinstall him as a backup virtual host. So I have to move all my sites and my email to the virtual machines before that can happen.

It's dinner time now, I'll finish up with some details later tonight or tomorrow.