All Things New

This is my first post from my own website.  Pretty cool, huh?

In the last few weeks we got Comcast Internet here at the house, which gave me an opportunity to do a bunch of stuff related to my projects.  We were able to have Cat5e run along with the coax, so I now have a hard line to my bedroom.  That should make things much nicer when the microwave is on, when streaming AirPlay, or when managing the VM Host downstairs.

I also went ahead and hooked up the pfSense firewall I built a month or so ago.  I have it nicely configured with Squid (with filtering) and set up a basic VPN, which I’ve been able to test quite easily since the DSL line is still functional.  So I have a decent network layout going.  I haven’t gotten the traffic shaping configured in a way I like yet.

After that project was done, I moved on to (hopefully) setting up a personal website/blog/email server on the home network.  I ran into some snags with Comcast blocking some ports, so I decided to finally bite the bullet and venture into the cloud.  I started by messing with a few Spot Instances on EC2 for a few pennies an hour to try to figure things out.  Nothing overly complicated, but a fair number of big annoyances even for such a small player as myself.  Once I was comfortable, I purchased a Reserved Micro Instance, allocated an Elastic IP, started configuring email (not quite working), and got this blog installed and migrated.  And now I type this first post.

The next thing on my list is to install a wiki here so I can keep a record of projects, diagrams, and little tips I never have a use for at the moment I find them.

Edit: I had a Spot Instance running for a couple days on which I started a wiki (nothing critical), thinking I’d be able to migrate relatively easily if I ever went to a Reserved Instance.  When I went to move, I wasn’t getting anywhere and couldn’t mount it to save my life, but I was just now able to mount it on my permanent instance and migrate all the old wiki data over.  Haven’t found any issues.  Sweet.

Parallel…this would have been useful

Found this tool the other day and made sure to firmly plant a note of it in my mind for next time I need to do a bunch of uncoupled tasks.

Watch the video and see for yourself.

There is some similarity to the ‘xargs -P’ command, but parallel seems to be much more powerful and pretty intuitive.
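For a taste of the overlap, here’s the same fan-out done both ways; the parallel line is left commented in case it isn’t installed, and the task list is just placeholder strings:

```shell
# Run up to two jobs at a time with xargs -P (completion order may vary):
printf '%s\n' a b c d | xargs -P 2 -I {} sh -c 'echo "done {}"'

# The rough GNU parallel equivalent, if it's installed:
# printf '%s\n' a b c d | parallel echo "done {}"
```

Where parallel pulls ahead is in things like keeping each job’s output grouped and its job-control options, which ‘xargs -P’ doesn’t attempt.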

Next Project: Virtual Lab

I’ve finally moved the file server and VM host machine out of my room.  When it’s already sweltering in the room and it hits 92 degrees outside, it’s time to remove that heat source.  This has the added benefit that I can keep things running while I sleep without the buzz of fans.

Before I finally went through with the move (and downstairs, instead of just into the empty bedroom next door), I made sure I was able to remotely boot both machines.  That was a pretty painless process.  The little Atom board I have in the NAS doesn’t have an option to boot from PCI, so I had to re-enable the onboard NIC and run an extra cable pretty much just for this purpose.  I found the WakeUp program on the Mac to send the remote boot signal.
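Incidentally, the “remote boot signal” is just a Wake-on-LAN magic packet: six 0xff bytes followed by the target’s MAC address repeated 16 times.  A sketch of building one by hand (the MAC below is made up; WakeUp, or command-line tools like wakeonlan, do all of this for you):

```shell
MAC="00:11:22:33:44:55"                 # made-up example MAC

emit_mac() {                            # print the MAC as raw bytes
  for h in $(echo "$MAC" | tr ':' ' '); do
    printf "\\$(printf '%03o' "0x$h")"  # each hex pair as an octal escape
  done
}

{
  printf '\377\377\377\377\377\377'     # six 0xff header bytes
  for i in $(seq 16); do emit_mac; done # then the MAC 16 times
} > magic.pkt                           # 6 + 16*6 = 102 bytes

wc -c < magic.pkt
```

The finished packet just needs to reach the NIC, typically as a UDP broadcast to port 7 or 9.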

So now on to the title of this post.  I recently found Clonezilla, and that got me started on a kick to figure out management of extensive LANs.  The particular use case I’m thinking of is schools.  Back at Messiah, virtually every break we’d spend a day running around between labs, re-imaging the machines.  I’m fairly certain they were paying pretty well for that capability, and I really like the idea of using Linux (free, non-commercial) for it.  The idea of imaging machines with an Ubuntu distro (Edubuntu coming to mind) just adds to the thrill.  And in looking at Edubuntu, I thought about thin clients.  So I’m going to start working on setting up a virtual lab environment to be able to test all these diverse elements.

On a related note, I was also able to configure both a Vyatta and pfSense virtual machine, and get the machines behind them talking to the internet.  So these will be crucial to this virtual lab concept.  Ought to be interesting.

One final thing: I really want to get into some BSD stuff.

Migration Update

Working on moving all the data back to NFS shares right now.  Still not what I wanted, but it’s something I can handle reasonably well with the automounter from the command line, which is probably how things will be done most of the time.  And it’s probably more efficient than working through the Finder anyway.  I still hope I can get it working, but at the moment I need to get things running again rather than messing around with data stores.  I did a trial run with the XP VM that I actually use to manage the VM Host it’s running on (ah, recursion).  I was unable to log in, but hopefully that’s because of something I did before I migrated; there didn’t appear to be any corruption issues, and it booted just fine and brought me to the login screen.  So we’ll see what happens there, and if any issues arise in the other VMs.

As far as the data server changes, all I ended up doing was installing the latest 0.7.2 beta of FreeNAS.  Nexenta required additional drives and there were no elegant resolutions, so back to FreeNAS.  I have high hopes for FreeNAS 0.8 and am anxiously awaiting news on that front, especially now that I’m taking advantage of ZFS datasets.

I decided I’d try compression on a VM dataset and an ISO dataset.  Since the VM set would be more active and I only have a dual-core Atom, I went with the ‘lzjb’ algorithm, as it was described as ‘fast’.  With all my existing VMs migrated to that dataset, I’m getting a compression ratio of 1.45x, which I feel is acceptable considering the modest load it put on the CPU.  (Just did the math, and that works out to about 30% savings.  Very nice.)  Since the ISO dataset would mostly be used in a read context, and not very often, I decided I’d try to get more compression out of it and went with ‘gzip’ (equivalent to ‘gzip-6’, according to the comments).  Here the effort was quite wasted, as the resulting ratio was a mere 1.04x, definitely not worth the CPU time.  With FreeNAS 0.8 and the move to the FreeBSD 8 series, I’ll also be able to pick up dedup, which I would bet would result in a huge gain in free space.
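For reference, the setup boils down to one property per dataset (the pool and dataset names below are made up), and the space-savings math is just 1 - 1/ratio:

```shell
# Hypothetical pool/dataset names; compression and compressratio are
# standard ZFS properties:
#   zfs create -o compression=lzjb tank/vm
#   zfs create -o compression=gzip tank/iso
#   zfs get compressratio tank/vm tank/iso

# A compressratio of r means the data occupies 1/r of its logical size,
# so the fraction saved is 1 - 1/r:
awk 'BEGIN {
  printf "lzjb 1.45x -> %.0f%% saved\n", (1 - 1/1.45) * 100
  printf "gzip 1.04x -> %.0f%% saved\n", (1 - 1/1.04) * 100
}'
```

That matches the ~30% figure for the VM dataset and shows why the gzip dataset wasn’t worth the cycles.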

Update: I’m working on a new OS install (Xubuntu) and CPU load is fairly constant at around 10%.  This is theoretically doing both ISO decompression and live VM compression.  Networking is its usual terrible self, but I am reading and writing from/to the same disk, so I’m not getting the best duplex speeds.

Handy VMFS Script

I talked last night about migrating my VMs and rebuilding my “storage infrastructure”.  Currently all of my equipment is in my room and it produces a ton of heat and noise, so I usually shut down the ESXi machine at night (the FreeNAS box doesn’t make any noticeable noise or heat).  All that to say, I didn’t want to leave that machine on all night just so it could copy the files to its local drive.  I figured that since I have this big RAID array attached to my laptop, I might as well dump them there.

First thing, I initiated an iSCSI connection, which led to Finder asking me if I wanted to format the drive, which I definitely did not want to do.  That’s because ESX formats the drive with its own VMFS, so I Googled around a bit and found the Google Code project vmfs.  I was then able to browse around the datastore and copy bits to my local machine.  One problem, though: it’s command-line based (I guess there’s a WebDAV extension, but I didn’t try that) and the available copy command isn’t recursive.  So I ran a couple commands to get a list of files, created a few directories by hand (since those didn’t copy), and set it running.  I had to make some tweaks in the morning to fix issues with spaces in the names.

I’ve since updated the method and placed it into a script.  You no longer have to manually create any of the directories, and it’s down to simply calling a script.  (VMFS Recursive Copy Script)
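The linked script has the details, but the general shape is roughly the sketch below, with plain find and cp standing in for the vmfs tool’s listing and copy commands (those substitutions, and all the file names, are my stand-ins here):

```shell
# Recursive copy that recreates directories and survives spaces in names.
# Swap `find` for the vmfs tool's file listing and `cp` for its copy command.
rcopy() {
  src=$1 dest=$2
  find "$src" -type f | while IFS= read -r f; do
    rel=${f#"$src"/}                        # path relative to the source root
    mkdir -p "$dest/$(dirname "$rel")"      # recreate the directory tree
    cp "$f" "$dest/$rel"                    # quoting "$f" handles spaces
  done
}

# Demo against a scratch tree (made-up names):
mkdir -p datastore/"Windows XP"
echo demo > datastore/"Windows XP"/"XP Manage.vmdk"
rcopy datastore backup
ls backup/"Windows XP"                      # -> XP Manage.vmdk
```

The consistent quoting of every path variable is the whole fix for the spaces-in-names problem that bit me the first time around.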

Slow Progress

I’m still here.  Slowly making progress through the RHCE book.  I’m discovering typos and inaccuracies (possibly due to the age of the book), and things just don’t seem to work very nicely; it gets very frustrating to try to learn from something that can’t even do it right itself.  I’m also at a point where I’m hitting on things that (I think) I am unlikely to ever encounter, so those are also difficult to work through.

I’m thinking of doing some storage system modifications.  If I go through with them, I’ll end up migrating the couple VMs I have set up to local storage, doing whatever I need to do to the storage box, and then moving them all back.  I’m playing with the idea of using NFS, and I already migrated one of the rarely used VMs over to an NFS share.  The reason I’m considering a storage revamp is the permissions issues that still evade my fixes.  I don’t know if it was during install, post-install, something I forgot, what-have-you, but I haven’t found an acceptable solution as of yet.  Currently, the best method I have is to automount the NFS shares through the command line to copy ISOs, though the new VM NFS share I can access from the Finder.  It’s all rather bewildering and annoying.  I’m probably going to try a fresh FreeNAS install with better knowledge, and then go for Nexenta if that fails.

RHCE Slacking

To be quite honest, I’ve been slacking the last couple days on working through the RHCE book I have (RHCE Red Hat Certified Engineer Linux Study Guide (Exam RH302), Certification Press).  I got distracted by the RAID inquiries and household chores, since this is Messiah’s graduation weekend and we’re having people in the house.

I’m into the Labs of Chapter 4: Linux Filesystem Administration.  It’s actually sort of fitting that I ended up learning RAID administration while in the midst of these labs.  I assume later in the book, RAID will be put forth, so it won’t be wasted time (in regards to the RHCE, it definitely wasn’t a waste even if it doesn’t show up).

In fact, I messed up part of Lab 1 and put a bad path into the fstab file, and had to figure out how to recover from it.  I was rather proud of myself for getting it fixed, since the fix wasn’t prescribed and I made use of the previous chapter (GRUB & Booting) to get into emergency mode and correct the line.
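The fix itself is short once you’re in.  Sketched here against a scratch copy with a made-up bad entry, since the real thing edits /etc/fstab on a root filesystem that emergency mode brings up read-only:

```shell
# Scratch copy of an fstab with a bad entry (device paths are made up):
cat > fstab.demo <<'EOF'
/dev/sda1   /       ext3   defaults   1 1
/dev/sdb7   /misc   ext3   defaults   0 0
EOF

# In real emergency mode you'd first make the root fs writable:
#   mount -o remount,rw /

# Then neutralize (or correct) the bad line so the next boot succeeds:
sed -i 's|^/dev/sdb7|#/dev/sdb7|' fstab.demo
grep '/dev/sdb7' fstab.demo
```

Commenting the line out gets the machine booting again; fixing the path properly can wait until you’re back in a full environment.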

I’ll get back into it tomorrow; it’s definitely been too long of a break.  We’ll see how I do.  I think before I go for any test, I’ll go back through the book and complete all the labs; when I can do them blindfolded, I’ll be ready.

EDIT: I checked out a few more books on Amazon for RHCE and there are several newer and just as highly rated books available.  I’ll probably also purchase one of them to get a different perspective on the exam.  The book I currently have has a publication date in 2007 which is practically ancient history in computer time.

Software RAID – Pt. 2

It’s not the next day, but I spent some time messing with mdadm and setting up RAID on Linux.  Fairly easy process.  I have neither the hardware nor a use for setting up something for real, so I haven’t really seen what it can do as far as performance, or just how long it takes to resync after growing, etc.

To mess with this I cobbled together a couple of bash scripts: one does various array builds, RAID level upgrades, and resizes; the other tears everything down so I can start again.

You can find the build script here (safe_raid.sh) and the teardown here (unraid.sh).  You need to call them as root, since they work with devices and file systems.  Also, be aware that RAID is picky about timing (as it should be), so jumping straight through safe_raid will not give you a working RAID volume; you have to wait for some processes to complete before moving on.

Software RAID – Pt. 1

I attended the monthly Central PA Linux Users Group meeting again this month.  Doug talked about software RAID in Linux.  He went through all the trouble of building a 3TB array during his talk, and this sent me scouring for a virtual disk solution.

I remembered a ZFS tutorial which made use of several file-backed disks to allow experimentation with the file system, to learn the commands, etc.  (ZFS Tutorial 1)  If you click through to that tutorial, you’ll see they make use of the Solaris command ‘mkfile’.  As this is not included with Linux, I had to dig around for another way to make a disk-like file.  Read enough books on virtual machines and you’re bound to remember something about using ‘dd’ to create virtual disks.  That problem solved.

The next issue I ran into was that ‘mdadm’ did not like these raw disk files.  So I tried a couple things until, after too long, I found prior art using loopback devices.  With that fixed, voila: a RAID-6 array built from six 100MB virtual drives.  Heck of a lot easier to blow away and rebuild.
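Strung together, the throwaway-array recipe looks roughly like this sketch.  The losetup/mdadm steps need root and mdadm installed, so it skips them otherwise, and the loop device names are whatever losetup hands back on your system:

```shell
# Six 100MB file-backed "disks" for a throwaway array:
for i in 0 1 2 3 4 5; do
  dd if=/dev/zero of="disk$i.img" bs=1M count=100 2>/dev/null
done

# The block-device steps need root and mdadm:
if [ "$(id -u)" -eq 0 ] && command -v mdadm >/dev/null 2>&1; then
  loops=""
  for i in 0 1 2 3 4 5; do
    # attach each image to the next free loop device, noting its name
    loops="$loops $(losetup -f --show "disk$i.img")"
  done
  # RAID-6 across the six loop devices; /proc/mdstat shows build progress.
  mdadm --create /dev/md0 --level=6 --raid-devices=6 $loops
else
  echo "skipping losetup/mdadm (needs root and mdadm)"
fi
```

Tearing it down is the reverse, ‘mdadm --stop /dev/md0’ and ‘losetup -d’ on each loop device, which is essentially all a teardown script has to do before deleting the image files.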

After figuring out the whole process, I finally know enough search terms to get Google to give me just the result I wanted from the start.  This blog post (Stupid RAID Tricks with EVMS and mdadm) gives a step-by-step walkthrough of setting up and managing RAID.  I haven’t had time tonight to go through it, but I will do so and post back here with my own take on it.  That post is 4 years old (though it has some recent comments), so I’m certain some things have changed since it was originally posted.

FreeNAS

I said a couple days ago that I had killed off OpenFiler and gone with FreeNAS.  A couple of things changed since the post about getting OpenFiler working as a SAN.

I would like to be able to add a second VM Host, and having external storage is pretty much essential for that to be practical.  So with this in mind, I went to Newegg and picked up a Micro-ATX Atom motherboard, a small but nice case, some RAM, and a good Intel Gigabit NIC.  (I already had the drive I was going to use.)  I put that all together and went immediately to FreeNAS.

I installed FreeNAS 0.7.1 onto a USB stick (from the CD; I didn’t use a second computer), hooked everything up, and tested it out.  It apparently worked, since I stuck with it.  The USB boot was important because I didn’t have a 2nd HD lying around, and dedicating a whole drive to the OS seemed like a huge waste.  Even if I did partition, I have no plans of filling even the dedicated drive, let alone the remainder of the OS drive.  OpenFiler doesn’t have this capability, so barring any missing functionality, FreeNAS was the way to go.

Now for my FreeNAS setup: I simply added the 250GB drive, and since ZFS was an option, I just made a single-stripe ZFS file system.  (I wasn’t going to get any redundancy anyway, so I figured no harm and a potential upgrade path.)  I configured iSCSI (plenty of guides out there for that) and, bada-bing, iSCSI-hosted virtual machines.