Terminal Multiplexing

I’ve spent the last couple days picking up new projects to look into (backups, cloning, network monitoring), which I plan to give some space on the blog as I work on them.

In the meantime, I spent some time today comparing the two most popular terminal multiplexers for *nix systems: screen and tmux.

The largest benefit of a terminal multiplexer, to me, is how it abstracts the terminal session away from the network connection.  Both screen and tmux allow you to start a session, detach from that session (possibly accidentally), and then reattach at a later time right where you left off, with all your programs still running.  This would have saved me hours upon hours in previous positions I’ve held.  The Windows desktop would crash for whatever reason, or just be in serious need of a restart, and I’d lose a dozen SSH sessions to half a dozen machines, then spend 20 minutes reconnecting and trying to figure out where I had left off.  If I had known about (and had available) a terminal multiplexer, I could have opened one connection to each machine, started a screen or tmux session, spawned additional shells inside it, and, if I lost my connection, reconnected and picked up EXACTLY where I left off.

I spent an hour or so playing with both, trying to decide which I would make my preferred terminal multiplexer.

For basic operations they’re virtually identical.  I would be happy (and able) to use either for the basics, and probably will.  For that reason, I won’t say that my hands-on experience with one or the other is what decided it; still, I’ve decided to try to stick with tmux in the future.
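
For reference, the basics I’m talking about map almost one-to-one (the session name here is just an example):

    # start a named session
    tmux new -s work            # screen: screen -S work

    # detach from inside the session
    #   tmux: Ctrl-b d          # screen: Ctrl-a d

    # list running sessions
    tmux ls                     # screen: screen -ls

    # reattach later, right where you left off
    tmux attach -t work         # screen: screen -r work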

There are a few reasons I’ve decided to side with tmux rather than screen.  The first to come to mind is licensing: tmux is BSD-licensed, screen is GPL, and I have better feelings about the BSD license.  Second, various blogs and articles around the web over the past year have written about the stagnation of the screen project: slow or non-existent patches and updates, crufty code, and a complex configuration file.  It seems to me that tmux is the future, and even if it never replaces screen, it has a strong presence and is here to stay, which is what will really matter.

Further Reading:
dayid’s screen and tmux cheat sheet
TMUX – The Terminal Multiplexer (Part 1)
Is tmux the GNU Screen killer?

Network Diagram

Went ahead and updated my network diagram with the newest elements.  It includes the EC2 instance running the blog and wiki, and the future location of whatever website I decide to put up.  The yellow lines represent a VPN tunnel.

All Things New

This is my first post from my own website.  Pretty cool, huh?

In the last few weeks we got Comcast Internet here at the house, which gave me an opportunity to do a bunch of stuff related to my projects.  We were able to have Cat5e run along with the coax, so I now have a hard line to my bedroom.  That should make things much nicer when the microwave is on, when streaming AirPlay, or when managing the VM host downstairs.

I also went ahead and hooked up the pfSense firewall I built a month or so ago.  I have that nicely configured with Squid (with filtering) and set up a basic VPN (which I’ve been able to test quite easily since the DSL line is still functional).  Overall I just have a decent network layout going.  I haven’t gotten the traffic shaping configured in a way I like yet.

After that project was done, I moved on to (hopefully) setting up a personal website/blog/email server on the home network.  I ran into some snags with Comcast blocking some ports, so I decided to finally bite the bullet and venture into the cloud.  I started by messing with a few Spot Instances on EC2 for a few pennies an hour to figure things out.  Nothing overly complicated, but a fair number of big annoyances even for such a small player as myself.  Once I was comfortable, I purchased a Reserved Micro Instance, allocated an Elastic IP, started configuring email (not quite working yet), and got this blog installed and migrated.  And now I type this first post.
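
For reference, the Elastic IP dance is only a couple of commands with Amazon’s command-line tooling (shown here with the aws CLI; the instance ID and address below are made up):

    aws ec2 allocate-address                    # hands you a new public IP
    aws ec2 associate-address --instance-id i-12345678 --public-ip 203.0.113.10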

The next thing on my list is to install a wiki here so I can keep a record of projects, diagrams, and little tips I never have a use for at the moment I find them.

Edit: I had a Spot Instance running for a couple days on which I started a wiki (nothing critical), thinking I’d be able to migrate relatively easily if I ever went to a Reserved Instance.  When I went to move, I wasn’t getting anywhere; I couldn’t mount the volume to save my life.  But I was just now able to mount it on my permanent instance and migrate all the old wiki data over.  Haven’t found any issues.  Sweet.
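
For anyone attempting the same migration, the attach-and-mount step comes down to something like this (volume ID, instance ID, and device names are illustrative, not my actual values):

    aws ec2 attach-volume --volume-id vol-12345678 --instance-id i-87654321 --device /dev/sdf
    # the kernel on the instance may expose it as /dev/xvdf
    sudo mkdir -p /mnt/oldwiki
    sudo mount /dev/xvdf /mnt/oldwiki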

Parallel…this would have been useful

Found this tool (GNU parallel) the other day and made sure to firmly plant a note of it in my mind for the next time I need to do a bunch of uncoupled tasks.

Watch the video and see for yourself.

There is some similarity to the ‘xargs -P’ command, but parallel seems to be much more powerful and pretty intuitive.
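
A quick taste of what I mean, compressing a pile of log files (paths here are hypothetical):

    # xargs: -P 4 runs four jobs at once, -n 1 hands each job one file
    find . -name '*.log' -print0 | xargs -0 -P 4 -n 1 gzip

    # parallel: one job per CPU core by default, with saner quoting of filenames
    find . -name '*.log' | parallel gzip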

Next Project: Virtual Lab

I’ve finally moved the file server and VM host machine out of my room.  When it’s already sweltering in the room and it hits 92 degrees outside, it’s time to remove that heat source.  This has the added benefit that I can keep things running while I sleep without the buzz of fans.

Before I finally went through with the move (and downstairs instead of just in the empty bedroom next door), I made sure I was able to remotely boot both machines.  That was a pretty painless process.  The little Atom board I have in the NAS doesn’t have an option to Boot from PCI, so I had to re-enable the onboard NIC, and it’s using an extra cable pretty much just for this purpose.  I found the WakeUp program on the Mac to send the remote boot signal.
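
If you haven’t played with Wake-on-LAN before: once it’s enabled in the BIOS, waking the box is one command from any machine on the LAN, and WakeUp is just a GUI for the same magic packet (the MAC address below is made up):

    # wakeonlan is in most package repos; etherwake is an alternative
    wakeonlan 00:1e:2a:3b:4c:5d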

So now on to the title of this post.  I recently found Clonezilla, and that got me started on a kick to figure out management of extensive LANs.  The particular use case I’m thinking of is schools.  Back at Messiah, virtually every break we’d spend a day running around between labs re-imaging the machines.  I’m fairly certain they were paying pretty well for that setup, and I really like the idea of using Linux (free, non-commercial) for it instead.  The idea of imaging machines with an Ubuntu distro (Edubuntu comes to mind) just adds to the thrill.  And in looking at Edubuntu, I started thinking about thin clients.  So I’m going to work on setting up a virtual lab environment to be able to test all these diverse elements.

On a related note, I was also able to configure both a Vyatta and pfSense virtual machine, and get the machines behind them talking to the internet.  So these will be crucial to this virtual lab concept.  Ought to be interesting.

One final thing: I really want to get into some BSD stuff.

Migration Update

Working on moving all the data back to NFS shares right now.  Still not what I wanted, but it’s something I can handle reasonably well with the auto-mounter from the command line, which is probably how things will be done most of the time anyway.  And it’s probably more efficient than working through the Finder.  I still hope I can get the Finder access working, but at the moment I need to get things running again rather than messing around with data stores.  I did a trial run with the XP VM that I actually use to manage the VM host it’s running on (ah, recursion).  I was unable to log in, though hopefully that’s because of something I did before I migrated rather than a migration problem.  There didn’t appear to be any corruption issues: it booted just fine and brought me to the login screen.  So we’ll see what happens there, and whether any issues arise in the other VMs.
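
By “from the command line” I mean nothing fancier than this (hostname and paths are made up for illustration):

    # one-off NFS mount of the VM share from the FreeNAS box
    sudo mkdir -p /mnt/vms
    sudo mount -t nfs freenas.local:/mnt/tank/vms /mnt/vms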

As far as the data server changes, all I ended up doing was installing the latest 0.7.2 beta of FreeNAS.  Nexenta required additional drives and there were no elegant workarounds, so back to FreeNAS.  I have high hopes for FreeNAS 0.8 and am anxiously awaiting news on that front, especially now that I’m taking advantage of ZFS datasets.  I decided I’d try compression on a VM dataset and an ISO dataset.  Since the VM set would be more active and I only have a dual-core Atom, I went with the ‘lzjb’ algorithm, as it was described as ‘fast’.  With all my existing VMs migrated to that dataset, I’m getting a compression ratio of 1.45x, which I feel is acceptable, even considering it put a decent load on the CPU.  (Just did the math and that works out to about 30% savings; very nice.)  Since the ISO dataset would mostly be used in a read context, and not very often, I decided I’d try to get more compression out of it and went with ‘gzip’ (equivalent to ‘gzip-6’ according to the comments).  Here the effort was quite wasted, as the resulting ratio was a mere 1.04x, definitely not worth the CPU time.  With FreeNAS 0.8 and the move to the FreeBSD 8 series, I’ll also be able to pick up dedup, which I’d bet would result in a huge gain in free space.
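
For anyone wanting to try the same thing, compression is just a dataset property (the dataset names here are examples):

    # lzjb for the busy VM dataset, gzip (defaults to level 6) for the ISOs
    zfs set compression=lzjb tank/vms
    zfs set compression=gzip tank/isos

    # note: only data written after the change gets compressed, hence the migration
    zfs get compressratio tank/vms tank/isos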

Update: I’m working on a new OS install (Xubuntu) and CPU load is fairly constant at around 10%.  This is theoretically exercising both ISO decompression and live VM compression.  Networking is its usual terrible self, but I am reading from and writing to the same disk, so I’m not getting the best duplex speeds.

Handy VMFS Script

I talked last night about migrating my VMs and rebuilding my “storage infrastructure”.  Currently all of my equipment is in my room and it produces a ton of heat and noise, so I usually shut down the ESXi machine at night (the FreeNAS box doesn’t make any noticeable noise or heat).  All that to say, I didn’t want to leave that machine on all night just so it could copy the files to its local drive.  I figured that since I have this big RAID array attached to my laptop, I might as well dump them there.

First thing, I initiated an iSCSI connection, which led to Finder asking me if I wanted to format the drive, which I definitely did not want to do.  This is because ESX formats the drive with its own VMFS.  So I Googled around a bit and found the Google Code project vmfs.  With that I was able to browse around the datastore and copy bits to my local machine.  One problem, though: it’s command-line based (I guess there’s a WebDAV extension, but I didn’t try that) and the available copy command isn’t recursive.  So I ran a couple commands to get a list of files, created a few directories by hand since those didn’t copy, and set it running.  I had to make some tweaks in the morning to fix issues with spaces in the names.

I’ve since updated the method and placed it into a script.  You no longer have to manually create any of the directories, and it’s down to simply calling a script.  (VMFS Recursive Copy Script)
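
The core of the script is just a walk-and-copy loop.  A rough sketch of the idea, with hypothetical vmfs_ls/vmfs_cat commands standing in for the tool’s actual listing and copy invocations (the real script is linked above):

    # HYPOTHETICAL sketch of the approach, not the script itself
    vmfs_ls -R /vmfs/datastore1 | while IFS= read -r f; do
        mkdir -p "backup/$(dirname "$f")"    # recreate the directory tree locally
        vmfs_cat "$f" > "backup/$f"          # quoting handles spaces in names
    done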

Slow Progress

I’m still here.  Slowly making progress through the RHCE book.  I’m discovering typos and inaccuracies (possibly due to the age of the book), and things just don’t seem to work very nicely; it gets very frustrating trying to learn from something that can’t even get it right itself.  I’m also at a point where I’m hitting things that (I think) I’m unlikely to ever encounter in practice, so those are also difficult to work through.

I’m thinking of doing some storage system modifications.  If I go through with them, I’ll end up migrating the couple of VMs I have set up to local storage, doing whatever I need to do to the storage box, and then moving them all back.  I’m playing with the idea of using NFS, and I already migrated one of the rarely used VMs over to an NFS share.  The reason I’m considering a storage revamp is the permissions issues that still evade my fixes.  I don’t know if it was during install, post-install, something I forgot, what have you, but I haven’t found an acceptable solution as of yet.  Currently, the best method I have is to automount the NFS shares from the command line to copy ISOs, yet I can access the new VM NFS share from the Finder.  It’s all rather bewildering and annoying.  I’m probably going to try a fresh FreeNAS install with better knowledge, and then go for Nexenta if that fails.
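
If the culprit turns out to be export options, it’ll be something in this neighborhood; this is a generic FreeBSD-style /etc/exports sketch (FreeNAS sits on FreeBSD), not my actual config, and the network is made up:

    # -maproot=root avoids the usual root-squash permission surprises
    /mnt/tank/vms  -maproot=root -network 192.168.1.0 -mask 255.255.255.0
    /mnt/tank/isos -maproot=root -network 192.168.1.0 -mask 255.255.255.0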

RHCE Slacking

To be quite honest, I’ve been slacking the last couple days on working through the RHCE book I have (RHCE Red Hat Certified Engineer Linux Study Guide (Exam RH302), Certification Press).  I got distracted by the RAID inquiries and household chores, since this is Messiah’s graduation weekend and we’re having people in the house.

I’m into the Labs of Chapter 4: Linux Filesystem Administration.  It’s actually sort of fitting that I ended up learning RAID administration while in the midst of these labs.  I assume RAID will be covered later in the book, so it won’t be wasted time (and with regard to the RHCE, it definitely wasn’t a waste even if it doesn’t show up).

In fact, I messed up part of Lab 1 by putting a bad path into the fstab file and had to figure out how to recover from it.  I was rather proud of myself for getting it fixed, since the fix wasn’t prescribed and I made use of the previous chapter (GRUB & Booting) to get into emergency mode and correct the line.
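
For posterity, the recovery amounts to booting into single-user/emergency mode and fixing the line (this is the general RHEL-era procedure as I understand it; your kernel line will differ):

    # at the GRUB menu, edit the kernel line and append:  single
    # the root filesystem may come up read-only, so remount it first
    mount -o remount,rw /
    vi /etc/fstab       # correct the bad device path
    reboot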

I’ll get back into it tomorrow; it’s definitely been too long of a break.  We’ll see how I do.  I think before I go for any test, I’ll go back through the book and complete all the labs, and when I can do them blindfolded I’ll be ready.

EDIT: I checked out a few more books on Amazon for the RHCE and there are several newer, just as highly rated books available.  I’ll probably also purchase one of them to get a different perspective on the exam.  The book I currently have has a publication date of 2007, which is practically ancient history in computer time.

Software RAID – Pt. 2

It’s not the next day, but I spent some time messing with mdadm and setting up software RAID on Linux.  Fairly easy process.  I have neither the hardware nor a use for setting something up for real, so I haven’t really seen what it can do in terms of performance, or just how long it takes to resync after growing an array, etc.

To mess with this I cobbled together a couple of bash scripts: one does various array builds, RAID level upgrades, and resizes; the other tears everything down so I can start again.

You can find the build script here (safe_raid.sh) and the teardown script here (unraid.sh).  You need to call them as root, since they manipulate devices and file systems.  Also, be aware that RAID is picky about things (as it should be), so jumping straight through safe_raid will not give you a working RAID volume; you have to wait for some processes, like the initial resync, to complete before moving on.
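
If you’d rather see the moving parts before running the scripts, the heart of safe_raid.sh is along these lines, using loopback files so no real disks are involved (file and device names here are my illustration, not necessarily what the script uses):

    # create two 100 MB backing files and attach them as loop devices
    dd if=/dev/zero of=/tmp/d0.img bs=1M count=100
    dd if=/dev/zero of=/tmp/d1.img bs=1M count=100
    losetup /dev/loop0 /tmp/d0.img
    losetup /dev/loop1 /tmp/d1.img

    # build a RAID 1 array, then wait for the initial resync to finish
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/loop0 /dev/loop1
    mdadm --wait /dev/md0
    cat /proc/mdstat                 # sanity check

    # teardown, roughly what unraid.sh undoes
    mdadm --stop /dev/md0
    losetup -d /dev/loop0
    losetup -d /dev/loop1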