Wednesday, July 18, 2012

Day 20–Slip it to the right


Note: Cross-posted from my roadtrip blog because the post has something remotely to do with scheduling.

        Miles   MPG   Avg. Speed (mph)
Today     398   44.1   59
Trip     6129   46.7   49

Food (today/budget): $15 / $127
Hotel (today/budget): $100 / $100
Trip Savings: AAA – $26, PriceLine – $945, Real $$ – $290
image

Today was driving.  From Spokane, out of Washington, across Idaho, and then down through Montana almost to Wyoming and Yellowstone.  The section of Idaho we went through was all mountains, so no potato for Aryn.

We did stop somewhere in Montana to get a huckleberry shake, which was quite good.

The plan was for three nights here, then on across South Dakota and then up to Grand Forks, but we’re behind schedule.

Much like with a software project, I had a perfectly reasonable schedule when we left Orlando.  I researched the driving time between cities and how long we’d stay in each place, put it in Excel, added three days for unexpected contingencies, and figured we were set.

Then, early in the project drive, scope-creep wormed its way in.  A night in Austin when we were supposed to push through to Carlsbad, an extra night in Albuquerque to avoid getting to Vegas on a Friday, an extra night in Vegas to see a third show – all reasonable, and, hey, we had that contingency time, and we could probably make up a day between Vegas and San Francisco anyway. 

Well, the convention in San Francisco made making up the day and arriving early not so feasible, which put us on track to arrive home on the 24th.  On schedule, but with no contingency.

Then the tire problem ate three hours of the morning on the way out of San Francisco … three hours isn’t a problem, right?  But it cascaded into us coming out of Crater Lake at dusk, which doubled the time it took to get off the mountain and forced us to stop short of Seattle (cascaded in the Cascades, get it?).  There’s another day slipped onto the right of the schedule.

The routing off of Rainier yesterday, which took us back toward Tacoma and Seattle and around the north side of the mountain, was a complete cluster.  I’d originally headed east out of the park, toward Yakima, which would have put us on track.  But the road down to Yakima was under construction, down to one lane in places, and it’s the longer road – after coming down from Sequoia in the dark and coming down from Crater Lake in the dark, I didn’t want to chance a third time and wind up behind schedule. 

Look how well that worked out.

So the remaining schedule looks like:

7/18/2012 Yellowstone
7/19/2012 Yellowstone
7/20/2012 Black Hills
7/21/2012 Grand Forks
7/22/2012 Grand Forks
7/23/2012 Grand Forks
7/24/2012 Minneapolis
7/25/2012 Chicago
7/26/2012 Atlanta
7/27/2012 Orlando

I emailed work and let them know I won’t be in until Monday, 7/30.

Wednesday, February 1, 2012

Consolidating My Cloud Storage–Part I: Backups


Many years ago, when it first came out, I used Carbonite for online backups.  After a while, I found that I had so much NAS storage in my house that I stopped using Carbonite.  Then a year or so ago, I moved into an apartment and decided to go with offsite backup for my data again – in a house, you’re the only one you have to worry about burning the place down, but in an apartment you’re at the mercy of the dumbass next door – so I signed up with Mozy.

These are both good, cost-effective services, but over time I’ve found myself either signing up for or hunting down other uses for cloud storage.  There are the obvious services like Dropbox for file syncing, but I’d also like to be able to mount cloud storage and use it directly, without using local disk space.  So I also looked at TntDrive, which lets you mount Amazon S3 buckets as a drive.

At $9.99 a month for Mozy, the same for a larger Dropbox, and $39.95 for TntDrive, we’re at just under $280 for the first year, then about $240 for each year after that, assuming no upgrade costs for TntDrive.  Mozy has a new Stash service in beta, which is somewhat similar to Dropbox for the PCs attached to your Mozy account, but sometimes I need to share a large file with someone who doesn’t even have Dropbox.  My mom, for instance, isn’t going to install Dropbox.  And then there’s delivering processed video files to customers – for that I use S3, because I can upload the file and then send my customer a download link.
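For the record, the arithmetic (using the list prices above):

    mozy     = 9.99 * 12   # Mozy, per year
    dropbox  = 9.99 * 12   # larger Dropbox tier, per year
    tntdrive = 39.95       # TntDrive license, one-time

    print(mozy + dropbox + tntdrive)   # 279.71 -- call it $280 for the first year
    print(mozy + dropbox)              # 239.76 -- about $240 each year after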

I had been considering a switch to Megaupload for that, but …

Anyway, I’ve always used CloudBerry’s free S3 Explorer for managing my S3 buckets, but I was still using a version I’d downloaded years ago and never bothered to upgrade, so I decided to see what the new version might offer.  Turns out that’s quite a bit, starting with a whole product for backups: CloudBerry S3 Backup.

My main requirement for a backup solution is that it be as transparent and timely as Mozy or Carbonite.  Full system backups I can do locally to NAS periodically; the offsite solution is for data.  I don’t want to have to run it manually.  CloudBerry meets this requirement with both scheduled and continuous, real-time backup.  We’ll give that a try and see how it works.

After downloading the demo and installing it, the first step is to create a Backup Plan and … holy crap!

image

S3, Azure, Google, and, like, a dozen other cloud storage services, including some I think they must have made up because I’ve never heard of them.  So, clearly, my storage target is not limited to S3 and I like that.  Options are good.

Next we enter the connection information for the S3 account and specify a bucket:

image

Note the little hint at the bottom of the dialog – mounting cloud storage as a virtual disk is one of the things I want to do.  The option here is read-only, which actually fits one of my uses for the feature.  I want my kids to be able to access my main music library, but not be able to change it (I don’t care what they do with their music, but I’ll take no chances with my Katy Perry MP3s).  We’ll check that feature out in a later post.
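As an aside, the access key and secret key in that dialog are the same pair you’d use to talk to S3 from code.  A minimal sketch with Amazon’s boto3 Python SDK – not what CloudBerry uses internally, and the keys and bucket name here are placeholders:

    import boto3

    # Same credential pair the CloudBerry dialog asks for; values are placeholders.
    s3 = boto3.client(
        "s3",
        aws_access_key_id="AKIA...",
        aws_secret_access_key="...",
    )

    # List a few objects in a bucket to prove the connection works.
    resp = s3.list_objects_v2(Bucket="my-backup-bucket", MaxKeys=5)
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])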

Next we specify a new S3 bucket to contain the backup and select its location.  This is one of the things that I like about an S3 solution – this degree of control may not be for everyone, but, well, I have control-issues, so this makes me happy.

image

Um … did I mention lots of control?
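Under the hood, that location choice is just S3’s region setting.  Creating a bucket in a specific region from code looks like this (boto3 sketch; the bucket name is made up):

    import boto3

    s3 = boto3.client("s3", region_name="us-west-2")

    # Create the backup bucket in a specific region -- the same choice
    # the CloudBerry dialog exposes.
    s3.create_bucket(
        Bucket="pj-backup-bucket",
        CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
    )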

image

I went with Advanced Mode to support multiple versions of files and block-level backups (backing up only the changed segments of files).  I also turned on the Volume Shadow Copy Service, since I want this backing up changes continuously while I use the computer.
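For anyone wondering what “backing up only the changed segments” means mechanically, here’s a toy sketch of the general block-hashing idea – not CloudBerry’s actual algorithm:

    import hashlib

    BLOCK_SIZE = 4 * 1024 * 1024  # 4MB blocks; the real block size is up to the tool

    def changed_blocks(path, previous_hashes):
        """Yield (index, data) for blocks whose hash differs from the last run."""
        with open(path, "rb") as f:
            index = 0
            while True:
                data = f.read(BLOCK_SIZE)
                if not data:
                    break
                digest = hashlib.sha256(data).hexdigest()
                if previous_hashes.get(index) != digest:
                    yield index, data  # only this block needs uploading
                previous_hashes[index] = digest
                index += 1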

Then we select the files and directories we want to back up:

image

A nice feature here is the ability to add a network share to your backup, something not found in services like Mozy or Carbonite (last I checked).

But something those services do offer that CloudBerry doesn’t seem to is Explorer integration:

image

It would be nice to be able to add a file or folder to a backup plan from Explorer – because opening the CloudBerry app and changing the plan there is too much work and I’m lazy.  It’d be a bit more work than Mozy’s version, because Mozy only has one backup plan.  I’d suggest an “Add to CloudBerry Backup >” menu with submenus for the applicable plans. 

More options:

image

And another suggestion: I’d like to be able to specify filters like these per folder/subfolder.  For instance, I might have a folder for downloaded apps that I want to keep the installs for (exe, msi, etc.), but I’d like to ignore the exes and dlls in my Visual Studio projects.  That would be interesting … because it would satisfy my control issues …
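To make the suggestion concrete, here’s the kind of per-folder rule table I have in mind (a sketch; the paths and patterns are made up):

    import fnmatch

    # Hypothetical per-folder exclusion rules: the most specific folder wins.
    rules = {
        r"C:\Dev\Projects":   ["*.exe", "*.dll", "*.pdb"],  # ignore build output
        r"C:\Downloads\Apps": [],                           # keep the installers
    }

    def excluded(path):
        # Apply the longest (most specific) matching folder's patterns.
        matches = [f for f in rules if path.lower().startswith(f.lower())]
        if not matches:
            return False
        patterns = rules[max(matches, key=len)]
        return any(fnmatch.fnmatch(path, p) for p in patterns)

    print(excluded(r"C:\Dev\Projects\App\bin\App.exe"))   # True -- skipped
    print(excluded(r"C:\Downloads\Apps\setup.exe"))       # False -- backed up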

image

Compression and encryption are self-explanatory, but Reduced Redundancy Storage requires some explanation.  According to Amazon, standard S3 offers 99.999999999% durability and can survive concurrent data loss in two facilities – in layman’s terms, “really, really good”.  RRS “only” offers 99.99% durability and survives data loss in a single facility.  What does that mean in practice?  A 99.99% annual durability works out to an expected loss of about one object in ten thousand per year.
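Back-of-the-envelope, using Amazon’s published durability figures:

    # Expected objects lost per year = objects * (1 - annual durability)
    objects = 10_000
    for name, durability in [("Standard S3", 0.99999999999), ("RRS", 0.9999)]:
        lost = objects * (1 - durability)
        print(f"{name}: {lost:.7f} objects/year")
    # Standard S3: 0.0000001 -- one loss every ~10 million years at this scale
    # RRS:         1.0000000 -- about one object a year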

Lots of options for cleaning up old versions of files and deleted files:

image

And here’s where I can make the setting that will let me replace Mozy:

image

Real-time Backup is what I’m looking for myself, but the product’s flexible and powerful enough to handle a variety of backup strategies.  Why real-time?  Well, this is what my Mozy status looks like today:

image

I have it scheduled for twice a day, and the laptop was on most of last night and all day the day before … but it hasn’t backed up in two days.  I like the idea of the backup application being notified of changes and backing up right then, with possibly a scheduled backup to, well, back up the backup.
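The “notified of changes” part is a standard pattern – the OS tells you when files change, and you back up just those.  A rough sketch of the idea in Python using the watchdog library (obviously not CloudBerry’s code; the path and the upload hook are placeholders):

    import time
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    def queue_for_upload(path):
        print("would back up:", path)  # stand-in for the real upload step

    class BackupHandler(FileSystemEventHandler):
        def on_modified(self, event):
            if not event.is_directory:
                queue_for_upload(event.src_path)

    observer = Observer()
    observer.schedule(BackupHandler(), r"C:\Users\Paul\Documents", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)  # the observer thread does the work
    except KeyboardInterrupt:
        observer.stop()
    observer.join()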

And finally email notifications for the backup status:

image

With the configuration done, I ran the backup.  My dataset for this was about 14GB, which brings us to an area in which CloudBerry Backup could be improved: there doesn’t seem to be a way to see the original and compressed totals for a given backup plan.  That’s information I’d like to have, so I had to figure out the original size manually by reviewing the folders I’d selected for backup:

image 

Improvement Opportunity: In the review of the selected folders, the UI for a partially selected folder is very faint and hard to distinguish from an unselected folder.  Compare this with Mozy’s view:

image – CloudBerry
image – Mozy

The Mozy representation is far more usable.

I used S3 Bucket Explorer to determine the size of the files on S3 and came up with a little less than 12GB after compression.  CloudBerry’s S3 Explorer can also determine bucket size, in a much nicer interface that lets you drill down and graphically see the space used by different folders:

image

Unfortunately for me, the free version is limited to 2GB of objects for this report. 
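If that cap bites, the same report is easy enough to script against the S3 API.  A boto3 sketch that totals object sizes by top-level folder (bucket name made up):

    import boto3
    from collections import Counter

    s3 = boto3.client("s3")
    sizes = Counter()
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket="my-backup-bucket"):
        for obj in page.get("Contents", []):
            top = obj["Key"].split("/", 1)[0]  # group by top-level "folder"
            sizes[top] += obj["Size"]

    for prefix, total in sizes.most_common():
        print(f"{prefix}: {total / 2**30:.2f} GB")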

Improvement Opportunity: I’d like to see this same capability in the backup tool – for a given Backup Plan I’d like to see, in the same graphical drill-down, the original size on disk and the compressed size on S3.

My next test was of the “continuous” feature, so I created a new folder in my documents library and dropped a Word document in it, using S3 Explorer to watch the bucket.  Within about two minutes, the new folder and Word document showed up on S3.  This is a feature Mozy doesn’t offer.  It may not be something I use for all folders, but CloudBerry offers the flexibility to have multiple backup plans – scheduled, continuous, or on-demand – so I can set the appropriate plan for each type of data.
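Watching the bucket by hand works, but it scripts just as easily.  A quick polling sketch with boto3 (bucket and prefix made up):

    import time
    import boto3

    s3 = boto3.client("s3")
    seen = set()
    while True:
        resp = s3.list_objects_v2(Bucket="my-backup-bucket", Prefix="Documents/")
        for obj in resp.get("Contents", []):
            if obj["Key"] not in seen:
                print(time.strftime("%H:%M:%S"), "appeared:", obj["Key"])
                seen.add(obj["Key"])
        time.sleep(10)  # poll every ten seconds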

Next I wanted to test performance, so I created a new folder with three 700MB files.  I then added these to a CloudBerry backup plan and to Mozy in order to compare the upload speed of each.

image

image

Ouch … what’s up with this?  CloudBerry averages about 500KB/second when it’s backing up files, while Mozy averages 3.5-4MB/second – a 7x to 8x difference in upload performance. 

CloudBerry, of course, is dependent on the S3 infrastructure, so this can probably be blamed on Amazon.  This difference doesn’t thrill me, but I’m more concerned with the restore (download) performance.  After all, the backup will be in the background and transparent, but when it comes time to restore it’ll be because a Bad Thing has happened and I want access to my data.

image

image

It seems Mozy still has the edge here, but the 4.2MB/second restore speed is completely respectable.
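If you want to sanity-check raw S3 throughput outside of any backup tool, a timing sketch like this does it – it uploads an incompressible test file so compression can’t flatter the numbers (bucket name made up):

    import os
    import time
    import boto3

    s3 = boto3.client("s3")
    path = "testfile.bin"
    with open(path, "wb") as f:
        f.write(os.urandom(100 * 1024 * 1024))  # 100MB of random (incompressible) data

    start = time.time()
    s3.upload_file(path, "my-backup-bucket", "perf/testfile.bin")
    elapsed = time.time() - start
    print(f"{os.path.getsize(path) / elapsed / 1024:.0f} KB/s")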

So I’ll run this for about a week and then see where we are.