We're still doing the root cause analysis of what happened here, and we'll let you know when we find out. The problem occurred at the company where we host our systems, not at Chumby itself.
As for what chumby owners should do under these circumstances: chumbys that were active during this outage should still have had their alarms go off. Restarting a chumby during the outage, however, would have resulted in alarm failures, since a restarted chumby has to fetch its alarms from the server again.
We have a plan for better offline operation, but it was slated a bit further out, once overall functionality had stabilized. The biggest issue is that changing offline mode requires firmware updates, which are more difficult and failure-prone to deploy.
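To make the failure mode concrete, here's a minimal sketch of the kind of persistent alarm cache that would address the restart problem. The server URL, cache path, and JSON alarm feed are all hypothetical and don't reflect the actual chumby firmware:

```python
import json
import os
import urllib.request

SERVER_URL = "https://example.com/alarms"  # hypothetical alarm feed
CACHE_PATH = "/var/cache/alarms.json"      # hypothetical persistent cache

def fetch_alarms():
    """Fetch the alarm list from the server, falling back to a local
    cache if the server is unreachable (e.g. during a hosting outage)."""
    try:
        with urllib.request.urlopen(SERVER_URL, timeout=10) as resp:
            alarms = json.load(resp)
        # Persist the last known-good copy so a reboot during an
        # outage still has alarms to fire.
        with open(CACHE_PATH, "w") as f:
            json.dump(alarms, f)
        return alarms
    except OSError:
        # Server or network is down: fall back to the cached copy.
        if os.path.exists(CACHE_PATH):
            with open(CACHE_PATH) as f:
                return json.load(f)
        return []  # no cache yet: behave as if no alarms are set
```

A device that stays powered on keeps its last-fetched alarms in memory, which is why active chumbys rode out the outage; a restarted device has no in-memory state and, without a cache like the one above, comes up with nothing to fire.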
Today will no doubt be consumed by internal meetings to post-mortem this event and to plan how to insulate chumbys from this sort of problem in the future. We're actively committed to reducing customer impact from these kinds of events.
The timing, of course, couldn't be worse - we're *officially* launching the chumby today, and I wake up to a forum full of posts about an outage, and a string of emails from IT discussing the issue. They were up all night working with the hosting company to fix this.
But then, what else could we expect? Murphy always strikes at exactly these times.