13 Feb 2018
16:42 UTC
Jamie Nguyen
Around the time of the incident, we'd noticed that the tail management process had stopped responding to update commands. This doesn't directly cause significant problems by itself, but it was something we needed to rectify, so we restarted the management process; the restart caused a load spike on the tail for around 10-15 minutes. The tail has been performing normally for the rest of the day.
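For anyone curious how a spike like this shows up, the load averages on a Linux box such as a tail can be polled from /proc/loadavg. The sketch below is purely illustrative (the threshold is a placeholder, not our actual monitoring):

    import time

    def read_loadavg(path="/proc/loadavg"):
        """Return the 1-, 5- and 15-minute load averages as floats."""
        with open(path) as f:
            one, five, fifteen = f.read().split()[:3]
        return float(one), float(five), float(fifteen)

    THRESHOLD = 4.0  # placeholder; "normal" depends on the hardware

    # Poll once a minute until the 5-minute average settles.
    while True:
        one, five, fifteen = read_loadavg()
        print(f"load: {one:.2f} {five:.2f} {fifteen:.2f}")
        if five < THRESHOLD:
            break
        time.sleep(60)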
13 Feb 2018
09:17 UTC
Jamie Nguyen
It looks like the load was extremely high on this tail, though the tail didn't actually go offline. Any affected Cloud Servers would likely have suffered slow or frozen disk access; their operating systems may also have appeared to hang.
We've identified and remedied the issue, and the load on tail56 is back within normal limits. Your Cloud Servers should be performing normally again.
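If you'd like to sanity-check disk access from inside one of your Cloud Servers, timing a small synced write is a simple probe. This is only a sketch (the file path is a placeholder, and what counts as "slow" is yours to judge):

    import os
    import time

    def disk_write_latency(path="/tmp/latency-probe", size=4096):
        """Time a small write plus fsync to 'path'; returns seconds elapsed."""
        data = os.urandom(size)
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
        try:
            start = time.monotonic()
            os.write(fd, data)
            os.fsync(fd)  # force the data through to the underlying disc
            return time.monotonic() - start
        finally:
            os.close(fd)
            os.remove(path)

    print(f"4 KiB synced write took {disk_write_latency() * 1000:.1f} ms")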
13 Feb 2018
09:10 UTC
Andrew Ladlow
We've been alerted to an unexpected outage of tail56 at approximately 09:05 UTC on Tuesday 13th.
tail56 is one of the physical boxes that forms part of our Cloud infrastructure. Some of your Cloud Servers will have discs running on this hardware. If you've got an affected Cloud Server, you may notice it is unresponsive or offline.
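If you're not sure whether one of your servers is affected, a plain TCP connection test from another machine is often the quickest check. The hostname and port below are placeholders for your own server's details:

    import socket

    def is_reachable(host, port=22, timeout=5.0):
        """Attempt a TCP connection; True if the handshake completes."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Replace with your own server's hostname and an open port.
    print(is_reachable("example.vm.bytemark.co.uk", 22))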
Sorry about this! We're investigating and working to restore full service as soon as possible. We'll post updates here.
If you're experiencing any problems with your services at Bytemark that you think might be related, please do get in touch.
Expected resolution time updated: We are currently investigating.