Central database cluster malfunction

Expected resolution: 9 Feb 2017, 21:20 UTC
Issue status: Resolved
Date: 15 Feb 2017, 15:19 UTC
Posted by: Patrick Cherry

We've done an analysis of how the postgres cluster failed and why it took so long to get going again. There was a configuration error in how replication was set up between our master, slave and backup database servers, and that has now been rectified.
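For anyone who wants to run a similar sanity check against their own setup, the sketch below (Python, with placeholder hostnames and credentials rather than our actual configuration) asks the primary which standbys are streaming from it via pg_stat_replication, and asks each standby whether it is in recovery:

    # Minimal replication sanity check; hostnames, user and database
    # names below are placeholders, not our real configuration.
    import psycopg2

    PRIMARY = "db-master.example.com"
    STANDBYS = ["db-slave.example.com", "db-backup.example.com"]

    def connect(host):
        # Assumes a 'monitor' role with access to the statistics views.
        return psycopg2.connect(host=host, dbname="postgres", user="monitor")

    # The primary's pg_stat_replication view lists one row per standby
    # currently streaming WAL from it.
    with connect(PRIMARY) as conn, conn.cursor() as cur:
        cur.execute("SELECT client_addr, state, sync_state FROM pg_stat_replication;")
        rows = cur.fetchall()
        print("primary sees %d standby(s): %s" % (len(rows), rows))

    # Each standby should report that it is still in recovery,
    # i.e. replaying WAL rather than accepting writes of its own.
    for host in STANDBYS:
        with connect(host) as conn, conn.cursor() as cur:
            cur.execute("SELECT pg_is_in_recovery();")
            print("%s in recovery: %s" % (host, cur.fetchone()[0]))

If the primary reports fewer standbys than expected, or a standby says it is not in recovery, replication isn't wired up the way you think it is.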

Issue status: Investigating
Date: 9 Feb 2017, 21:38 UTC
Posted by: Patrick Cherry

We're still investigating why the route spontaneously disappeared from two of the servers in our database cluster.

However, the route's disappearance revealed errors in how our postgresql database replication had been configured. These have now been rectified, so we don't expect the same error to happen again.

Issue status: Investigating
Date: 9 Feb 2017, 20:57 UTC
Posted by: Patrick Cherry

Our central postgres and mysql databases malfunctioned this evening when a route to the local IPv6 network disappeared. This will have affected all machine updates on our cloud platform, as well as our support system.

The problem has been rectified now, and services are up again.
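For reference, this is roughly how one can confirm from a host that the expected on-link IPv6 route is present again; the interface name and prefix below are illustrative, not our actual network layout:

    # Rough sketch: check whether the expected on-link IPv6 route exists.
    # The interface name and prefix are illustrative placeholders.
    import subprocess

    IFACE = "eth0"                  # placeholder interface name
    LOCAL_PREFIX = "2001:db8::/64"  # placeholder local network prefix

    def route_present(prefix, iface):
        # 'ip -6 route show dev <iface>' prints the kernel's IPv6 routes
        # for that device; we look for the expected on-link prefix.
        out = subprocess.run(
            ["ip", "-6", "route", "show", "dev", iface],
            capture_output=True, text=True, check=True,
        ).stdout
        return any(line.startswith(prefix) for line in out.splitlines())

    if not route_present(LOCAL_PREFIX, IFACE):
        print("on-link route missing; it could be re-added with:")
        print("  ip -6 route add %s dev %s" % (LOCAL_PREFIX, IFACE))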

As ever, if you are experiencing problems, please raise an urgent ticket and I'll take a look.
