Galera

Thanks Percona and attendees for a great Percona Live UK 2011

Many people have asked me what I think was the best thing about Percona Live UK. I always answered: that it happened in the first place! This was the first time we had such a large and high-quality MySQL conference in Europe, and many well-known bloggers and speakers who can't always travel to Santa Clara were present.

More importantly, many MySQL users who don't travel to Santa Clara could now see them speak and meet them. I met at least 4 hard-core MySQL DBAs from Helsinki whom I had never met before. We had to travel to London to meet each other! (But if you are in Helsinki, we have our first MySQL user group meeting tomorrow, this should fix things!)

When I walked into the conference venue, I introduced myself to a person who stood there talking to Baron Schwartz. He introduced himself as Shlomi Noach. Then we started laughing: we know each other quite well, since Shlomi and I run the annual MySQL awards as co-secretaries. I had never realized we had not met in person before!

The content of the program was of very high quality. In the past few years I've come to value informal one-to-one discussions, rather than the official lectures, as my primary source of new information, but at this conference I actually chose to attend as many talks as possible and learned something new from many of them.

Galera 1.0 is here, Severalnines support, more to come

There are moments in history that become signposts everyone remembers for the rest of their lives. Like where you were when you heard the news that JFK had been shot, or when the planes hit the WTC twin towers on 9/11. If you work with MySQL and high availability, then this week will be remembered as such. And if you're a European MySQL geek, you will remember that we were at the Percona Live UK conference when Galera clustering 1.0 was announced. Btw, the conference itself was also historical, at least for European MySQL users. I will have to write a separate blog post about the conference, because it was a great one, and I have to post the slides of my 2 talks too. But this blog post is dedicated to the stable release of Galera.

Helsinki MySQL User Group, Tue Nov 1 @ 18:00

In Finnish: MySQL user meetup in Helsinki on November 1st. Click the link below to sign up; there you will also find more information in Finnish.

Finally it's here! So many of you have asked about it over the years. Markus and the other Elisa guys. Osma and Ilkka at Habbo Hotel. And others... MySQL was born in Helsinki, InnoDB was born in Helsinki, a lesser-known database and MySQL storage engine called Solid was born in Helsinki, and 2 great replication companies, Continuent with multiple generations of clustering for MySQL and Codership with Galera, are Helsinki companies. And amidst this embarrassment of riches, what did we not have?

A MySQL User Group.

Galera disk bound workload revisited

Update 2012-01-09: I have now been able to understand the poor(ish) results in this benchmark. They are very likely due to a bad hardware setup and neither Galera nor InnoDB is to blame. See https://openlife.cc/blogs/2012/january/re-doing-galera-disk-bound-bench…

People commenting on my results for benchmarking Galera on a disk-bound workload seemed to be confused by the performance degrading when writing to more than one master, and not convinced by my speculations about the reasons. Since sysbench 0.5 implements its benchmarks as Lua scripts, it was temptingly easy to tweak those a little to see if my speculations were correct. So yesterday I ran the tests again with a slightly modified sysbench workload. (Everything else is identical, so see the previous article for details on the setup.)
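
As an illustration of that workflow (not the exact change made here), this is roughly how a stock sysbench 0.5 Lua script can be copied, tweaked and run; the paths, hostname, credentials and sizes below are placeholders of my own, not the values used in the benchmark:

    # Copy the stock OLTP script and edit the copy, leaving the original intact.
    cp sysbench/tests/db/oltp.lua sysbench/tests/db/oltp_tweaked.lua

    # Run the modified script against one Galera node. Host, user, password and
    # table sizes are placeholders, not the benchmark's actual parameters.
    sysbench --test=sysbench/tests/db/oltp_tweaked.lua \
      --mysql-host=192.168.0.1 --mysql-user=sbtest --mysql-password=sbtest \
      --oltp-tables-count=10 --oltp-table-size=40000000 \
      --num-threads=32 --max-time=300 --max-requests=0 \
      run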

More Galera lessons: parallel slave, out of order commits and deadlocks

Two concepts I've actively advocated during the past few years are both supported by Galera: a multi-threaded (aka parallel) slave, and allowing out-of-order commits on such a parallel slave. In trying to optimize Galera settings for the disk-bound workload I just reported on, I also came to test these alternatives.

Single-threaded vs multi-threaded slave

All of my previously reported tests have been run with wsrep_slave_threads=32. For the memory-bound workload there was no difference between using one thread or more, but I left it at 32 "just in case". For the disk-bound workload there is a clear benefit in having a multi-threaded slave.
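
For reference, wsrep_slave_threads is a dynamic variable, so it can be changed on a running node without a restart; the sketch below uses placeholder credentials, and the same value can of course also go into my.cnf:

    # Set the number of parallel applier threads on a running Galera node
    # (equivalently, put "wsrep_slave_threads = 32" in my.cnf).
    # Credentials are placeholders.
    mysql -u root -p -e "SET GLOBAL wsrep_slave_threads = 32;"

    # Check the value currently in effect.
    mysql -u root -p -e "SHOW VARIABLES LIKE 'wsrep_slave_threads';"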

Benchmarking Galera on a disk bound workload

Update 2012-01-09: I have now been able to understand the poor(ish) results in this benchmark. They are very likely due to a bad hardware setup and neither Galera nor InnoDB is to blame. See https://openlife.cc/blogs/2012/january/re-doing-galera-disk-bound-bench…

After getting very good results with Galera on a memory-bound workload, I was eager to also test a disk-bound workload. This time, too, I learned a lot about how Galera behaves, and I will try to share those findings here.

Setup

The setup for these tests is exactly the same as in last week's benchmarks, except that I've now loaded the database with 10 tables containing 40 million rows each. This adds up to a database of about 90 GB.
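
For the record, such a data set can be loaded with sysbench 0.5 along the lines of the sketch below; the script path, host and credentials are my assumptions, not necessarily what was used here:

    # Create and populate 10 tables of 40 million rows each in the sbtest schema.
    # Host, user and password are placeholders.
    sysbench --test=sysbench/tests/db/oltp.lua \
      --mysql-host=192.168.0.1 --mysql-user=sbtest --mysql-password=sbtest \
      --oltp-tables-count=10 --oltp-table-size=40000000 \
      prepare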

Running sysbench tests against a Galera cluster

So, vacation is over and I was in luck: already during the first week I had ample time to finally put Galera replication to the test. It was a great experience: I learned a lot, and eventually got the great results I was hoping to see.

Again I started by just running the standard sysbench OLTP read-write test. Since this is a commonly used benchmark, it produces numbers that are comparable with those of others running the same benchmark, including, as it happens, the Galera developers themselves.
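
For context, a run of that standard test looks roughly like the sketch below, typically repeated over a range of client thread counts; the host, credentials, table sizes and durations are placeholders rather than the exact parameters used in these benchmarks:

    # Standard sysbench 0.5 OLTP read-write test, repeated for increasing
    # numbers of client threads. All connection details and sizes are placeholders.
    for threads in 1 2 4 8 16 32 64 128 256; do
      sysbench --test=sysbench/tests/db/oltp.lua \
        --mysql-host=192.168.0.1 --mysql-user=sbtest --mysql-password=sbtest \
        --oltp-tables-count=10 --oltp-table-size=1000000 \
        --oltp-read-only=off \
        --num-threads=$threads --max-time=60 --max-requests=0 \
        run
    done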

These tests were run on an 8-core server with 32 GB of RAM, with the disks on an EMC device with a 2.5 GB write cache.

The ultimate MySQL high availability solution

A while ago Baron blogged about his utter dislike for MMM, a framework supposedly used as a MySQL high-availability solution. While I have no personal experience with this framework, reading the comments on that post I'm indeed convinced that Baron is right. For one thing, they include the creator of MMM agreeing.

Baron's post still suggests - and having spoken with him I know that's what he has in mind - that a better solution could be built, and that it's just MMM that has a poor design. I'm going to go further than that: personally, I've come to think that this family of so-called clustering suites is categorically the wrong approach to database high availability. I will now explain why they fail, and what the right way is instead.
