Tuesday, May 27, 2014

MySQL Fabric: Musings on Release 1.4.3

As you might have noticed in the press release, we just released MySQL Utilities 1.4.3, containing MySQL Fabric, as a General Availability (GA) release. This concludes the first chapter of the MySQL Fabric story.

It all started with the idea that it should be as easy to set up and manage a distributed deployment of MySQL servers as it is to manage the MySQL servers themselves. We also noted that the features users were most interested in were sharding and high-availability. Since we also recognized that every user has different needs and wants to customize the solution, we set out to create a framework that would support sharding and high-availability, but also other solutions.

With the release of 1.4.3, we have a range of features that are now available to the community, and all under an open source license and wrapped in an easy-to-use package:

  • High-availability support using built-in slave promotion in a master-slave configuration.
  • A framework with an execution machinery, monitoring, and interfaces to support management of large server farms.
  • Sharding using hash and range sharding. Range sharding is currently limited to integers, but hash sharding supports anything that looks like a string.
  • Shard management support to move and split shards.
  • Support for failure detectors, both built-in and custom ones.
  • Connectors with built-in load balancing and fail-over in the event of a master failure.
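
To make the range and hash sharding concrete, here is a small illustrative sketch in plain Python of the kind of key-to-shard routing a Fabric-aware connector performs. This is not MySQL Fabric's actual code; the class names and the choice of MD5 for the hash are assumptions for illustration only.

```python
import bisect
import hashlib

class RangeShardMap:
    """RANGE sharding: each shard owns all keys from its lower bound upward."""
    def __init__(self, bounds):
        # bounds: sorted list of (lower_bound, group_id)
        self.lowers = [b for b, _ in bounds]
        self.groups = [g for _, g in bounds]

    def lookup(self, key):
        # Find the right-most shard whose lower bound is <= key.
        idx = bisect.bisect_right(self.lowers, key) - 1
        if idx < 0:
            raise ValueError("key below first shard boundary")
        return self.groups[idx]

class HashShardMap:
    """HASH sharding: anything string-like is hashed onto the set of groups."""
    def __init__(self, groups):
        self.groups = groups

    def lookup(self, key):
        digest = hashlib.md5(str(key).encode()).hexdigest()
        return self.groups[int(digest, 16) % len(self.groups)]

ranges = RangeShardMap([(1, "group-A"), (1001, "group-B")])
print(ranges.lookup(500))    # group-A
print(ranges.lookup(1500))   # group-B
```

Note how range sharding needs an ordered, comparable key (hence the integer restriction), while hash sharding only needs something that can be turned into bytes.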

Beyond MySQL Fabric 1.4.3

As the MySQL Fabric story develops, we have a number of challenges ahead.

Loss-less Fail-over. MySQL 5.7 has extended the support for semi-synchronous replication so that a transaction is not committed until it has been replicated to at least one slave server. With this support, we can have truly loss-less fail-over: you cannot lose a transaction if a single server fails.
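
On the master side, the 5.7 behavior is controlled by the semi-sync plugin variables; a minimal configuration fragment could look like the following (assuming the semi-sync plugins are already installed):

```ini
# my.cnf on the master (MySQL 5.7)
rpl_semi_sync_master_enabled    = ON
# AFTER_SYNC gives the "loss-less" behavior: the commit is acknowledged
# to the client only after at least one slave has received the transaction.
rpl_semi_sync_master_wait_point = AFTER_SYNC
```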

More Fabric-aware connectors. We currently have support for Connector/J, Connector/PHP, and Connector/Python, but one common request is a Fabric-aware C API. This would serve both applications developed in C/C++ and connectors based on the MySQL C API, such as the Perl and Ruby connectors.

Multi-Node Fabric Instance. Many have pointed out that the Fabric node is a single point of failure. It is indeed a single node, but if the Fabric node goes down, the system does not stop working. Since the connectors cache the data, they can "run on the cache" for the time it takes to bring the Fabric node up again. Procedures being executed will stop, but once the Fabric node is on-line again, execution will resume from where it left off. To ensure that the meta-data (the information about the servers in the farm) is not lost in the event of a machine failure, MySQL Cluster can be used as the storage engine; it will then ensure that your meta-data is safe.

There are, however, a few advantages in having support for multiple Fabric nodes:

  • The most obvious advantage is that execution can fail over to another node, so there is no interruption in the execution of procedures. If the fail-over is built in, you avoid the need for external clusterware to manage several Fabric nodes.
  • With several Fabric nodes available to deliver data, you improve responsiveness to bursts in meta-data requests. These can occur when a large number of connectors are brought on-line at the same time.
  • If you have multiple data centers, having a local copy of the data to serve the applications deployed in the same center improves locality and avoids an unnecessary round-trip over the WAN to fetch meta-data.
  • With several nodes executing management procedures, you can improve scaling by running several management procedures in parallel. This would require some solution to ensure that procedures do not step on each other.

Location Awareness. In deployments spread over several data centers, the location of all the components suddenly becomes important. There is no reason for a connector to be directed to a remote server when a local one suffices, but that requires some sort of location awareness in the model, allowing the location of servers (or other components) to be given.

Extending the model by adding data centers is not enough, though. The location of components within a data center might also matter. For example, if a connector is located in a particular rack in the data center, going to a different rack to fetch data might be undesirable. For this reason, the location awareness needs to be hierarchical and support several levels, e.g., continent, city, data center, hall, rack, etc.
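
One simple way to picture such a hierarchical model: represent each location as a path and prefer the server that shares the longest location prefix with the connector. This is only a sketch of the idea, not a proposed Fabric design; all names here are made up.

```python
# Locations are hierarchical paths: (continent, city, data center, rack).
# A connector prefers the server sharing the longest prefix with itself.

def common_prefix_len(a, b):
    """Number of leading levels two location paths have in common."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def closest_server(connector_loc, servers):
    # servers: dict mapping server name -> location tuple
    return max(servers, key=lambda s: common_prefix_len(connector_loc, servers[s]))

servers = {
    "db1": ("europe", "stockholm", "dc1", "rack7"),
    "db2": ("europe", "stockholm", "dc2", "rack1"),
    "db3": ("us", "ashburn", "dc1", "rack3"),
}
print(closest_server(("europe", "stockholm", "dc1", "rack2"), servers))  # db1
```

A connector in Stockholm's dc1 picks db1 (same hall, different rack) over db2 (different data center), and a US connector would pick db3, which is exactly the locality behavior described above.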

Multi-Shard Queries. Sharding can improve performance significantly since it splits the data horizontally across several machines, so each query goes directly to the right shard. In some cases, however, you also need to send queries to multiple shards. There are a few reasons for this:

  • You do not have the shard key available, so you want to search all servers for some object of interest. This of course affects performance, but in some cases there are few alternatives. Consider, for example, searching for a person by name and address when the database is sharded on the SSN.
  • You want to generate a report of several items in the database, for example, finding all customers over 50 that have more than two cars.
  • You want a summary of some statistic over the database, for example, generate a histogram over the age of all your customers.
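
The last case, the histogram, shows the typical scatter-gather shape: the same query runs on every shard and the application merges the partial results. A minimal sketch in plain Python, with stand-in result sets instead of real per-shard queries:

```python
from collections import Counter

# Stand-ins for per-shard query results (age of each customer on the shard);
# in a real deployment each list would come from a SELECT against one shard.
shard_results = {
    "shard-A": [23, 34, 34, 51],
    "shard-B": [34, 51, 67],
}

def histogram_over_shards(results_per_shard):
    # The "scatter" has already happened; the "gather" merges the
    # partial histograms into one.
    total = Counter()
    for ages in results_per_shard.values():
        total.update(Counter(ages))
    return dict(total)

print(histogram_over_shards(shard_results))  # {23: 1, 34: 3, 51: 2, 67: 1}
```

The merge step depends on the aggregate: counts and sums merge by addition, but an average, for example, has to be gathered as (sum, count) pairs per shard.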

Session Consistency Guarantees. As Alfranio points out, when you use multiple servers in your farm and transactions are sent to different servers at different times, it may well happen that you write one transaction to the master of a group and then try to read something from the same group. If the write has not yet reached the server you read from, you might get an incorrect result. In some cases this is fine, but in other cases there are guarantees you want on your session. For example, you may want to ensure that anything you write is visible to subsequent reads in the same session, or that repeated reads never return older data than a previous read (called "read monotonicity"), or other guarantees on the result sets you get back from the distributed database. This might require connectors to wait for transactions to reach the slaves before reading, but it should be transparent to the application.
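
To illustrate the read-your-writes case, here is a toy sketch of how a connector could enforce it: the session remembers the position of its last write and reads from a replica only if it has applied at least that position, otherwise it falls back to the master. A real connector would track GTIDs rather than the simple counter assumed here, and all names are illustrative.

```python
class Session:
    def __init__(self, master, replicas):
        self.master = master
        self.replicas = replicas
        self.last_write_pos = 0   # position of this session's last write

    def write(self, value):
        self.master["data"].append(value)
        self.master["pos"] += 1
        self.last_write_pos = self.master["pos"]

    def read(self):
        for r in self.replicas:
            if r["pos"] >= self.last_write_pos:   # replica has caught up
                return r["data"]
        return self.master["data"]                # otherwise go to the master

master = {"pos": 0, "data": []}
stale_replica = {"pos": 0, "data": []}

s = Session(master, [stale_replica])
s.write("tx1")
print(s.read())  # ['tx1'] -- served by the master, since the replica lags
```

The same bookkeeping gives read monotonicity if the session also records the position of its last read and applies the same "caught up?" check against it.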

This is just a small set of the possibilities for the future, so it is really going to be interesting to see how the MySQL Fabric story develops.


Unknown said...

Hey Mats, thanks for sharing this awesome post on MySQL Fabric.

CoolKiran said...

Right now a limitation of MySQL Fabric is that it does not allow joins over the sharded data.

Any chance of including this in an upcoming release?

We might have to join information present in two shards. How do we achieve this?

Best Regards,

Mats Kindahl said...

Hi Kiran,

MySQL Fabric does not currently support joins of sharded data if it requires joining between shards, but it still supports joins within a shard.

What is your use-case? I think that is more interesting, since one of the reasons for sharding is performance, and a cross-shard join would kill performance, which sort of defeats the purpose of using sharding.

CoolKiran said...

Hi Mats,

Thanks for your reply.

I am still testing/trying out the MySQL Fabric before we move to stage/prod.

Let's say I have a table employee where the shard key is empid. Shard A has empid ranging from 1 to 1000 and shard B has empid ranging from 1001 to 2001. If I want to get the total employee list for reporting purposes, or if I have to perform some aggregate functions, what should I do? Please suggest.

Mats Kindahl said...

Hi Kiran,

For this you do not need cross-shard joins but rather support for scatter-gather execution (a.k.a. cross-shard queries). This is a different beast and frequently requested.

Currently it is, unfortunately, necessary to issue the scatter-gather from the application, but since this is such a frequent request, making it available is a high priority for us.
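
For the empid example, an application-side sketch could look like the following (plain Python with stand-in result sets; a real application would run the same SELECT against the server of each shard and combine the rows as shown):

```python
# Stand-ins for the result of running the same query on each shard
# (shard A holds empid 1-1000, shard B holds empid 1001 and up).
shard_a_rows = [(1, "Alice", 52000), (2, "Bob", 48000)]
shard_b_rows = [(1001, "Carol", 61000)]

# "Scatter" = run the query once per shard; "gather" = combine in the app.
all_rows = shard_a_rows + shard_b_rows                    # full employee list
total_payroll = sum(salary for _, _, salary in all_rows)  # an aggregate
print(len(all_rows), total_payroll)  # 3 161000
```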

CoolKiran said...

Hi Mats,

Thanks for your reply.

How do I use cross-shard queries to access table data present in more than one shard?

Best Regards,
