MariaDB Galera Cluster

An interesting post to start our configuration from:

https://www.digitalocean.com/community/tutorials/how-to-configure-a-galera-cluster-with-mariadb-on-ubuntu-12-04-servers

# python-software-properties provides add-apt-repository, needed for the repo step below
apt-get install python-software-properties

apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xcbcb082a1bb943db

add-apt-repository 'deb http://mirror.jmu.edu/pub/mariadb/repo/5.5/ubuntu trusty main'

apt-get update

apt-get upgrade

apt-get install -y galera mariadb-galera-server-5.5 mariadb-client-5.5 libmariadbclient18 mariadb-client-core-5.5 rsync netcat-openbsd
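
A quick optional check of what ended up installed:

mysql --version
dpkg -l | grep -i mariadb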

Create a new file, cluster.cnf, in /etc/mysql/conf.d:

[mysqld]
query_cache_size=0
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
query_cache_type=0
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_provider=/usr/lib/galera/libgalera_smm.so
#wsrep_provider_options="gcache.size=32G"

# Galera Cluster Configuration
wsrep_cluster_name="test_cluster"
wsrep_cluster_address="gcomm://first_ip,second_ip,third_ip"

# Galera Synchronization Configuration
wsrep_sst_method=rsync
#wsrep_sst_auth=user:pass

# Galera Node Configuration
wsrep_node_address="this_node_ip"
wsrep_node_name="this_node_name"
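
Only the Galera Node Configuration block differs between nodes; the rest of cluster.cnf is identical everywhere. On the second node, for example, it would look something like this (IP and name are placeholders):

# Galera Node Configuration
wsrep_node_address="second_ip"
wsrep_node_name="second_node_name"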

Copy the same /etc/mysql/debian.cnf file from the first node to all the other nodes, so the debian-sys-maint credentials match everywhere.
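
One way to distribute it, assuming root SSH access between the nodes (the hostnames are placeholders):

scp /etc/mysql/debian.cnf root@second_node:/etc/mysql/debian.cnf
scp /etc/mysql/debian.cnf root@third_node:/etc/mysql/debian.cnf

Then start the first node, using this command: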

sudo service mysql start --wsrep-new-cluster

On each of the other nodes, you can now start MariaDB as you normally would. They will search for any member of the cluster list that is online. When they find the first node, they will join the cluster.

sudo service mysql start

Logs... logs... logs:

tail -f /var/log/syslog
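
To pick out just the Galera messages, filtering the syslog for wsrep can help:

grep -i wsrep /var/log/syslog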

A MySQL query to show the cluster size:

mysql -u root -p -e 'SELECT VARIABLE_VALUE as "cluster size" FROM INFORMATION_SCHEMA.GLOBAL_STATUS WHERE VARIABLE_NAME="wsrep_cluster_size"'
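
The full set of wsrep_* status variables is also worth checking when something looks wrong:

mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_%'"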

TODO

  • create users with GRANT, and try to connect from the other cluster nodes (a first sketch is below).
  • stop and start all the nodes, keeping at least one up!!!
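
A possible starting point for the first item; the user, password, and database names are placeholders, and the CREATE/GRANT statements should replicate to the other nodes through Galera:

mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS testdb; CREATE USER 'testuser'@'%' IDENTIFIED BY 'testpass'; GRANT ALL PRIVILEGES ON testdb.* TO 'testuser'@'%'; FLUSH PRIVILEGES;"

Then, from one of the other nodes:

mysql -h first_ip -u testuser -p testdb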

** Configuring NGINX Plus for TCP Load Balancing

https://www.nginx.com/blog/mysql-high-availability-with-nginx-plus-and-galera-cluster/

stream {
    upstream db {
        server db1:3306;
        server db2:3306;
        server db3:3306;
    }

    server {
        listen 3306;
        proxy_pass db;
        proxy_connect_timeout 1s; # detect failure quickly
    }
}
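
With the stream block in place, clients connect to the NGINX host instead of a single database node. A quick test, using a placeholder hostname and credentials, that also shows which backend answered:

mysql -h nginx_host -P 3306 -u testuser -p -e 'SELECT @@wsrep_node_name'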

When writes are balanced across all nodes, some transactions can be rejected. This is a consequence of the Galera replication process: when UPDATEs to the same record are executed in parallel on different nodes, a deadlock can occur and the database rejects the transaction.

In some situations, this behavior is acceptable. If an application is unlikely to submit conflicting updates in parallel, and the application code can gracefully handle these very infrequent rejected transactions (by returning an error to the user, for example), then it might not be a serious issue.

If this behavior is not acceptable, the simplest solution is to designate a single primary database instance in the upstream server group, marking one of the other servers as backup and the last one as down:

upstream db {
    server db1:3306;
    server db2:3306 backup;
    server db3:3306 down;
}