
2019/11/12

Is it OK to use MySQL Shell only to build the InnoDB Cluster, and handle operations through the SQL interface?

TL;DR

  • What I wanted to find out
    • Can you use MySQL Shell only for building a MySQL InnoDB Cluster (and, in the later operations phase, adding and removing members), and drive failover and everything else purely from the SQL interface (the mysql command, scripts, and so on)?
    • Yes, you can
  • …and MySQL Shell is handy when you want to run this kind of quick experiment

Following the same steps as in 日々の覚書: MySQL Shellのdba.deploySandboxInstanceでサクッとmysqldを起動する, I installed mysql-community-server, mysql-shell, and mysql-router-community into an ubuntu:latest (as of 2019/11/12) Docker container.
( ´-`).oO( mysql-community-server and mysql-router-community put the “community” part in different positions, which tripped me up for a moment… and mysql-shell has no “community” at all…
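For reference, the install itself boils down to something like this (a sketch, assuming the mysql-apt-config selections from the linked post are already in place):
# apt-get update
# apt-get install -y mysql-community-server mysql-shell mysql-router-community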
# dpkg -l | grep -i mysql
ii  mysql-apt-config            0.8.14-1                          all          Auto configuration for MySQL APT Repo.
ii  mysql-client                8.0.18-1ubuntu18.04               amd64        MySQL Client meta package depending on latest version
ii  mysql-common                8.0.18-1ubuntu18.04               amd64        Common files shared between packages
ii  mysql-community-client      8.0.18-1ubuntu18.04               amd64        MySQL Client
ii  mysql-community-client-core 8.0.18-1ubuntu18.04               amd64        MySQL Client Core Binaries
ii  mysql-community-server      8.0.18-1ubuntu18.04               amd64        MySQL Server
ii  mysql-community-server-core 8.0.18-1ubuntu18.04               amd64        MySQL Server Core Binaries
ii  mysql-router-community      8.0.18-1ubuntu18.04               amd64        MySQL Router
ii  mysql-shell:amd64           8.0.18-1ubuntu18.04               amd64        MySQL Shell (part of MySQL Server) 8.0
Start mysqlsh and spin up a couple of instances with dba.deploySandboxInstance().
# mysqlsh
 MySQL  JS > dba.deploySandboxInstance(3306)
 MySQL  JS > dba.deploySandboxInstance(3307)
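Each deploySandboxInstance() call prompts for a root password for the new sandbox. If you want to script this part, the same thing should also work non-interactively from the OS shell (a sketch; the password option is documented for deploySandboxInstance, and mysqlsh -e evaluates JavaScript by default):
# mysqlsh -e 'dba.deploySandboxInstance(3306, {password: "root"})'
# mysqlsh -e 'dba.deploySandboxInstance(3307, {password: "root"})'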
Connect directly to one of the instances (whichever you want as the initial master; I think either is fine) and build the InnoDB Cluster.
 MySQL  JS > \c root@localhost:3306
Creating a session to 'root@localhost:3306'
Please provide the password for 'root@localhost:3306':
Save password for 'root@localhost:3306'? [Y]es/[N]o/Ne[v]er (default No): Y

 MySQL  localhost:3306 ssl  JS > dba.createCluster('sakila_cluster')
To add members to the cluster, it looks like you first capture the Cluster object in a variable (via dba.getCluster() here, though dba.createCluster() returns one as well) and then call addInstance().
 MySQL  localhost:3306 ssl  JS > var c = dba.getCluster()
 MySQL  localhost:3306 ssl  JS > c.addInstance('localhost:3307')
Please provide the password for 'root@localhost:3307':
Save password for 'root@localhost:3307'? [Y]es/[N]o/Ne[v]er (default No): Y

NOTE: The target instance 'localhost:3307' has not been pre-provisioned (GTID set is
empty). The Shell is unable to decide whether incremental distributed state
recovery can correctly provision it.
The safest and most convenient way to provision a new instance is through
automatic clone provisioning, which will completely overwrite the state of
'localhost:3307' with a physical snapshot from an existing cluster member. To
use this method by default, set the 'recoveryMethod' option to 'clone'.

The incremental distributed state recovery may be safely used if you are sure
all updates ever executed in the cluster were done with GTIDs enabled, there
are no purged transactions and the new instance contains the same GTID set as
the cluster or a subset of it. To use this method by default, set the
'recoveryMethod' option to 'incremental'.

Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): C
Here, on MySQL 8.0.17 and later, you can pick a mode that uses the CLONE plugin to take care of seeding the initial data. Easy.
Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: localhost:3307 is being cloned from 127.0.0.1:3306
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: localhost:3307 is shutting down...

* Waiting for server restart... ready
* 127.0.0.1:3307 has restarted, waiting for clone to finish...
* Clone process has finished: 59.55 MB transferred in about 1 second (~inf TB/s)

State recovery already finished for 'localhost:3307'

The instance 'localhost:3307' was successfully added to the cluster.
After a bunch of output that makes you feel like real work is being done, the instance has been added to the cluster.
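By the way, the recovery-method prompt can be skipped by passing the recoveryMethod option to addInstance() directly, so the whole addition also fits in a one-liner; a sketch (assuming the saved credentials make it non-interactive):
# mysqlsh root@localhost:3306 -e "dba.getCluster().addInstance('localhost:3307', {recoveryMethod: 'clone'})"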
 MySQL  localhost:3306 ssl  JS > c.status()
{
    "clusterName": "sakila_cluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "127.0.0.1:3306": {
                "address": "127.0.0.1:3306",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.18"
            },
            "127.0.0.1:3307": {
                "address": "127.0.0.1:3307",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.18"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3306"
}
OK. It's only a two-node setup for the moment, but it has taken the shape of a MySQL InnoDB Cluster, so that's it for MySQL Shell.
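(One note on the operations phase mentioned in the TL;DR: removing a member later is also a single Shell call; a sketch:)
# mysqlsh root@localhost:3306 -e "dba.getCluster().removeInstance('localhost:3307')"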
Next up is MySQL Router.
The initial setup amounts to little more than running it with --bootstrap and pointing it at a mysqld that belongs to the InnoDB Cluster (in username@hostname:port form).
# mysqlrouter --bootstrap root@localhost:3306 --user=mysqlrouter
Please enter MySQL password for root:
# Reconfiguring system MySQL Router instance...

- Checking for old Router accounts
 - Found old Router accounts, removing
- Creating mysql account 'mysql_router1_c1ntzh4p43m6'@'%' for cluster management
- Storing account in keyring
- Adjusting permissions of generated files
- Creating configuration /etc/mysqlrouter/mysqlrouter.conf

# MySQL Router configured for the InnoDB cluster 'sakila_cluster'

After this MySQL Router has been started with the generated configuration

    $ /etc/init.d/mysqlrouter restart
or
    $ mysqlrouter -c /etc/mysqlrouter/mysqlrouter.conf

the cluster 'sakila_cluster' can be reached by connecting to:

## MySQL Classic protocol

- Read/Write Connections: localhost:6446
- Read/Only Connections:  localhost:6447

## MySQL X protocol

- Read/Write Connections: localhost:64460
- Read/Only Connections:  localhost:64470
This writes out the config, so after that you can start it the normal way and it springs to life.
I haven't set this Docker container up to run systemd, so I'll just start it with nohup.
# nohup  mysqlrouter -c /etc/mysqlrouter/mysqlrouter.conf &
[1] 4673

# mysql -h127.0.0.1 -P6446 -uroot -sse "SELECT @@port" ### mysqlrouter's Read/Write port
3306

# mysql -h127.0.0.1 -P6447 -uroot -sse "SELECT @@port" ### mysqlrouter's Read/Only port
3307
It connects just fine.
Now let's switch it over with SQL from the mysql command-line client.
mysql> SELECT * FROM performance_schema.replication_group_members; -- BEFORE
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 5442bcff-0506-11ea-b701-0242ac110003 | 127.0.0.1   |        3306 | ONLINE       | PRIMARY     | 8.0.18         |
| group_replication_applier | 88295d82-0506-11ea-8a34-0242ac110003 | 127.0.0.1   |        3307 | ONLINE       | SECONDARY   | 8.0.18         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
2 rows in set (0.05 sec)

mysql> SELECT group_replication_set_as_primary('88295d82-0506-11ea-8a34-0242ac110003');
+--------------------------------------------------------------------------+
| group_replication_set_as_primary('88295d82-0506-11ea-8a34-0242ac110003') |
+--------------------------------------------------------------------------+
| Primary server switched to: 88295d82-0506-11ea-8a34-0242ac110003         |
+--------------------------------------------------------------------------+
1 row in set (0.15 sec)

mysql> SELECT * FROM performance_schema.replication_group_members; -- AFTER
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 5442bcff-0506-11ea-b701-0242ac110003 | 127.0.0.1   |        3306 | ONLINE       | SECONDARY   | 8.0.18         |
| group_replication_applier | 88295d82-0506-11ea-8a34-0242ac110003 | 127.0.0.1   |        3307 | ONLINE       | PRIMARY     | 8.0.18         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
2 rows in set (0.00 sec)
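To switch back (or to avoid copy-pasting UUIDs in the first place), the MEMBER_ID can be looked up inline, since the function argument is an ordinary expression; something like this should work (a sketch):
mysql> SELECT group_replication_set_as_primary((SELECT MEMBER_ID FROM performance_schema.replication_group_members WHERE MEMBER_PORT = 3306));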
Now, what happened on the Router side?
# mysql -h127.0.0.1 -P6446 -uroot -sse "SELECT @@port" ### Read/Write port
3307

# mysql -h127.0.0.1 -P6447 -uroot -sse "SELECT @@port" ### Read/Only port
3306
Looking good.
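This works without touching Router because the bootstrapped config routes through the cluster metadata rather than fixed host:port pairs. The R/W section of the generated /etc/mysqlrouter/mysqlrouter.conf looks roughly like this (a sketch from memory; the exact section name varies by version):
[routing:sakila_cluster_default_rw]
bind_address=0.0.0.0
bind_port=6446
destinations=metadata-cache://sakila_cluster/default?role=PRIMARY
routing_strategy=first-available
protocol=classic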
Even if MySQL Shell isn't your thing, once the cluster is built (that part is tedious, so personally I'd rather leave it to MySQL Shell), everything else can be driven from the tools you already know, which is reassuring for now.
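As for actual failover (as opposed to this manual switchover), nothing needs to be called through any interface: Group Replication elects a new primary on its own and Router follows. If you want to influence the election from the SQL side, the member weight is an ordinary system variable (a sketch):
mysql> SET PERSIST group_replication_member_weight = 80;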
