It was a nightmare story: we struggled a lot to reduce the cutover time for our Azure MySQL to AWS RDS MySQL migration.
Methods we tested:
- Backup and restore - it took 6 hours to complete
- AWS DMS - text data was truncated and timestamp column data was 6 hours behind the source
Nothing helped, due to bugs, missing features, and so on. In the end, native replication is what worked for us.
Two types of replication are available in Azure MySQL:
1. GTID-based replication
We have already covered this in multiple posts on our blog for GCP Cloud MySQL, and we will explain how to configure GTID-based replication from Azure MySQL to AWS RDS in the next blog post:
MySQL GTID Replication and lower_case_table_names
Multi Source Replication from GCP Cloud MySQL to Percona XtraDB Cluster
2. Binlog position-based replication
In this blog, we use binlog position-based replication to cut over the application.
Environment Setup
The below POC environment was set up for testing:
- Azure MySQL 5.6.4 General Purpose instance launched with minimal configuration
- AWS RDS for MySQL 5.7 launched with a customized parameter group
- Network connectivity between the Azure and AWS PaaS services
Once the environment is up and running, update the below server parameters on both sides; a quick query to verify them follows the lists.
Azure MySQL:
- replicate_wild_ignore_table=mysql.%,information_schema.%,performance_schema.%,sys.%
- binlog_row_image = FULL
- binlog_expire_logs_seconds = 1440
AWS RDS MySQL:
- time_zone = Same as Azure MySQL
- binlog_format = ROW
- collation and character_set parameters = Same as Azure MySQL
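To confirm the settings match on both sides, a minimal check (a sketch; adjust the variable list to whatever you have customised) is to run the query below on Azure MySQL and on RDS and compare the values:
-- Run on both Azure MySQL and AWS RDS and compare the output side by side
SHOW VARIABLES WHERE Variable_name IN
('binlog_format', 'binlog_row_image', 'time_zone', 'character_set_server', 'collation_server');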
Now both servers are ready for configuring replication.
Pre-Steps to configure the replication
1. Let's verify and take note of the binlog position in Azure MySQL:
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.00865  | 52884741 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
2. Create a replication user with the necessary privileges:
CREATE USER 'repluser'@'%' IDENTIFIED BY 'datablogs123';
GRANT REPLICATION SLAVE ON *.* TO 'repluser'@'%';
GRANT REPLICATION CLIENT ON *.* TO 'repluser'@'%';
GRANT SELECT ON *.* TO 'repluser'@'%';
FLUSH PRIVILEGES;
3. Using mysqldump, take the schema backup first, followed by the data backup:
mysqldump -h datablogs.mysql.database.azure.com -u repluser@datablogs -p datablogsprod --no-data --routines --triggers --events > datablogsprodmeta.sql
mysqldump -h datablogs.mysql.database.azure.com -u repluser@datablogs -p datablogsprod --routines --triggers --events --skip-add-drop-table --max_allowed_packet=1024M --single-transaction > datablogsproddata.sql
4. Once the backup is completed, restore the data into MySQL RDS:
mysql -h datablogs.c5exr7bs8bax.ap-southeast-1.rds.amazonaws.com -u repluser -p datablogsprod < datablogsprodmeta.sql
mysql -h datablogs.c5exr7bs8bax.ap-southeast-1.rds.amazonaws.com -u repluser -p datablogsprod --max_allowed_packet=1024M < datablogsproddata.sql
Configuring the replication
On the Amazon RDS for MySQL instance, set the external master using the binlog file name and position noted in Step 1:
CALL mysql.rds_stop_replication;
CALL mysql.rds_reset_external_master;
CALL mysql.rds_set_external_master ('datablogs.mysql.database.azure.com', 3306, 'repluser@datablogs', 'datablogs123', 'mysql-bin.00865', 52884741, 0);
CALL mysql.rds_start_replication;
Once these are executed, replication starts; check the slave status in RDS:
mysql> show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: datablogs.mysql.database.azure.com
Master_User: repluser@datablogs
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.00865
Read_Master_Log_Pos: 52884741
Relay_Log_File: relaylog.000004
Relay_Log_Pos: 273
Relay_Master_Log_File: mysql-bin.00865
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB: datablogsprod
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table: mysql.plugin,mysql.rds_monitor,mysql.rds_sysinfo,innodb_memcache.cache_policies,mysql.rds_history,innodb_memcache.config_options,mysql.rds_configuration,mysql.rds_replication_status
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 52884741
Relay_Log_Space: 19742
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1053643462
Master_UUID: 787858b4-a021-11ea-b894-97f71069e028
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
1 row in set (0.00 sec)
Post-validation and troubleshooting steps for Azure MySQL replication
Validate the data and table counts between Azure MySQL and MySQL RDS; a sample comparison query is shown below.
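A quick way to compare is the query below (a sketch, assuming the schema name datablogsprod used in the dump commands above); note that table_rows is only an estimate for InnoDB tables, so run SELECT COUNT(*) per table where an exact match is required.
-- Run on both Azure MySQL and MySQL RDS and compare the results
SELECT COUNT(*) AS table_count
FROM information_schema.tables
WHERE table_schema = 'datablogsprod';
-- Approximate row counts per table in the migrated schema
SELECT table_name, table_rows
FROM information_schema.tables
WHERE table_schema = 'datablogsprod'
ORDER BY table_name;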
If we get any errors related to the Azure system schema, use the following method to fix them. We got an error like the one below:
Table 'mysql.__firewallrules__' doesn't exist
We can skip the error with the below commands in MySQL RDS:
CALL mysql.rds_stop_replication;
CALL mysql.rds_skip_repl_error;
CALL mysql.rds_start_replication;
But in MySQL RDS, errors can only be skipped one at a time, which takes a long time. Instead, follow the method below and inspect the binlog events until you reach the position where your DML and DDL queries are captured in the binlog:
SHOW BINLOG EVENTS IN 'mysql-bin.00865' FROM 52884567 LIMIT 10 \G
Note: modify the binlog file and position in the above command based on the error.
Once you have found the correct binlog position, repeat the replication configuration step with the new values, as in the sketch below.
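For example, assuming the next safe position identified from SHOW BINLOG EVENTS is 52890000 (a placeholder value for illustration), re-point the replica as follows:
CALL mysql.rds_stop_replication;
CALL mysql.rds_reset_external_master;
-- Substitute the binlog file and position identified from SHOW BINLOG EVENTS
CALL mysql.rds_set_external_master ('datablogs.mysql.database.azure.com', 3306, 'repluser@datablogs', 'datablogs123', 'mysql-bin.00865', 52890000, 0);
CALL mysql.rds_start_replication;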
Now everything is set for the Azure to AWS data sync, and the application is ready for cutover with zero downtime!
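For the cutover itself, a minimal sketch of the final promotion on the RDS side, assuming writes to Azure MySQL have already been stopped and Seconds_Behind_Master has reached 0, would be:
-- Confirm the replica has caught up before switching the application
SHOW SLAVE STATUS \G
-- Stop replication and detach RDS from the external master so it can take writes
CALL mysql.rds_stop_replication;
CALL mysql.rds_reset_external_master;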