Wednesday, March 30, 2022
Selva | March 30, 2022 | No comments | AWS, Clustering, DNS, High Availability, MongoDB, percona, Replication, Route 53, Zero Downtime
We love MongoDB for the extraordinary features it offers from a business perspective.
Let's come to our blog discussion. Usually only PaaS environments give us features like a DNS endpoint for the database, so clients can easily connect to the primary or a secondary whenever a node fails.
MongoDB Atlas provides all of these features, but small-scale customers still run MongoDB on virtual machines or EC2 instances. To handle failures of the primary in such setups, we can use the DNS Seed List Connection Format in MongoDB. We will discuss in detail how to configure this in the AWS cloud.
What is a seed list ?
A seed list is a list of hosts and ports published as DNS entries. Using DNS we can bring all the available MongoDB servers under one hood: when a client connects to the common DNS name, it also learns the replica set members available in the seed list. A single SRV record identifies all the nodes associated with the cluster, like below,
root@ip-172-31-86-8:~# mongo "mongodb+srv://superuser:zU2iU9pF7mO7rZ4z@db.datamongo.com/?authSource=admin&readPreference=primary&ssl=false"
Percona Server for MongoDB shell version v4.4.13-13
connecting to: mongodb://db1.datamongo.com:27717,db3.datamongo.com:27717,db2.datamongo.com:27717/?authSource=admin&compressors=disabled&gssapiServiceName=mongodb&readPreference=primary&replicaSet=db-replication&ssl=false
Environment Setup :
For testing purposes, we launched three private-subnet servers and one public-subnet server to use as a bastion, created one private hosted zone for DNS, installed Percona Server for MongoDB 4.4.13, and configured replication on it.
AWS EC2 Servers ,
Route 53 Hosted Zone ,
Creating A Records :
We have launched the instances in private subnets, so we need to create A records for the private IPs. If a public IPv4 DNS name is available, we can create CNAME records instead.
A record created for the db1 server ,
Inside the datamongo.com hosted zone, just click Create Record.
In the same way, we need to create A records for the other two nodes.
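If you prefer the CLI, an equivalent call for the db1 record would look roughly like this (the hosted zone ID is a placeholder):

# create the A record for db1 -- sketch only, replace <HOSTED_ZONE_ID> with your private hosted zone ID
aws route53 change-resource-record-sets --hosted-zone-id <HOSTED_ZONE_ID> \
  --change-batch '{"Changes":[{"Action":"CREATE","ResourceRecordSet":{"Name":"db1.datamongo.com","Type":"A","TTL":300,"ResourceRecords":[{"Value":"172.31.85.180"}]}}]}'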
Verify the A Records ,
root@ip-172-31-95-215:~# dig db1.datamongo.com

; <<>> DiG 9.11.3-1ubuntu1.17-Ubuntu <<>> db1.datamongo.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13639
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;db1.datamongo.com.        IN    A

;; ANSWER SECTION:
db1.datamongo.com.    10    IN    A    172.31.85.180

;; Query time: 2 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Tue Mar 29 11:58:09 UTC 2022
;; MSG SIZE  rcvd: 62

root@ip-172-31-95-215:~# dig db2.datamongo.com

; <<>> DiG 9.11.3-1ubuntu1.17-Ubuntu <<>> db2.datamongo.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9496
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;db2.datamongo.com.        IN    A

;; ANSWER SECTION:
db2.datamongo.com.    300    IN    A    172.31.83.127

;; Query time: 3 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Tue Mar 29 12:06:28 UTC 2022
;; MSG SIZE  rcvd: 62

root@ip-172-31-95-215:~# dig db3.datamongo.com

; <<>> DiG 9.11.3-1ubuntu1.17-Ubuntu <<>> db3.datamongo.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46401
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;db3.datamongo.com.        IN    A

;; ANSWER SECTION:
db3.datamongo.com.    300    IN    A    172.31.86.8

;; Query time: 2 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Tue Mar 29 12:06:33 UTC 2022
;; MSG SIZE  rcvd: 62

root@ip-172-31-95-215:~#
Creating SRV and TXT Records :
Just like Atlas, once we have the A records for the MongoDB nodes, we are able to create the SRV record.
Again, inside the datamongo.com hosted zone, just click Create Record.
Once the SRV record is created, click Create Record again and create the TXT record.
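For reference, the records we end up with look like this (the SRV values are priority, weight, port, and target; the TXT value carries the extra connection options):

_mongodb._tcp.db.datamongo.com.   SRV   0 0 27717 db1.datamongo.com.
_mongodb._tcp.db.datamongo.com.   SRV   0 0 27717 db2.datamongo.com.
_mongodb._tcp.db.datamongo.com.   SRV   0 0 27717 db3.datamongo.com.
db.datamongo.com.                 TXT   "authSource=admin&replicaSet=db-replication"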
Reading SRV and TXT Records :
We can use nslookup to verify the configured DNS seeding ,
root@ip-172-31-95-215:~# nslookup
> set type=SRV
> _mongodb._tcp.db.datamongo.com
Server:         127.0.0.53
Address:        127.0.0.53#53

Non-authoritative answer:
_mongodb._tcp.db.datamongo.com  service = 0 0 27717 db2.datamongo.com.
_mongodb._tcp.db.datamongo.com  service = 0 0 27717 db3.datamongo.com.
_mongodb._tcp.db.datamongo.com  service = 0 0 27717 db1.datamongo.com.

Authoritative answers can be found from:
> set type=TXT
> db.datamongo.com
Server:         127.0.0.53
Address:        127.0.0.53#53

Non-authoritative answer:
db.datamongo.com        text = "authSource=admin&replicaSet=db-replication"

Authoritative answers can be found from:
Verify Connectivity :
That's all done. We can now verify the connectivity with the DNS seed list connection format ,
By default, the driver connects with ssl set to true, but we have configured MongoDB without SSL, so we pass ssl=false. If you need to configure it with SSL, please refer to our earlier blog on that and then configure DNS seeding with the help of this post.
root@ip-172-31-86-8:~# mongo "mongodb+srv://superuser:zU2iU9pF7mO7rZ4z@db.datamongo.com/?authSource=admin&readPreference=primary&ssl=false"
Percona Server for MongoDB shell version v4.4.13-13
connecting to: mongodb://db1.datamongo.com:27717,db3.datamongo.com:27717,db2.datamongo.com:27717/?authSource=admin&compressors=disabled&gssapiServiceName=mongodb&readPreference=primary&replicaSet=db-replication&ssl=false
Implicit session: session { "id" : UUID("ee74effc-92c7-4189-9e97-017afb4b4ad4") }
Percona Server for MongoDB server version: v4.4.13-13
---
The server generated these startup warnings when booting:
        2022-03-29T11:32:47.133+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
---
db-replication:PRIMARY> rs.status().members.find(r=>r.state===1).name;
172.31.83.127:27717
db-replication:PRIMARY> rs.status().members.find(r=>r.state===1).stateStr;
PRIMARY
db-replication:PRIMARY> rs.status().members.find(r=>r.state===2).name;
172.31.85.180:27717
db-replication:PRIMARY> rs.status().members.find(r=>r.state===2).stateStr;
SECONDARY
Currently 172.31.83.127 is the primary server and 172.31.85.180 is the secondary. To test the connection, we stopped the primary server (172.31.83.127) in the AWS console.
After stopping the primary server (172.31.83.127), MongoDB failed over to 172.31.85.180. We verified this without disconnecting the mongo shell.
root@ip-172-31-86-8:~# mongo "mongodb+srv://superuser:zU2iU9pF7mO7rZ4z@db.datamongo.com/?authSource=admin&readPreference=primary&ssl=false"
Percona Server for MongoDB shell version v4.4.13-13
connecting to: mongodb://db1.datamongo.com:27717,db3.datamongo.com:27717,db2.datamongo.com:27717/?authSource=admin&compressors=disabled&gssapiServiceName=mongodb&readPreference=primary&replicaSet=db-replication&ssl=false
Implicit session: session { "id" : UUID("ee74effc-92c7-4189-9e97-017afb4b4ad4") }
Percona Server for MongoDB server version: v4.4.13-13
---
The server generated these startup warnings when booting:
        2022-03-29T11:32:47.133+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
---
db-replication:PRIMARY> rs.status().members.find(r=>r.state===1).name;
172.31.83.127:27717
db-replication:PRIMARY> rs.status().members.find(r=>r.state===1).stateStr;
PRIMARY
db-replication:PRIMARY> rs.status().members.find(r=>r.state===2).name;
172.31.85.180:27717
db-replication:PRIMARY> rs.status().members.find(r=>r.state===2).stateStr;
SECONDARY
db-replication:PRIMARY> rs.status().members.find(r=>r.state===1).name;
172.31.85.180:27717
db-replication:PRIMARY> rs.status().members.find(r=>r.state===1).stateStr;
PRIMARY
It is working as expected, so we have nothing to worry about if anything happens to the MongoDB primary node, even on cloud IaaS !!!
Please contact us if you have any queries or concerns, we are always happy to help !!!
Wednesday, February 19, 2020
It is easy to migrate MySQL databases from one cloud provider to another, but doing it without downtime is a little difficult when around 1 TB of data is involved. Using GTID-based replication, it can be achieved with a few hours of effort.
Before going into detail, we will go through a short explanation of each piece,
What is Cloud MySQL ?
It is Google Cloud's managed service for MySQL database servers. It supports MySQL 5.6 and 5.7 with first- and second-generation instances; the two generations differ in how RAM and storage are allocated.
What is Percona XtraDB Cluster ?
It is synchronous multi-master replication built on Percona Server for MySQL and Percona XtraBackup, using the Galera library. As recommended, it should contain an odd number of nodes, with the same data spanned across all nodes.
What is multi-source replication ?
MySQL multi-source replication enables a replication slave to receive transactions from multiple immediate masters in parallel
In this scenario, we have to migrate more than three GCP Cloud MySQL managed instances into a Percona Galera Cluster. Before proceeding to migrate the data, it is worth comparing MySQL 5.7 and Percona MySQL 5.7:
https://www.percona.com/doc/percona-server/5.7/feature_comparison.html
This way we can avoid any impact after migrating the data into the new server.
Follow the instructions below to set up multi-source replication,
- In GCP Cloud MySQL, take a backup of the user databases from each instance separately through mysqldump or using the export option in the GCP console, then move the backup files to the destination server using the gsutil or scp utility
mysqldump --user=root --password --host=gcpinstance1 sourcedb1 > mysqldump1.sql
mysqldump --user=root --password --host=gcpinstance2 sourcedb2 > mysqldump2.sql
- Before going to the next steps, we have to be ready with a three-node Percona XtraDB Cluster environment
- On the Percona XtraDB Cluster side, we need to modify the server variables below in my.cnf for GTID-based replication and restart the mysql service (for multi-source, per-channel replication, MySQL 5.7 also requires master_info_repository=TABLE and relay_log_info_repository=TABLE)
[mysqld]
server-id=[SERVER_ID]
gtid_mode=ON
enforce_gtid_consistency=ON
log_slave_updates=ON
replicate-ignore-db=mysql
binlog-format=ROW
log_bin=mysql-bin
expire_logs_days=1
read_only=ON
The same changes need to be made on the other nodes of the Percona cluster
- Restore the backups from the multiple Cloud MySQL sources into the single Percona cluster instance
mysql --user=root --password --host=perconainstanceslave < mysqldump1.sql
mysql --user=root --password --host=perconainstanceslave < mysqldump2.sql
- Once the restore is completed, we need to add the multiple GCP Cloud MySQL sources to the single Percona cluster instance using the commands below, giving each source its own replication channel
CHANGE MASTER TO MASTER_HOST="gcpinstance1", MASTER_USER="slaveuser1", MASTER_PASSWORD="slaveuser1", MASTER_PORT=3306, MASTER_AUTO_POSITION = 1 FOR CHANNEL 'gcpinstance1';
CHANGE MASTER TO MASTER_HOST="gcpinstance2", MASTER_USER="slaveuser2", MASTER_PASSWORD="slaveuser2", MASTER_PORT=3306, MASTER_AUTO_POSITION = 1 FOR CHANNEL 'gcpinstance2';
Once we execute this, we will commonly get the error below, because transactions that already exist on the slave have been applied, or because of duplicate-transaction issues
Slave_IO_Running: No
Slave_SQL_Running: Yes
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'
- So, we need to get the gtid_purged values from each source backup, to account for the transactions already purged from the binary logs and to troubleshoot the issue above
cat mysqldump1.sql | grep GTID_PURGED | cut -f2 -d'=' | cut -f2 -d"'"
cat mysqldump2.sql | grep GTID_PURGED | cut -f2 -d'=' | cut -f2 -d"'"
- To set the global GTID_PURGED value, we need to run RESET MASTER on the Percona Galera cluster node. While the server is part of the cluster it is not possible to execute RESET MASTER, so we need to disable the wsrep_provider variable in wsrep.conf and restart the mysql service on the slave server
#wsrep_provider = /usr/lib/galera/libgalera_smm.so
- Once the mysql service is restarted, we are able to run RESET MASTER on the Percona Galera cluster node and set the GTID_PURGED value
Set GTID_PURGED Value :
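A minimal sketch of this step, assuming the two channels created above; the GTID sets are placeholders for the values extracted from each dump:

mysql -u root -p -e "RESET MASTER;"
# union of the purged GTID sets from both dumps, comma-separated (placeholders)
mysql -u root -p -e "SET GLOBAL gtid_purged = '<gtid_set_from_mysqldump1>,<gtid_set_from_mysqldump2>';"
mysql -u root -p -e "START SLAVE FOR CHANNEL 'gcpinstance1'; START SLAVE FOR CHANNEL 'gcpinstance2';"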
Check SHOW SLAVE STATUS\G :
We are able to see that the slave server gets in sync within a few minutes, and Seconds_Behind_Master reaches 0
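If you used named channels as above, each source can be checked separately; a quick sketch:

mysql -u root -p -e "SHOW SLAVE STATUS FOR CHANNEL 'gcpinstance1'\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
mysql -u root -p -e "SHOW SLAVE STATUS FOR CHANNEL 'gcpinstance2'\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'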
Check SHOW PROCESSLIST :
The highlighted multi-master servers (two servers) are shown syncing to the slave server
Once all the steps are completed, re-enable the wsrep_provider variable in wsrep.conf and restart the mysql service on the slave server
Verify the slave status and processlist once again :)
References :
External Replica Setup : https://cloud.google.com/sql/docs/mysql/replication/configure-external-replica
Get gtid_purged : https://dev.mysql.com/doc/refman/8.0/en/replication-multi-source-provision-slave.html
Set gtid_purged and troubleshoot : https://dev.mysql.com/doc/refman/8.0/en/replication-multi-source-provision-slave.html
Friday, January 3, 2020
Error 'Table 'EMPLOYEES.POSITION' doesn't exist' on query. Default database: 'employees'. Query: 'ALTER TABLE EMPLOYEES.POSITION ADD COLUMN phone VARCHAR(15)'
Interestingly, the table exists on the slave server, but we were getting the above error frequently, and rebuilding the replication was not an option because the database size is too big.
Our environment is a GTID replication setup from a Windows server (master) to an Ubuntu Linux machine (slave). When we dug into the findings, we concluded it may be a case-sensitivity issue. The lower_case_table_names variable value is the same on both servers, but as per the MySQL documentation:
Database and table names are not case-sensitive in Windows, but are case-sensitive in most varieties of Unix. Column, index, stored routine, and event names are not case-sensitive on any platform, nor are column aliases.
So, we decided to tackle the situation without data loss on the slave server. Changing lower_case_table_names is not a best practice, and it can corrupt the data files of case-sensitive names.
We stopped the slave and executed all the DML and DDL statements on the slave server to match the schema deployments with the master server.
Then we followed the GTID replication error-skipping process to skip the transactions that had been handled manually on the slave server.
Capture the erroring GTID from the master binlog files,

root@master:/var/log/mysql# mysqlbinlog --start-position=904 mysql-bin.000004
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
# at 4
#200102 11:35:24 server id 1  end_log_pos 123 CRC32 0x14c9c2f7  Start: binlog v 4, server v 5.7.28-0ubuntu0.16.04.2-log created 200102 11:35:24
# Warning: this binlog is either in use or was not closed properly.
BINLOG 'fNUNXg8BAAAAdwAAAHsAAAABAAQANS43LjI4LTB1YnVudHUwLjE2LjA0LjItbG9nAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEzgNAAgAEgAEBAQEEgAAXwAEGggAAAAICAgCAAAACgoKKioAEjQAAffCyRQ='/*!*/;
# at 904
#200102 12:06:51 server id 1  end_log_pos 969 CRC32 0x3564eba7  GTID  last_committed=4  sequence_number=5  rbr_only=yes
/*!50718 SET TRANSACTION ISOLATION LEVEL READ COMMITTED*//*!*/;
SET @@SESSION.GTID_NEXT= '69b98063-2d31-11ea-9586-12b321a8670d:78'/*!*/;
# at 969
#200102 12:06:51 server id 1  end_log_pos 1042 CRC32 0xb527b5d3  Query  thread_id=328  exec_time=0  error_code=0

Then we skipped that transaction (starting at binlog position 904) manually on the slave server,
mysql> stop slave;
Query OK, 0 rows affected (0.00 sec)

mysql> SET @@SESSION.GTID_NEXT= '69b98063-2d31-11ea-9586-12b321a8670d:78';
Query OK, 0 rows affected (0.00 sec)

mysql> begin ; commit;
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)

mysql> SET gtid_next=automatic;
Query OK, 0 rows affected (0.00 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
Finally, GTID replication began syncing the pending SQL threads and brought the data up to date with the master server !!!
Monday, December 23, 2019
Selva | December 23, 2019 | No comments | Backup, Database Administration, Docker, Mongo, NoSQL, percona
The Percona Backup tool for MongoDB sounds interesting !!! I just wanted to try out and explore the tool with Docker today !!! Docker is a first for me, but within a few days it has become my favorite for all kinds of HA scenario work.
Let's move on to today's practice and the issues,
Note : Percona Backup for MongoDB supports Percona Server for MongoDB or MongoDB Community Server version 3.6 or higher with MongoDB replication enabled
Step 1 : Launched an Ubuntu 16.04 machine on AWS, then updated to the latest packages and installed Docker
sudo apt-get update
sudo apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker
Once Docker is installed with the latest package, verify the version using docker --version
Step 2 : Installed two docker mongo containers with replica set enabled
docker run --detach --name datablogs-mongo-primary --volume /var/lib/mongo:/data/db --volume /etc/mongodb.conf:/etc/mongo.conf --publish 44444:27017 mongo --replSet datablogs-repl-set
docker run --detach --name datablogs-mongo-secondary --volume /var/lib/mongo-slave:/data/db --volume /etc/mongodb-slave.conf:/etc/mongo.conf --publish 55555:27017 mongo --replSet datablogs-repl-set
We need to access the MongoDB instances from the outside world, so I have published the mongo ports on different host ports
--publish 44444:27017
--publish 55555:27017
To access MongoDB inside the containers, we need to check the IP address of both containers using the commands below
docker inspect datablogs-mongo-primary | grep IPAddress
docker inspect datablogs-mongo-secondary | grep IPAddress
-- for the usage of these options, refer to docker help
Step 3 : Configure mongo replica
We need to log in via the docker command line, then configure and start the replication between the mongo servers. Log in to the primary mongo container and execute the commands below in the mongo shell,
docker exec -it datablogs-mongo-primary /bin/bash
config = {"_id" : "datablogs-repl-set","members" : [{"_id" : 0,"host" : "172.17.0.2:27017"},{"_id" : 1,"host" : "172.17.0.3:27017"}]};
rs.initiate(config);
Once we initiate the replication, the primary's mongo shell prompt changes
Step 4 : Install Percona Backup and Configure
Before proceeding with this activity, we need to update and upgrade the packages using apt-get on each mongo container
We installed Percona Backup for MongoDB by following the Percona documentation (refer to the Percona site for the proper installation steps). Once the pbm tool is installed, log in to each mongo container, set the storage path, and start the pbm agent; I have used a local storage path for the mongo backups
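For reference, on these Debian/Ubuntu-based containers the installation is roughly as follows; this is only a sketch, the exact repository and package steps are in the Percona documentation:

# inside each mongo container -- rough sketch of the Percona repo setup
apt-get update && apt-get install -y wget gnupg2 lsb-release
wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
dpkg -i percona-release_latest.generic_all.deb
percona-release enable pbm release
apt-get update && apt-get install -y percona-backup-mongodb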
storage.yaml :
type: filesystem
filesystem:
  path: /tmp
pbm store set --config=storage.yaml --mongodb-uri="mongodb://127.0.0.1:55555/"
pbm-agent --mongodb-uri mongodb://172.17.0.2:27017 &
Step 5 : Backup and restore the collections using pbm
Once the setup is completed, we run the backup on the secondary mongo server
pbm backup --mongodb-uri mongodb://127.0.0.1:27017
We then dropped the datablogs database and restored it from the backup using pbm
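Roughly, the restore goes like this; the backup name is whatever pbm list shows for the snapshot taken above (the timestamp here is just a placeholder):

pbm list --mongodb-uri mongodb://127.0.0.1:27017
pbm restore 2019-12-23T09:00:00Z --mongodb-uri mongodb://127.0.0.1:27017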
Finally, verified the database and collections on the primary server
I am really happy to have tested Percona Backup for MongoDB with Docker today !!! Keep learning !!!
Monday, December 16, 2019
Wow !!! It's easy to restore sensitive data without any fear !!!
It's really tough sometimes: we restore sensitive data without realizing it, and test mails get triggered to customers saying "$100 deducted from your account for purchase". It's a strange scenario when we miss cleansing the customer data in the DEV sandbox !!!
Yes, masking sensitive / credential data is easy now !!! Reference : https://github.com/kibitan/masking
I just tried a simple practice of masking the paymentdb table data with masking, and it works as expected
Step 1 : Install the latest Ruby version and masking using the commands below. Before installing, update the server with the latest packages
rvm install ruby-2.6.3
gem install masking
Step 2 : Create a masking.yml file anywhere on your Linux machine. In this configuration, address is a table in paymentdb, and name and email are columns of the address table to be masked; a sketch of the file is shown below
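A hypothetical masking.yml for this schema could look like the following; the table name is the top-level key and the masked columns sit underneath it (check the MasKING README for the exact value syntax):

# write the config to the path we pass to masking with -c (values are illustrative)
cat > /var/lib/mysql/masking.yml <<'EOF'
address:
  name: "Masked Name"
  email: "masked@example.com"
EOF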
Step 3 : Now we are ready to take a backup of paymentdb with the masking utility
mysqldump --complete-insert -u root -p paymentdb | masking -c /var/lib/mysql/masking.yml > /tmp/paymentdb.sql
Step 4 : Restore the paymentdb backup into paymentdb_masked
mysql -u root -p paymentdb_masked < /tmp/paymentdb.sql
Step 5 : Yay !!! Data has been successfully masked with the masking utility
Yup !!! Saved one more escalation !!! Thanks to MasKING :)
Tuesday, November 26, 2019
Problem :
We are running a 5-node Percona cluster on Ubuntu 16.04, and it is configured with master-slave replication. Suddenly we got an alert for a broken replica from the slave server, which had earlier been configured with normal replication.
We tried to sync the data and reconfigure the replication, but were unable to fix it immediately due to the huge number of transactions and GTID-enabled servers. So we decided to go with the innobackupex tool, and the problem was fixed in 2 hours.
We followed all the steps from the Percona documentation and are sharing the experience from our environment.
Steps involved in repairing the broken replication :
1. Backup master server
2. Prepare the backup
3. Restore and configure the replication
4. Check replication status
1. Backup master server
We need to bring the complete master database onto the slave, so we take a full backup from the master server. Before starting the backup, check that enough disk space is available, because it is a file-system-level backup.
We created a dedicated user for taking the backup on the master server; once the backup is completed we will get an OK like below,
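The backup command itself is a one-liner; a sketch with a placeholder user and password (innobackupex creates a timestamped directory under /mnt, which is what we use in the next steps):

innobackupex --user=backupuser --password='******' /mnt/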
2. Prepare the backup for Restore
We need to prepare the backup so that the transaction logs are applied to the data files; once it reports OK, the data files are ready to restore.
Before moving the prepared files to the slave server, verify the GTID information in xtrabackup_binlog_info.
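A minimal sketch of the prepare step, using the backup directory from this run:

innobackupex --apply-log /mnt/2019-11-26_09-35-31
# the GTID set recorded at backup time, needed later for gtid_purged
cat /mnt/2019-11-26_09-35-31/xtrabackup_binlog_info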
3. Restore and Configure the Replication
We can restore the backup directly, or create a new data directory and move the files into it. Stop the mysql service before starting the restore.
By default, innobackupex restores the data files into the server's datadir, as defined in the server's my.cnf configuration file.
innobackupex --copy-back /mnt/2019-11-26_09-35-31
Once the data directory is in place, we need to change the owner and permissions of the MySQL data directory
chown -R mysql:mysql /mnt/mysqldatanew
Then restart the service with the new data directory. Once it has started, log in with the master's MySQL root password, because we took a file-level backup from the master, so the metadata will be the same as on the master.
Execute the commands below to configure the replication
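A sketch of those commands; the master host, replication user, and password are placeholders, and the GTID set is the one noted from xtrabackup_binlog_info:

mysql -u root -p -e "RESET MASTER; SET GLOBAL gtid_purged='<gtid_set_from_xtrabackup_binlog_info>';"
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='<master-host>', MASTER_USER='repl', MASTER_PASSWORD='******', MASTER_AUTO_POSITION=1; START SLAVE;"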
4. Check Replication Status
Once the slave is configured, verify the replication status as below
Also, the slave has retrieved a new transaction
Thanks for Reading !!!