#!/bin/bash
export PATH=/bin:/usr/bin:/usr/local/bin
#Declare today's date
TODAY=`date +"%d%b%Y"`
#Declare Variables Required to pass for mongo dump command
DB_BACKUP_PATH='/mnt/mongobackup'
MONGO_HOST='localhost'
MONGO_PORT='27017'
MONGO_USER='xxxxxxxxxxx'
MONGO_PASSWD='xxxxxxxxxxxxx'
DATABASE_NAMES='ALL'
#Remove Old Backup Files
find ${DB_BACKUP_PATH} -name "*.zip" -type f -mtime +3 -delete
find ${DB_BACKUP_PATH} -mindepth 1 -type d -mtime +3 -exec rm -rf {} \;
#Create Directory for Backup
mkdir -p ${DB_BACKUP_PATH}/${TODAY}
cd ${DB_BACKUP_PATH}/${TODAY}/
if [ "${DATABASE_NAMES}" = "ALL" ]; then
echo "You have chosen to back up all databases"
mongodump --uri="mongodb://${MONGO_USER}:${MONGO_PASSWD}@${MONGO_HOST}:${MONGO_PORT}"
else
echo "Running backup for selected databases"
for DB_NAME in ${DATABASE_NAMES}
do
mongodump --uri="mongodb://${MONGO_USER}:${MONGO_PASSWD}@${MONGO_HOST}:${MONGO_PORT}/${DB_NAME}"
done
fi
#Compress The Backup
cd ${DB_BACKUP_PATH}
zip -r ${DB_BACKUP_PATH}_${TODAY}.zip ${TODAY}
#Copy the Compressed file into Azure Container using Shared Access Token
azcopy cp ${DB_BACKUP_PATH}_${TODAY}.zip "https://xxxxxxxxxxx.blob.core.windows.net/xxxxxxxxxxxx?sp=w&st=xxxxxTxxxxxxxZ&se=xxxxxxZ&spr=https&sv=2021-06-08&sr=c&sig=csdfcdsxxxxxxxxxxxxxxx" --recursive=true
#Send Mail with Backup Logs
if [ $? -ne 0 ]
then
echo "Mongo Native backup Failed in $(hostname) $(date). Please contact administrator." | mail -r mail@datablogs.com -s "Mongo Native backup Failed $(hostname)" dbsupport@datablogs.com < /mongodata/cronscripts/mongo_backup_log.log
else
echo "Mongo Native backup completed in $(hostname)." | mail -r mail@datablogs.com -s "Mongo Native backup completed in $(hostname)" dbsupport@datablogs.com < /mongodata/cronscripts/mongo_backup_log.log
fi
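A sketch of how this script could be scheduled with cron; the script path below is a hypothetical example, while the log path matches the one read by the mail steps above:
# hypothetical crontab entry: run the backup daily at 01:00 and capture its output in the log mailed above
0 1 * * * /bin/bash /mongodata/cronscripts/mongo_backup.sh > /mongodata/cronscripts/mongo_backup_log.log 2>&1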
Compared with self-managed Redis, AWS ElastiCache gives multiple options for caching data in the cloud. It is also enhanced with two access control options in the PaaS service.
Below are a few of the mission-critical features we loved in ElastiCache,
Detecting data failures and recovering the nodes automatically
Multi-AZ support with automatic failover to replica nodes
Redis Cluster mode supports data partitioning into up to 500 shards across different zones
Managed Redis backups, automatic failure detection, and log file delivery to Kinesis Data Firehose and CloudWatch
Easy online migration from any Redis source
Encryption at rest and encryption in transit are supported
Data volumes up to petabytes with microsecond response times
Let's start migrating data from Redis to ElastiCache,
Preparing our source and target Redis nodes for migration
1. Launched two EC2 machines and installed Redis version 5.x
2. Configured replication between the master and the replica
3. To avoid downtime, we are doing an online migration from the replication server, so we modified the below values on the replica server; modify them based on your own source (a sample of these values is sketched after the note below)
Note : The source master or replica used for the transfer must not have AUTH configured, and protected mode must be disabled
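A minimal redis.conf sketch for the source replica, in line with the note above; the original post does not show the exact values, so treat these as assumptions:
# redis.conf on the source replica (assumed values)
protected-mode no           # protected mode disabled, as required for the migration
bind 0.0.0.0                # reachable by the ElastiCache migration endpoint
# requirepass <password>    # AUTH must not be configured, so leave this commented out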
4. Launch the ElastiCache instance with the below settings
Cluster mode disabled (Redis engine 5.0.5 or higher)
Multi-AZ should be enabled
Encryption at rest and encryption in transit should be disabled
If we have not followed all of the above steps, we will get the below popup when we start the migration
Starting Migration
In the ElastiCache console, click Actions --> Migrate Data From Endpoint
Once clicked, we can type the source Redis endpoint (private IP) and port (customized port) in the below console,
Then click Start Migration and follow the logs under Events in the same console,
If we get any error, it will be printed in the events log
Once the migration has started, restart the master of the ElastiCache node one time; in our case we need to restart the replica server because it is our source
Once restarted, verify info replication on the ElastiCache nodes
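For example, the replication state can be checked with redis-cli against the ElastiCache primary endpoint (the endpoint name below is a placeholder):
redis-cli -h <elasticache-primary-endpoint> -p 6379 info replication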
Once the migration has started successfully, we can't perform any other operation on the ElastiCache nodes,
Verifying the Data Migration Progress
We inserted a sample record on the EC2 master node and verified the same on the replica as well as in ElastiCache
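A quick way to reproduce that check with redis-cli; the key name and endpoints are placeholders:
# on the EC2 master
redis-cli -p 6379 set migration_test "hello"
# on the EC2 replica and on the ElastiCache endpoint
redis-cli -h <replica-or-elasticache-endpoint> -p 6379 get migration_test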
The data looks good. Once all the data has been transferred from IaaS to PaaS, complete the migration in the ElastiCache console and point the application endpoints to the ElastiCache node
Sharing simple steps to configure a Cassandra cluster in a few minutes,
Prerequisite :
Install Java (JRE) from the Oracle site and verify the Java version; however, our installation steps below will take care of the Java installation as well
java -version
Installing Cassandra repository :
What are repository authentication keys?
"Authentication keys" are usually obtained from the maintainer of the software repository. The maintainer will often place a copy of the authentication key on a public key server such as www.keyserver.net. The key can then be retrieved using the apt-key command shown further below.
Install the Cassandra repository into /etc/apt/sources.list.d/cassandra.sources.list using the below command
Depending on your requirement, update the 36x value in the below command
echo "deb http://www.apache.org/dist/cassandra/debian 36x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
If we don't know the public key required to authenticate the downloaded packages, just run apt-get update and get the authentication key for the Cassandra software from the warning it prints
root@datablogs:~# sudo apt-get update
Hit:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial InRelease
Hit:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial-updates InRelease
Hit:3 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial-backports InRelease
Get:5 http://security.ubuntu.com/ubuntu xenial-security InRelease [109 kB]
Hit:6 http://apt.postgresql.org/pub/repos/apt xenial-pgdg InRelease
Get:4 https://downloads.apache.org/cassandra/debian 36x InRelease [267 B]
Ign:4 https://downloads.apache.org/cassandra/debian 36x InRelease
Get:7 https://downloads.apache.org/cassandra/debian 36x/main amd64 Packages [678 B]
Fetched 113 kB in 1s (79.6 kB/s)
Reading package lists... Done
W: GPG error: https://downloads.apache.org/cassandra/debian 36x InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 749D6EEC0353B12C
W: The repository 'http://www.apache.org/dist/cassandra/debian 36x InRelease' is not signed.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
Here, 749D6EEC0353B12C is the public key. Add the public key to install Cassandra,
root@datablogs:~# sudo apt-key adv --keyserver pool.sks-keyservers.net --recv-key 749D6EEC0353B12C
Executing: /tmp/tmp.HUY7Ncexdr/gpg.1.sh --keyserver
pool.sks-keyservers.net
--recv-key
749D6EEC0353B12C
gpg: requesting key 0353B12C from hkp server pool.sks-keyservers.net
gpg: key 0353B12C: public key "T Jake Luciani <jake@apache.org>" imported
gpg: Total number processed: 1
gpg: imported: 1 (RSA: 1)
Installing Apache Cassandra :
Start by updating the repository and then install the Cassandra cluster; the Java packages are included with the installation
root@datablogs:~# sudo apt-get update
Hit:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial InRelease
Hit:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial-updates InRelease
Hit:3 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial-backports InRelease
Get:5 http://security.ubuntu.com/ubuntu xenial-security InRelease [109 kB]
Hit:6 http://apt.postgresql.org/pub/repos/apt xenial-pgdg InRelease
Get:4 https://downloads.apache.org/cassandra/debian 36x InRelease [267 B]
Fetched 112 kB in 1s (95.7 kB/s)
Reading package lists... Done
root@datablogs:~# sudo apt-get install cassandra
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
ca-certificates-java fontconfig-config fonts-dejavu-core java-common libavahi-client3 libavahi-common-data libavahi-common3 libcups2 libfontconfig1 libjpeg-turbo8
libjpeg8 liblcms2-2 libnspr4 libnss3 libnss3-nssdb libopts25 libpcsclite1 libpython-stdlib libpython2.7-minimal libpython2.7-stdlib libxi6 libxrender1 libxtst6 ntp
openjdk-8-jre-headless python python-minimal python2.7 python2.7-minimal x11-common
Suggested packages:
cassandra-tools default-jre cups-common liblcms2-utils pcscd ntp-doc libnss-mdns fonts-dejavu-extra fonts-ipafont-gothic fonts-ipafont-mincho fonts-wqy-microhei
fonts-wqy-zenhei fonts-indic python-doc python-tk python2.7-doc binutils binfmt-support
The following NEW packages will be installed:
ca-certificates-java cassandra fontconfig-config fonts-dejavu-core java-common libavahi-client3 libavahi-common-data libavahi-common3 libcups2 libfontconfig1
libjpeg-turbo8 libjpeg8 liblcms2-2 libnspr4 libnss3 libnss3-nssdb libopts25 libpcsclite1 libpython-stdlib libpython2.7-minimal libpython2.7-stdlib libxi6
libxrender1 libxtst6 ntp openjdk-8-jre-headless python python-minimal python2.7 python2.7-minimal x11-common
0 upgraded, 31 newly installed, 0 to remove and 9 not upgraded.
Need to get 63.1 MB of archives.
After this operation, 167 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libjpeg-turbo8 amd64 1.4.2-0ubuntu3.4 [111 kB]
Get:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial-updates/main amd64 x11-common all 1:7.7+13ubuntu3.1 [22.9 kB]
Get:3 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial/main amd64 libxtst6 amd64 2:1.2.2-1 [14.1 kB]
.....
.....
.....
Configuring Apache Cassandra :
We need to configure the required variables in the cassandra.yaml file (a sample snippet follows the list below),
cluster_name: The name of the cluster
cluster_name: 'Test Cluster' (should be the same on all Cassandra nodes)
- seeds: A comma-delimited list of the IP addresses of each node in the cluster
listen_address: The IP address that other nodes in the cluster will use to connect to this one. It defaults to localhost and needs to be changed to the IP address of the node
rpc_address: The IP address for remote procedure calls. It defaults to localhost. If the server's host name is properly configured, leave this as is. Otherwise, change it to the server's IP address or the loopback address (127.0.0.1)
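A minimal cassandra.yaml sketch with these settings; the IP addresses are placeholders for your own nodes:
# /etc/cassandra/cassandra.yaml (only the values discussed above; IPs are placeholders)
cluster_name: 'Test Cluster'
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.0.11,10.0.0.12"
listen_address: 10.0.0.11
rpc_address: 10.0.0.11
After editing, restart Cassandra on each node and verify the ring:
sudo systemctl restart cassandra
nodetool status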
The Percona Backup tool for MongoDB sounds interesting!!! I just wanted to try and explore the tool with Docker today!!! Docker is a first for me, but within a few days Docker has become my favourite tool for all kinds of HA scenario work
Let's move on to today's practices and issues,
Note : Percona Backup for MongoDB supports Percona Server for MongoDB or MongoDB Community Server version 3.6 or higher, with MongoDB replication enabled
Step 1 : Launched an Ubuntu 16.04 machine on AWS, then updated to the latest packages and installed Docker
sudo apt-get update
sudo apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker
Once Docker is installed with the latest packages, verify the Docker version using docker --version
Step 2 : Installed two Docker Mongo containers with a replica set enabled (a sample docker run sketch is below)
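A minimal sketch of how the two containers could be launched; the container names, network name, image tag and host ports are assumptions, not the exact commands from the original setup:
# hypothetical names, network, image tag and ports; adjust to your environment
sudo docker network create mongo-net
sudo docker run -d --name mongo-primary --net mongo-net -p 27017:27017 mongo:3.6 --replSet rs0 --bind_ip_all
sudo docker run -d --name mongo-secondary --net mongo-net -p 27018:27017 mongo:3.6 --replSet rs0 --bind_ip_all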
We need to log in with the Docker command line, then configure and start the replication between the Mongo servers. Log into the primary Mongo container and execute the below commands in the mongo shell,
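A sketch of the replica set initiation, assuming the hypothetical container names from the sketch above:
sudo docker exec -it mongo-primary mongo
rs.initiate({ _id: "rs0", members: [ { _id: 0, host: "mongo-primary:27017" }, { _id: 1, host: "mongo-secondary:27017" } ] })
rs.status()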
MongoDB supports horizontal scaling of the data with the help of a shard key. Shard key selection should be done carefully; a poor shard key can place the data on only a single shard
Today I have tried a simple setup of MongoDB sharding with two shard nodes, and I am sharing the simple steps to configure the same. Initially I prepared a list of the servers and the IP address of each server to avoid confusing myself
Launched 6 Ubuntu servers and installed Mongo on all the servers, and set the hostnames accordingly. As above, 2 mongo shard servers, 1 mongo router and 3 mongo config servers have been launched. Before installing Mongo, update the system with the latest packages
sudo apt-get update && sudo apt-get upgrade
Then start installing MongoDB on all the servers
For secure authentication, MongoDB recommends X.509 certificates to secure connections between production systems. We need to create a key file for secure authentication between the members of your replica set.
Initially, on the primary config server, create the key file with OpenSSL and copy the same key file to the other servers in the same location
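A sketch of creating and distributing the key file, assuming the /opt/mongo/mongodb-keyfile path used below and a hypothetical remote host name:
openssl rand -base64 756 > /opt/mongo/mongodb-keyfile
chmod 400 /opt/mongo/mongodb-keyfile
# copy to the other servers (hypothetical host name), then set ownership on each server
scp /opt/mongo/mongodb-keyfile ubuntu@<other-server>:/opt/mongo/
sudo chown mongodb:mongodb /opt/mongo/mongodb-keyfile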
Once the key file is created, add the below value to /etc/mongod.conf on every server. It should be exactly as below, because the mongod.conf file is case sensitive
security:
keyFile: /opt/mongo/mongodb-keyfile
sudo systemctl restart mongod
Main Components :
Config Server: This stores metadata and configuration settings for the rest of the cluster
Query Router: The mongos daemon acts as an interface between the client application and the cluster shards. It’s like a listener for the mongo instances
Shard: A database server that holds a portion of your data. Items in the database are divided among shards either by range or hashing
Steps involved in the configuration :
1.Configure the config servers
2.Configure the Query Router
3.Configure the shard servers
1.Configure the config servers
A single config server is not enough to maintain the metadata at the time of a disaster, so we are setting up a replica set with one primary and two secondaries
On each config server, edit the below values in mongod.conf; the bindIp value will be different for each server (a sample is sketched below)
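A minimal sketch of the config server settings; the replica set name, port and IP address are placeholders, not the values from the original setup:
# /etc/mongod.conf on each config server (sharding-related values only; placeholders)
net:
  port: 27019
  bindIp: 127.0.0.1,<config-server-ip>
replication:
  replSetName: configReplSet
sharding:
  clusterRole: configsvr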
Then restart the mongo service using the below command on each config server
sudo systemctl restart mongod
Once restarted, initiate the config server replica set using the below command, and please replace the hostnames accordingly
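A sketch of the initiation command, run from the mongo shell on the first config server; the replica set name, hostnames and port match the placeholder config above:
mongo --host cfg1.example.com --port 27019
rs.initiate({
  _id: "configReplSet",
  configsvr: true,
  members: [
    { _id: 0, host: "cfg1.example.com:27019" },
    { _id: 1, host: "cfg2.example.com:27019" },
    { _id: 2, host: "cfg3.example.com:27019" }
  ]
})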
And then check rs.status() of the config server replica set
Configuring the config servers is completed, let's move on to the next step
2.Configure the Query Router
The query router uses the config server metadata information to send read and write queries to the correct shards
Create the /etc/mongos.conf file and add the below lines
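A minimal mongos.conf sketch; the config server hostnames, port and replica set name are the same placeholders as in the earlier sketch:
# /etc/mongos.conf (placeholder hostnames and replica set name)
net:
  port: 27017
  bindIp: 0.0.0.0
sharding:
  configDB: configReplSet/cfg1.example.com:27019,cfg2.example.com:27019,cfg3.example.com:27019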
Create a new systemd unit file for mongos called /lib/systemd/system/mongos.service
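A sketch of the unit file contents, assuming the mongodb user/group and the mongos.conf path above:
[Unit]
Description=MongoDB query router (mongos)
After=network.target

[Service]
User=mongodb
Group=mongodb
ExecStart=/usr/bin/mongos --config /etc/mongos.conf

[Install]
WantedBy=multi-user.target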
Once we have created these files, we need to enable and start mongos.service with systemctl using the below commands:
1. sudo systemctl stop mongod
2. sudo systemctl enable mongos.service
3. sudo systemctl start mongos
4. systemctl status mongos
3.Configure the shard servers
On each shard server, edit the below values in mongod.conf; the bindIp value will be different for each server (a sample is sketched below). Then restart the mongod service
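A minimal sketch of the shard server settings; the IP address is a placeholder:
# /etc/mongod.conf on each shard server (sharding-related values only; placeholders)
net:
  port: 27017
  bindIp: 127.0.0.1,<shard-server-ip>
sharding:
  clusterRole: shardsvr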
Once everything is completed, log in to any one of the shard servers using the mongo query router address. I have created a separate admin user for MongoDB; if required, create it
Connect to the mongos interface and add the shard nodes; if you have replica sets for the shard nodes, the steps to add the shards will be different
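A sketch of adding the shards from the mongos shell, with placeholder hostnames; for replica set shards the argument becomes "<replicaSetName>/<host:port>":
sh.addShard("shard1.example.com:27017")
sh.addShard("shard2.example.com:27017")
sh.status()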
It’s done; shard000 and shard001 are added. There are many references available for sharding databases and collections through mongos. I tried the below examples for my test and they are working as expected
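A sketch of enabling sharding on a hypothetical database and collection from the mongos shell; the database, collection and key names are assumptions:
sh.enableSharding("testdb")
sh.shardCollection("testdb.users", { user_id: "hashed" })
sh.status()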