Parallel deployment tools

If you deal with deployments on a regular basis, choosing a good, stable deployment library with the right features, e.g. parallel deployment, is essential.

We recently faced this challenge ourselves. Previously we were using Ant for packaging and deployment and later moved to Gradle, but once we decided to move our infrastructure behind load balancers (to support more users and provide fault tolerance), we wanted a tool that could deploy to all of our servers behind the load balancer in parallel.

For this purpose we explored and tried out a few tools to see which would be the better fit for us in the long run. The tools we evaluated are the following:

1. Ansible: While it does support parallel deployment, it is quite a large toolset and more of a server management and configuration tool. We are already using Chef for server management and configuration, so adopting Ansible as well was not ideal for us.

We prefer Chef over Ansible because of the utility and customization we get through cookbooks and recipes.

2. Capistrano: A Ruby-based tool that deploys to a configurable folder, but it expects every deployment to run from its own release folder exposed through a symbolic link.

3. Fabric: A Python-based library that streamlines system administration tasks and supports parallel execution across a fleet of servers.

In the end we decided to go with Fabric due to the following reasons:

  • Python is a mature and widely used language for system administration.
  • A vibrant community and good documentation.
  • We are already using Python for some of our tasks and plan to migrate other scripts (currently written in shell and PHP) to Python in the long run.
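
For illustration, here is a minimal fabfile sketch of what a parallel deployment task can look like with Fabric 1.x. The host names, user, paths and commands are placeholders, not our actual deployment script:

# fabfile.py -- minimal parallel deployment sketch (placeholder hosts and paths)
from fabric.api import env, parallel, run, sudo, task

env.hosts = ["web1.example.com", "web2.example.com", "web3.example.com"]
env.user = "deploy"

@task
@parallel(pool_size=3)  # run on up to 3 hosts at the same time
def deploy():
    # Pull the latest release and restart the application on each host.
    run("cd /var/www/app && git pull origin master")
    sudo("service apache2 restart")

Running 'fab deploy' then executes the task on all listed hosts concurrently.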

Migrating domains from GoDaddy to Route53

GoDaddy is one of the oldest and most widely used domain service providers, but with Route53 available from AWS (Amazon Web Services), it makes sense (from a developer and automation perspective) to move to Route53 or any other DNS provider that offers APIs and services for automating domain management. For this reason, I had to move a few domains from GoDaddy to Route53.

Migrating domains from one DNS provider to another is a big decision and requires careful planning to avoid downtime. We planned the move carefully and followed the steps below, which I believe can be helpful for others as well.

Steps:

  1. Reduce the TTL of every record to a few minutes so that, once you switch, resolvers start sending traffic to Route53 almost immediately. This is not a required step, but it is recommended if you want to test quickly; if your current TTL is measured in days or weeks, the change will take that long to go live.
  2. Export zone file from GoDaddy.
  3. Import the zone file into Route53 (see the API sketch after this list).
  4. Announce a maintenance window to your customers for the scheduled date.
  5. Initiate transfer from Route53. Refer to #2 in references.
  6. Accept and authorize it from GoDaddy. Refer to #3 in references.
  7. Verify the changes.
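
Steps 2 and 3 can be done entirely through the GoDaddy and Route53 consoles. Purely as an illustration (the hosted zone ID, record name and value below are placeholders, not from our migration), upserting a single record through the Route53 API with boto3 looks roughly like this:

# Minimal boto3 sketch (placeholders only) of upserting one record into a
# Route53 hosted zone -- the API equivalent of one zone file entry.
import boto3

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # placeholder hosted zone ID
    ChangeBatch={
        "Comment": "migrated from GoDaddy zone file",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.myDomain.com.",
                "Type": "A",
                "TTL": 300,  # keep the low TTL until the move is verified
                "ResourceRecords": [{"Value": "192.0.2.10"}],
            },
        }],
    },
)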

Verification:

  1. After completing the migration, run the following for each record in the zone file to make sure it responds to DNS queries both with and without the Route53 name server specified (replace myDomain.com and ns-AWS-DNS with your domain and one of the name servers assigned by Route53):
     dig +noauthority +noquestion +nostats myDomain.com
     dig +noauthority +noquestion +nostats myDomain.com @ns-AWS-DNS
     nslookup -debug myDomain.com
     nslookup -debug myDomain.com ns-AWS-DNS
     traceroute myDomain.com
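
To avoid running these commands by hand for every record, a small wrapper can loop over them. Below is a minimal sketch (the record names and the Route53 name server are placeholders) that simply shells out to dig:

# Minimal verification sketch: run the dig checks for each record, both against
# the default resolver and directly against the assigned Route53 name server.
import subprocess

RECORDS = ["myDomain.com", "www.myDomain.com", "mail.myDomain.com"]  # placeholders
ROUTE53_NS = "ns-123.awsdns-45.com"  # placeholder: one of your Route53 name servers

for record in RECORDS:
    for target in ([], ["@" + ROUTE53_NS]):
        cmd = ["dig", "+noauthority", "+noquestion", "+nostats", record] + target
        print(" ".join(cmd))
        subprocess.call(cmd)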

References:

  1. http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/MigratingDNS.html 
  2. http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-transfer-to-route-53.html?console_help=true
  3. https://pk.godaddy.com/help/transferring-domain-names-to-another-registrar-3560

Creating local apt repo from DVD ISOs on Debian

Recently I installed Debian 8.4.0 on my system from the DVD ISO downloaded from Debian's site. After installing the base system, I needed a faster way to install new packages, so I set up a local repo from the downloaded ISOs.

To do so, I followed these steps:

1. Create mount points at your required locations, e.g. /media/username/CD1, /media/username/CD2, etc.

2. Update /etc/fstab with an entry for each ISO image so it can be re-mounted easily after every reboot:
/home/username/Debian/debian-8.4.0-amd64-DVD-1.iso /media/username/CD1/ iso9660 loop,ro,user,noauto 0 0

3. After saving your changes, run mount -a as root to make sure the /etc/fstab entries are correct. Once this is confirmed, you can mount the ISO by running mount /media/username/CD1 as root.

4. Now update /etc/apt/sources.list with the following:
deb file:///media/username/CD1/ jessie main contrib

Make sure you comment out any other line in sources.list that points to a CD-ROM mount of the same ISO image. Once this is done, run apt-get update.

apt should now check your local repo first and then go on to the other repos listed in sources.list.
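
Since the same fstab and sources.list lines are needed for every DVD in the set, a small helper can print them all at once. This is just a sketch assuming the ISOs sit in /home/username/Debian and the mount points follow the /media/username/CDn pattern used above:

# Print /etc/fstab and /etc/apt/sources.list lines for all Debian DVD ISOs
# found in ISO_DIR (assumed paths; adjust to your own layout).
from pathlib import Path

ISO_DIR = Path("/home/username/Debian")
MOUNT_BASE = "/media/username"

isos = sorted(ISO_DIR.glob("debian-8.4.0-amd64-DVD-*.iso"))

print("# /etc/fstab entries")
for i, iso in enumerate(isos, start=1):
    print("%s %s/CD%d/ iso9660 loop,ro,user,noauto 0 0" % (iso, MOUNT_BASE, i))

print("\n# /etc/apt/sources.list entries")
for i, _ in enumerate(isos, start=1):
    print("deb file://%s/CD%d/ jessie main contrib" % (MOUNT_BASE, i))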

AWS Volume Snapshot Automation

I am working on a small backup automation script for AWS (Amazon Web Services). Previously, I wrote a script (shared on a GitHub Gist) to take a snapshot of a specific volume for a single account, and it is already being used in production.

Now I am planning to add the following to this script:

  • Support for use across multiple AWS accounts in different regions.
  • Removal of older snapshots, keeping only the latest 3 copies or so (see the sketch below).
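
As a rough sketch of both additions (this is an assumption of how it could look, not the existing Gist; profiles, regions and volume IDs are placeholders):

# Minimal boto3 sketch: take a snapshot per account/region and prune all but
# the latest 3 snapshots of each volume. Placeholders throughout.
import boto3

ACCOUNTS = [
    {"profile": "account-one", "region": "us-east-1", "volume_id": "vol-0123456789abcdef0"},
    {"profile": "account-two", "region": "eu-west-1", "volume_id": "vol-0fedcba9876543210"},
]
KEEP = 3  # number of most recent snapshots to keep per volume

for account in ACCOUNTS:
    session = boto3.Session(profile_name=account["profile"], region_name=account["region"])
    ec2 = session.client("ec2")

    # Take a new snapshot of the volume.
    ec2.create_snapshot(VolumeId=account["volume_id"], Description="automated backup")

    # List our snapshots of this volume, newest first, and delete the older ones.
    snapshots = ec2.describe_snapshots(
        OwnerIds=["self"],
        Filters=[{"Name": "volume-id", "Values": [account["volume_id"]]}],
    )["Snapshots"]
    snapshots.sort(key=lambda s: s["StartTime"], reverse=True)
    for snapshot in snapshots[KEEP:]:
        ec2.delete_snapshot(SnapshotId=snapshot["SnapshotId"])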

I will share this script as a separate project on Github once complete.

Linux Training Workshop

In the last few months, I conducted a Linux training workshop for my colleagues in groups. It was aimed at our developers who were building LAMP projects on Windows machines.

The presentation gives a very basic summary, but during the workshop I walked through examples and later distributed a document of useful Linux commands.

Incremental backup using Rdiff-Backup

Backup. No one can deny that it is a core requirement for IT businesses. We had been using a script to back up data to a specified folder on our Linux server, and from there we used to burn CDs on a weekly or bi-weekly basis. Due to lack of time, we couldn't improve this for quite some time.

Recently, when an extremely busy schedule kept me from taking backups on CDs, I decided to improve our backup process. Our requirements were really simple: we needed a backup process that could copy our required folders to a remote server and then keep doing incremental backups there. After some research, I found two pieces of software used for this purpose: rsync and rdiff-backup. I chose rdiff-backup because it was written purely for this backup purpose and it also builds on the rsync libraries.

I studied and evaluated it and found it worth trying. After a few days of trying it out on my local machine, I installed it on both our local server and our online server to start taking backups. Following are some of the steps that should help anyone set this up on their machines:

  • To use rdiff-backup for remote backup of local servers, both servers need to have rdiff-backup installed. It is available via apt-get on a Debian system, and for Red Hat based systems it can be downloaded from http://rdiff-backup.nongnu.org/
  • After installing rdiff-backup on both servers, issue the following command to start taking backups:
    rdiff-backup /home/www/ [email protected]::/home/meraj/backup/
    (where /home/www/ is the local source folder to back up and [email protected]::/home/meraj/backup/ is the remote server address and destination folder)

My next step is to automate this. I don't want to do this manually every day, so I will try to find some time to automate it by writing a shell script and running it through crontab.
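
The plan is still a shell script, but as a rough sketch of the same idea (and in line with the earlier plan to migrate scripts to Python), the automation could look like this; the source, destination and retention period are placeholders:

# Minimal backup automation sketch (placeholders; not the final script).
# It could be scheduled from crontab, e.g.:
#   0 2 * * * /usr/bin/python /home/meraj/bin/backup.py
import subprocess

SOURCE = "/home/www/"
DEST = "user@backup.server::/home/meraj/backup/"  # placeholder remote target
RETENTION = "4W"  # keep four weeks of increments

# Run the incremental backup.
subprocess.check_call(["rdiff-backup", SOURCE, DEST])

# Prune increments older than the retention period (--force is needed when
# more than one increment would be removed).
subprocess.check_call(["rdiff-backup", "--force", "--remove-older-than", RETENTION, DEST])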

While setting up and running rdiff-backup, I noticed the following issues:

  1. To make sure rdiff-backup is working correctly, before trying to start a backup run 'rdiff-backup --version' on both servers and check that it returns the version correctly, which means rdiff-backup is correctly installed and running.
  2. Make sure that SELinux (on Red Hat machines) is not running on the destination server, or if it is running, set it to permissive mode.
  3. In case running SELinux in permissive mode also doesn't help, try relabeling _librsync.so using 'chcon -t textrel_shlib_t /usr/lib/python2.4/site-packages/rdiff_backup/_librsync.so'.