Parallel deployment tools

If you deal with deployments on a regular basis, then choosing a good, stable deployment library with the features you need, e.g. parallel deployments, is essential.

We recently faced the same challenge. Previously we were using Ant for packaging and deployment, then moved to Gradle. After deciding to move our infrastructure behind load balancers (to support more users and provide fault tolerance), we wanted a tool that could deploy to all of our servers behind a load balancer in parallel.

For this purpose, we explored and tried out a few tools to see which one would be the better fit for us in the long run. The tools we evaluated were the following:

1. Ansible: While it does support parallel deployment, it is quite a large tool set and is more of a server management and configuration tool. We already use Chef for server management and configuration, so going with Ansible was not ideal for us.

We preferred Chef over Ansible because of the utility and customization we can achieve via cookbooks and recipes.

2. Capistrano: This Ruby-based tool deploys to a customizable folder, but it expects every deployment to live in its own separate folder served via a symbolic link.

3. Fabric: This is a Python-based library that streamlines system administration and supports parallel deployments across a fleet of servers.

In the end we decided to go with Fabric due to the following reasons:

  • Python is a more mature and widely used system administration language.
  • A vibrant community and documentation.
  • We are already using Python for some of our tasks and plan to migrate other scripts (currently written in shell and PHP) to Python in the long run.
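To illustrate the core idea we wanted (what Fabric provides via its parallel execution support), here is a stdlib-only Python sketch that fans a deploy command out to several hosts concurrently. The host names and the command are made up for illustration; a real run would invoke ssh instead of echo.

```python
import subprocess
from multiprocessing.dummy import Pool  # thread pool, suited to I/O-bound remote calls


def run_on_host(host, make_command):
    """Run a shell command for one host and return (host, output).

    In a real deployment make_command would build an ssh invocation;
    it is parameterized here so the sketch stays self-contained.
    """
    out = subprocess.check_output(make_command(host), universal_newlines=True)
    return host, out.strip()


def deploy_all(hosts, make_command, workers=5):
    """Run the deploy command on every host in parallel and collect output."""
    with Pool(workers) as pool:
        return dict(pool.map(lambda h: run_on_host(h, make_command), hosts))


if __name__ == "__main__":
    hosts = ["web1.example.com", "web2.example.com"]  # hypothetical fleet
    # A real run might use: lambda h: ["ssh", h, "sudo service app restart"]
    results = deploy_all(hosts, lambda h: ["echo", "deployed to " + h])
    for host, output in sorted(results.items()):
        print(output)
```

Fabric wraps this pattern (plus SSH handling, output capture, and failure reporting) behind a much nicer task-based interface, which is why we chose it rather than maintaining something like the above ourselves.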

Migrating domains from GoDaddy to Route 53

GoDaddy is one of the oldest and most widely used domain service providers, but with Route 53 from AWS (Amazon Web Services) available, it is almost necessary (from a developer and automation perspective) to move to Route 53 or any other DNS provider that offers APIs and services for automating around domains. Due to this requirement, I had to move a few domains from GoDaddy to Route 53.

Migrating domains from one DNS provider to another is a big decision and requires careful planning to avoid any downtime. We planned this adequately and followed the steps below, which I believe can be helpful for others as well.


  1. Reduce the TTL for every record to a few minutes so that, once you move, traffic starts going to Route 53 almost immediately. This step is not required, but it is recommended if you want to test the cut-over quickly; if your current TTL is days or weeks, the switch will take that long to take effect.
  2. Export zone file from GoDaddy.
  3. Import zone file into Route53.
  4. Announce downtime for maintenance on scheduled date to your customers.
  5. Initiate transfer from Route53. Refer to #2 in references.
  6. Accept and authorize it from GoDaddy. Refer to #3 in references.
  7. Verify the changes.


  After completing the migration, run the following for each record from the zone file to make sure it responds to DNS queries both with the AWS name server specified and without it:

  • dig <record> +noauthority +noquestion +nostats
  • dig <record> +noauthority +noquestion +nostats @ns-AWS-DNS
  • nslookup -debug <record>
  • nslookup -debug <record> ns-AWS-DNS
  • traceroute <record>

  (Here <record> is each record name from the zone file and ns-AWS-DNS is one of the name servers Route 53 assigned to the hosted zone.)
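To avoid typing these checks by hand for every record, a small script can read the zone file exported in step 2 and print one set of commands per record. This is a sketch assuming a standard BIND-style zone export; the sample records and the AWS name server name are placeholders.

```python
def records_from_zone(zone_text):
    """Yield record names from a BIND-style zone file export.

    Skips comments (;), directives ($ORIGIN, $TTL) and blank lines;
    the record name is the first field of each remaining line.
    """
    for line in zone_text.splitlines():
        line = line.strip()
        if not line or line.startswith(";") or line.startswith("$"):
            continue
        yield line.split()[0]


def verification_commands(zone_text, aws_ns):
    """Build the dig/nslookup check commands for every record."""
    cmds = []
    for name in records_from_zone(zone_text):
        cmds.append("dig %s +noauthority +noquestion +nostats" % name)
        cmds.append("dig %s +noauthority +noquestion +nostats @%s" % (name, aws_ns))
        cmds.append("nslookup -debug %s" % name)
        cmds.append("nslookup -debug %s %s" % (name, aws_ns))
    return cmds


if __name__ == "__main__":
    sample = ("$TTL 300\n"
              "example.com. 300 IN A 192.0.2.10\n"
              "www.example.com. 300 IN CNAME example.com.\n")
    for cmd in verification_commands(sample, "ns-123.awsdns-45.com"):
        print(cmd)
```

You can then run the printed commands (or pipe them to a shell) and eyeball the answers against the old provider's responses.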



Creating local apt repo from DVD ISOs on Debian

Recently I installed Debian 8.4.0 on my system from DVD ISOs downloaded from Debian’s site. After installing the basic system, I needed a faster way to install new packages, so I set up a local repo from the downloaded ISOs.

To do so, I followed these steps:

1. Create mount points at your desired locations, e.g. /media/username/CD1, /media/username/CD2, etc.

2. Update /etc/fstab with an entry for each ISO image so it can be mounted after every reboot:
/home/username/Debian/debian-8.4.0-amd64-DVD-1.iso /media/username/CD1/ iso9660 loop,ro,user,noauto 0 0

3. After saving your changes, run mount -a as root to make sure your /etc/fstab entries are correct. Once this is confirmed, you can mount the image by running mount /media/username/CD1 as root.

4. Now update /etc/apt/sources.list with the following:
deb file:///media/username/CD1/ jessie main contrib

Make sure you comment out any other line in sources.list that points to a CD-ROM mount of the same ISO image. Once this is done, run apt-get update.

apt should now consult your local repo first and then move on to the other repos you have listed in sources.list.
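Since the fstab and sources.list lines follow the same pattern for every DVD, a short script can generate them for all the discs at once. This is a sketch; the username, ISO directory, and disc count are assumptions you would adapt to your own setup.

```python
def repo_entries(user, iso_dir, discs, release="jessie"):
    """Generate matching /etc/fstab and sources.list lines for each DVD ISO."""
    fstab, sources = [], []
    for n in range(1, discs + 1):
        iso = "%s/debian-8.4.0-amd64-DVD-%d.iso" % (iso_dir, n)
        mount = "/media/%s/CD%d" % (user, n)
        # fstab needs all six fields: device, mount point, type, options, dump, pass
        fstab.append("%s %s/ iso9660 loop,ro,user,noauto 0 0" % (iso, mount))
        sources.append("deb file://%s/ %s main contrib" % (mount, release))
    return fstab, sources


if __name__ == "__main__":
    fstab, sources = repo_entries("username", "/home/username/Debian", 3)
    print("\n".join(fstab))
    print("\n".join(sources))
```

Paste the first group into /etc/fstab and the second into /etc/apt/sources.list (as root), then proceed with mount -a and apt-get update as described above.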

AWS Volume Snapshot Automation

I am working on a small backup automation script for AWS (Amazon Web Services). Previously, I wrote a script, shared on GitHub Gist here, that takes a snapshot of a specific volume for one user; it is already being used in production.

Now I am thinking of adding the following to this script:

  • Add support for using the script with multiple AWS accounts in different regions.
  • Remove older snapshots, keeping only the latest 3 copies or so.
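The retention part is mostly list handling: sort the snapshots for a volume by start time and delete everything past the newest three. Here is a sketch of that logic on plain dicts, kept independent of the AWS client; the snapshot shape and the delete step are assumptions, and in the real script the AWS API delete call would consume the returned list.

```python
def snapshots_to_delete(snapshots, keep=3):
    """Return the snapshots to delete, keeping only the `keep` newest.

    `snapshots` is a list of dicts, each with a sortable 'start_time'
    value (as the AWS snapshot metadata provides).
    """
    newest_first = sorted(snapshots, key=lambda s: s["start_time"], reverse=True)
    return newest_first[keep:]


if __name__ == "__main__":
    snaps = [{"id": "snap-%d" % i, "start_time": i} for i in range(5)]
    doomed = snapshots_to_delete(snaps)
    # In the real script each of these would get a delete call via the AWS API.
    print([s["id"] for s in doomed])  # → ['snap-1', 'snap-0']
```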

I will share this script as a separate project on GitHub once it is complete.

Git Hands On Workshop

A friend asked me to run a small hands-on workshop on Git. I prepared the following very brief introduction and tutorial for him and his team.

What is Git?
Git is a distributed revision control and source code management (SCM) system with an emphasis on speed. It was initially designed and developed by Linus Torvalds for Linux kernel development in 2005.
What are forks?

When you clone a Git repo to your local workstation, you cannot contribute back to the upstream repo unless you are explicitly declared a contributor.

So that clone (on your local workstation) isn’t a fork; it is just a clone. A fork is a server-side copy of the repository under your own account on a hosting service such as GitHub or Bitbucket, which you can push to and then use to propose changes back upstream.

Basic Git Operations
To initialize an existing project in Git, go to that directory and run git init.
To work with existing code, you first get a copy of it. The clone URL can be copied from Bitbucket / GitHub etc. (shown at the top of the repo page), e.g. git clone <repository-url>

Once you have made changes to the code, running git status before committing will show a list of modified files for review.

If there are files you don’t want to include in your project in Git, create a file named .gitignore containing the names (or patterns) of those files.

In Git you have to add files / folders that are not yet part of the project before you can commit them. To add an individual file, run git add filename. (This concept is also called staging in Git.)

To add all modified and deleted files in your project, run git add -u (or git add -A to include new files as well).

Running git commit -m "describe your change" records a snapshot of your changes. It is recommended to include a message with every commit so that in the future you can easily track what changes were made and why.

Commits don’t push your changes to the remote repository immediately. To push them to the remote repository, run git push.

To see the differences between your changes (not yet committed or pushed) and your last committed changes, run git diff.

To remove a file in Git, run git rm filename. This removes the file from your working copy and stages the deletion; the next commit will remove the file from the repository as well.

To rename a file, run git mv oldname newname.

By default, with no arguments, git log lists the commits made in that repository in reverse chronological order. There is a comprehensive list of options for git log that make it more useful; common scenarios include comparing history word by word or line by line.

If changes have been pushed to your fork by other users, you will need to pull them in before you can push. Running git pull --rebase fetches those changes and applies your commits on top of them.
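The everyday cycle above (init, status, add, commit, log) can also be driven from a script. This sketch uses Python's subprocess module to run git in a throwaway directory; it assumes git is installed, and the file name and commit message are made up.

```python
import os
import subprocess
import tempfile


def git(repo, *args):
    """Run a git command inside `repo` and return its output."""
    cmd = ["git", "-C", repo,
           "-c", "user.name=demo", "-c", "user.email=demo@example.com"] + list(args)
    return subprocess.check_output(cmd, universal_newlines=True)


if __name__ == "__main__":
    repo = tempfile.mkdtemp()
    git(repo, "init")                         # initialize an empty repository
    with open(os.path.join(repo, "hello.txt"), "w") as f:
        f.write("hello git\n")
    print(git(repo, "status", "--short"))     # shows hello.txt as untracked (??)
    git(repo, "add", "hello.txt")             # stage the new file
    git(repo, "commit", "-m", "add hello.txt")
    print(git(repo, "log", "--oneline"))      # one commit in the history
```

The -c user.name/user.email flags are only there so the commit works in an environment without a global Git identity configured.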

Above I have listed only a few common scenarios / commands; Git provides many more options, and it takes time to master them all.

There are many resources available to help you learn and explore Git further. The following are two links that I found pretty useful:

Linux Training Workshop

Over the last few months, I conducted a Linux training workshop for my colleagues in groups. It was aimed at our developers who were developing LAMP projects on Windows machines.

The presentation gives only a very basic summary; in the workshop I walked through examples and later also distributed a document containing useful Linux commands.

Coding Standards

I have been programming for the last 8+ years and have worked on different platforms and languages. Every language has its own power and beauty, but there are some global standards (or principles) which, if followed, increase the ease of maintenance and the scalability of code. These standards also greatly help in environments where multiple people are working on the same project.

Another plus point of such code is that if a developer leaves, a new developer can easily follow it and pick up where the previous one left off, thanks to good readability and clear code. A very good read in this regard is Code Complete by Steve McConnell. I follow a few rules taken from Code Complete, other sources, and my own experience:

Naming Conventions:

Variable names should be all lower-case, with words separated by underscores, e.g. order_count.

Names should be descriptive but concise. We don’t want huge sentences as variable names, but typing an extra couple of characters is always better than wondering what exactly a certain variable is for.
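In Python terms, the convention looks like this; the variable names are made up for illustration:

```python
# Good: lower-case, underscore-separated, descriptive but concise.
failed_login_count = 0
max_retry_limit = 5

# Bad: unclear abbreviations and inconsistent casing make readers guess.
flc = 0          # what is "flc"?
MaxRetryLim = 5  # mixed case and a truncated word

print(failed_login_count, max_retry_limit)
```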

Function Names:

Functions should be named descriptively, and function names should preferably contain a verb.

Function Arguments:

Arguments should be treated the same as variable names. In most cases, we’d like to be able to tell how to use a function just by looking at its declaration, and when we generate documentation via a script, descriptive arguments help readers understand it better.

Include the braces:

One should use the complete syntax for conditional / loop structures. Even if the body of a construct is only one line long, do not drop the braces.


Comments:

Each function should be preceded by a comment that tells a programmer everything they need to know to use that function.

The meaning of every parameter, the expected input, and the output are the minimum required in the comment. The function’s behaviour in error conditions (and what those error conditions are) should also be documented. Nobody should have to look at the actual source of a function in order to call it with confidence from their own code.

It is especially important to document any assumptions your code makes, and any preconditions for its proper operation. Any developer should be able to look at any part of the application and figure out what’s going on in a reasonable amount of time.
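Putting the commenting rules together, a function header might look like the following Python docstring; the function and its names are a hypothetical example:

```python
def transfer_funds(source_account, target_account, amount):
    """Move `amount` from source_account to target_account.

    Parameters:
        source_account: dict with a numeric 'balance' key; debited.
        target_account: dict with a numeric 'balance' key; credited.
        amount: positive number to transfer.

    Returns:
        The new balance of source_account.

    Raises:
        ValueError: if amount is not positive or exceeds the source balance.

    Precondition: both accounts use the same currency.
    """
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > source_account["balance"]:
        raise ValueError("insufficient funds")
    source_account["balance"] -= amount
    target_account["balance"] += amount
    return source_account["balance"]
```

A caller can use this function with confidence, knowing its inputs, output, and failure modes, without ever reading its body.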

Final Year Project Cleared

As I have been posting about my final year project progress here and here: Alhamdulillah, I have cleared it with a 3.67 GPA. While working on my own project, I also emailed a number of students, and with those who responded I continued to share my tips emails. I am really happy to see that my tips emails were very useful for some students, and some of them were able to build their projects and clear them.

I plan to continue with this and look forward to helping more students with upcoming final year projects. But I would like to clarify one thing: help doesn’t mean copying. It means I will only try to provide some guidance (for any project the students are working on that I know something about), and with that guidance the students should be able to make progress on their projects themselves.

So those who need help can contact me by leaving comments here or by sending an email to naqoosh AT gmail DOT com. I will get back soon, inshAllah.

LESS and node.js

Recently, while studying and trying out some new HTML5 code, I also got to know about node.js and LESS.

Having read about them briefly, I was amazed at the features and functionality they provide. I strongly believe both will simplify the life of a web application developer, and I am looking forward to learning and working with them in the near future.

I will try to write some of my experiments here.