Using SendGrid templates with Node.js

For a while now at Mish Guru we had been using Mandrill to handle transactional emails, things like password resets. Unfortunately Mandrill recently decided to change their pricing plans, so it was no longer a cost-effective option for what we were trying to achieve.

We decided to give SendGrid a try as it seemed to have all the features we wanted: templates, no cost at low volumes, and a Node wrapper over the API to help get us going.

This all seemed great until I got started and realised that SendGrid has horrible documentation on how to use their Node package, especially when it came to using their template engine.

[Screenshot: the entire section on templating in the docs for the Node package]
After giving the appropriate feedback on the SendGrid docs page, I decided to make this quick guide to share what I’ve learnt about getting templates going with Node. This tutorial will quickly cover how you can set up a template in SendGrid and start sending your users customised emails.

Setting up a template

After you’ve created your SendGrid account, you’ll need to create a template on their dashboard. Once you’ve created a new template you should be on the template editor, which looks like this:

[Screenshot: the SendGrid template editor]

Now you can start formatting and adding elements to your template by turning the drag and drop editor on in the left-hand control panel, then going to the Build tab to start adding elements to your template.

Required tags

For some reason or another the <%body%> tag is mandatory, even though in this example we aren’t going to use it. It has to be in your template somewhere, and you have to provide some kind of body when you send your email via Node.

Similarly when you’re sending your email from Node, you have to provide a <%subject%> variable, even if you’ve hardcoded the subject of your email into the template.

Variable substitution

There is very little documentation on how variable substitution works in SendGrid’s Node docs; however, with a bit of digging through their API docs and some trial and error I managed to get variables substituting in the format %my_substituted_var%.

It seems user-defined variables don’t work when you have the angle brackets <% ... %> around them; these are reserved for the <%body%> and <%subject%> tags.
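For example, a template body along these lines substitutes cleanly (the variable names here are just hypothetical placeholders):

<h1>Hi %first_name%</h1>
<p>Click <a href="%reset_url%">here</a> to reset your password.</p>
<%body%>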

So now you can make your template, which might look something like this:

[Screenshot: example template in SendGrid. Note the required <%body%> tag.]

Note at the bottom of the template I’ve inserted the <%body%> tag, even though the template I’ve created has all of the information that is required for this email. This just seems to be a quirk of SendGrid and you’ll get an error if you don’t include it.

Button URLs

In an email like the one in our password reset example, you’ll most likely want the URL of the button set to a custom URL which is unique for each email. You can do this by clicking on the button, then setting the button URL to a template variable (for example %reset_url%) in the button properties section on the left-hand side of the template editor.

[Screenshot: setting the button URL to a template variable in the button properties panel]

Sending an email from Node

After your template is looking good, head over to the API keys section on the SendGrid site and create a new API key that has permission to send mail and read templates. You’ll also need to get the ID of the template you want to use, which is available from the template page on the SendGrid dashboard.

Then after a quick npm install sendgrid --save in your project you can start sending emails right away! Here’s an example of how you would fire off the template we created earlier.
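What follows is a minimal sketch against the 2.x-era sendgrid package; the addresses, API key and template ID are placeholders, so double-check the constructor against the version you install:

var sendgrid = require('sendgrid')('YOUR_API_KEY');

var email = new sendgrid.Email({
    to: 'user@example.com',         // placeholder recipient
    from: 'noreply@example.com',    // placeholder sender
    subject: 'Reset your password', // required, even if hardcoded in the template
    html: '<p></p>'                 // required body; rendered wherever <%body%> sits
});

// Substitution keys must include the % delimiters
email.addSubstitution('%first_name%', 'Jamie');
email.addSubstitution('%reset_url%', 'https://example.com/reset/abc123');

// The template is enabled as a filter, then pointed at your template ID
email.addFilter('templates', 'enable', 1);
email.addFilter('templates', 'template_id', 'YOUR-TEMPLATE-ID');

sendgrid.send(email, function (err, json) {
    if (err) { return console.error(err); }
    console.log(json);
});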


Note a few things here:

  1. The body of the email must be set using the html parameter in the email constructor. It failed when I gave it an empty string, so for this example I set it to an empty paragraph tag, which sits at the bottom of the template and renders out as nothing.
  2. The subject must also be given, even if you’ve hardcoded it into the template, otherwise you’ll get an error.
  3. When setting the substitutions you have to add the % delimiters or it won’t work.
  4. The syntax for specifying your template is a bit strange. You have to set it as a filter and enable it. If you copy paste how it’s done here it should work.

TL;DR

SendGrid provides a way to send templated emails like you would in Mandrill, but getting everything going is nowhere near as straightforward as it should be.

Happy coding!


How to set up Trello to better manage your sprints

It’s no secret that Trello is an awesome tool for keeping track of all sorts of things, including your team’s progress during a sprint. With a few free Chrome plug-ins you can get even more value from your boards by enabling you to track sprint points and group your stories together as epics.

Why Bother?

Tracking sprints with stock Trello can be a bit messy and hard to follow. There is no standard way of tracking sprint points, and the labels for grouping cards together leave a lot to be desired.

Stock Trello is a little bland when it comes to tracking sprints

Tracking sprint points is a great way to track the absolute progress of a sprint, but tracking the estimated points and actual consumed points allows you to see if your team is correctly estimating the difficulty of each task.

Tracking both consumed and estimated points gives you a better idea of how much your team tends to underestimate or overestimate tasks. By feeding this back to the team they can iterate and improve their ability to estimate how many points a task will take, giving everyone a more realistic overview of what can be achieved in a sprint.

Scrum for Trello

Scrum for Trello enables you to track sprint points on your Trello cards by adding special text to the name of a card. You can add the number of estimated sprint points a task has by putting the number in parentheses at the beginning of the card name. Likewise you can track the number of consumed sprint points in square brackets at the end of the card name to see how many points the task actually took.
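For example, a hypothetical card named

(3) Add password reset emails [5]

was estimated at three points and ended up consuming five.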

Sounds like a bit too much effort? Not to worry, when you go to edit the name of a card the extension automatically provides buttons for you to select the estimated and consumed points.

Scrum For Trello allows you to enter the amount of estimated and consumed points that a task has when you edit the name of the task

Now that you’ve entered all the points on your cards you can get an overview of your sprint at any time with the totals that the extension provides you.

Scrum for Trello provides totals of the consumed and estimated points for each card, list and for the whole board

Card Colour Titles for Trello

Most of the time when you’re running sprints, tasks will be grouped together as part of a larger feature or epic. Trello lets you label cards with a name and colour to help group them, but by default the label just shows up as a coloured strip along the top of the card.

Trello card labels let you group cards together with a coloured label and a label name
… But label names don’t appear on cards by default, which means you have a lot of colours to memorise!

Card Colour Titles for Trello solves this problem by simply putting the label names onto the front of the card so you can see what the label is at a glance.

Finished! Now we have a great overview of which tasks belong together and how the sprint points are tracking

With these two plugins you can enhance Trello into a fantastic sprint tracking tool which will help you stay on top of all your sprint tasks and help your team better estimate their sprint points in the future.


As a side note there is also a great plugin called Agile SCRUM for Trello boards which adds support for estimated and consumed points, as well as task labelling. It also gives you nice progress bars as the background for cards and lists.

It looks great but in my opinion feels a little too small and cluttered, as well as lacking support for providing the total number of estimated and consumed points for the entire board. I also don’t like that it makes you write labels as part of the card name instead of using Trello’s built in labelling.

Agile SCRUM for Trello

Deploying a Django App to Amazon AWS (with Nginx + Gunicorn + Git)

This tutorial will cover deploying a stateless Django stack to Amazon AWS. Our stack will consist of some version of Ubuntu (for this tutorial I’m using 12.04 LTS), nginx + gunicorn to serve the Django app, a Postgres Amazon RDS instance for the database, Amazon S3 for our static files and BitBucket as our private Git repository that the server updates from.

The code that does all the automation in this tutorial can be found here.

The Setup

Traditionally web hosts would have tight coupling between their servers, filesystems and databases. This makes for an easy setup but really reduces your options in terms of scalability. Having everything linked together would often mean that you needed to scale vertically; that is, scale by buying bigger and better hardware.

These days with stuff out there such as Amazon Web Services (AWS) it is affordable to decouple all these components and scale them as needed. By decoupling our filesystem, database and servers into separate components we are able to create a stack where it is easy to scale out horizontally; that is, by adding more components.

AWS is great, but out of the box it is pretty raw and there isn’t much in terms of getting a basic Django stack going. There are a few options with Amazon’s Elastic Beanstalk, however these are very limited in terms of configuration. The Elastic Beanstalk configuration for Django is something along the lines of a MySQL RDS server with Apache serving the Django stuff. This is OK, however a very common setup with Django is the nginx + gunicorn combo powering the site and a Postgres database behind it all.

In this tutorial I will show you how to configure an AWS instance with nginx and gunicorn, with supervisor monitoring the processes to check that they are online. Our code will be stored on a private BitBucket repo (which is free), and we will be able to update our server with a single command. Our static files will be served from Amazon S3 and our database will be an Amazon RDS instance running Postgres. This means that if at some later date you decided that you needed multiple instances running your app, you could just scale out and spawn more EC2 instances without having to worry about the shared static files or database stuff. But for the purposes of this tutorial we will keep it down to just one instance, as this then fits within Amazon’s free tier.

Our basic architecture for our Django deployment to AWS
This setup would allow you to scale out your deployment easily as the database and static files are nicely decoupled from the server logic

Prepping AWS

Firstly we need to take care of a few things on AWS before we can get started. The very first one is to generate a pair of AWS access keys and SSH keys if we don’t already have them. Follow the guides here and here to get these.

Secondly we need to create a security group for our AWS services so they can all talk to each other and the worldwide web. On the AWS console click on EC2, then on the left-hand side under Network and Security click on Security Groups. Now click on Create Security Group, and fill out the dialog with the name and description of your security group.

Create a security group for your application

Once it’s created, go to the list of security groups and click on the one you created. Down the bottom of the screen click on the Inbound tab. In the Create a new rule dropdown box select SSH and click Add Rule. Do the same for HTTP and HTTPS. Then select Custom TCP rule and add port 5432 to the list – this is the port we connect to our Postgres server over. Also if you want to test the server using Django’s runserver command you can add port 8000 to the list, however this is optional. Finally click Apply Rule Change.

Ensure that ports 22 (SSH), 80 (HTTP), 443 (HTTPS) and 5432 (Postgres) are open. Port 8000 can optionally be left open for server debugging.

Setting up S3

Now we are able to set up an Amazon S3 bucket for our static files. In the AWS console go to the S3 page and click on Create Bucket. Name your bucket, choose a region for it to live in, and choose to set up logs if you want them later on. Note down what name you gave your S3 bucket, as we’re going to need it later.

Setting up RDS

Now we need a database to power our Django app. In the AWS console go to the RDS page and click on Launch a DB Instance. Choose PostgreSQL as the engine by clicking Select next to it. If you want to take advantage of the high availability replication of RDS then choose Yes on the page asking if you want to use Multi-AZ; however, for the purposes of keeping within the free tier we are going to select No for this step. Continue the process of adding the database instance name, username and password, taking care to note them all down along the way.

Setting up Our Django Project

We need to be able to have separate settings for both our production and development code. To do this, do the following (there’s a layout sketch after the list):

  • Create a folder in the root directory of your Django project called requirements that has three pip requirements files in it:
    • common.txt for all your common Python dependencies between the server and local (add Django to this file)
    • dev.txt for your local Python dependencies
    • prod.txt for your server Python dependencies (add boto, django-storages and psycopg2 to this)
  • Create a folder called settings, located next to the settings.py of your Django project, with four Python files in it:
    • __init__.py
    • common.py for all your common Django settings
    • dev.py for your local Django settings
    • prod.py for your server Django settings
  • At the top of both dev.py and prod.py add the line from <django_project_name>.settings.common import *
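When you’re done, the relevant parts of the project should look something like this, with <django_project_name> standing in for your project’s name:

requirements/
    common.txt
    dev.txt
    prod.txt
<django_project_name>/
    settings/
        __init__.py
        common.py
        dev.py
        prod.py
    wsgi.py
manage.py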

Change the

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "<django_project_name>.settings")

in both wsgi.py and manage.py to

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "<django_project_name>.settings.prod")

This means that the project will default to the production settings; however, you can run it locally using

python manage.py runserver --settings=<django_project_name>.settings.dev

To add the S3 bucket we created earlier to the project, add the following to settings/prod.py replacing <s3_staticfiles_bucket_name> with whatever you decided to call your bucket earlier.

INSTALLED_APPS += ('storages',)
AWS_STORAGE_BUCKET_NAME = "<s3_staticfiles_bucket_name>"
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
S3_URL = 'http://%s.s3.amazonaws.com/' % AWS_STORAGE_BUCKET_NAME
STATIC_URL = S3_URL

Finally we want to hook up our Amazon RDS instance to our code, so in settings/prod.py add the following information with the name, user and password filled in with the ones you noted down when creating your RDS instance. The host URL can be found by going to the RDS section of the AWS console and clicking on your database to reveal its public DNS.

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'mydbname',
        'USER': 'mydbuser',
        'PASSWORD': 'mydbpass',
        'HOST': 'xxxx.xxxx.ap-southeast-2.rds.amazonaws.com',
        'PORT': '5432',
    }
}

If you aren’t sure about how it should all look when you are done, check out this demo project that I’ve put up on GitHub.

Pushing the Project to BitBucket

We are going to use BitBucket’s free private Git hosting to host our code as this gives us a high quality location to store our code that our AWS servers can also access to update themselves. Firstly you will need to create a pair of SSH keys for BitBucket, which you can do by following these instructions. Keep that keyfile somewhere safe as we will need it soon.

You’ll need to make your Django project a git repo if you haven’t already; on Mac OS X or Linux you can do this by opening the folder of your Django project in a terminal and typing

git init

Now we want to tell the git repo not to store our fabric files when we add them to the project, so in the same command prompt type

echo "fabfile/" > .gitignore
git add .
git commit -m "Initial commit"

Now on the BitBucket site we need to create a repository for our app. Do this by clicking on Create at the top of the page. Follow the steps to create a new repository, and to make life easier create it with the same name as your project. Now BitBucket will ask us if we have existing code to push up, which we do! Follow the instructions from BitBucket to add a remote repository, and push up the code to that repository.
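Those instructions boil down to something like the following, with placeholder user and repository names:

git remote add origin git@bitbucket.org:<bitbucket_user>/<project_name>.git
git push -u origin master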

Fabric

This is where the magic bits happen. Now our code is up on BitBucket and our AWS database and storage space are all online we can run a fabric file to do the rest. Fabric is an automation tool which allows you to run commands on remote servers.

Download this repo from GitHub and move the fabfile folder into your project’s directory. In the command prompt type

pip install -r fabfile/requirements.txt

This will install fabric along with a few other things we need for our deployment. Now go into the file fabfile/project_conf.py and edit all the settings to match your own. There are quite a few things in here and they are all decently important, so take your time to make sure they are all correct. Then from the project root type

fab spawn instance

This takes a while, but all things going well it will start an EC2 instance on AWS, set up nginx and gunicorn on it, pull your code from BitBucket, install your Python packages from the requirements files we made earlier, set up supervisor to manage the server, collect your static files and send them to S3, and finally sync your database tables with RDS before starting the server process and giving you its public URL.

Be sure to take this public URL and add it to the EC2_INSTANCE list at the bottom of fabfile/project_conf.py, so that any future calls we make to the fab file know which server to execute stuff on.

Maintenance

If all went well then now all you have to do when you want to update your server is push the latest version of your code to BitBucket then type

fab deploy

into your command prompt at the root of your Django project. This will tell the server to pull the latest version of your code from BitBucket then reload the servers. If you added Python packages you will also need to

fab update_packages

possibly followed by a

fab reload_gunicorn

to get the changes to show up.

A full list of the commands is available in the readme for the fabric file on GitHub. Happy deploying!

Solar Farm Calculator

During my final year of electrical engineering I opted to take a sustainable energy paper in order to get out of having to take a brutal computer hardware paper. For this paper we were required to take a real world site and design a sustainable energy system based on that site.

After a few false starts with sites that turned out not to have much opportunity for improvement, time was running out for our team to complete our design! My two team members had experience with solar systems, and from our discussions I gathered that most solar installations follow a very similar architecture (see below).

[Diagram: the typical architecture of a solar installation]

In light of this I proposed that instead of developing a specific design for a given site, we take all the parameters from a site and use them to develop a generic tool which would let you check whether installing solar on a site was even feasible. Not only did this satisfy the requirements for the paper, but it would also help others get past the annoying stage we kept hitting, where a site turned out to be a poor candidate for a sustainable upgrade only after a significant amount of time had been invested.

The result of all this was the Solar Farm Calculator, an open source solar simulation tool written in Python. The tool takes parameters about a site and a given time frame, then runs a simulation to calculate the expected power output of the site and the related financials.

[Screenshot: the Solar Farm Calculator]

The calculator takes a latitude and longitude for a site, and uses the Google Maps Reverse Geocoding API to find out where that location actually is. Then using PySolar the program is able to simulate the solar energy the site receives across the given timeframe. This is coupled with a bunch of solar models to calculate the energy generated at 20-minute intervals across a day, which is then averaged down to a one-day resolution. Using the location we can also access average temperature data from the World Bank Climate Data API to figure out how much power we’re losing through the cables, which varies with ambient temperature.

Taking all this with a bunch of other losses from things like inverters and transformers, we can figure out how much power the site generates per day; given a price that we can sell the power for, we can then figure out how much money the solar farm generates in a day.
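As a rough hypothetical example: a site that nets 2 MWh per day after all these losses, selling at $0.15 per kWh, would bring in 2000 × 0.15 = $300 per day.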

To keep things generic we used the Open Exchange Rates API to gather currency information. This allows all the components to be specified in their own currency, which is then factored in when calculating the financial results. To make this easier I ended up writing a little Python module called PyExchangeRates to handle all the exchange rate stuff and offline caching.

The financials even factor in the depreciation of the equipment and the interest on the loan needed to start the solar farm. After running a simulation the results end up looking like this:

[Screenshot: example simulation results]

The calculator can be downloaded as a pre-built binary for Windows and Mac OSX from SourceForge, and as always the source is available on GitHub if you want to contribute.

Special thanks to Darren O’Neill and Jarrad Raumati who worked on this project with me, specifically on all the nasty power flow analysis, and for the excellent late night banter as the assignment deadline crept up.

Using Open Exchange Rates with Python

I recently found myself in need of the ability to work with multiple currencies within Python and came across Open Exchange Rates – a JSON feed with multiple currencies updated hourly. Best of all there is a free plan which allows you to hit the API up to 1,000 times a month! For my purposes hourly rates were not needed – daily rates were fine and should be for most commerce applications where the rates are only being used to estimate a price.

Unfortunately as of now there is no official Python wrapper to get rates from the API; however, as the rates are delivered in JSON they are simple enough to grab and decode. I wrote a little wrapper, PyExchangeRates, which simplifies access to different currencies and allows you to work with money of different currencies just like you would with regular numbers in Python.
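The feed itself is a single JSON document that looks roughly like this (abridged, with illustrative values):

{
    "timestamp": 1385913658,
    "base": "USD",
    "rates": {
        "AUD": 1.0945,
        "EUR": 0.7353,
        "NZD": 1.2205
    }
}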

Here’s a little example of how the module works. Firstly the module is imported and an ‘Exchange’ is created using your API key from Open Exchange Rates. This will download the latest rates and save them to a local file. Next time the module is loaded, if the file can be found the rates will be loaded from the local version. If the local version is more than one day old and an internet connection is available, the rates will be updated automatically; if there isn’t any internet connection available the old rates will still be used. This saves you from overrunning your free limit of API hits.

import PyExchangeRates

# Create an 'Exchange' object, this holds all the information about the currencies and exchange rates
# Get a free API key from https://openexchangerates.org/signup/free

exchange = PyExchangeRates.Exchange('YOUR API KEY HERE')         

Now that the exchange is created we can withdraw some currencies. There are over 100 currencies available, all accessible via their standard three letter identifier – the full list is available here.

# Withdraw a few different currencies from the exchange
a = exchange.withdraw(1000, 'USD')
b = exchange.withdraw(1000, 'EUR')

Now that we have a few different currencies, we can play with them like they are regular numbers in Python. The currencies need not be the same to be added together, but the result will always be in USD for consistency.

# Money can be added together, the result will be in USD
print a + b
2352.363797 USD

# Money can be subtracted...
print a - b
-352.363797 USD

# Multiplied...
print a * b
1352363.796680 USD

# Scaled up...
print a * 2
2000.000000 USD

# and divided by a constant
print b / 2
500.000000 USD

If you need an answer in a particular currency, no need to worry: money can be converted from one currency to another with one line.

# Money can also be converted to other currencies 
print a.convert('AUD')
1061.079000 AUD

Feel free to fork the project on GitHub if you can think of any improvements. Happy coding!

Getting Started with LaTeX

I see people constantly frustrated with MS Word, especially with bigger, more involved documents. The good news is that there is a better way!

LaTeX is a program used to make documents using code. This lets you worry about what you’re writing instead of how it looks, plus you get the added bonus of awesome automatic citations, cross-referencing and tables of contents (just to name a few).
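To give you a flavour, here’s a minimal sketch of a document (not the tutorial template itself) showing off the automatic table of contents and cross-referencing:

\documentclass{article}

\begin{document}

\tableofcontents

\section{Introduction}
The method is described in Section~\ref{sec:method}.

\section{Method}
\label{sec:method}
LaTeX numbers the sections and resolves the reference automatically.

\end{document}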

If it sounds a little hard don’t panic! Once you see how easy it is and how good your documents look you’ll wonder how you lived without it.

I’ve created a little tutorial report template which covers a lot of the really common things you’ll end up using in LaTeX, with plenty of comments to help explain what’s going on. You can download it here from my GitHub.

The best bit is that you don’t even need to worry about installing anything! Sign up for an account with ShareLaTeX here for free and upload the .zip from GitHub as your first project.

Enjoy!

Mission Control – DIY USB Control Surfaces

Over the years I have always been interested in music. As an electrical engineer in training, my skills in electronics have progressed a lot more than my music skills in the last few years, but the interest in music has still been there, and I found myself more and more drawn to the production side of music. After a short stint playing with a Vestax VCI-100 in Traktor I purchased a Novation Launchpad for Ableton Live. This was the coolest purchase I had made in a while, but I found that the Launchpad on its own wasn’t enough.

I learnt to mix live sound on huge analogue desks and in virtual studio environments on the computer, so just being able to launch clips wasn’t enough; I needed to be able to mix them. While there is a large selection of control surfaces available out there, they all seemed too expensive for what you got, so I decided to make my own.

Mission Control is a project to create an expandable open-hardware platform for building USB control surfaces for Ableton Live (or any other software for that matter) using ATmega microcontrollers. I am still working on the software at the moment, but here is a preview of the first prototype controller I am building, which is nearing hardware completion.