Using SocketIO with Python and Flask on Heroku by Alexander Hadik

Sometimes all you want to do is put together a small web app with a Python-based server. Flask is the go-to choice and it couldn't be easier to use. Launching your app on Heroku with Flask is a well documented process. But things kind of hit a wall when you want to use SocketIO for websockets. Every tutorial online is basically a chat app with an overly complicated process of integrating Redis, and at the end of the day doesn't lay out the basics of bare-bones websocket integration. Oftentimes, I simply need a few event handlers - not the whole shebang. This tutorial explains how to set up an extremely simple Flask webapp with basic SocketIO event handlers, and deploy it to Heroku.

Set Up

Let's get started with our dev environment. We'll log into Heroku, set up a virtual environment for Python and install a few dependencies.

$ heroku login
  Enter your Heroku credentials.

$ mkdir myapp
$ cd myapp
$ virtualenv venv
$ source venv/bin/activate

Now we're working within our virtual environment and can install our dependencies with pip.

$ pip install gunicorn==0.16.1 Flask Flask-SocketIO

Here we're installing gunicorn to be our web server and Flask to be our web framework. We're also installing Flask-SocketIO to be our SocketIO server that will handle incoming requests and send responses back out to clients. It's worth noting that we've requested a specific version of gunicorn due to an issue in gunicorn 0.17 specifically. Newer versions may resolve this issue.
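One more housekeeping step: Heroku's Python buildpack detects a Python app and installs its dependencies from a requirements.txt file at the project root, so it's worth freezing one now:

```shell
# Record the exact versions installed in the virtualenv so Heroku can reproduce them
pip freeze > requirements.txt
```

Re-run this whenever you add or upgrade a dependency.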

Web Template

We'll make a directory for templates and place an HTML page in it.

$ mkdir templates
$ cd templates
$ touch index.html

We'll place the following code in this file. All this code does is let us type content into a text box, send it to our Python server via SocketIO, and wait for the server to echo it back over SocketIO, at which point we display the echoed content.

<html>
<head>
    <title>Heroku SocketIO</title>
    <!-- CDN paths here are examples; any jQuery 1.x and Socket.IO 0.9.x build will do -->
    <script type="text/javascript" src="//code.jquery.com/jquery-1.11.2.min.js"></script>
    <script src="//cdn.socket.io/socket.io-0.9.16.min.js"></script>
    <script type="text/javascript" charset="utf-8">
        var socket = io.connect('http://' + document.domain + ':' + location.port);
        socket.on('echo', function(data){
            $('#response').text(data.echo);
        });
        function send(){
            socket.emit('send_message', {message : $('form textarea').val()});
        }
    </script>
    <style type="text/css">
        .input { position: relative; margin-left: auto; margin-right: auto; width: 400px; }
        textarea { width: 100%; height: 100px; }
    </style>
</head>
<body>
    <div class="input">
        <form>
            <textarea placeholder="Send a message to the server..."></textarea>
            <button type="button" onclick="send(); return false;">Send</button>
        </form>
        <div id="response"></div>
    </div>
</body>
</html>

The first two script tags load the jQuery and SocketIO libraries. The third script tag lays out our communication with the server via SocketIO. First, we set up a handler to listen for 'echo' events sent from the server. In response to this event, we display the event's content on the webpage.

Python Server

The next step is to put together the Flask server in Python.

$ cd ..
$ touch server.py

First, we'll import the dependencies we need.

from flask import Flask, render_template
from flask.ext.socketio import SocketIO
import json

Next we need to set up the app through the Flask framework and create the SocketIO object.

app = Flask(__name__)
socketio = SocketIO(app)

For this app, we only need to route the root directory and render our index.html template when the root is requested. We do that with Flask's syntax:

@app.route('/')
def index():
    return render_template('index.html')

Now, it's time to handle the SocketIO events we expect to receive, as we laid out in our HTML template. Our webpage emits an event under the name send_message and receives events with the name echo. Since this app just echoes text back to the client, all we need to do is set up a handler for send_message and in response, emit an event with the name echo:

@socketio.on('send_message')
def handle_source(json_data):
    text = json_data['message'].encode('ascii', 'ignore')
    socketio.emit('echo', {'echo': 'Server Says: '+text})

Finally, we just need to make sure the SocketIO server runs when the script is run. So we add the following to the end of our script:

if __name__ == "__main__":
    socketio.run(app)

All together, our server code looks like:

from flask import Flask, render_template
from flask.ext.socketio import SocketIO
import json

app = Flask(__name__)
socketio = SocketIO(app)

@app.route('/')
def index():
    return render_template('index.html')

@socketio.on('send_message')
def handle_source(json_data):
    text = json_data['message'].encode('ascii', 'ignore')
    socketio.emit('echo', {'echo': 'Server Says: '+text})

if __name__ == "__main__":
    socketio.run(app)

Running on Heroku

With our server code and HTML template finished, all we need to do is push our work to Heroku. The first step is to tell Heroku what it needs to do when our dyno spins up. That's of course accomplished with a Procfile:

$ touch Procfile

And we'll place the following in that file:

web: gunicorn --worker-class socketio.sgunicorn.GeventSocketIOWorker --log-file=- server:app

So what does this do? Well, it tells Heroku that we have a web app, and that it needs to spin up a gunicorn server for it to run. We need a SocketIO-aware worker, which is what the --worker-class argument provides. We also want to print any errors directly to stdout for simplicity. Finally, we inform gunicorn that our script of interest is named server and our Flask app is called app.

Let's test our app by running it with foreman, which is installed as part of the Heroku Toolbelt:

$ foreman start

If everything has gone to plan, you'll see our web app spin up, and we can visit it in a web browser to make sure everything works:

12:23:31 web.1  | started with pid 83820
12:23:34 web.1  | 2015-01-29 12:23:34 [83820] [INFO] Starting gunicorn 0.16.1
12:23:34 web.1  | 2015-01-29 12:23:34 [83820] [INFO] Listening at: (83820)
12:23:34 web.1  | 2015-01-29 12:23:34 [83820] [INFO] Using worker: socketio.sgunicorn.GeventSocketIOWorker
12:23:34 web.1  | 2015-01-29 12:23:34 [83821] [INFO] Booting worker with pid: 83821

With our app working, all that's left to do is create a Heroku app, commit to Git, and push to deploy:

$ git init
$ git add .
$ git commit -m "init"
$ heroku create
  Creating infinite-beach-1519... done, stack is cedar-14
  Git remote heroku added
$ git push heroku master
  Counting objects: 1453, done.
  Delta compression using up to 8 threads.
  Compressing objects: 100% (1382/1382), done.
  Writing objects: 100% (1453/1453), 4.89 MiB | 3.04 MiB/s, done.
  Total 1453 (delta 91), reused 0 (delta 0)
  remote: Compressing source files... done.
  remote: Building source:
  remote: -----> Python app detected
  remote: -----> Stack changed, re-installing runtime
  remote: -----> Installing runtime (python-2.7.9)
  remote: -----> Installing dependencies with pip
  remote: -----> Discovering process types
  remote:        Procfile declares types -> web
  remote: -----> Compressing... done, 46.4MB
  remote: -----> Launching... done, v3
  remote: deployed to Heroku
  remote: Verifying deploy... done.
   * [new branch]      master -> master

Everything launched just fine - now all we have to do is visit our app in a browser and enjoy the fruits of our labor. Good luck with Flask and SocketIO!

NucleoBytes: Channel Coding for Mutation Resistance by Alexander Hadik

What if historians today had access to detailed census data from hundreds or thousands of years ago? It's pretty obvious that our understanding of past cultures would be drastically different. However, what are we doing to prevent the same cycle hundreds of years from now? Where are we storing the massive reams of data that future societies would find invaluable?

The short answer is nowhere useful. It's stored on hard disks, or perhaps metallic platters, some of it tucked away in vaults, some of it not. Few of these storage media have adequate life spans. All of them demand resources in terms of space, power, maintenance, etc. Even if these disks and platters survived for the next 500 years, there is no guarantee that future societies will have the capability to read these ancient forms of data.

What if we used DNA? An incredibly resilient organic molecule, it's no new idea that DNA is a storage medium ripe for exploration. However, an encoding scheme must be used that protects against the imperfections in DNA synthesis and sequencing, as well as natural degradation over time.

Proposed here is a small stab at a large problem, using Hamming Codes to encode binary data in nucleotides.


GitHub Repository

The Python program found on this GitHub Repo can read any ASCII text file and encode it in a DNA sequence aligned on 13-bit blocks using a (13,8) Hamming Code: 5 parity bits for every 8 bits of original data. The output is in FASTA format.
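To make the scheme concrete, here's a minimal Python sketch of one plausible (13,8) construction: a standard Hamming(12,8) block plus an overall parity bit, with a purely illustrative one-bit-per-base mapping. The repo's actual bit layout and nucleotide mapping may differ.

```python
def hamming_13_8_encode(byte):
    """Encode 8 data bits into a 13-bit block: Hamming(12,8) plus an
    overall parity bit. One plausible (13,8) layout, not necessarily
    the repo's exact one."""
    data = [(byte >> i) & 1 for i in range(7, -1, -1)]   # MSB first
    block = [0] * 13                                     # positions 1..13 -> idx 0..12
    d = iter(data)
    for pos in range(1, 13):
        if pos not in (1, 2, 4, 8):                      # non-power-of-2 positions hold data
            block[pos - 1] = next(d)
    for p in (1, 2, 4, 8):                               # each parity bit covers positions with bit p set
        block[p - 1] = sum(block[pos - 1] for pos in range(1, 13) if pos & p) % 2
    block[12] = sum(block[:12]) % 2                      # overall parity: the fifth check bit
    return block

# Illustrative bit-to-nucleotide mapping (0 -> A, 1 -> T); the real scheme may differ
BASES = {0: 'A', 1: 'T'}
print(''.join(BASES[b] for b in hamming_13_8_encode(ord('H'))))
```

A decoder would recompute the four Hamming checks to locate and flip a single-bit error, with the overall parity bit flagging double-bit errors.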

With an encoded file, the same program can be used to decode an encoded DNA sequence, in FASTA format, back to its original content.

The program makes use of the Python multiprocessing library.

Included Files

  • Python script for encoding and decoding
  • resources/: Several testing text files ranging from small to large sizes

Usage takes several command line arguments:

usage: [-h] [--decode] [--encode] [--workers W] input [output]

positional arguments:
    input              input file
    output             output destination (default: out.txt)

optional arguments:
    -h, --help         show this help message and exit
    --decode, -d       decode boolean flag
    --encode, -e       encode boolean flag
    --workers W, -w W  number of processes to spawn

Only two arguments are required:

Specify encoding or decoding:

  • --encode, -e: Convert a text file into DNA, in FASTA format, encoded with (13,8) Hamming Code
  • --decode, -d: Convert a FASTA file of 13 bit aligned DNA (same format as output of --encode) back to its original format.

Specify input file

  • The first non-flag argument is the input text file. For --encode, this is a normal text file. For --decode, this is a FASTA-formatted Hamming-encoded file.

Optional arguments:

  • -h, --help : Get usage documentation
  • -w W : Specify the number of processes (workers) to spawn to make use of multiple cores. Default is 1.
  • output file: The second non-flag positional argument is the output destination. Default is out.txt.
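Putting those arguments together, and assuming the script is saved as nucleobytes.py (the actual filename may differ), a typical encode run across four workers might look like:

```shell
# Hypothetical invocation: encode a text file to Hamming-coded FASTA using 4 processes
python nucleobytes.py --encode --workers 4 resources/sample.txt encoded.fasta
```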


The following libraries are required for this software:

  • bitstring
  • bitarray
  • binascii (standard library)
  • multiprocessing (standard library)

Display Custom E-Ink Images with RePaper and Arduino Uno by Alexander Hadik


Getting Started by Displaying Custom Images

I recently got the E-Ink display sold by Adafruit to play around with on Arduino. The Arduino Uno isn't powerful enough to do more complex things like dynamically write content to the screen (such as text or shapes). What it can do very well is show images loaded into memory.

The Adafruit tutorials do a great job of describing how to get up and going with the code provided by RePaper. Right out of the box you can get an image of a cute cat, as well as some other preloaded images, to show on the screen. However there's really no info provided on how to go farther with an Uno. This tutorial will walk through the steps of how to load your own custom images onto your Arduino and display them on your E-Ink display. Let's get started!

What you'll need

  1. An Arduino Uno
  2. The RePaper E-Ink display available from Adafruit
  3. The Arduino IDE, and the RePaper demo code installed as an Arduino library.

You can check that these are properly installed by going to File > Sketchbook in the Arduino IDE and checking that epaper exists as an option.

What to do

Wire Up Your Display

Follow the instructions provided on Adafruit for connecting the RePaper display to your Arduino. Once it's plugged in, you should be able to upload the demo.ino sketch and get two images to alternate back and forth; some text, and a cat.

Prepare your content

Now that we've got our cat going, let's prepare some custom images to display. First of all, you'll need to note what size screen you have. There are three options: 1.44", 2" and 2.7". You can see the size of the screen you have by measuring it diagonally or just looking at your order. These screens have the following dimensions.

  • 1.44": 96x128 pixels
  • 2": 96x200 pixels
  • 2.7": 176x264 pixels

In a digital illustration program like Adobe Illustrator, create your images in grayscale with the same dimensions as your screen. Save them in JPG format and make sure that they are oriented so that the height is the smaller dimension - that is, your image should be wider than it is tall. Save as many images as you like, just be aware that the Arduino Uno has only 32 KB of flash memory.

Convert your content

You can't just store a JPG image on your Arduino. It has to be converted to an XBM file which stores the values of each pixel as a hexadecimal value in a C array. This is the format that RePaper uses to store images in the Arduino memory. There are two ways of doing this:


GIMP

GIMP is a common image editing tool that can open and save pretty much any image format, including XBM. It's available for OS X and Linux. You can open your exported JPG images and then save them as .xbm files using the Export option.

Make sure you don't check the X10 check box!

Web Conversion

There's a convenient web app that can also handle the file conversion.

Just upload your image and you can download the XBM version of it.

Store your content

Once you've got your images converted, you'll need to save them to the images library directory.

When you install the Arduino IDE, it creates an Arduino directory, which is where libraries are installed. On OS X it's traditionally in your Documents folder. You can find where it is by going to your Preferences in the Arduino IDE and looking at the Sketchbook location section.

Navigate to this directory and then to libraries > Images. This is where the images for the RePaper code are stored. You'll see some file names that make sense like cat_1_44.xbm. This, naturally, is the cat image for the 1.44" RePaper display.

Just save your newly made XBM images into this directory with the same naming convention (myimage_x_y.xbm), inserting the proper size for the screen you have in place of x and y.

You'll want to open your image files in a text editor like Sublime or Atom and check that the first lines have the following format:

#define myimage_2_0_width 200
#define myimage_2_0_height 96
static unsigned char myimage_2_0_bits[] = {

That is, the #define and char array declaration are named the same as your file and have the proper width, height, and screen size set. Open some of the images that exist in the Images directory for reference if you like.

Update the code

With your images stored, it's time to update the Arduino code! We're going to start with the code supplied by RePaper as a base.

In Arduino, go to File > Sketchbook > demo. Select Save As so you can edit and save your changes.

This demo code, if you upload it, will make your RePaper display flash two images in sequence. We're going to swap those two images for three of our own. For the sake of this demo, we'll call these images image1, image2 and image3.

  1. Change the screen size on line 45 to match your display: 144, 200, or 270.
  2. The next section has the header // select two images from: text_image text-hello cat aphrodite venus saturn. We're better than this, because we've added our own images! So change the code to read:

    #define IMAGE_1  image1
    #define IMAGE_2  image2
    #define IMAGE_3 image3
    • Here, image1, image2, etc are the images you saved in the Images directory with the _x_y.xbm chopped off.
  3. Find the section headed with the comment // calculate the include name and variable names. It should have four lines that read:

  4. Add another section for our third image

  5. The next section is headed with the comment //images. It should have two sections that look like

    PROGMEM const
    #define unsigned
    #define char uint8_t
    #include IMAGE_1_FILE
    #undef char
    #undef unsigned
  6. Add a third section like so:

    PROGMEM const
    #define unsigned
    #define char uint8_t
    #include IMAGE_3_FILE
    #undef char
    #undef unsigned
  7. Finally, we need to change the code that cycles the images through in order. It's the large switch statement in the main loop of the program. To start with, it has four cases that clear the screen and then transition from clear -> text_image, text_image -> cat, and cat -> text_image. Change that to resemble:

    switch(state) {
        case 0:         // clear the screen
            EPD.clear();
            state = 1;
            delay_counts = 5;  // reduce delay so first image comes up quickly
            break;
        case 1:         // clear -> image1
            EPD.image(IMAGE_1_BITS);
            ++state;
            break;
        case 2:         // image1 -> image2
            EPD.image(IMAGE_1_BITS, IMAGE_2_BITS);
            ++state;
            break;
        case 3:         // image2 -> image3
            EPD.image(IMAGE_2_BITS, IMAGE_3_BITS);
            ++state;
            break;
        case 4:         // image3 -> image1
            EPD.image(IMAGE_3_BITS, IMAGE_1_BITS);
            state = 2;  // loop back so the three images keep cycling
            break;
    }
  8. You're done! Upload your code to your Arduino and watch your images cycle by!


Some common issues that I came across were:

  1. The size of the XBM values. If you open your XBM in a text editor, the hexadecimal numbers you see should have the form 0x00, not 0x0000. If you see four hex digits, you exported your image as X10. Go back into where you did your image conversion and make sure you turn that off.
  2. Image orientation. If your images are showing up weird on your screen, make sure they're the right size, and that when you open them on your computer, they're wider than they are tall.
  3. The headers of your XBM files. Double check that your image files have the following format (example from the image files and names I used):

    #define costhour_2_0_width 200
    #define costhour_2_0_height 96
    static unsigned char costhour_2_0_bits[] = {
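All three of those checks can be automated with a small script. This is a hypothetical helper (the function and file names are mine, not part of the RePaper code) that flags wrong header dimensions and X10-width hex values:

```python
import re

def check_xbm(text, width, height):
    """Sanity-check XBM file contents: header dimensions and 8-bit hex values.
    Hypothetical helper; pass the expected width/height for your screen."""
    problems = []
    if not re.search(r'#define \w+_width %d\b' % width, text):
        problems.append('width header missing or wrong')
    if not re.search(r'#define \w+_height %d\b' % height, text):
        problems.append('height header missing or wrong')
    if re.search(r'0x[0-9A-Fa-f]{3,}', text):
        problems.append('found values wider than 0xNN -- exported as X10?')
    return problems

sample = '#define img_2_0_width 200\n#define img_2_0_height 96\n' \
         'static unsigned char img_2_0_bits[] = { 0x00, 0xff };'
print(check_xbm(sample, 200, 96))   # an empty list means the file looks OK
```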

Laser Etched Notebooks by Alexander Hadik

Everyone loves MOLESKINE notebooks but telling them apart can be impossible. I myself use a different notebook for each class, and decided to straighten myself out by laser etching the notebooks' covers. For each notebook, I included the name of the course and an illustration. Laser etching creates a beautiful relief on the notebooks' front and the end result is almost magic.

My friends and classmates all loved the notebooks and asked how I had made them, so I've offered to produce etchings for classmates.

Create PubMed Citations Automatically Using PubMed API by Alexander Hadik

Recently I was working on a small website for a research lab at Brown University. Naturally, the lab requested that a list of their publications be included on the site; however, this is a list that changes frequently, and I didn't want to burden the lab with updating their site every time they publish.

Luckily, PubMed has an API that allows you to retrieve data from their databases in XML or JSON form. For me, JSON was perfect, as I could easily parse the data and present it on the website using jQuery.

There are a few different resources PubMed offers and each allows you to retrieve different info. In my case, I wanted to get a list of all publications by a specific author, and then retrieve all the publication details about each of those articles. Unfortunately, I haven't found a way to do that in one query, so I resorted to a two-step process: getting the article IDs for every article by an author, and then getting the details for each of those IDs with separate requests. In a scenario where this request is made every time a page loads, this isn't practical. However, in a scenario where the update is done once every 24 hours, this process is perfectly workable.

The data can be retrieved in JSON format using specific URLs that contain the desired search terms. I found a great resource that details the different PubMed search sources available, and the parameters available to search each of them with.

The two services I used were ESearch and ESummary. ESearch is for retrieving the full list of work by an author, and ESummary is for getting the details on each work. I'm working on a website, so I used JavaScript and jQuery to retrieve the data with the following code:

$.getJSON('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pubmed&term=larschan,erica[author]&retmode=json', function(data){
    var ids = data.esearchresult.idlist;
    var publications = [];
    iterateJSON(ids, publications);
});

Let's break this first piece of code down. I'm using the jQuery getJSON function to retrieve and parse the JSON returned from the following GET request: https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pubmed&term=larschan,erica[author]&retmode=json

There's four important sections to this, and you can construct your own version of a request URL using the link I provided above as reference.

  • The domain: eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi
    • This specifies that I want to use the ESearch feature.
  • The database: db=pubmed
    • This specifies that I want to search the PubMed database, but others can be used as well.
  • The search term: term=larschan,erica[author]
    • This specifies that I want to search for entries that match the full author name of Erica Larschan.
  • The return format: retmode=json
    • This specifies that I want the data returned in JSON format as opposed to XML.
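As a side illustration (the site itself builds these URLs in JavaScript), the same construction can be sketched in Python; the helper names here are my own, not part of the PubMed API:

```python
from urllib.parse import urlencode

EUTILS = 'https://eutils.ncbi.nlm.nih.gov/entrez/eutils'

def esearch_url(author):
    # ESearch: all PubMed article IDs matching an author query
    params = {'db': 'pubmed', 'term': author + '[author]', 'retmode': 'json'}
    return EUTILS + '/esearch.fcgi?' + urlencode(params)

def esummary_url(pmid):
    # ESummary: the summary record for a single article ID
    params = {'db': 'pubmed', 'id': pmid, 'retmode': 'json'}
    return EUTILS + '/esummary.fcgi?' + urlencode(params)

print(esearch_url('larschan,erica'))
```

Note that urlencode percent-escapes the brackets and comma; PubMed accepts either form.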

This request returns a JSON object that you can extract a list of article IDs from, which I do with:

var ids = data.esearchresult.idlist;

Now comes the problem of retrieving the summary for each of these articles. This needs to be done recursively with a callback function so that the IDs are iterated through only as fast as the data can be retrieved. JavaScript's asynchronous nature means a plain loop won't pause for each summary request to complete before moving on to the next ID, which leads to ordering and scoping issues.

Instead, my recursive approach involves popping the requested ID from the list of IDs, and passing the now-smaller ID list along to the next iteration, like so:

function iterateJSON(idlist, publications) {

    var id = idlist.pop();
    $.getJSON('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi?db=pubmed&id='+id+'&retmode=json', function(summary){

        // Build an MLA-style citation string from the summary fields
        var citation = "";
        for(var author in summary.result[id].authors){
            citation += summary.result[id].authors[author].name+', ';
        }
        citation += '"'+summary.result[id].title+'" <i>'+summary.result[id].fulljournalname+'</i> '+summary.result[id].volume+'.'+summary.result[id].issue+' ('+summary.result[id].pubdate+'): '+summary.result[id].pages+'.';
        publications.push(citation);

        if(idlist.length > 0){
            iterateJSON(idlist, publications);
        } else {
            console.log(publications); // all IDs done: use the citations
        }
    });
}

This function builds strings of citations according to MLA format and collects them into an array, which can then be used on the front end of my website using something like AngularJS. If the list of IDs is not empty, the function calls itself recursively with the reduced ID list. Once the list is empty, the function uses the data and terminates, in this case just printing it to the console for testing purposes.

Quad-core Microprocessor by Alexander Hadik

An ongoing project of mine at the moment is a team project at Brown University for the IEEE Micro-Mouse competition. This is a common robotics competition format, where teams design and build a robot that can autonomously navigate a maze to a target in the fastest time possible.

With the competition approaching in April, we're looking for ways to break away from the constraints of Arduino, one of the more common tools used by teams. As an alternative, we're exploring the Parallax Propeller quad-core microprocessor. Not only does this microprocessor outstrip common Arduino chips, but by designing the PCB ourselves, we're able to swap in larger amounts of RAM, among other improvements.

Our most recent milestone was getting the microprocessor running and interfacing with an IDE on OS X 10.9. Luckily, the great community surrounding Parallax offered an OS X alternative to Parallax's Windows-only IDE, but compilation was not a total breeze, with drivers needed for communication with the chip's serial ports, among other obstacles. However, the chip is now up and running at full speed, and we're excited to explore its full power.


Creating a Design Language by Alexander Hadik

As a TA for CS195i, Designing, Developing and Evaluating User Interfaces, I put together a resource for students in the class on creating a design language. As many of the students in this class are extremely talented programmers but have not had much exposure to the design process, I thought it would be valuable to present an approach to design that is more quantitative.

After walking through many of the common techniques and practices of UI/UX design, I present the concept of a design language as a document that communicates your proposed design visually and quantitatively. The intention is that this document guides the design of new features for your product in the future, and allows you to work more closely with developers during implementation.

You can find the full document here.