Triggering a File Download from an XHR Post Request by Alexander Hadik

I came across a peculiar use case in a recent project in which I needed to POST data from a form to the server, and then trigger a download for the payload of the response. There might be a few reasons you would want to do this kind of trickery, but the key reason I can think of is generating something server-side based on form inputs from the client - such as an image or PDF - and then sending that generated asset back over the wire to the client. The desired experience for the user is just a simple "Download" button, so even though an asset is being dynamically generated, we want to give the illusion that they're just downloading. So here we go!

The Request

Step one is to set up the XHR request in JavaScript. For some context, let's imagine a simple HTML form:

<form id="form">
    <input type="text"/>
    <button type="submit" form="form">Generate Report</button>
</form>

When this form is submitted, we're going to send the form data to the server, and expect a PDF back in return, generated dynamically, which is where the responseType comes in.

let xhr = new XMLHttpRequest();
//set the request type to POST and the destination url to '/convert'
xhr.open('POST', 'convert');
//set the response type to blob since that's what we're expecting back
xhr.responseType = 'blob';
//grab the form element and package its fields as form data
let form = document.querySelector('#form');
let formData = new FormData(form);
xhr.send(formData);

What is a Blob?

A Blob object represents a file-like object of immutable, raw data. Blobs represent data that isn’t necessarily in a JavaScript-native format. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user’s system.
— https://developer.mozilla.org/en-US/docs/Web/API/Blob

The Response

Great, now that we've wired up the submission, let's take a bit of time to talk about what's happening on the server's side. I'm keeping this post backend agnostic, so I won't be discussing specific syntax. However, the basic mechanics are pretty simple. The server will accept the incoming POST request and parse out the form data. It will then use that form data to generate some type of file, such as a PDF, represented in memory as binary data. You'll pipe that binary data back to the user as the response to the POST request. Remember that on the client side, we've set the request to expect a Blob.
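Since the post is backend agnostic, the sketch below (in Python with Flask, purely as an example) just makes the flow concrete. The /convert route matches the client code, while the title field and the generate_pdf helper are hypothetical stand-ins for your own form fields and asset generation.

# A minimal, hypothetical sketch of the server side using Flask.
# '/convert' matches the client code; generate_pdf() is a stand-in
# for whatever PDF or image generation you actually do.
from io import BytesIO
from flask import Flask, request, send_file

app = Flask(__name__)

def generate_pdf(title):
    # Placeholder: swap in a real generator (ReportLab, wkhtmltopdf, etc.)
    return ('%PDF-1.4\n% stub report for "{}"'.format(title)).encode('utf-8')

@app.route('/convert', methods=['POST'])
def convert():
    title = request.form.get('title', 'Report')   # parse the submitted form data
    pdf_bytes = generate_pdf(title)                # generate the asset in memory
    # Pipe the raw bytes back as the response body with a PDF content type
    return send_file(BytesIO(pdf_bytes), mimetype='application/pdf')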

With that in mind, it's time to focus on what to do for the response. For this, we'll make use of the request's onload function.

xhr.onload = function(e) {
    if (this.status === 200) {
        // Create a new Blob object using the 
        //response data of the onload object
        let blob = new Blob([this.response], {type: 'application/pdf'});
        //Create a link element, hide it, direct 
        //it towards the blob, and then 'click' it programmatically
        let a = document.createElement("a");
        a.style.display = "none";
        document.body.appendChild(a);
        //Create an object URL representing the blob 
        //and point the link element towards it
        let url = window.URL.createObjectURL(blob);
        a.href = url;
        a.download = 'myFile.pdf';
        //programmatically click the link to trigger the download
        a.click();
        //release the reference to the file by revoking the Object URL
        window.URL.revokeObjectURL(url);
    } else {
        //deal with your error state here
    }
};

All Together Now

Finally, let's combine all of this into a single event listener that is fired when the form is submitted.

document.querySelector('#form').addEventListener('submit', (e)=>{
    e.preventDefault();
    let xhr = new XMLHttpRequest();
    //set the request type to POST and the destination url to '/convert'
    xhr.open('POST', 'convert');
    //set the response type to blob since that's what we're expecting back
    xhr.responseType = 'blob';

    xhr.onload = function(e) {
        if (this.status === 200) {
            // Create a new Blob object using the response data of the onload object
            let blob = new Blob([this.response], {type: 'application/pdf'});
            //Create a link element, hide it, direct it towards the blob, and then 'click' it programmatically
            let a = document.createElement("a");
            a.style.display = "none";
            document.body.appendChild(a);
            //Create an object URL representing the blob and point the link element towards it
            let url = window.URL.createObjectURL(blob);
            a.href = url;
            a.download = 'myFile.pdf';
            //programmatically click the link to trigger the download
            a.click();
            //release the reference to the file by revoking the Object URL
            window.URL.revokeObjectURL(url);
        } else {
            //deal with your error state here
        }
    };

    //package up the submitted form's fields and send them
    let formData = new FormData(e.target);
    xhr.send(formData);
});

Environmental Variables in a Deployment Pipeline: A Workflow by Alexander Hadik

Part of my responsibility on the Bluemix Design team at IBM is to create small web apps on various cloud providers to learn how our products work - so we can design the best possible experience for them. After standing up a handful of Node apps with local, staging and production deployments, I got pretty tired of dealing with different sets of environmental variables for each environment - especially since you can't put API keys and passwords (a common type of data stored this way) in public repositories for obvious security reasons.

So how to reconcile open-source code bases that are part of a deployment pipeline that relies on these environmental variables? I'll explore that in this post.

After a lot of experimentation, the basic steps I've put together are:

  • Encryption
  • Selective, automated decryption

Let's set up some context. Consider a simple Node app whose root contains a private directory with some basic Bash scripts where we store environmental variables for each deployment. So for example, this app might have a different hosted database for staging and production. The credentials needed to access these two databases are stored in env-staging and env as environmental variables like so:

export DB_CRED=abc123

On staging servers, during deployment, env-staging is sourced and on production servers, env is sourced. Then the Node app uses these values with process.env.DB_CRED. The problem is, if a developer has to modify the variables for some reason, they need to be included in the deployment pipeline so the files are sent to staging and production servers to be sourced. This means the files with sensitive information are either sucked up into version control, or have to be manually sent to the deployment environments through some other means.

If part of the deployment pipeline is a testing platform like Travis, options are even more limited because now the environmental variables are required to be included in the version control so they can be sourced for building in a Travis environment.

Encryption

So what to do? Encrypt! Instead of committing the plain-text environmental variables, commit an encrypted version of them. Then, ensure that each deployment environment has the password to decrypt the file for that environment.

There are a number of tools to encrypt and decrypt plain-text files. I'll be using a simple Node module called config-leaf, which uses the crypto module. config-leaf exposes two commands inside the app's root directory, encrypt and decrypt, which can be aliased as scripts inside the package.json file:

"encrypt": "encrypt private/env private/env.cast5 && encrypt private/staging-env private/staging-env.cast5",
"decrypt": "decrypt private/env.cast5 private/env && decrypt private/staging-env.cast5 private/staging-env"

The argument order of the config-leaf commands is <PLAINTEXT INPUT> <ENCRYPTED OUTPUT> for encryption and <ENCRYPTED INPUT> <PLAINTEXT OUTPUT> for decryption.

So this module solves the encryption part of the problem. Before committing changes, encrypt the environmental variables with npm run encrypt, and make sure that the plain-text files are in a .gitignore file. Then add, commit and push the encrypted versions as part of the changes. Now the sensitive info is part of public source control with no concerns!

config-leaf is a great tool, however it relies on user interactivity to type in a password during encryption and decryption.

npm run encrypt
Enter the config password (env):

This becomes rather tiresome. On a local system, a password, perhaps stored as an environmental variable, can be piped in like so:

echo $PW | encrypt private/env private/env.cast5

On some cloud environments though, when executing this command automatically during a build process, this piping doesn't work (any explanation as to why would be very helpful). To remedy this, I created a modified version of config-leaf to which you can pass the name of an environmental variable that stores a password for you like this:

encrypt private/env private/env.cast5 --PW=ENV_PW

The only difference here is the --PW flag which indicates that the env file should be encrypted using the value found at $ENV_PW or whatever environmental variable you fancy.

Decryption

Now it's time for decryption. Again, let's get some context. Since we left off, new code, along with sensitive, encrypted environmental variables, has been committed to the staging branch in GitHub. This triggered a deployment to the staging environment, which consists of several virtual servers. Another post is forthcoming on deploying to multiple servers with PM2.

The Staging deployment process tunnels into the staging servers, pulls the new code from the staging branch, and builds it using a build command, such as npm run build. Part of the build process is decrypting the environmental variables included in the new code. When the staging servers were originally stood up, the master password used for encryption was set up as an environmental variable. This is a single, static configuration that is done once and never again needs to be touched so long as the password used for encryption doesn't change.

The build process now simply uses the decrypt command like so:

decrypt private/env.cast5 private/env --PW=ENV_PW && decrypt private/env-staging.cast5 private/env-staging --PW=ENV_PW

Again, this is included in the package.json file as a script, or equivalent:

"decrypt": "decrypt private/env.cast5 private/env --PW=ENV_PW && decrypt private/env-staging.cast5 private/env-staging --PW=ENV_PW"

So a simple call of npm run decrypt will decrypt the environmental variable files and write them to disk.

In Review  

To recap the basic process described here, the steps are:

  • Add the modified version of config-leaf that uses environmental variables to package.json, using the repository URL: http://www.github.com/ahadik/config-leaf
  • Add a password for encryption as an environmental variable to the local dev environment and each deployment environment.
  • Add the same password under the same variable name to each testing, staging or production server.
  • Ensure your build process decrypts the environmental variables into their original file names and sources the right config file for the environment it's in (dev, testing, staging, production, etc.).
  • Add the plaintext files containing environmental variables to .gitignore.
  • Encrypt the plaintext files using encrypt <PLAINTEXT INPUT> <ENCRYPTED OUTPUT> --PW=<ENV VAR PW>, or alias the command in package.json.
  • Add, commit and push the encrypted environmental variables to the repository.
  • Let the automated deployment and build pipeline take care of the rest! If configured correctly, the build process will decrypt and source the appropriate environmental variable file.

This is a process I've worked out that works for small-scale apps. I'm sure there are better, more robust processes for larger apps. More importantly, if you don't have to worry about exposing API keys inside your repository, perhaps because it's not a public repo, then this process is a bit of overkill. However, for the many apps I stand up and want to expose the repositories for, this workflow has become crucial.

What's up with @extend in node-sass v3.5.1 and beyond by Alexander Hadik

I've recently bumped up against a breaking change in node-sass v3.5.1. That's right - not a major version - but still a breaking change for some code bases. The issue is in @extend and the continual process of alignment between the Sass spec and the actual compilers that we all use on a daily basis. Here's an example of the error that node-sass might be throwing:

Error: source/scss/main.scss
Error: ".class" failed to @extend "%pseudoselector".
    The selector "%pseudoselector" was not found.
    Use "@extend %pseudoselector !optional" if the extend should be able to fail.
    on line 6 of source/scss/main.scss
>>         @extend %pseudoselector;
        --^

Tl;Dr;

If you just want a quick fix: uninstall node-sass with npm remove node-sass, then install node-sass v3.4.2 with npm install node-sass@3.4.2, and make sure you install the most recent version of gulp-sass. That will make sure your code is parsed just like it used to be.

Why does this work? Explanation below...

An Explanation

You might have bumped into this without knowing you use node-sass, if you use a plugin like gulp-sass. In fact, I'd take a guess that it's gulp-sass that got you here. gulp-sass is just a lightweight wrapper around node-sass, which provides Node bindings to LibSass, which is, finally, the C/C++ Sass compiler that takes all your Sass and makes it CSS.

This error is being thrown because the following is, according to the Sass spec, invalid Sass:

.class{
    @extend %pseudoselector;
    color : green;
}

So is this:

%pseudoselector{
    &.nested-class{

    }
}

.class{
    @extend %pseudoselector;
    color : green;
}

The problem in the first example is that %pseudoselector doesn't exist at all. Of course the same problem would happen for a normal selector, not just a pseudoselector. The problem in the second example is that %pseudoselector doesn't have any rules defined within it. If any logical person were to look at these examples, they might expect both of them to compile into the following:

.class{
    color : green;
}

And indeed it did for quite a while, right up until node-sass v3.5.1. node-sass v3.5.1, however, is stricter with its parsing and compiling than previous versions. So much so that the examples above are now flagged as invalid Sass, which in fact they are, even though they can be logically resolved.

The fix here is actually provided by the error itself. By adding !optional to your @extend (as in @extend %pseudoselector !optional), the extend is quietly skipped if:

a. The extended selector doesn't exist at all.
b. The extended selector "exists", but is in fact empty.
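Applied to the earlier example, the fix looks like this:

.class{
    @extend %pseudoselector !optional;
    color : green;
}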

So if you're getting an error similar to the one above, check your Sass to make sure it's actually valid, or allow the @extend to fail silently with the !optional flag.

If you don't want to fuss with this, you can fix the problem by making sure your version of node-sass is v3.4.2 or below.

A Note about Gulp-Sass

This problem surfaced for me during a seemingly simple upgrade to gulp-sass - from v2.2.0 to v2.3.0. The big catch is that part of the gulp-sass v2.3.0 release is an update to node-sass v3.5.1. As we just explained, this update to node-sass introduced a breaking change in a dependency, making it look like a breaking change in gulp-sass itself. This was remedied in gulp-sass v2.3.1, which reverted the node-sass dependency back down to v3.4.2. So if you didn't update to gulp-sass v2.3.0 and you now update to the most recent version, you won't see a thing. But people who updated a bit earlier saw this node-sass change rear its ugly head.

Frequent Visitor: Resetting a Tree's Visited State by Alexander Hadik

If you've ever written an algorithm that involves traversing a tree or graph, you've dealt with the concept of marking a node as "visited". Often, this is taken care of with a simple boolean flag in each node. This works great for single traversals of a tree; that is, you search a tree for something once, and then never look at the tree again. But what if you want to traverse the tree over and over again? Here I'll present a simple strategy for resetting the visited status to "unvisited" for all nodes in a tree in constant time.

Revisiting Dijkstra's Algorithm

Let's first revisit the use case for marking a node as visited. A good example is one of my favorite algorithms - Dijkstra's Algorithm for finding the shortest distance between two nodes in a graph. The core of Dijkstra's Algorithm is as follows:

  1. Start from a "root" node and assign it a distance value of 0. Assign all other nodes a distance of ∞.
  2. Add the root node to a set of visited nodes.
  3. For each unvisited child of the current node, compute the sum of the current node's value and the weight of the edge to that child.
  4. Compare that sum to the value of the edge's target.
  5. If the sum is less than the target's value, update the value of the target to the sum.
  6. After considering all children, travel to the unvisited child with the smallest value. That child is now the "current" node.
  7. If the now-current node is the desired destination, return. Otherwise, return to step 3.

The challenge here is when we check if a node is unvisited. Let's consider a Python node class as follows:

class Node():
    def __init__(self, value):
        self.value = value
        self.children = []
        self.visited = False

Each node contains a visited flag that indicates if it's been visited in a traversal. Our constructor instantiates a Node with the default visited value of False. This sets us up well for a first traversal. However, what happens if, after you traverse the tree once, you want to traverse it again? Perhaps you want to find the shortest path to a different node.
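To make the problem concrete, here's a rough sketch (not from the original post) of a depth-first search that uses the Node class above and marks each node it touches:

def search(node, target):
    """Depth-first search that marks every node it touches as visited."""
    if node is None or node.visited:
        return None
    node.visited = True                 # mark the node so we never revisit it
    if node.value == target:
        return node
    for child in node.children:
        found = search(child, target)
        if found is not None:
            return found
    return None

Run it once and every reachable node's flag is stuck at True, so a second search would give up at the root immediately.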

There are a few simple ways to achieve this task.

Visited Set

One approach would be to keep a list of visited nodes. Whenever you want to mark a node as visited, you add a reference to the node to the Visited Set. To check if a node has been visited, see if it's in the Visited Set. When you're done searching, simply reset the Visited Set to an empty set.

There are a few problems with this approach. The first is size. A boolean takes one byte of memory, so if a graph has 100 nodes, you'll spend 100 bytes of memory storing the visited boolean for each node. However, on a 64-bit system, a memory address is 8 bytes, so keeping track of pointers to each visited node requires 8 times the amount of memory as just keeping a boolean flag in each node. On memory grounds alone, this solution is sub-optimal.

Furthermore, consider the time complexity of checking whether a node is in the visited list: each membership check against a list runs in linear O(n) time.
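As a sketch, using the same Node class but with the visited flag dropped, the external bookkeeping might look like this:

visited = []   # node references; a hash set would give O(1) average membership checks

def is_visited(node, visited):
    return node in visited      # O(n) when visited is a list

def mark_visited(node, visited):
    visited.append(node)

# Resetting for the next traversal is just: visited = []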

Boolean Flag

If we decide to stick with a boolean flag since it saves us memory, we have a problem with time complexity. Once we've traversed a graph, we must re-traverse it just to reset each node's visited flag to False. For an n-node graph, the traversal time is now O(2n), where n is the number of nodes. This is still linear time, but if we're talking about 1,000,000 nodes instead of 100, twice the work takes, well, twice as long.

So what to do? It seems we're either stuck between doubling our traversal time, or taking up 8 times more memory than necessary. A simple trick is to make use of what I'll naively call an Iteration Key.

Iteration Key

I came up with this concept (which I'm sure is not original) after studying encryption techniques. A common security measure when encrypting many documents is to encrypt groups of documents with different keys, and then encrypt each of those keys with a master key. Every now and then it's a good idea to change your keys, just like changing your password. The problem is, we're then faced with re-encrypting all those documents, which can take a long time. But we actually don't have to! Instead, we can re-encrypt the document keys with a new master key, effectively changing the decryption path for every document.

Using this approach on our graph, we can keep track of an iteration for the tree, and for each node. Here's an updated Node and Tree class:

class Node():
    def __init__(self, value):
        self.value = value
        self.children = []
        self.iteration = 0

    def isVisited(self, tree):
        if (self.iteration < tree.iteration):
            self.iteration = tree.iteration
            return False
        if (self.iteration > tree.iteration):
            raise Exception("Node iteration higher than tree iteration")
        return True
        
class Tree():
    def __init__(self):
        self.root = Node(None)
        self.iteration = 0

Now, when we want to traverse the graph, we increment the tree's iteration by 1. This immediately makes every node's iteration value less than the tree's iteration value. When we want to check if a node has been visited, we call its isVisited() method passing in a reference to the containing tree. isVisited() checks for three possible cases:

  1. The node's iteration value is less than the tree's.
  2. If a node's iteration value is greater than the tree's, we throw an error.
  3. The iteration value of the node and tree are equal.

The first case returns False indicating the node is unvisited. We set the node's iteration value equal to the tree's to mark it as visited.

The second case throws an error since no node should ever get ahead of the tree.

The third case evaluates to True: the node has been visited. This final case applies when the node's iteration is implicitly equal to the tree's.

After our traversal, we simply increment the tree's iteration value by 1 once again to reset each node to unvisited.
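Here's a quick usage sketch (the tiny tree built here is hypothetical) showing the reset in action with the classes above:

# Build a tiny tree: a root with two children (hypothetical setup).
tree = Tree()
a, b = Node(1), Node(2)
tree.root.children = [a, b]

# First traversal: bump the tree's iteration, then walk the nodes.
tree.iteration += 1
print(a.isVisited(tree))  # False -> a is now marked visited for this pass
print(a.isVisited(tree))  # True  -> already visited in this pass

# "Reset" every node in O(1): just bump the tree's iteration again.
tree.iteration += 1
print(a.isVisited(tree))  # False -> a reads as unvisited once more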

Iteration Key Complexity

So how does this method stack up against our alternatives? Let's look at time complexity first. With no need to retraverse the tree to reset each node, this approach effectively reduces the complexity of resetting from O(n) to O(1).

Now for memory complexity. Let's say we use an unsigned char for storing the node's iteration. This consumes 1 byte of memory, so no more than a boolean visited flag, and can count up to 255 iterations. Using just 2 bytes of memory to store an unsigned integer, we can track up to 65,535 iterations. We could go even higher, using a 4-byte integer, which gives us 4,294,967,295 iterations at half the memory needed to store a list of visited nodes.

Once we've maxed out whatever form of counter we're using, we do need to retraverse the tree, resetting the iteration of nodes to 0. However, 1 out of 4 billion times is a lot better than re-traversal every time.

So I lied, it's not constant time O(1), it's linear time O(n/4,294,967,295) which in my book, might as well be constant time.

Migrating GitHub Repositories with GitMover by Alexander Hadik

GitHub Enterprise has been a godsend to developers working at companies that need to keep code on their own servers. GitHub was a distant dream at IBM before GitHub Enterprise, and now many of our teams use it. In fact, GitHub is so popular as a project management tool, many teams have done away with Jira and the like in favor of GitHub issues and ZenHub.

I'm certainly onboard with this transition towards modern tooling at IBM, however with this adoption of "walled-garden" GitHubs has come a deterioration of the open-source ethos so central to the "original" GitHub. Many teams at IBM are maintaining organizations and repositories on IBM's GitHub Enterprise installation, and public GitHub. Why both? Internal projects are kept on GitHub Enterprise, and open-source initiatives, increasingly a focus of corporations, are kept on public GitHub.

We keep seeing a problem with this though at IBM. We are an organization so large (400,000 employees) that a project can easily start and grow in an open-source nature on GitHub Enterprise, with individuals from all over the company contributing. This is a fantastic example of how GitHub Enterprise is a great solution for large corporations. However, this also means that these projects can have tens or hundreds of issues with custom labels and associated milestones.

So what happens when we decide one of these internal projects should be open-sourced in the true sense of the word, and set free on public GitHub? The repo can be cloned over to public GitHub pretty easily, but all of the project management assets are lost. This can take the wind out of the sails of a project pretty quickly if would-be contributors can't see what needs to be worked on.

So in some senses, this is a plea to GitHub Enterprise users to be cautious of "internal" open sourcing. If a project has no reason to be hidden behind firewalls, let's keep it out in the open. It can be tempting to play it safe and just throw it on the Enterprise installation - but this has ramifications down the road.

If a project does end up in that sticky situation of being born on GitHub Enterprise, I present a tool to help with migration called GitMover. It's a simple Python script that takes Git repos on any type of GitHub installation and copies issues, labels and milestones from one to the other. It's the perfect tool to help automate the open-sourcing of an internal project.

Of course, GitMover has other uses too:

  1. Migrating repositories (private or public) from public GitHub to GitHub Enterprise if your team worked in private repos and is now hitting the big leagues.
  2. Merging repositories. If you want to combine issues from multiple repositories into a single one, this tool does its best to handle name clashes where they matter. It'll even keep assignees on issues if that user is found on both the source and destination repos.

For the moment, the command line options of this tool are a bit complicated - they'll get better with time I promise! But here's the documentation you can get in the command line:

usage: git-mover.py [-h] [--destinationToken [DESTINATIONTOKEN]]
                    [--destinationUserName [DESTINATIONUSERNAME]]
                    [--sourceRoot [SOURCEROOT]]
                    [--destinationRoot [DESTINATIONROOT]] [--milestones]
                    [--labels] [--issues]
                    user_name token source_repo destination_repo

So use this tool as you see fit, and please let me know how it works (or if it doesn't!). Hopefully it helps you use GitHub Enterprise to its best abilities - and open your work up to the rest of the world as well.

Some more detailed explanations if you feel the urge:

Migrate Milestones, Labels, and Issues between two GitHub repositories. To
migrate a subset of elements (Milestones, Labels, Issues), use the element
specific flags (--milestones, --lables, --issues). Providing no flags defaults
to all element types being migrated.

positional arguments:
  user_name             Your GitHub (public or enterprise) username:
                        name@email.com
  token                 Your GitHub (public or enterprise) personal access
                        token
  source_repo           the team and repo to migrate from:
                        <team_name>/<repo_name>
  destination_repo      the team and repo to migrate to:
                        <team_name>/<repo_name>

optional arguments:
  -h, --help            show this help message and exit
  --destinationToken [DESTINATIONTOKEN], -dt [DESTINATIONTOKEN]
                        Your personal access token for the destination
                        account, if you are migrating between GitHub
                        installations
  --destinationUserName [DESTINATIONUSERNAME], -dun [DESTINATIONUSERNAME]
                        Username for destination account, if you are migrating
                        between GitHub installations
  --sourceRoot [SOURCEROOT], -sr [SOURCEROOT]
                        The GitHub domain to migrate from. Defaults to
                        https://www.github.com. For GitHub enterprise
                        customers, enter the domain for your GitHub
                        installation.
  --destinationRoot [DESTINATIONROOT], -dr [DESTINATIONROOT]
                        The GitHub domain to migrate to. Defaults to
                        https://www.github.com. For GitHub enterprise
                        customers, enter the domain for your GitHub
                        installation.
  --milestones, -m      Toggle on Milestone migration.
  --labels, -l          Toggle on Label migration.
  --issues, -i          Toggle on Issue migration.

Coffee x Design by Alexander Hadik

A French press, modeled in SolidWorks and referenced from a real product. Completed for ENGN 1740, Computer Aided Design, at Brown University. The course focuses on modeling from a mechanical engineering perspective; namely, preparing the necessary assets for mass production.

Generative Toy by Alexander Hadik

DNA MODELING KIT

We all find an innate pleasure in play; in taking raw materials and assembling something new and original. Part of the intrigue of play is its constraints, and the innovation that is bred by those constraints.

I found a unique parallel between this sense of play and the way that nature assembles the instructions for living organisms in the form of DNA. DNA's structure is bound by the chemical properties of its building blocks. It forms into its familiar double helix not because an outside force crafts it that way, but because this is the most natural combination of the elements. These natural, almost divine interactions that structure our world closely mirror the way the human mind approaches the constraints of play.

This project brings this parallel to life with a collection of cut 2D pieces that can be assembled by the user into a multitude of shapes. These pieces mirror the nucleic acids of DNA and the chemical bonds that form between them. From this, one can see how the structure of DNA forms, where its weaknesses and strengths lie, and what other forms it might try to take.

PROTOTYPING & PRODUCTION

Pieces were designed in Adobe Illustrator, and produced on a laser cutter. This quick production method allowed for fast prototyping and iteration.

SOLUTION

I wanted to explore the chemical properties of the DNA double helix in a physical form, so I decided to model the molecules and chemical bonds of the helix from laser-cut masonite. The user could of course assemble a double helix; however, the real exploration was to see what other structural shapes could be constructed from the components of a DNA molecule. DNA does take shapes beyond the familiar double helix, and this toy uses play to demonstrate these complex chemical properties.

Swarm by Alexander Hadik

Visualizing Team Building Exercises

Each node attempts to position itself to make an equilateral triangle with two other randomly selected nodes.

There's a series of team-building exercises, which I was taught recently during my IBM Design internship, to play with a group of people. The concept is that the entire group stands randomly within a room, and each player randomly selects two other players. Then everyone tries to follow a predetermined geometric rule. The fun part is that no one knows who anyone else's partners are. The rules we played with were:

  1. Make an equilateral triangle with yourself and your two partners.

  2. Keep one partner (your protector) between yourself and the other partner (your enemy).

  3. Keep yourself in a line between your two partners.

I was curious what a final solution for each of these games might look like if they were carried out to infinity. So I coded the first example up in HTML. The GitHub page for this repo (ahadik.github.io/swarm) offers a live demo. The resulting behavior is interesting; for example, the nodes appear to form a circle before condensing entirely, with a few points left in the middle.
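For the curious, here's a rough sketch of the kind of update rule each node could follow in the first game, written in Python purely for illustration (the live demo itself runs in the browser): step toward the nearer of the two points that would form an equilateral triangle with your two partners.

import math
import random

def equilateral_step(p, a, b, step=0.05):
    """Move point p a small step toward the nearer apex of the
    equilateral triangle whose base is the segment a-b."""
    mx, my = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2   # midpoint of the base
    dx, dy = b[0] - a[0], b[1] - a[1]               # base vector
    h = math.sqrt(3) / 2                            # apex offset relative to base length
    apex1 = (mx - dy * h, my + dx * h)              # apex on one side of the base
    apex2 = (mx + dy * h, my - dx * h)              # apex on the other side
    target = min(apex1, apex2,
                 key=lambda t: (t[0] - p[0]) ** 2 + (t[1] - p[1]) ** 2)
    return (p[0] + (target[0] - p[0]) * step,
            p[1] + (target[1] - p[1]) * step)

# Every node picks two random partners, then everyone repeatedly steps.
n = 50
points = [(random.random(), random.random()) for _ in range(n)]
partners = [random.sample([j for j in range(n) if j != i], 2) for i in range(n)]
for _ in range(1000):
    points = [equilateral_step(points[i], points[partners[i][0]], points[partners[i][1]])
              for i in range(n)]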

I'll be releasing the other two examples and hopefully some others in the near future.