Choosing The Right Tool For The Job


It has occurred to me recently that some tools and languages are better than others when starting a project (duh). For example, I now prefer Rails over Node.js for a project’s phase 1. The reason is that even if Node might be faster (I don’t actually know that it is; just for argument’s sake), and even though Rails offers a lot more “magic” (which some like and others don’t), Rails has many mature tools that let a developer get a phase one built more quickly. (Node.js might have similar tools, but one can argue that Rails’s are more mature.) And, obviously, once you have a phase one, you can better know what a phase two will look like.
I’m not advocating for always choosing the tools that are faster. Making decisions that way can lead to more technical debt. There are many tools that take a long time to learn but pay off in the end. For example, something like Kubernetes might take a good while to learn and get good with, but the benefits are huge: you would then have a powerful tool for managing distributed systems. However, your new product probably doesn’t need to be distributed yet (and it might never need to be). Spending time on this at the start wastes time and money that you might need to make your product amazing. Of course, I’m talking about building MVPs.
Another major consideration is using what you already know. If you are trying to build something, you probably shouldn’t always be reaching for the newest tool or language on the block (looking at you, JavaScript frameworks). Sometimes what you know now is the quickest tool for the job because you don’t have to spend your new product’s time learning a new tool or language. Plus, not having to learn something major is one less roadblock to getting your product out there in the hands of real users.

Thanks for reading,

John

Using Golang For Custom Wallpapers

If you’re like me, then you like to constantly have new wallpapers. I often even have scripts to automatically switch my wallpapers. So naturally, I decided that I wanted to always have custom wallpapers with my name on them. And, what better way to achieve this than with Golang’s draw libraries? So here were my project goals:
1. Allow for custom text and icons.
2. Allow for easy scripting.
3. Allow for different image sizes.

The code for this post is available on my Github page. I highly recommend the images from unsplash for your wallpaper needs. (That is where I downloaded the wallpapers for this post!) Now, before we get started, let’s look at the inputs and output.

Here’s our mask.
image mask

and here’s a sample output.
Sample Image
The Go app takes the input image, flips it, inverts the colors, then draws this new image onto the original in the area allowed by our mask. Let’s start by writing the code to read in our source images.

Here, we are using our command line arguments as file paths and passing them to our “readImage” function. This function simply reads in the file data and converts it to an Image.

Next, we need to scale our mask so that it will fit when our wallpaper is larger or smaller. We will use our “dst” image’s size.

Now we can actually transform our image. I decided to move these image functions into a separate Go package. Let’s start by flipping our “dst” image.

This function simply switches the pixels horizontally and vertically.

Next, let’s invert the colors in our image.

Here we make a struct to represent a pixel’s colors. We then simply make a new image and set each pixel to the corresponding pixel from our input with inverse RGB values.

Now that our image functions are done, we can continue. One small thing we need to take care of is converting our “dst” object from an “image.Image” to an “image.RGBA”. We do this by creating a new RGBA object and drawing our image.

Finally we can actually draw our image.

We pass our converted dst image and its bounds, our modified image, the image zero point, our mask, the image zero point (again), and finally the “Over” operation. This call does the actual work of drawing our modified image over our original through the mask we provided.

All that’s left to do is to save our final image.

And we should now have a fancy wallpaper to use. Here are some more sample outputs.

bridge

bridge2

hill

jump

trees

I hope this post was a good introduction to Golang’s image and draw libraries. I went over just a couple of possible transforms one could apply to an image. A challenge to the reader is to add different image manipulations to create even cooler output images. Also, remember that the code for this post can be found on my Github page.

Thanks for reading,

John

Scaling A Websocket Application With RabbitMQ

Socket.io is extremely powerful when it comes to communicating between the browser and a server in real time. However, the problem of scaling quickly arises when client counts get very high or load balancing becomes necessary. This problem can be easily and effectively addressed with RabbitMQ. This method also allows for a very extendable architecture when the project’s goals inevitably grow and/or change. We will go over some quick basics for these tools as well as extend an existing chat application to use RabbitMQ and multiple node processes. The demo application is available on my Github page.

Socket.io

Socket.io is a library that implements the websocket protocol. Websockets are meant for two-way communication and are often used between a server and a web browser. This is a sharp contrast to the standard way a browser communicates with a server. Typically, a web browser makes requests over ‘http’ or ‘https’ and the server responds. When you type in “https://google.com”, there is a server that receives your browser’s request and does its best to send back a document. Data (such as JSON) can be sent over AJAX requests, but this requires the web browser to ask for the information. If the browser needs to wait for new information, it has to poll, asking the server for updated information every X seconds.

With websockets however, communication is free to take place between a web browser and a server. This means that the server can push information to the web browser and vice versa. This type of communication is great for chat apps, simple games, and real time dashboards.

RabbitMQ

RabbitMQ is a message queue. There are many models for building applications that use RabbitMQ. Just take a look at their tutorials for some samples. For example, you might use the worker model for a web application that has some long-running task like resizing an image. The RabbitMQ server can even implement acknowledgments to make sure the resize completes even if the worker process crashes midway through. It can simply route the job to another worker. However, I won’t be covering acknowledgments in this post.

In this post, our chat application will use a publish and subscribe model. We will use it to send to and listen for messages from our chat application. Our chat servers will not need to know about each other. They will only need to know the IP address of RabbitMQ. RabbitMQ also offers a nice web UI and allows for clustering if our application ever requires it. Our application acts as a “producer” when it sends messages to RabbitMQ. These messages are sent to an exchange. This exchange routes messages to queues and then our application acts as a “consumer” and reads them.

producer -> exchange -> queue -> consumer

Our Base

For this demo, I have created a small chat application that we will extend to use RabbitMQ. It is available on my Github page. Currently, it uses Express JS as a server to serve our chat page and Socket.io for our messaging. Socket.io will actually handle the work of getting our browser connected via websockets. Let’s take a look at the “message_handler.js” file.

Here we are telling Socket.io to wait for connections. Once connected, we wait for a disconnect or a message from a socket. Once a message is received, we simply emit the message out to all listening clients.

Our client code is also very simple.

I use a small amount of jQuery to listen for the user to submit a chat message and to add new messages from the server to the page.
Note: I am not doing any input sanitization for the demo app.
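The browser side might look roughly like this; the element ids are my guesses at the demo’s markup, and it assumes the socket.io client script and jQuery are already loaded on the page:

```javascript
// client-side sketch -- element ids are hypothetical.
var socket = io();

// Append each message from the server to the chat list.
socket.on('message', function (msg) {
  $('#messages').append($('<li>').text(msg));
});

// Send the user's message when the form is submitted.
$('#chat-form').submit(function (e) {
  e.preventDefault();
  socket.emit('message', $('#message-input').val());
  $('#message-input').val('');
});
```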

Extending with RabbitMQ

We can start setting up this application to scale by creating a file to handle talking to RabbitMQ for us (“rabbitMQ_messaging.js”).

Here we start by importing an amqp library to communicate with RabbitMQ. Then we export our function that will setup our connection to RabbitMQ.

Next, we create a channel. This is what we talk to RabbitMQ through.

From here, we need to assert our exchange. Our exchange is what our application will send our chat messages to in RabbitMQ. We chose the ‘fanout’ type to tell RabbitMQ that we want each message delivered to every queue bound to the exchange.

We use “assertQueue” with an empty string to define a temporary queue as described here. Finally, we bind our queue and our exchange. This tells the exchange to send our chat messages to this queue. Now we can start sending and receiving messages.

We create an “options” object that will contain our functions for sending and receiving messages. Using this method, we can replace the “onMessageReceived” function to do something more useful later.
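Putting those steps together, a sketch of the whole file; I’m using the “amqplib” package and a made-up exchange name, both assumptions:

```javascript
// rabbitMQ_messaging.js -- a sketch; assumes the `amqplib` package.
const amqp = require('amqplib');

const EXCHANGE = 'chat_messages'; // hypothetical exchange name

module.exports = async function setup(address) {
  // Connect, then open a channel: everything we say to RabbitMQ
  // goes through this channel.
  const connection = await amqp.connect(address);
  const channel = await connection.createChannel();

  // Assert our exchange; 'fanout' delivers each message to every bound queue.
  await channel.assertExchange(EXCHANGE, 'fanout', { durable: false });

  // An empty name asks RabbitMQ for a temporary, exclusive queue;
  // binding it means our chat messages land there.
  const q = await channel.assertQueue('', { exclusive: true });
  await channel.bindQueue(q.queue, EXCHANGE, '');

  // The options object: publish outgoing messages, and let the caller
  // swap in its own onMessageReceived handler later.
  const options = {
    sendMessage(msg) {
      channel.publish(EXCHANGE, '', Buffer.from(msg));
    },
    onMessageReceived(msg) {
      console.log('received:', msg); // meant to be replaced by the caller
    },
  };

  channel.consume(q.queue, function (raw) {
    options.onMessageReceived(raw.content.toString());
  }, { noAck: true });

  return options;
};
```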

Now that we have built this file, let’s modify our “message_handler.js” file to use RabbitMQ.

We start by importing our file and passing in our address string for our message queue. Next, we replace the “onMessageReceived” function.

Since this function is now sending to clients, we need our application to send messages to RabbitMQ.
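A sketch of the modified handler; the file path and the ‘amqp://localhost’ address are assumptions:

```javascript
// message_handler.js, extended -- a sketch.
const rabbit = require('./rabbitMQ_messaging');

module.exports = async function (io) {
  const queue = await rabbit('amqp://localhost');

  // Messages now arrive from RabbitMQ, and we fan them out to our sockets.
  queue.onMessageReceived = function (msg) {
    io.emit('message', msg);
  };

  io.on('connection', function (socket) {
    // Incoming chat messages go to RabbitMQ instead of straight to clients,
    // so every node process (not just this one) can deliver them.
    socket.on('message', function (msg) {
      queue.sendMessage(msg);
    });
  });
};
```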

Adding More Servers

Now, let’s test adding a few node servers. We can see that our applications are talking to each other by starting a few on different ports. My demo application reads a “NODE_PORT” environment variable to know which port to run on.
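Assuming the entry point is “index.js” (the file name is my guess), starting a few copies might look like:

```shell
# Start three chat servers, each on its own port.
NODE_PORT=3000 node index.js &
NODE_PORT=3001 node index.js &
NODE_PORT=3002 node index.js &
```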

final working application

Recap

For smaller applications, scaling in the way that we have discussed may not be necessary. The chat application could also be further extended, if logging or some other service were required, by letting other node applications subscribe to these events in RabbitMQ. We went over some of the basics of RabbitMQ with Socket.io and applied them to a chat application to help it scale. If you thought this was awesome, please share it!

Until next time,

John

A Beginner’s Guide to rkt Containers

If you know me, then you know I am a Docker fan-boy. Docker offers a lot in terms of tooling. In an attempt to avoid vendor lock-in, I chose to research rkt and some of the features it offers. Rkt is maintained by CoreOS, which has also developed several other tools to assist in managing containers. (Some of which I might talk about in future blog posts.) Note: These scripts are available on my Github page.

I won’t go much into the installation of rkt; there are great instructions available on the website. Once installed, we can run a simple “rkt help”

to see some of the available options.

Images

Rkt uses ACI files for its images. However, it also has the ability to download and convert Docker images from Docker registries and Docker Hub! Let’s try running the nginx image from Docker Hub.
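A fetch pulls the image down and converts it (running it directly would work too, since rkt fetches on demand):

```shell
rkt fetch --insecure-options=image docker://nginx
```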

Note: the “--insecure-options=image” flag is needed because Docker images can’t be verified by rkt like ACI files can. We use the “docker://” prefix to tell rkt that this is a Docker image. (See this link.) Now, let’s list our local images.
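Listing is just:

```shell
rkt image list
```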

You should see nginx listed. We can also export this image to an ACI file by running:
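Something like the following; use the image name exactly as “rkt image list” prints it, since the converted name may differ on your machine:

```shell
# The exact image name comes from `rkt image list`.
rkt image export registry-1.docker.io/library/nginx:latest nginx.aci
```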

There should now be an “nginx.aci” file in your current directory. Now, let’s create a container.

Creating containers
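Presumably something like:

```shell
rkt run --insecure-options=image docker://nginx
```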

Note: Exit a running container by pressing ‘Ctrl’ and ’]’ three times.
You should see some output. Let’s try pointing our web browser to “localhost”…

error page

As you can tell, it seems like our service is not running. However, if we run

```shell
rkt list
```

We can see that the nginx container is accessible at “172.16.28.3”.

welcome to nginx

We can also run our container with the “--net=host” option to link the host network to the container and access it at “localhost” like so.
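For example:

```shell
rkt run --net=host --insecure-options=image docker://nginx
```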

Accessing files

Let’s say we want to access our container through the shell for debugging purposes, similar to how one might access a running Docker container using “docker exec”. We can easily run…
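Something like this (the nginx image ships with bash, so we can ask for a shell):

```shell
rkt enter f2323923 /bin/bash
```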

(f2323923 is the uuid of my container. Yours will differ.)
Now, we are running “/bin/bash” in our container. Let’s add some text to our nginx welcome page.
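For example (the page text is mine; “/usr/share/nginx/html” is the default web root in the Docker nginx image):

```shell
# Inside the container: replace nginx's default welcome page.
echo '<h1>Hello from rkt!</h1>' > /usr/share/nginx/html/index.html
```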

Now we can see our edited page by refreshing our browser.

updated page

It would be very tedious to run these commands every time we wanted to modify our html page. We can solve this by mounting a volume in our container.
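A sketch of the run command; the volume name and the host’s “./html” directory are my choices:

```shell
# Mount ./html from the host over nginx's web root.
rkt run --net=host --insecure-options=image \
  --volume html,kind=host,source=$(pwd)/html \
  docker://nginx \
    --mount volume=html,target=/usr/share/nginx/html
```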

If we try to navigate our browser to our new nginx container, we get a 403 page. This is because our new html folder is empty. We can add an “index.html” page on our host machine inside this html directory and reload our browser to see it updated.

new page

Creating ACIs

Now, let’s talk about building our own ACIs. I will be using a tool called [acbuild](https://github.com/appc/acbuild). This tool allows us to write a script file and build images similar to how a Dockerfile works. However, one benefit is that we gain the power and flexibility of the shell. So, I highly recommend you install this tool. Let’s start by creating a script for our container to run.
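Something like this; the “/timelog/logs” path is my choice, picked so the file can later be shared through a mounted volume:

```shell
#!/bin/sh
# log.sh -- print a timestamp every 5 seconds, to stdout and to a file.
mkdir -p /timelog
while true; do
  now=$(date)
  echo "$now"
  echo "$now" >> /timelog/logs
  sleep 5
done
```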

This script simply logs the current date and time to the console and to a file every 5 seconds. Next, we can write the script that will build our ACI file.

Here, we set up our script. I named mine “john.pettigrew.rocks/log”. Next, we base it on Alpine Linux. An interesting thing to note is that quay.io/coreos/alpine-sh is also visitable by your web browser.

Now we create a directory and add our script that we created earlier.

And, finally, we tell acbuild to run our script when our container starts and write our image to a file.
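Assembled into one file, the build script might look like this (the paths and the output name are my choices; note that “acbuild run” needs appropriate privileges on the host):

```shell
#!/bin/sh
# log_container.sh -- build our ACI with acbuild.
set -e
acbuild begin
acbuild set-name john.pettigrew.rocks/log
acbuild dep add quay.io/coreos/alpine-sh   # base the image on Alpine Linux
acbuild run -- mkdir -p /app               # create a directory inside the image
acbuild copy log.sh /app/log.sh            # add the script we wrote earlier
acbuild set-exec -- /bin/sh /app/log.sh    # run our script on container start
acbuild write --overwrite log.aci
acbuild end
```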

Since this is a normal shell script, we could have done a lot of interesting things, like setting environment variables from our host or conditionally adding things to our final container image. Now, let’s try out our new container image. (I saved mine as “log_container.sh”.)

Note: We have to add the “--insecure-options=image” flag because we don’t have an asc file. See this page.
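With the build script in place, trying it out might look like (“log.aci” matches the name written by the build script):

```shell
./log_container.sh                          # build log.aci
rkt run --insecure-options=image ./log.aci
```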

We can see that our container is printing to the console, and if we use the following, we should also see the logs file being updated.
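In a second terminal, we can grab the pod’s uuid from “rkt list” and watch the file (assuming the script writes to “/timelog/logs”):

```shell
rkt list
rkt enter <pod-uuid> /bin/sh -c 'tail -f /timelog/logs'
```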

Multi-Container Pods

Let’s put together some of the information we have learned today and run two containers in the same pod and share a mounted volume between them. We have the option to define mount points while we build our image and also when we run our containers. Since our images are already built, we’ll do the latter.
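Here is the command, roughly; the volume name and mount targets are my choices, and it assumes the logging script writes its file under “/timelog”:

```shell
rkt run --net=host --insecure-options=image \
  --volume timelog,kind=empty \
  docker://nginx \
    --mount volume=timelog,target=/usr/share/nginx/html \
  ./log.aci \
    --mount volume=timelog,target=/timelog
```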

This command seems complicated but when we break it down, it’s really not. We tell rkt to run just like we did before with the insecure option. We then create a volume “timelog” for our containers to share. (We specify “empty” instead of “host” since our volume won’t be on the host.) We run the nginx container specifying our mount point and binding it to the host’s network. Finally, we start our custom time log container and set a mount point for it.
The result can be seen by navigating to “localhost/logs” in our browser. A “logs” file should download containing our date logs.

There are many benefits to using rkt. For example, using scripts to create our images gives us a great level of flexibility for our containers. Also, with ACI files, we can set up container storage without having to run a special “registry”. As we’ve now seen, many of the common tasks we use Docker for can be accomplished with rkt, and I have barely even scratched the surface of how powerful it really is. I highly recommend giving it a try and seeing if it fits in your workflow.

– John P.