<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Alex N. Jose]]></title><description><![CDATA[Journals, thoughts and ideas.]]></description><link>http://alexnj.com/blog/</link><generator>Ghost 0.6</generator><lastBuildDate>Sun, 30 Apr 2023 02:38:39 GMT</lastBuildDate><atom:link href="http://alexnj.com/blog/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Updating CA root certificate bundle on Synology]]></title><description><![CDATA[<p>I ran into the issue of my Synology NAS not being able to pull from my local Docker registry:  </p>

<pre><code class="language-shell">docker: Error response from daemon: Get "https://redacted-local-hostname.net/v2/": x509: certificate has expired or is not yet valid  
</code></pre>

<p>Turns out my Synology hasn't been picking up the latest CA root</p>]]></description><link>http://alexnj.com/blog/updating-root-certificates-on-synology/</link><guid isPermaLink="false">fbff25f2-2508-46db-9e2f-cab2753e0f76</guid><dc:creator><![CDATA[Alex N. Jose]]></dc:creator><pubDate>Thu, 20 Jan 2022 00:07:23 GMT</pubDate><content:encoded><![CDATA[<p>I ran into the issue of my Synology NAS not being able to pull from my local Docker registry:  </p>

<pre><code class="language-shell">docker: Error response from daemon: Get "https://redacted-local-hostname.net/v2/": x509: certificate has expired or is not yet valid  
</code></pre>

<p>It turned out my Synology hadn't been picking up the latest CA root certificates. I could verify that this was the issue by running <code>curl</code>:</p>

<pre><code class="language-shell">curl -I https://alexnj.com  
curl: (60) SSL certificate problem: certificate has expired  
More details here: https://curl.haxx.se/docs/sslcerts.html  
...
</code></pre>

<p>Fixing this turned out to be rather easy. The commands below first back up the existing CA-certificate bundle with a <code>.backup</code> extension, in case you want to revert for any reason, then download the up-to-date root certificates from curl.se in PEM format and move them into place, overwriting Synology's CA-certificate bundle.  </p>

<pre><code class="language-shell">cp /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt.backup  
wget --no-check-certificate https://curl.se/ca/cacert.pem  
mv cacert.pem /etc/ssl/certs/ca-certificates.crt  
</code></pre>
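
<p>As an optional sanity check (my own addition, not part of the original steps), you can count the certificates in the downloaded bundle before moving it into place. A small helper like this works:</p>

```shell
# Count the certificates in a PEM bundle (hypothetical helper, for a quick sanity check).
count_certs() {
  grep -c "BEGIN CERTIFICATE" "$1"
}
```

<p>Running <code>count_certs cacert.pem</code> on the freshly downloaded bundle should print a count in the low hundreds; a very small number suggests a truncated download.</p>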

<p>After this, the same curl command started succeeding. However, Docker was still throwing the same error, meaning it hadn't picked up the updated root certificates. The fix: restart the Synology Docker daemon:</p>

<pre><code class="language-bash">synoservice --restart pkgctl-Docker  
</code></pre>

<p>That took care of it. If you run into the same issue with your Synology, hope this helps!</p>]]></content:encoded></item><item><title><![CDATA[Useful Docker commands]]></title><description><![CDATA[<h3 id="debugastoppedcontaineroropenashellintoone">Debug a stopped container, or open a shell into one</h3>

<p>Since Docker doesn't allow a container's entrypoint being modified, we need to create a new container on the same image, and modify entrypoint to be a shell. One can use the command below to shell into a Docker image to</p>]]></description><link>http://alexnj.com/blog/docker-useful-commands/</link><guid isPermaLink="false">ae2bab55-8b5d-4e2b-b512-b7847693822b</guid><dc:creator><![CDATA[Alex N. Jose]]></dc:creator><pubDate>Sun, 26 Jul 2020 18:40:03 GMT</pubDate><content:encoded><![CDATA[<h3 id="debugastoppedcontaineroropenashellintoone">Debug a stopped container, or open a shell into one</h3>

<p>Since Docker doesn't allow a container's entrypoint to be modified, we need to create a new container from the same image, with the entrypoint overridden to a shell. Use the command below to shell into a Docker image to examine its contents, or to debug the image. </p>

<pre><code class="language-shell">docker run -it --entrypoint sh node:10-alpine  
</code></pre>

<p>Replace <code>node:10-alpine</code> with the correct image name. The shell here is <code>sh</code>, which is available in Alpine images, but you can substitute any other shell or command.</p>]]></content:encoded></item><item><title><![CDATA[Fix for wrong domain suffix for local devices in USG networks]]></title><description><![CDATA[<p>If your devices (especially Macs) are picking up the wrong domain suffix during DHCP with your Ubiquiti USG gateways, the issue is with USG's implementation.</p>

<p>USG remembers the preferred DNS hostname of local devices as they negotiate DHCP, and records it in its <code>/etc/hosts</code> file.</p>]]></description><link>http://alexnj.com/blog/fix-for-wrong-domain-suffix-in-usg-networks/</link><guid isPermaLink="false">40d7d7cf-9b2b-48ee-a527-e18086c5ed6f</guid><dc:creator><![CDATA[Alex N. Jose]]></dc:creator><pubDate>Sun, 19 May 2019 18:06:57 GMT</pubDate><content:encoded><![CDATA[<p>If your devices (especially Macs) are picking up the wrong domain suffix during DHCP with your Ubiquiti USG gateways, the issue is with USG's implementation.</p>

<p>USG remembers the preferred DNS hostname of local devices as they negotiate DHCP, and records it in its <code>/etc/hosts</code> file. When a client changes its preferred name, USG doesn't seem to respect the change, and continues to serve the previously cached name from the <code>hosts</code> file. </p>

<p>To fix this: <br>
1. Log in to your USG gateway via SSH. <br>
2. Open its <code>/etc/hosts</code> file and remove all the cached entries. <br>
3. Reboot the USG gateway and force your clients to reconnect and acquire new DHCP leases.</p>]]></content:encoded></item><item><title><![CDATA[Enable mouse-wheel scroll on tmux with iTerm2]]></title><description><![CDATA[<p>The trick is to enable mouse support. Add the following line to <code>~/.tmux.conf</code>:</p>

<pre><code class="language-shell">setw -g mouse on  
</code></pre>]]></description><link>http://alexnj.com/blog/enable-mouse-wheel-scroll-on-tmux-with-iterm2/</link><guid isPermaLink="false">2bdf0f08-4be4-4e6c-9824-a66e91bc107c</guid><dc:creator><![CDATA[Alex N. Jose]]></dc:creator><pubDate>Wed, 27 Mar 2019 17:56:12 GMT</pubDate><content:encoded><![CDATA[<p>The trick is to enable mouse support. Add the following line to <code>~/.tmux.conf</code>:</p>

<pre><code class="language-shell">setw -g mouse on  
</code></pre>]]></content:encoded></item><item><title><![CDATA[LetsEncrypt+Docker to issue certificates against DNS challenge]]></title><description><![CDATA[<p>Running LetsEncrypt in Docker is the best way to ensure DNS plugins are available, regardless of your platform.</p>

<p>First, create two folders, <code>conf</code> and <code>lib</code>, in the current folder. We'll set these up as two-way shares between the Docker container and the host, and use them to retrieve the certificates once the steps are complete.</p>]]></description><link>http://alexnj.com/blog/use-letsencryptdocker-to-issue-certificates/</link><guid isPermaLink="false">b6034f41-f55d-4662-a307-613e5069123a</guid><dc:creator><![CDATA[Alex N. Jose]]></dc:creator><pubDate>Fri, 08 Feb 2019 19:19:09 GMT</pubDate><content:encoded><![CDATA[<p>Running LetsEncrypt in Docker is the best way to ensure DNS plugins are available, regardless of your platform.</p>

<p>First, create two folders, <code>conf</code> and <code>lib</code>, in the current folder. We'll set these up as two-way shares between the Docker container and the host, and use them to retrieve the certificates once the steps are complete.</p>

<pre><code>sudo docker run -it \  
  --rm --name certbot \
  --mount src="$(pwd)/conf",target=/etc/letsencrypt,type=bind \
  --mount src="$(pwd)/lib",target=/var/lib/letsencrypt,type=bind  \
  certbot/certbot certonly --manual \
  --preferred-challenges dns
</code></pre>]]></content:encoded></item><item><title><![CDATA[Create two-way shares between Docker host and container]]></title><description><![CDATA[<p>If you are using Docker volumes to copy files over to a container and would like two-way sharing (i.e., changes in the container reflected in the host folder), here's the <code>docker-compose</code> syntax to use:</p>

<pre><code>volumes:  
  shared:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: "/home/alexnj/projects/</code></pre>]]></description><link>http://alexnj.com/blog/create-two-way-shares-between-docker-host-and-container/</link><guid isPermaLink="false">71204abe-92e5-4ebb-a323-bb461bf5c356</guid><dc:creator><![CDATA[Alex N. Jose]]></dc:creator><pubDate>Wed, 06 Feb 2019 19:47:47 GMT</pubDate><content:encoded><![CDATA[<p>If you are using Docker volumes to copy files over to a container and would like two-way sharing (i.e., changes in the container reflected in the host folder), here's the <code>docker-compose</code> syntax to use:</p>

<pre><code>volumes:  
  shared:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: "/home/alexnj/projects/test/shared"

services:  
  web:
    build: .
    command: npm run start
    volumes:
      - shared:/app/shared
</code></pre>]]></content:encoded></item><item><title><![CDATA[Install telegraf on Raspbian Lite]]></title><description><![CDATA[<p>First add InfluxDB repository:</p>

<pre><code>curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
source /etc/os-release
test $VERSION_ID = "7" &amp;&amp; echo "deb https://repos.influxdata.com/debian wheezy stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
test $VERSION_ID = "8" &amp;&amp; echo</code></pre>]]></description><link>http://alexnj.com/blog/install-telegraf-on-raspbian-lite/</link><guid isPermaLink="false">485e1c42-4bdc-4ca4-a788-cc1ef5d77bcc</guid><dc:creator><![CDATA[Alex N. Jose]]></dc:creator><pubDate>Sat, 19 Jan 2019 00:29:52 GMT</pubDate><content:encoded><![CDATA[<p>First add InfluxDB repository:</p>

<pre><code>curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
source /etc/os-release
test $VERSION_ID = "7" &amp;&amp; echo "deb https://repos.influxdata.com/debian wheezy stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
test $VERSION_ID = "8" &amp;&amp; echo "deb https://repos.influxdata.com/debian jessie stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
test $VERSION_ID = "9" &amp;&amp; echo "deb https://repos.influxdata.com/debian stretch stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
</code></pre>
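
<p>The three <code>test</code> lines above simply pick the Debian codename matching the Raspbian release. The same selection can be sketched as a small function (my own refactor, not from the original post):</p>

```shell
# Map a Debian/Raspbian VERSION_ID to the matching InfluxDB repository line.
influx_repo_line() {
  case "$1" in
    7) echo "deb https://repos.influxdata.com/debian wheezy stable" ;;
    8) echo "deb https://repos.influxdata.com/debian jessie stable" ;;
    9) echo "deb https://repos.influxdata.com/debian stretch stable" ;;
    *) return 1 ;;  # unsupported release
  esac
}
```

<p>Usage would be <code>influx_repo_line "$VERSION_ID" | sudo tee /etc/apt/sources.list.d/influxdb.list</code> after sourcing <code>/etc/os-release</code>.</p>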

<p>Then install and start the Telegraf service:</p>

<pre><code>sudo apt-get update 
sudo apt-get install telegraf
sudo service telegraf start
</code></pre>]]></content:encoded></item><item><title><![CDATA[Avoiding require('../../../relative/path') hell in Node.js]]></title><description><![CDATA[<p>A frequent problem that you might run into in your Node.js application is requiring your local dependencies using a relative path notation. I.e., similar to <code>require('../../../config/app')</code>. This can soon become confusing, error prone and a huge productivity loss trying to hit the right level of</p>]]></description><link>http://alexnj.com/blog/avoid-require-relativepath-hell-in-node-js/</link><guid isPermaLink="false">ee7316ec-ef71-4e97-acec-1264ab5c29eb</guid><dc:creator><![CDATA[Alex N. Jose]]></dc:creator><pubDate>Sat, 02 Jun 2018 01:54:09 GMT</pubDate><content:encoded><![CDATA[<p>A frequent problem that you might run into in your Node.js application is requiring your local dependencies using a relative path notation. I.e., similar to <code>require('../../../config/app')</code>. This can soon become confusing, error prone and a huge productivity loss trying to hit the right level of directory structure.</p>

<p>The most elegant way that I have come across so far is an npm package <a href="https://www.npmjs.com/package/module-alias"><code>module-alias</code></a>.</p>

<p>With the package installed, I can then configure aliases like <code>@app</code> and <code>@root</code> that make local dependency paths manageable and much more pleasant to work with.</p>

<p>To configure, install the package as usual:</p>

<pre><code class="language-console">npm i module-alias --save  
</code></pre>

<p>As the first line of your program entrypoint, i.e., <code>index.js</code> or <code>server.js</code>, register the package:</p>

<pre><code class="language-javascript">require('module-alias/register')  
</code></pre>

<p>Aliases are defined in <code>package.json</code>, as follows:</p>

<pre><code class="language-javascript">  "_moduleAliases": {
    "@app": "./src",
    "@root": "."
  },
</code></pre>

<p>This setup now allows you to refer to your local dependencies as:</p>

<pre><code class="language-javascript">const userRoutes = require('@app/routes/user')

// or using import
import User from '@app/models/User'  
</code></pre>

<p>That works for your server-side application. What about the client? Fortunately, this package can also be used with Webpack, which supports aliases via its <code>resolve</code> configuration. An example <code>webpack.config.js</code> snippet is below:</p>

<pre><code class="language-javascript">const packageJson = require('./package.json')

let webpackConfig = {  
  // ...
  resolve: {
    alias: packageJson._moduleAliases
  },
  // ...
}
</code></pre>

<p>This pulls in the same alias definitions from <code>package.json</code>. From that point, you can refer to client-side dependencies also with the same <code>@app/path/to/file</code> syntax.</p>

<p>If you are using <code>less-loader</code> with the Webpack resolver, the syntax for <code>@import</code> is as in the example below (note the leading <code>~</code>, which enables <code>less-loader</code> version <code>4</code> and above to resolve the path using Webpack's resolver):</p>

<pre><code class="language-css">@import (reference) "~@app/styles/mixins/buttons.less";
</code></pre>

<p>Hope that helps. If you are aware of a better pattern, please share via the comments below. </p>]]></content:encoded></item><item><title><![CDATA[Fix for OSX 10.11+ losing Wifi speed after sleep]]></title><description><![CDATA[<p>If your Wi-Fi connection goes slow after a deep sleep following the OSX 10.11 upgrade, the fix is to disable "Wake for network access" in the Energy Saver settings.</p>

<p><img src="http://alexnj.com/blog/content/images/2018/01/Screenshot-2018-01-22-17-32-09.png" alt=""></p>

<p>Uncheck "Wake for Wi-Fi network access" and reboot. Sleep should no longer reduce your Wi-Fi speed. </p>]]></description><link>http://alexnj.com/blog/fix-for-osx-10-11-losing-wifi-speed-on-sleep/</link><guid isPermaLink="false">f277cedd-028e-4756-b02a-7e0f31aef06b</guid><dc:creator><![CDATA[Alex N. Jose]]></dc:creator><pubDate>Tue, 23 Jan 2018 01:33:08 GMT</pubDate><media:content url="http://alexnj.com/blog/content/images/2018/01/image.png" medium="image"/><content:encoded><![CDATA[<img src="http://alexnj.com/blog/content/images/2018/01/image.png" alt="Fix for OSX 10.11+ losing Wifi speed after sleep"><p>If your Wi-Fi connection goes slow after a deep sleep following the OSX 10.11 upgrade, the fix is to disable "Wake for network access" in the Energy Saver settings.</p>

<p><img src="http://alexnj.com/blog/content/images/2018/01/Screenshot-2018-01-22-17-32-09.png" alt="Fix for OSX 10.11+ losing Wifi speed after sleep"></p>

<p>Uncheck "Wake for Wi-Fi network access" and reboot. Sleep should no longer reduce your Wi-Fi speed. </p>]]></content:encoded></item><item><title><![CDATA[Configuring IPTables to allow traffic to Parse with fail2ban]]></title><description><![CDATA[<p>If you have fail2ban installed to protect your server, run the following:</p>

<pre><code>iptables -A IN_public_allow -p tcp -m tcp --dport 1337 -j ACCEPT  
</code></pre>

<p>Once you test the configuration, make the ruleset permanent by:</p>

<pre><code>service iptables save  
</code></pre>]]></description><link>http://alexnj.com/blog/configuring-iptables-to-allow-traffic-to-parse-with-fail2ban/</link><guid isPermaLink="false">c785d3eb-11ab-4fac-9561-4aab5c9c9aff</guid><dc:creator><![CDATA[Alex N. Jose]]></dc:creator><pubDate>Thu, 04 May 2017 15:02:23 GMT</pubDate><content:encoded><![CDATA[<p>If you have fail2ban installed to protect your server, run the following to allow traffic to Parse's port <code>1337</code>:</p>

<pre><code>iptables -A IN_public_allow -p tcp -m tcp --dport 1337 -j ACCEPT  
</code></pre>

<p>Once you have tested the configuration, make the ruleset permanent:</p>

<pre><code>service iptables save  
</code></pre>]]></content:encoded></item><item><title><![CDATA[Install Fail2Ban on Ubuntu and CentOS]]></title><description><![CDATA[<h3 id="ubuntu">Ubuntu</h3>

<p>Install Fail2Ban.  </p>

<pre><code>sudo apt-get update  
sudo apt-get install fail2ban  
</code></pre>

<p>Review configuration at <code>/etc/fail2ban/jail.conf</code> to see if you want to modify any default values.</p>

<p>Create a local jail config as below:</p>

<pre><code>vim /etc/fail2ban/jail.local  
</code></pre>

<p>with the following contents:  </p>

<pre><code>[ssh-iptables]

enabled  = true  
filter   = sshd  
action   = iptables[</code></pre>]]></description><link>http://alexnj.com/blog/install-fail2ban-on-ubuntu-and-centos/</link><guid isPermaLink="false">92d5033e-442b-4dd5-9e37-c565f6edc2e6</guid><dc:creator><![CDATA[Alex N. Jose]]></dc:creator><pubDate>Wed, 03 May 2017 05:11:18 GMT</pubDate><content:encoded><![CDATA[<h3 id="ubuntu">Ubuntu</h3>

<p>Install Fail2Ban.  </p>

<pre><code>sudo apt-get update  
sudo apt-get install fail2ban  
</code></pre>

<p>Review configuration at <code>/etc/fail2ban/jail.conf</code> to see if you want to modify any default values.</p>

<p>Create a local jail config as below:</p>

<pre><code>vim /etc/fail2ban/jail.local  
</code></pre>

<p>with the following contents:  </p>

<pre><code>[ssh-iptables]

enabled  = true  
filter   = sshd  
action   = iptables[name=SSH, port=ssh, protocol=tcp]  
logpath  = /var/log/auth.log  
maxretry = 5  
</code></pre>

<p>Start Fail2Ban service:  </p>

<pre><code>sudo service fail2ban start  
</code></pre>

<p>To see which IPs are blocked:  </p>

<pre><code>iptables -L -n  
</code></pre>

<h3 id="centos">CentOS</h3>

<p>Install Fail2Ban.  </p>

<pre><code>yum install epel-release  
yum install fail2ban  
</code></pre>

<p>Review configuration at <code>/etc/fail2ban/jail.conf</code> to see if you want to modify any default values.</p>

<p>Create a local jail config as below:</p>

<pre><code>vim /etc/fail2ban/jail.local  
</code></pre>

<p>with the following contents:  </p>

<pre><code>[ssh-iptables]

enabled  = true  
filter   = sshd  
action   = iptables[name=SSH, port=ssh, protocol=tcp]  
logpath  = /var/log/secure  
maxretry = 5  
</code></pre>

<p>Start Fail2Ban service:  </p>

<pre><code>chkconfig --level 23 fail2ban on  
service fail2ban start  
</code></pre>

<p>To see which IPs are blocked:  </p>

<pre><code>iptables -L -n  
</code></pre>]]></content:encoded></item><item><title><![CDATA[Keeping Docker container time in sync with host on OSX]]></title><description><![CDATA[<p>Docker containers on your Mac can get their time skewed over the time your Mac sleeps and wakes up. This happens because Docker doesn't sync container time with host hardware clock on wake up. This can be a nag when you are dealing with an API server like Amazon S3,</p>]]></description><link>http://alexnj.com/blog/keeping-docker-container-time-in-sync/</link><guid isPermaLink="false">a9fe867f-8c2a-4e9b-9ffa-0a4c91d16ce0</guid><dc:creator><![CDATA[Alex N. Jose]]></dc:creator><pubDate>Thu, 23 Mar 2017 01:17:52 GMT</pubDate><content:encoded><![CDATA[<p>Docker containers on your Mac can get their time skewed as your Mac sleeps and wakes up. This happens because Docker doesn't sync the container clock with the host hardware clock on wake-up. This can be a nag when you are dealing with an API server like Amazon S3, which requires requests to be signed with a timestamp and a very short expiry. </p>

<p>Fortunately, the fix isn't that hard with the <code>sleepwatcher</code> Homebrew package. </p>

<pre><code>brew install sleepwatcher  
brew services start sleepwatcher  
echo /usr/local/bin/docker run --rm --privileged \  
    node:argon hwclock -s &gt; ~/.wakeup
chmod +x ~/.wakeup  
</code></pre>
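
<p>For clarity, the <code>echo</code> above leaves <code>~/.wakeup</code> containing a single command line. You can reproduce it against a temporary file to see exactly what gets written:</p>

```shell
# Reproduce the ~/.wakeup contents against a temp file (demonstration only).
wakeup=$(mktemp)
echo /usr/local/bin/docker run --rm --privileged \
    node:argon hwclock -s > "$wakeup"
cat "$wakeup"
```

<p><code>sleepwatcher</code> then executes this script on every wake, running <code>hwclock -s</code> in a privileged container to re-sync the clock.</p>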

<p>What does this do? Using <code>sleepwatcher</code>, we schedule a forced <code>hwclock -s</code> sync each time the Mac wakes up from sleep. Here I'm using <code>node:argon</code> as the image name. Be sure to replace that with the image you are using.</p>]]></content:encoded></item><item><title><![CDATA[Ember: Using RESTAdapter with a nested API endpoint to POST a new record]]></title><description><![CDATA[<p>Problem statement: <br>
1. You are using Ember Data and have two models, say <code>Blog</code> and <code>Post</code> where <code>Post</code> has defined a <code>DS.belongsTo('Blog')</code> relationship. <br>
2. Your API endpoint is nested, i.e., the endpoint to create a post is taking a POST request to <code>/blog/:blogId/posts</code>. <br>
3. You</p>]]></description><link>http://alexnj.com/blog/ember-using-restadapter-to-create-records-to-nested-api-endpoint/</link><guid isPermaLink="false">d819e6a4-9a28-4e1a-b39d-994892cf35e2</guid><dc:creator><![CDATA[Alex N. Jose]]></dc:creator><pubDate>Sun, 05 Mar 2017 12:29:31 GMT</pubDate><content:encoded><![CDATA[<p>Problem statement: <br>
1. You are using Ember Data and have two models, say <code>Blog</code> and <code>Post</code> where <code>Post</code> has defined a <code>DS.belongsTo('Blog')</code> relationship. <br>
2. Your API endpoint is nested, i.e., the endpoint to create a post is taking a POST request to <code>/blog/:blogId/posts</code>. <br>
3. You are using <code>RESTAdapter</code>.</p>

<p>The cleanest solution is to create an adapter override at <code>app/adapters/post.js</code> with the appropriate method to provide the correct endpoint URL. See the sample below:</p>

<pre><code>import ApplicationAdapter from './application';

export default ApplicationAdapter.extend({  
    urlForCreateRecord(modelName, snapshot) {
        let blogID = snapshot.belongsTo('blog').id;
        return `/${this.namespace}/blogs/${blogID}/posts`;
    }
});
</code></pre>

<p>Here we are using the <code>snapshot</code> object to get the parent model and its ID to construct the URL. This is about the most minimal override I could find, compared to overriding <code>createRecord</code> entirely, for example. Hope it helps!</p>]]></content:encoded></item><item><title><![CDATA[Using multiple SSH identities with Git]]></title><description><![CDATA[<p>Starting with version 2.10, Git allows the SSH command used for a single repo to be configured within the repository configuration.</p>

<p>Within the repo, run:  </p>

<pre><code>git config core.sshCommand "ssh -i ~/.ssh/id_rsa.github.alexnjose -F /dev/null"  
</code></pre>

<p>For initial clone of the repo, you can make use</p>]]></description><link>http://alexnj.com/blog/using-multiple-ssh-identities-with-git/</link><guid isPermaLink="false">bcab75f2-8009-459b-a0c6-e024d686b5a3</guid><dc:creator><![CDATA[Alex N. Jose]]></dc:creator><pubDate>Thu, 24 Nov 2016 18:20:05 GMT</pubDate><content:encoded><![CDATA[<p>Starting with version 2.10, Git allows the SSH command used for a single repo to be configured within the repository configuration.</p>

<p>Within the repo, run:  </p>

<pre><code>git config core.sshCommand "ssh -i ~/.ssh/id_rsa.github.alexnjose -F /dev/null"  
</code></pre>
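
<p>To confirm the setting took, you can read it back with <code>git config --get core.sshCommand</code>. A self-contained demonstration in a throwaway repository (temporary path, for illustration only):</p>

```shell
# Demonstrate setting and reading back core.sshCommand in a throwaway repo.
repo=$(mktemp -d)
git init -q "$repo"
git -C "$repo" config core.sshCommand "ssh -i ~/.ssh/id_rsa.github.alexnjose -F /dev/null"
git -C "$repo" config --get core.sshCommand
```

<p>The last command prints back the configured SSH command, which Git will use for all fetches and pushes in that repository.</p>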

<p>For the initial clone of the repo, you can make use of the <code>GIT_SSH_COMMAND</code> environment variable:</p>

<pre><code>GIT_SSH_COMMAND="ssh -i ~/.ssh/id_rsa.github.alexnjose -F /dev/null" \  
    git clone git@github.com:alexnj/dotfiles.git
</code></pre>]]></content:encoded></item><item><title><![CDATA[Self-hosting a Continuous Integration server using Drone.]]></title><description><![CDATA[<p>This article will help you set up a self-hosted continuous integration and delivery server using <a href="http://try.drone.io/">Drone</a>.</p>

<p><strong>What you'll need:</strong></p>

<ol>
<li>A DigitalOcean account.  </li>
<li>A GitHub or BitBucket account. For this article's purpose I'm going to assume your code is on GitHub. </li>
</ol>

<p>Once this setup is complete, you will be able to import</p>]]></description><link>http://alexnj.com/blog/hosting-your-own-drone-ci-instance/</link><guid isPermaLink="false">8b52b73f-6b5d-4dce-a7b7-b3ee6680a796</guid><dc:creator><![CDATA[Alex N. Jose]]></dc:creator><pubDate>Tue, 01 Nov 2016 09:20:58 GMT</pubDate><media:content url="http://alexnj.com/blog/content/images/2016/11/Screenshot-2016-11-01-02-27-00.png" medium="image"/><content:encoded><![CDATA[<img src="http://alexnj.com/blog/content/images/2016/11/Screenshot-2016-11-01-02-27-00.png" alt="Self-hosting a Continuous Integration server using Drone."><p>This article will help you set up a self-hosted continuous integration and delivery server using <a href="http://try.drone.io/">Drone</a>.</p>

<p><strong>What you'll need:</strong></p>

<ol>
<li>A DigitalOcean account.  </li>
<li>A GitHub or BitBucket account. For this article's purpose I'm going to assume your code is on GitHub. </li>
</ol>

<p>Once this setup is complete, you will be able to import your projects into the newly set-up CI server and have it automatically build and deploy your commits.</p>

<h2 id="1createandconfigureadigitaloceandroplet">1. Create and configure a DigitalOcean droplet.</h2>

<p>Use the DigitalOcean UI to create an <code>Ubuntu 14.04</code> droplet. For demonstration purposes, I'm going to use the smallest instance available. For real-life use cases, depending on the size and computational needs of your builds, feel free to pick a larger box.</p>

<p><img src="http://alexnj.com/blog/content/images/2016/07/Screen-Shot-2016-07-25-at-10-27-30-PM.png" alt="Self-hosting a Continuous Integration server using Drone."></p>

<p>Make sure to check the <code>SSH Keys</code> section and provide your local development box's public SSH key (<code>id_rsa.pub</code> contents) to DigitalOcean so you can log in as <code>root</code> without a password. Password-less login is only a preference here, but it is highly recommended for added security. </p>

<p>Through the rest of the article, I will be referring to the public IP address of this droplet as <code>DROPLET_IP</code>. Anytime you see this, replace it with the public IP of your droplet.</p>

<p>Login to your newly created droplet with:  </p>

<pre><code>ssh root@DROPLET_IP  
</code></pre>

<p>PS: Please follow the guide at <a href="https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-14-04">https://www.digitalocean.com/community/tutorials/how-to-add-swap-on-ubuntu-14-04</a> to set up swap memory on the droplet. This will help you run more sizable builds and keep the droplet from running out of memory during builds.</p>

<h2 id="2installdocker">2. Install Docker</h2>

<p>Drone ships as a single Docker image, so we first need to install Docker on the droplet. Start by updating your box:</p>

<pre><code>sudo apt-get update  
sudo apt-get -y upgrade  
</code></pre>

<p>Add docker repository key to apt-key for package verification:</p>

<pre><code>sudo apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D  
</code></pre>

<p>Add the docker repository to Apt sources:</p>

<pre><code>echo "deb https://apt.dockerproject.org/repo ubuntu-trusty main" | sudo tee /etc/apt/sources.list.d/docker.list  
</code></pre>

<p>Update the repository with the new addition:</p>

<pre><code>sudo apt-get update  
</code></pre>

<p>Finally, download and install docker:</p>

<pre><code>sudo apt-get -y install docker-engine  
</code></pre>

<h2 id="3setupanapplicationidongithub">3. Setup an Application ID on GitHub.</h2>

<p>Drone needs an application ID registered with one of the supported code hosting providers to be able to integrate and execute builds. For more information, read <a href="http://readme.drone.io/setup/remotes/">Remote Drivers</a> on Drone Wiki.</p>

<p>For the purpose of this article, I'm going to use GitHub. </p>

<p><img src="http://alexnj.com/blog/content/images/2016/11/Screenshot-2016-11-01-02-03-19.png" alt="Self-hosting a Continuous Integration server using Drone."></p>

<p>Go to GitHub's <code>Settings</code> > <code>OAuth applications</code> and provide a name for your CI server, the publicly accessible URL and an authorization callback URL. The most important one here is the <code>Authorization callback URL</code>. Make sure you provide <code>http://${ip-or-hostname-of-droplet}/authorize</code> as the URL here.</p>

<p>Once the application is created, note down the <code>Client ID</code> and <code>Client Secret</code> parameters of the application. We'll need them in the next step.</p>

<h2 id="4installdrone">4. Install Drone.</h2>

<p>Drone ships as a single binary file and is distributed as a minimalist 20 MB Docker image. Download the official Drone image from DockerHub:</p>

<pre><code>sudo docker pull drone/drone:0.4  
</code></pre>

<p>Create a <code>/etc/drone/dronerc</code> file. Docker will use this file to load environment variables from disk. </p>

<pre><code>mkdir /etc/drone  
vim /etc/drone/dronerc  
</code></pre>

<p>Please note these variables should never be quoted. Use the Client ID and Client Secret from the previous step here. Enter the following, replacing the placeholders with the appropriate values:</p>

<pre><code>REMOTE_DRIVER=github  
REMOTE_CONFIG=https://github.com?client_id=....&amp;client_secret=....  
</code></pre>

<p>Create and run the Drone Container:</p>

<pre><code>sudo docker run \  
    --volume /var/lib/drone:/var/lib/drone \
    --volume /var/run/docker.sock:/var/run/docker.sock \
    --env-file /etc/drone/dronerc \
    --restart=always \
    --publish=80:8000 \
    --detach=true \
    --name=drone \
    drone/drone:0.4
</code></pre>

<p>At this point, if you direct your browser to the IP/hostname of your droplet, you should see Drone running and ready to set up your first project. Congratulations!</p>