I have enjoyed the benefits of CI/CD for years at work, and having recently modernized my website I realized just how many cycles I was spending building and deploying files. Humans are notoriously bad at mundane tasks such as these; fortunately the robots at GitHub are well suited for this type of soul-crushing thing.
An infrequently touched project – like a personal website – is one that's forgotten every time it's put down and re-learned every time it's picked back up. Automating allows me to be more of a customer of my own apps, rather than a confused developer. That better developer experience nets me less friction over the life of the project, and easier deployments mean more time spent writing and less time remembering "oh right, I have to do that one weird thing with file permissions in that spot over there." Let's do it once, do it right, and even write a little documentation while we're at it.
My hope is that I might encourage you to occasionally dive deep into automation to better set up longer-term projects for success.
The Flow
- New blog posts (or new features, or fixes, or whatever) are started by creating a feature branch based off the `staging` branch.
- Merges into the `staging` branch kick off the "Deploy to Staging" workflow, which builds and publishes my site to my staging server.
- Once things are confirmed looking good-and-not-broken in the `staging` environment, I'm ready to go live.
- I kick off the "Deploy to Production" workflow once the `prod` branch is brought up to date with `staging`, which builds and publishes my site changes live to the internet.
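That flow can be rehearsed with plain git commands. Here's a sketch against a throwaway repository (branch and commit names are hypothetical; in practice the merge into `staging` happens via a pull request):

```shell
# Rehearse the branch flow in a throwaway repository.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email "ci@example.com" && git config user.name "CI Example"
git commit -q --allow-empty -m "initial commit"
git branch staging   # long-lived integration branch
git branch prod      # long-lived release branch

# 1. Start work on a feature branch based off staging.
git checkout -q -b post/new-blog-post staging
git commit -q --allow-empty -m "draft: new blog post"

# 2. Merge into staging (normally via a PR); "Deploy to Staging" would run here.
git checkout -q staging
git merge -q --no-ff post/new-blog-post -m "merge: new blog post"

# 3. Staging looks good: bring prod up to date; "Deploy to Production" runs.
git checkout -q prod
git merge -q --ff-only staging
git log -1 --format=%s   # prints: merge: new blog post
```

The `--ff-only` merge into `prod` is a deliberate choice: it guarantees production only ever points at a commit that already went through staging.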
Easy, right? Let's get automating.
Getting Set Up
In this approach I'm leveraging the tried-and-true reliability of FTPS. To get deploys to play nice with GitHub Actions I'm using an action called `FTP-Deploy-Action`. This requires a few pieces of configuration like an FTP address, username, and password, among other things. So the first thing we'll do is set ourselves up with a dedicated FTP user, being mindful of security. Next we'll store some GitHub Secrets to ensure our credentials stay safe, and then it's time to configure the workflow file. From there, it's all about testing, testing, testing, and then testing some more.
Keys to the Kingdom: Create FTP User + Access
It's important to consider security implications at this step. It is highly recommended to create a dedicated, limited-access user for this use case so that damage can be minimized should disaster occur. I opted to create an FTP user on my hosting account which only has read/write access to the folders containing the `staging` and `production` environments for my site, and no access anywhere else. If these credentials should leak, a malicious actor should (theoretically) only have access to my staging and production sites, which are easy enough to redeploy.
If you have the option to set the root directory for this user, take special note of that. I imagine it would be extremely frustrating to start writing server paths based on the logical root, when in fact this FTP user's root started at the `/domains` level. We can only wonder how frustrating that might be. 😅
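A toy model makes the gotcha concrete: an upload path is effectively the FTP user's root plus the `server-dir` you configure, so the same target requires different `server-dir` values depending on where that root begins (all paths below are made up):

```shell
# Toy model: an upload path is the FTP user's root plus the server-dir.
# (All paths here are hypothetical.)
resolve() { printf '%s%s\n' "$1" "$2"; }

# Account-wide user rooted at "/": write the full logical path.
resolve "/" "domains/example.com/staging/html/"
# prints: /domains/example.com/staging/html/

# Restricted deploy user already rooted at /domains: the leading /domains
# must be dropped from server-dir, or files land in the wrong spot.
resolve "/domains/" "example.com/staging/html/"
# prints: /domains/example.com/staging/html/
```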
Keep it Secret: GitHub Actions Secrets
Now that we have some super secret credentials, let's store them in a safe place: GitHub Actions Secrets. This allows for the creation and retrieval of encrypted sensitive information for use in GitHub Actions workflows. I've opted to set up two secrets called `FTP_USERNAME` and `FTP_PASSWORD`.
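If you prefer the terminal to the repo settings page, the GitHub CLI can store these too. A sketch (the repo name is hypothetical, and the `run` wrapper just prints each command so this reads as a safe dry run):

```shell
# Dry-run wrapper: print the commands instead of executing them.
run() { printf '+ %s\n' "$*"; }

# `gh secret set` encrypts the value before uploading it to the repo.
run gh secret set FTP_USERNAME --repo you/your-site --body "deploy-bot"
run gh secret set FTP_PASSWORD --repo you/your-site   # prompts for the value
```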
Autobots Assemble: GitHub Actions
Finally, the fun part. Here's where we'll set up workflow files, and get to testing. The following is my `staging` workflow:
Pro tip: If you're not thrilled about polluting your commit history with a bunch of CI development commits, one strategy suggested on StackOverflow is to fork your repo and do all of your CI/CD workflow testing in the forked repository. Then, when things are working to your satisfaction, you can pick up the `.yaml` workflow files and bring them back to your project.
`ci-staging.yml`

```yaml
name: Continuous Integration (Staging)

on:
  workflow_dispatch:
    branches:
      - staging
  pull_request:
    types:
      - closed
    branches:
      - staging

jobs:
  ftp-deploy:
    name: Deploy to Staging
    if: github.event.pull_request.merged == true || contains(github.event_name, 'workflow_dispatch')
    runs-on: ubuntu-latest
    steps:
      - name: Start deployment
        uses: bobheadxi/deployments@v1 # pin whichever release you've vetted
        id: deployment
        with:
          step: start
          token: ${{ secrets.GITHUB_TOKEN }}
          env: staging

      - name: Checkout latest code
        uses: actions/checkout@v4

      - name: Install dependencies
        run: npm ci

      - name: Build Gatsby
        run: npm run build

      - name: Sync files to staging
        uses: SamKirkland/[email protected]
        with:
          server: ftp.server-address.com
          username: ${{ secrets.FTP_USERNAME }}
          password: ${{ secrets.FTP_PASSWORD }}
          protocol: ftps
          local-dir: ./public/ # must end with trailing slash
          server-dir: /path/to/staging/html/ # must end with trailing slash
          dry-run: false # VERY useful; set to true to rehearse a deploy

      - name: Update deployment status
        uses: bobheadxi/deployments@v1
        if: always()
        with:
          step: finish
          token: ${{ secrets.GITHUB_TOKEN }}
          status: ${{ job.status }}
          env: ${{ steps.deployment.outputs.env }}
          deployment_id: ${{ steps.deployment.outputs.deployment_id }}
```
Let's walk through the bits and pieces.
We'll set the name of the workflow + job that shows up in GitHub Actions with the `name` key.

```yaml
name: Continuous Integration (Staging)
```
The `on` key determines which events this workflow responds to. Here I'm setting up two types of events; the first is `workflow_dispatch`, which means I'm able to run this workflow manually based off the `staging` branch. I've also set it up to watch for the `pull_request` event, specifically the `closed` type of `pull_request`s on the `staging` branch.
```yaml
on:
  workflow_dispatch:
    branches:
      - staging
  pull_request:
    types:
      - closed
    branches:
      - staging
```
The `jobs` key configures the jobs, which are composed of steps. These steps are the individual units of work that will take place, like checking out the repo and building the production code.

I've set up one job here called `ftp-deploy`, and have configured the `if` conditional property to skip this job when the expression evaluates to `false`. Here I'm testing whether a `pull_request` event's `merged` property equals `true`, or whether the dispatching event name contains `workflow_dispatch`. Basically this allows me to run the workflow manually, or, when the dispatching event is a `pull_request`, only run if the `merged` property is `true` (in other words: don't run on closed-but-not-merged PRs).
```yaml
jobs:
  ftp-deploy:
    name: Deploy to Staging
    if: github.event.pull_request.merged == true || contains(github.event_name, 'workflow_dispatch')
    runs-on: ubuntu-latest
    steps: ...
```
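The gate's logic, translated into a little shell function for illustration (the names mirror the workflow expression; this is not how Actions evaluates it internally):

```shell
# should_deploy EVENT_NAME MERGED -> succeeds when the job should run.
# Mirrors: github.event.pull_request.merged == true
#          || contains(github.event_name, 'workflow_dispatch')
should_deploy() {
  [ "$2" = "true" ] || [ "$1" = "workflow_dispatch" ]
}

should_deploy pull_request true       && echo "merged PR: deploy"
should_deploy workflow_dispatch false && echo "manual run: deploy"
should_deploy pull_request false      || echo "closed without merge: skip"
```

All three echoes fire: the first two cases pass the gate, and the third (a PR closed without merging) is skipped.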
Now for the `steps`:

The `deployments` package allows me to hook into GitHub's Environments functionality, which is helpful for quickly validating the status of each of my environments. This step starts a deployment run, which is visually indicated on GitHub. We'll close out the workflow file with a final update to this deployment status.
```yaml
- name: Start deployment
  uses: bobheadxi/deployments@v1
  id: deployment
  with:
    step: start
    token: ${{ secrets.GITHUB_TOKEN }}
    env: staging
```
`Checkout` is an action by GitHub which allows the runner to check out my code from my repository.
```yaml
- name: Checkout latest code
  uses: actions/checkout@v4
```
Next I install the project dependencies with `npm ci`. (Note the clean install versus a plain `npm install`; this is an attempt to stick as close to `package-lock.json` as possible.)
```yaml
- name: Install dependencies
  run: npm ci
```
My site is powered by Gatsby, so now that I've got dependencies installed it's time for `npm run build`, which runs a `gatsby build` behind the scenes to build the `/public` folder.
```yaml
- name: Build Gatsby
  run: npm run build
```
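Under the hood, `npm run build` just looks the command up in the `scripts` map of `package.json`. A minimal sketch with a hypothetical toy project (no npm needed, we just read the mapping back out):

```shell
# `npm run build` resolves to whatever the package.json scripts map says.
dir=$(mktemp -d)
cat > "$dir/package.json" <<'EOF'
{
  "name": "toy-site",
  "scripts": {
    "build": "gatsby build",
    "develop": "gatsby develop"
  }
}
EOF

# What would `npm run build` execute? Pull the mapping out with grep.
grep -o '"build": "[^"]*"' "$dir/package.json"
# prints: "build": "gatsby build"
```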
Now the scary fun part: transferring files. Fortunately `FTP-Deploy-Action` has nice quality-of-life features like `dry-run`, `exclude`, and especially the ability to specify a state file. By default the action creates a file called `.ftp-deploy-sync-state.json` at the root of your deploy (and warns you not to remove it). The action reads this file on subsequent deploys to evaluate the changeset, and will only create, update, and delete the files determined to have changed, keeping transfer times efficient.
Also notice that both `username` and `password` are references back to the GitHub Actions Secrets created previously.
```yaml
- name: Sync files to staging
  uses: SamKirkland/[email protected]
  with:
    server: ftp.server-address.com
    username: ${{ secrets.FTP_USERNAME }}
    password: ${{ secrets.FTP_PASSWORD }}
    protocol: ftps
    local-dir: ./public/ # must end with trailing slash
    server-dir: /path/to/staging/html/ # must end with trailing slash
    dry-run: false # VERY useful; set to true to rehearse a deploy
```
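The state-file idea is easy to picture: fingerprint every file, compare against the previous run's fingerprints, and only transfer the differences. A toy version of that idea in shell (file names are hypothetical, and the real action stores richer JSON than this):

```shell
# Toy changeset detection: hash files, diff against the last run's state.
set -e
site=$(mktemp -d)
printf 'home v1'  > "$site/index.html"
printf 'about v1' > "$site/about.html"

fingerprint() {   # one "hash  path" line per file, in a stable order
  (cd "$site" && find . -type f ! -name 'state.txt' | sort | xargs md5sum)
}

fingerprint > "$site/state.txt"        # state from the "previous deploy"

printf 'home v2' > "$site/index.html"  # edit one file before the next deploy

# Changed files = lines whose hash no longer matches the stored state.
fingerprint | diff "$site/state.txt" - | grep '^>'
# prints the new hash line for ./index.html only; about.html is untouched
```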
Finally, we'll update the `deployments` status that we kicked off in step one. This tags the run with its status, commit ID, and environment.
```yaml
- name: Update deployment status
  uses: bobheadxi/deployments@v1
  if: always()
  with:
    step: finish
    token: ${{ secrets.GITHUB_TOKEN }}
    status: ${{ job.status }}
    env: ${{ steps.deployment.outputs.env }}
    deployment_id: ${{ steps.deployment.outputs.deployment_id }}
```
Test, Test, and Test Some More
All that's left to do now is test. My strategy was to test like crazy until I felt really good about the entire process, front to back. I also wrote a fairly hefty README, written as though I would remember absolutely nothing about the process, with easy pointers that let me focus on writing more often.
Future Plans
I've left myself plenty of opportunities to continue to build up functionality from here. I'd love to add:
- Cypress unit testing
- Automated spell-checking
- More customizable pipeline run notifications
The Result
If you're reading these words: it worked! My goal when I started this project was to set myself up with a shiny new CI/CD pipeline, and deploy the documentation of my work with the very pipeline I was building and documenting! Whoa.