On Free Side Projects

How I built a side project in 48 hours on no budget

Owen Kelly
Thursday, August 1, 2019

I built and deployed goodengineering.dev in less than 48 hours from idea to launch, for a total running cost of nothing. The domain cost a small amount of money.

A few weeks ago I found myself pitching a dedicated engineering blog at the cloud consulting company I work for. There are so many of us at Kablamo who have something interesting to say, but our main site was really getting in the way.

I wanted to find a list of engineering blogs I could use to show off the concept. I wasn’t sure what subdomain we should use, if any. I didn’t know if there was a canonical name for this type of site. I still don’t, but the closest is definitely an engineering blog.

Awesome lists

Sure enough, a search for "awesome engineering GitHub" returned a couple of massive awesome lists. Namely github.com/kilimchoi/engineering-blogs and github.com/crispgm/awesome-engineering-blogs. They both helped greatly in answering my questions and in pitching the concept.

I couldn’t help but notice, though, that kilimchoi/engineering-blogs has an OPML file of the blogs. I thought that was cool, but I don’t have a feed reader. And at the time of writing, the repository has about 30 stale outstanding pull requests. Even with fifteen thousand stars, it looks abandoned.

See, since Google Reader shut down I’ve just forgotten about RSS, XML-based spec notwithstanding. I tried Feedly and Digg Reader, but Google had killed Reader at just the right time for me to try something besides RSS. So I moved on.

Fast-forward to the present, and that massive machine-readable list of great (née awesome) engineering blogs was quickly forming into an idea in the back of my head.

I'd just finished this site, a fresh Gatsby setup. It was a great deal of fun writing basically nothing but business logic, and leaving the heavy lifting to Gatsby. I’ve written more than enough Webpack configs to appreciate when someone else does all the work.

What if I took that feed and made a site out of it?

I've also spent enough time with servers and serverless to want to keep the site static.

I build and deploy this site with Netlify. I like the integrated CI/CD, among so many other things. I’m nearly at 10 sites now. I recently hit the Netlify build timeout of fifteen minutes, thanks to a rogue styling package. Never found out the actual bug, just that updating the package fixed it. Living on the edge here.

I have a max of 15 minutes of build time. I couldn’t find any overall time limits, or limits on frequency. Not that it mattered. In reality, I only need the site to update hourly. That requirement alone dramatically simplifies the solution.

I had a chat with some workmates about the right domain. I wanted a good domain to ground the concept in reality. I looked at engineering.blog, blogs.engineering, engineeringblog.dev and blog.dev. They were either taken or just way too expensive for a side project like this. I’m not spending $50+ on a domain for this. Not yet.

Then I saw goodengineering.dev was free. I’m still not happy that .dev was bought by Google and is now a public TLD. But that ship sailed a while ago.

So, I have the domain, and build and deploy are theoretically worked out. Now I need to actually write some code.

The first thing: can I parse an OPML file? Thankfully opml-to-json exists on npm. Honestly, if it didn’t, I may have stopped here. I’m not super keen on parsing XML, even when xml2js exists.
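For context, the OPML step looks roughly like this. I’m going from memory on opml-to-json’s callback-style API and wrapping it in a promise, so treat the exact shape as an assumption; the children / htmlurl / xmlurl fields do line up with what the processing code further down expects.

```typescript
import fs from "fs";
import path from "path";

// opml-to-json ships no type definitions, so require it directly.
const opmlToJSON = require("opml-to-json");

interface OpmlNode {
  title?: string;
  htmlurl?: string; // link to the blog itself
  xmlurl?: string;  // link to its RSS/Atom feed
  children?: OpmlNode[];
}

// Read source.opml and convert it into a nested structure of folders and feeds.
function loadOpml(file: string): Promise<OpmlNode> {
  const xml = fs.readFileSync(path.join(__dirname, file), "utf8");
  return new Promise((resolve, reject) =>
    opmlToJSON(xml, (err: Error | null, json: OpmlNode) =>
      err ? reject(err) : resolve(json)
    )
  );
}
```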

Okay, the OPML file is sorted, but now I need to actually fetch the RSS feeds. I found rss-parser, which does a good enough job of fetching a feed and returning JSON. I’m honestly not sure whether I’m supporting JSON feeds properly yet, but this was a quick build, so I’ll come back to that.
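Using rss-parser looks roughly like this (the feed URL is just an example, not one from the list):

```typescript
import Parser from "rss-parser";

const parser = new Parser();

async function example(): Promise<void> {
  // Fetch and parse a single feed; rss-parser normalises RSS and Atom
  // into a common shape with a title and an items array.
  const feed = await parser.parseURL("https://overreacted.io/rss.xml");

  console.log(feed.title);
  for (const item of feed.items || []) {
    console.log(item.title, item.link, item.isoDate);
  }
}

example().catch(console.error);
```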

I know I’ve got a max of 15 minutes to get the feed fetching and the site build done. And for my own sanity, I really want it to be significantly quicker than that. Not only because I want to be able to add many more sites, but also because a 15-minute dev loop is beyond comical.

Now, Gatsby can source data from many different places: databases, Markdown files, JSON files, anything really. I could download the RSS into a database, or perhaps into a single file.

Ultimately I stayed the course of pragmatism and tried the simplest thing I could think of that worked. Download them all, in parallel. In a Promise.all.

goodengineering.dev/feeds/process.ts

```typescript
// Fetch every feed in parallel and write each one to disk as JSON.
return await Promise.all(
  opml.children
    .map(
      folder =>
        folder.children &&
        folder.children.map(async feed => {
          try {
            const url = new URL(feed.htmlurl);
            const items = await parser.parseURL(feed.xmlurl);

            await fs.mkdirSync(path.join(__dirname, output, url.hostname), {
              recursive: true,
            });

            await fs.writeFileSync(
              path.join(__dirname, output, url.hostname, "feed.json"),
              JSON.stringify({ ...feed, ...items }, null, 2),
              { encoding: "utf8" }
            );
            return true;
          } catch (err) {
            // If this blog continues to error, we'll remove it
            console.error("FAILED: ", feed.xmlurl);
            return true;
          }
        })
    )
    .reduce((a, b) => a.concat(b), []) // if only I could .flat()
);
```

Putting it all together

Not only did it work, but using the filesystem as a database (my favourite thing for static sites) made it really easy to inspect people’s feeds. That became necessary because, shock, not everyone follows the spec.

Some have small excerpts. Some have the full article. Others have nothing.

Sometimes there’s an author, sometimes not.

Honestly, RSS is kind of a mess in 2019. Pick a standard and stick to it. Or make a new one; I’m sure there’s room for another way to syndicate blog posts.

I digress.

Thinking it through, I don’t need any of that junk anyway. I only need a few key pieces of information to know if I want to read an article.

The title. The domain. How long ago it was published. And because I have it for all of them, the title of the feed.
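In TypeScript terms, the shape I care about per post is tiny; something like this (the field names here are mine, not necessarily what ends up in the JSON):

```typescript
// Hypothetical minimal shape for a post on the front page.
interface FrontPagePost {
  feedTitle: string;   // title of the blog the post came from
  title: string;       // post title
  domain: string;      // derived from the post's link, e.g. "overreacted.io"
  publishedAt: string; // ISO date string, rendered as "how long ago"
}
```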

From there it’s pretty easy. A bit of messing with Gatsby to load all the JSON files. Some playing around in React to get a design I like.
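Concretely, the Gatsby side is mostly just pointing gatsby-source-filesystem at the directory of feed.json files and letting gatsby-transformer-json turn them into queryable nodes. A rough sketch of that config; the feeds/output path is an assumption about where the processing script writes, not necessarily the real directory name:

```javascript
// gatsby-config.js (sketch)
module.exports = {
  plugins: [
    {
      resolve: "gatsby-source-filesystem",
      options: {
        name: "feeds",
        // Wherever the processing script writes its feed.json files.
        path: `${__dirname}/feeds/output`,
      },
    },
    // Turns each feed.json into nodes that GraphQL queries can read.
    "gatsby-transformer-json",
  ],
};
```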

Oh, there was one other thing. I'm downloading the full feed of each site, but I don't want one site taking every spot in the feed. To combat this, I only let Gatsby import the three most recent posts from each feed. Then, sorting by date, I take the most recent 100 posts from everything.
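That capping logic boils down to something like this; a sketch, not the actual code, assuming each post carries an isoDate:

```typescript
// Keep only the three newest posts per feed, then the 100 newest overall,
// so no single blog can flood the front page.
function frontPage<T extends { isoDate: string }>(
  feeds: T[][],
  perFeed = 3,
  total = 100
): T[] {
  const newestFirst = (a: T, b: T) =>
    new Date(b.isoDate).getTime() - new Date(a.isoDate).getTime();

  return feeds
    .map(posts => [...posts].sort(newestFirst).slice(0, perFeed))
    .reduce((acc, posts) => acc.concat(posts), [] as T[]) // flatten
    .sort(newestFirst)
    .slice(0, total);
}
```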

I used the IBM Plex fonts that were recently released to the world on Google Fonts. I like them. I had never used anything from IBM (to my knowledge) until now. But I’m very happy and thankful they were released.

I reused the scaffolding I built for this site; it’s got the same light and dark themes, with a switcher.

By this point I’ve got it all working on my computer, and I’m about a day in.

I hooked up Netlify and the domain, and boom, the site’s live. Netlify allows you to trigger a build by POSTing to a URL. I briefly considered Zapier, but again I didn’t want to spend money maintaining this site.
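A Netlify build hook is just a URL that starts a build when you POST to it. I only need something to hit that URL on a schedule; the request itself is as simple as the sketch below, where the hook ID is a placeholder:

```typescript
import fetch from "node-fetch";

// Placeholder hook ID; the real one lives in the Netlify site settings.
const BUILD_HOOK = "https://api.netlify.com/build_hooks/REPLACE_ME";

async function triggerBuild(): Promise<void> {
  // An empty POST is all it takes to kick off a new build and deploy.
  const res = await fetch(BUILD_HOOK, { method: "POST" });
  if (!res.ok) {
    throw new Error(`Build hook returned ${res.status}`);
  }
}

triggerBuild().catch(console.error);
```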

I pulled up IFTTT, which has changed a lot since the last time I was on it. Even so, a few minutes later I had a trigger set up to fire on the hour.

And it’s done.

The site rebuilds from the tip of master, on the hour, and if the build succeeds it deploys.

Yeah I am pretty happy this all worked.

Of course, there were a few obvious bugs to fix. I am using a relative timestamp on the articles, but the site only rebuilds on the hour. So "a few seconds ago" isn’t a valid timestamp at 10:05am. The fix here is really simple: just bump up the granularity. Instead of "a few seconds ago", it now says "within the last hour", and goes up in increments of at least an hour from there.
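The coarser relative timestamp is just a matter of rounding; a minimal sketch of the idea, not the exact code:

```typescript
// Round relative timestamps to at least an hour, since the site only rebuilds hourly.
function relativeTime(published: Date, now: Date = new Date()): string {
  const hours = Math.floor((now.getTime() - published.getTime()) / (1000 * 60 * 60));

  if (hours < 1) return "within the last hour";
  if (hours < 24) return `${hours} hour${hours === 1 ? "" : "s"} ago`;

  const days = Math.floor(hours / 24);
  return `${days} day${days === 1 ? "" : "s"} ago`;
}
```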

Then I realised I really want to be able to distinguish Personal and Company blogs; both are valuable to me. Personal blogs are kind of special, because they're almost never trying to sell you something.

Adding a menu and a few pages with slightly different queries, plus the small changes to the feed reader itself, took maybe an hour.

After that, I had to fix a few styles to be properly responsive. But that was it.

And we're done

Is there more I could do?

Yep.

It could use an apple-touch-icon and a favicon while I’m at it. Much of the code is messy, and just dumped. That could be neater.

But also, it works.

And it really doesn’t need to do anything other than what it’s doing right now.

The only thing that really needs to change is the source.opml file.

I'm probably going to add or fix things over time, but I don’t need to.

This is all to say that 2019 is very different from 2006, when I started playing with this stuff. The incredible amount of free code and services makes this so easy. It’s not all easy; I still work on complex front ends and back ends for real money.

But to see an idea become a real thing that I’m using every day.

It’s just nice.

OH!

And also: add your personal or company blog (or both) to the site via a Pull Request.

Unless it’s spammy or really unsuitable, I’ll merge it.

— OK