

We have just launched Custom Domains V2, and I’m going to share all the technical details with you. The highs and the lows, what I’ve learned, and how to do it yourself. The end result is a highly available and globally fast infrastructure. Our customers love it, and so do we. But let’s go back to the start of this story.


I run a simple, ethical alternative to Google Analytics. One of the most common things people hate about analytics, outside of them being too complex to understand, is that their scripts get blocked by ad-blockers. Understandably, Google Analytics gets blocked by every single ad-blocker on the planet. Google has so many privacy scandals, and a lot of people are scared to send them their visitors’ data. But Fathom is privacy-focused, stores zero customer data, and therefore shouldn’t be treated the same way. So we had to introduce a feature called “custom domains” where our users could point their own subdomain (e.g. wealdstoneraider.ronnie-pickering.com) to our Laravel Vapor application. If you’re not familiar with Vapor, it’s a product built by the Laravel team that allows you to deploy your applications to serverless infrastructure on AWS (I actually have a course on it, that’s how much I love Vapor).


AWS’ services don’t make this custom domain task very easy. If you’re using the Application Load Balancer (ALB), you’re limited to 25 different domains. And if you’re using AWS API Gateway, you get a few hundred, but AWS won’t let you go much higher with that limit. So we were in a position where we really needed to think out of the box.


#Attempt 1: Vapor with support from Forge


The first thing I did to try and implement custom domains was to create one environment within Vapor for each custom domain. An environment in Vapor is typically something like “production” or “staging”, but I decided otherwise. Instead, we would have “miguel-piedrafita-com” and “pjrvs-com” as our environments. It sounds absolutely stupid in hindsight, but it worked at the time and I thought it was pretty cutting edge.


I built the whole thing out by looking into the Vapor source code on GitHub and playing with the API. I’m not going to share my code because it’s pointless now, but it consisted of the following steps:

1. User enters their custom domain in the frontend
2. We run a background task to add their domain to Route53 via Vapor
3. We initiate an SSL certificate request via Vapor
4. We email the CNAME changes required for the SSL certificate
5. The user makes the changes
6. We check every 30 minutes to see if the SSL certificate has been issued by AWS
7. When the certificate is valid, we send an API request to Forge (another Laravel service for deployment), which packages up a separate Laravel application and deploys it to Vapor as a new environment
8. Forge then pings our Vapor set-up and says “We’re all done here”
9. Vapor then checks the subdomain. Woohoo, it works, so we email the user telling them it’s ready to go

This never made it to production, and I shamefully deleted all of the code.


#Attempt 2: Forge proxy servers


After I realized that the first attempt just wasn’t going to scale, I went back to the drawing board. I accepted that I would have to have a proxy layer and, to Chris Fidao’s joy, I added EC2 servers to our infrastructure set-up.


I provisioned a load balancer via Forge and put two servers behind it. The plan was that I would create an SSL certificate & add a site to each server via Forge’s API, so if one server went offline, the other would take its place. The whole thing was going to run on NGINX, and our only real “limitation” would be Forge’s API rate limits. Easy peasy.
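
For the curious, this is roughly what those calls look like. It’s only a sketch (the server ID, API token location and domain are placeholders, and you should check Forge’s API documentation for the current endpoints and fields), but it shows how little code the approach needed:

```php
use Illuminate\Support\Facades\Http;

// Sketch only: placeholder server ID, token location and domain
$serverId = 123;
$domain = 'wealdstoneraider.ronnie-pickering.com';

$forge = Http::withToken(config('services.forge.token'))
    ->baseUrl('https://forge.laravel.com/api/v1');

// 1. Add the site to one of the proxy servers
$site = $forge->post("/servers/{$serverId}/sites", [
    'domain'       => $domain,
    'project_type' => 'php',
    'directory'    => '/public',
])->json()['site'];

// 2. Request a Let's Encrypt certificate for the new site
$forge->post("/servers/{$serverId}/sites/{$site['id']}/certificates/letsencrypt", [
    'domains' => [$domain],
]);
```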


So I had this all built out. It was unbelievably easy. Forge’s API is incredible. So I did what any avid Twitter user would do: I shared where I was with everything.


I was pretty proud of myself. And then Alex Bouma strolled by, threw a grenade in and walked off.


And then Mattias Geniar jumped in (it was his article, after all), then Owen Conti (who had apparently already told me that this was the way to go back in November 2019), and then Matt Holt (creator of Caddy) jumped in and, boom, that was my finished product gone. Thanks guys.


Now I know what you’re thinking… There’s an awful lot of “not shipping” going on here. I don’t normally get caught up in perfectionism these days but I wanted to include the best possible solution as a video in my Serverless Laravel course. So if there was a better solution available, I had a duty to my course members and Fathom customers to make sure I knew about it.


And this is where Caddy entered my life.


#Attempt 3: Highly available Caddy proxy layer


I wish I could travel back in time to 2019 and tell myself about this solution. Maybe it was the timing? After all, Caddy 2 had only just been released when I saw Alex’s tweet.


Caddy 2 is an open-source web server with automatic HTTPS. You don’t have to worry about managing virtual hosts or SSL certificates yourself; Caddy handles it for you automatically. It’s so much simpler than what we’re used to.
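
To give you a flavour of what that looks like, here’s a minimal, hypothetical Caddyfile using Caddy’s on-demand TLS. The ask URL and upstream are placeholders (the real validation endpoint is what we build in Step 3 below), so check the Caddy 2 docs before copying anything:

```
{
    # Ask our application whether a domain is allowed before issuing a certificate
    on_demand_tls {
        ask https://our-app.example.com/check-domain
    }
}

https:// {
    # Issue certificates on demand for any hostname the ask endpoint approves
    tls {
        on_demand
    }

    # Proxy all traffic through to the main application (placeholder upstream)
    reverse_proxy https://our-app.example.com
}
```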


So you’ve heard all the failed attempts; now let’s get to how we actually solved things.


#Step 1. SSL Certificate Storage


The first thing we needed to do was to create a DynamoDB table. This was going to be our centralized storage. Why did I use DynamoDB? Because I didn’t want certificates stored on a server filesystem. I’m sure we could’ve managed some sort of NAS, but I have no idea how that would’ve worked across regions. Ultimately, I’m familiar with DynamoDB and Matt Holt was kind enough to upgrade the Caddy DynamoDB module to V2 (thanks mate!).

1. Open up DynamoDB in AWS
2. Create a new table called caddy_ssl_certificates with a primary key named PrimaryKey
3. Un-tick the default settings and go with on-demand capacity (no free tier, but it auto-scales)
4. When the table has been created, click the Backups tab and enable Point-in-time Recovery
5. And you’re done (if you’d rather script this, there’s a rough CLI equivalent below)
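
If you prefer the command line to clicking through the console, this is roughly the equivalent with the AWS CLI (same table and key names as above; treat it as a sketch and adapt it to your own account):

```
# Create the pay-per-request (on-demand) table with PrimaryKey as the hash key
aws dynamodb create-table \
  --table-name caddy_ssl_certificates \
  --attribute-definitions AttributeName=PrimaryKey,AttributeType=S \
  --key-schema AttributeName=PrimaryKey,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST

# Enable Point-in-time Recovery once the table is active
aws dynamodb update-continuous-backups \
  --table-name caddy_ssl_certificates \
  --point-in-time-recovery-specification PointInTimeRecoveryEnabled=true
```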

#Step 2. Create an IAM user


We now needed to create a user with limited access in AWS. We could, in theory, give Caddy an access key with root permissions, but it just doesn’t need it.

1. Create a user called caddy_ssl_user
2. Tick the box to enable Programmatic Access
3. Click Attach existing policies directly
4. Click Create policy
5. Choose DynamoDB as your service and add the basic permissions, along with limiting access to your table (you can find a video walkthrough in my course if you’re not too sure about this part, and there’s a rough example policy below)
6. Boom. Make sure you save the Access key ID and Secret access key
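
For reference, the resulting policy looks something like this. It’s a sketch: the region and account ID in the ARN are placeholders, and you may want to trim the action list down further:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:DeleteItem",
                "dynamodb:Query",
                "dynamodb:Scan"
            ],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/caddy_ssl_certificates"
        }
    ]
}
```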

#Step 3. Create a route in your application to validate domains


We don’t want any Tom, Dick or Harry to point their domains at our infrastructure and have SSL certificates generated for them. No, we only want to generate SSL certificates for our customers.


For this write-up, I’m going to keep things super simple and hard code it all. If you were going to put this into production, you’d have database existence checks, obviously ;)


routes/web.php
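
Here’s a minimal sketch of what that route can look like. The /check-domain path and the hard-coded domain are placeholders for this write-up; the idea is that Caddy’s on-demand TLS “ask” check hits this URL with a ?domain= query string and only gets a 2xx response back for domains we recognise:

```php
<?php

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

// Caddy hits this route with ?domain=some.custom.domain before it issues a
// certificate. Return 2xx to allow the certificate, anything else to refuse.
Route::get('/check-domain', function (Request $request) {
    // Hard-coded for this write-up; in production you'd check your database
    $allowedDomains = [
        'wealdstoneraider.ronnie-pickering.com',
    ];

    abort_unless(in_array($request->query('domain'), $allowedDomains, true), 404);

    return response('Domain is allowed');
});
```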