Wednesday, February 19, 2020

No Servers, Just Buckets: Hosting Static Websites on the Cloud

For over two decades, I've hosted websites on managed servers: first with web hosting providers, then on dedicated machines, then dedicated VMs, then cloud VMs. Maintaining these servers tends to come at a high cognitive cost -- machine and network setup, OS patches, web server configuration, replication and high availability, TLS and certificate management, security... the list goes on.

Last year, I moved almost all my websites to cloud buckets, and it has been amazing! Life just got simpler. With just a few commands I got:

  • An HTTP(S) web server hosting my content.
  • Managed TLS certificates.
  • Compression, caching, and content delivery.
  • Replication and high availability.
  • IPv6!
  • Fewer headaches, and more spending money. :-)

If you don't need tight control over how your data is served, I would strongly recommend that you host your sites on Cloud Buckets. (Yes, of course, servers are still involved, you just don't need to worry about them.)

In this post, I'll show you how I got the float64 website up and serving in almost no time.

What are Cloud Buckets?

Buckets are a storage abstraction for blobs of data offered by cloud providers -- e.g., Google Cloud Storage or Amazon S3. Put simply, they're a place in the cloud where you can store directories of files (typically called objects).

Data in buckets is managed by cloud providers -- they take care of all the heavy lifting around storing the data, replicating, backing up, and serving. You can access this data with command-line tools, via language APIs, or from the browser. You can also manage permissions, ownership, replication, retention, encryption, and audit controls.
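For example, everyday access with Google's gsutil CLI looks like this (a sketch, assuming gsutil is installed and you have access to a bucket named float64):

```shell
# List the objects in a bucket.
gsutil ls gs://float64

# Download an object, then inspect its metadata
# (size, hashes, content type, cache settings).
gsutil cp gs://float64/index.html /tmp/index.html
gsutil stat gs://float64/index.html
```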

Hosting Websites on Cloud Buckets

Many cloud providers now allow you to serve files (sometimes called bucket objects) over the web, and let you distribute content over their respective CDNs. For this post, we'll upload a website to a Google Cloud Storage bucket and serve it over the web.

Make sure you have your Google Cloud account set up, the command-line tools installed, and that you're logged in on your terminal.

gcloud auth login
gcloud config set project <your-project-id>

Create your storage bucket with gsutil mb. Bucket names must be globally unique, so you'll have to pick something no one else has used. Here I'm using float64 as my bucket name.

gsutil mb gs://float64

Copy your website content over to the bucket. We specify '-a public-read' to make the objects world-readable.

gsutil cp -a public-read index.html style.css index.AF4C.js gs://float64

That's it. Your content is now available at https://storage.googleapis.com/<BUCKET>/index.html.
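As an aside, if you'd rather not pass -a public-read on every upload, you can make the whole bucket world-readable once with an IAM binding. (Note: this grants read access to every current and future object in the bucket, so only do it for buckets that contain nothing but public website content.)

```shell
# Grant read access on all objects in the bucket to everyone.
gsutil iam ch allUsers:objectViewer gs://float64
```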

Using your own Domain

To serve data over your own domain using HTTPS, you need to create a Cloud Load Balancer (or use an existing one.) Go to the Load Balancer Console, click "Create Load Balancer", and select the HTTP/HTTPS option.

The balancer configuration has three main parts: backend, routing rules, and frontend.

For the backend, select "backend buckets", and pick the bucket that you just created. Check the 'Enable CDN' box if you want your content cached and delivered over Google's worldwide Content Delivery Network.

For the routing rules, use your domain name in the Hosts field, your bucket (float64) in the Backends field, and /* in Paths to say that all paths get routed to your bucket.

Finally, for the frontend, add a new IP address, and point your domain's A record at it. If you're with the times, you can also add an IPv6 address, and point your domain's AAAA record at it.

If you're serving over HTTPS, you can create a new managed certificate. These certs are issued by Let's Encrypt and managed by Google (i.e., Google takes care of attaching, verifying, and renewing them.) The certificates take about 30 minutes to propagate.
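If you prefer the command line to the console, the same load balancer can be sketched with gcloud. This approximates the console flow above; the resource names (float64-backend, float64-map, and so on) are placeholders, and your-domain.com stands in for your actual domain:

```shell
# Backend: a backend bucket pointing at the storage bucket, with CDN on.
gcloud compute backend-buckets create float64-backend \
    --gcs-bucket-name=float64 --enable-cdn

# Routing rules: send all paths to the backend bucket.
gcloud compute url-maps create float64-map \
    --default-backend-bucket=float64-backend

# TLS: a Google-managed certificate (may require the beta track).
gcloud compute ssl-certificates create float64-cert \
    --domains=your-domain.com

# Frontend: a global IP, an HTTPS proxy, and a forwarding rule.
gcloud compute addresses create float64-ip --global
gcloud compute target-https-proxies create float64-proxy \
    --url-map=float64-map --ssl-certificates=float64-cert
gcloud compute forwarding-rules create float64-https \
    --global --address=float64-ip \
    --target-https-proxy=float64-proxy --ports=443
```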

Save and apply your changes, and your custom HTTPS website is up! A few more odds and ends before we call it a day.

Setting up Index and Error Pages

You probably don't want your users typing in the name of the index HTML file (index.html) every time they visit your site. You also probably want invalid URLs to show a pretty error page.

You can use gsutil web to configure the index and 404 pages for the bucket.

gsutil web set -m index.html -e 404.html gs://float64

Caching, Compression, and Content Delivery

To take advantage of Google's CDN (or even simply to improve bandwidth usage and latency), you should set the Cache-Control headers on your files. I like to keep the expiries for the index page short, and everything else long (of course, also adding content hashes to frequently modified files.)
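Content hashing (like the index.AF4C.js file uploaded earlier) is easy to script. Here's a minimal sketch that renames a file to embed a short hash of its contents, so a long cache expiry is safe: the URL changes whenever the contents change. (The file name app.js is just for illustration; a bundler normally does this, and updates the references in index.html, for you.)

```shell
#!/bin/sh
# Create a sample asset (a stand-in for a real build artifact).
echo 'console.log("hello");' > app.js

# Take the first 4 hex digits of the file's MD5 as the version tag.
hash=$(md5sum app.js | cut -c1-4 | tr 'a-f' 'A-F')

# Rename app.js -> app.<HASH>.js.
mv app.js "app.$hash.js"
ls app.*.js
```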

We also want to make sure that text files are served with gzip compression enabled. The -z flag compresses matching files on upload and sets their Content-Encoding to gzip so they're served compressed over HTTP(S).

gsutil -h "Cache-Control:public,max-age=86400" -m \
  cp -a public-read -z js,map,css,svg \
    $DIST/*.js $DIST/*.map $DIST/*.css \
    $DIST/*.jpg $DIST/*.svg $DIST/*.png $DIST/*.ico \
    gs://float64

gsutil -h "Cache-Control:public,max-age=300" -m \
  cp -a public-read -z html \
    $DIST/index.html gs://float64
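As a sanity check on what -z buys you, you can compare sizes locally before uploading; gzip typically shrinks repetitive text assets dramatically. A local sketch with a synthetic stylesheet:

```shell
#!/bin/sh
# Build a repetitive (and therefore very compressible) sample stylesheet.
for i in $(seq 1 100); do
  echo ".cls$i { margin: 0; padding: 0; color: #333; }"
done > style.css

# Compress a copy (-k keeps the original) and compare the sizes.
gzip -kf style.css
orig=$(wc -c < style.css)
comp=$(wc -c < style.css.gz)
echo "original: $orig bytes, gzipped: $comp bytes"
```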

If you've made it this far, you now have a (nearly) production-ready website up and running. Congratulations!

So, how much does it cost?

I have about 8 different websites running on different domains, all using managed certificates and the CDN, and I pay about $20 a month.

I use a single load balancer ($18/mo) and one IP address ($2/mo) for all of them. I get about 10-20k requests a day across all my sites, and the bandwidth costs are in the pennies.

Not cheap, but not expensive either given the cognitive savings. And there are cheaper options (as you'll see in the next section).


Alternatives

There are many ways to serve web content out of storage buckets, and this is just one. Depending on your traffic, the number of sites you're running, and what kinds of tradeoffs you're willing to make, you can optimize costs further.

Firebase Hosting sticks all of this into one pretty package, with a lower upfront cost (however, the bandwidth costs are higher as your traffic increases.)

Cloudflare has a free plan that lets you put an SSL endpoint and CDN in front of your Cloud Storage bucket. However, if you want dedicated certificates, they charge $5 each. Also, the minimum TTL on the free plan is 2 hours, which is not great if you're building static JavaScript applications.

And there are CloudFront, Fastly, and Netlify, all of which provide various levels of managed infrastructure -- still all better than running your own servers.


Tradeoffs

Obviously, there's no free lunch, and good engineering requires making tradeoffs. Here are a few things to consider before you decide to migrate from servers to buckets:

  • Vendor lock-in. Are you okay with using proprietary technologies for your stack? If not, you're better off running your own servers.
  • Control and flexibility. Do you want advanced routing, URL rewriting, or other custom behavior? If so, you're better off running your own servers.
  • Cost transparency. Although both Google and Amazon do a great job with billing and detailed price breakdowns, the pricing models are complicated and can change on a whim.

For a lot of what I do, these downsides are well worth it. The vendor lock-in troubles me the most, but it's not hard to migrate this stuff to other providers if I need to.

If you liked this, check out some of my other stuff on this blog.
