I serve up a number of static websites by hosting them on Amazon's AWS S3. It is a low-cost, efficient way to maintain a presence on the web.
The one wrinkle I ran into was that some files ended up with the wrong MIME type in their S3 meta-data. For example, the .css files were being stored with a meta-data type (MIME type) of text/plain rather than text/css.
My first solution was to use a small script to go through an S3 bucket and fix up the meta-data (by copying the file in-place).
aws s3 cp \
  --exclude "*" \
  --include "*.css" \
  --content-type="text/css" \
  --metadata-directive="REPLACE" \
  --recursive \
  s3://www.courseguide.info/ \
  s3://www.courseguide.info
This worked pretty well, but it seemed wrong to have to fix things up after having uploaded them. I know I can specify the MIME type of individual files as I upload them, but when uploading a directory tree, that seems messy and complicated.
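For the record, setting the type on a single-file upload looks something like this (the file name and destination key here are just examples, not the actual files from my sites):

aws s3 cp style.css s3://www.courseguide.info/css/style.css \
  --content-type "text/css"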
I then stumbled on a piece that suggested, instead of fixing things up afterwards, letting
s3cmd guess the MIME type at upload time by doing something like this:
s3cmd sync -P --guess-mime-type --delete-removed \
  $TARGET s3://www.courseguide.info/
However, before I got round to implementing it, I found out that
s3cmd is considered 'old' and that instead of using that Python tool to work with my S3 buckets, I should be using Amazon's own CLI tool, aws.
So my upload script was modified to include:
aws s3 sync $TARGET s3://www.courseguide.info/ --delete --only-show-errors
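As a minimal sketch of how that sits in the script (the $TARGET value here is just a placeholder for wherever the local copy of the site lives, not my actual path):

#!/bin/bash
# Hypothetical upload script: sync the local site directory to the bucket,
# delete remote files that no longer exist locally, and stay quiet
# unless something goes wrong.
TARGET="./_site"

aws s3 sync "$TARGET" s3://www.courseguide.info/ --delete --only-show-errors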
And now everything (all my file types, including images, PDFs, etc.) seems to be getting the right MIME type set in its meta-data (the AWS CLI guesses the Content-Type from each file's extension as it uploads), and all is good with my websites.
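If you want to double-check what actually got recorded, something like this should report the Content-Type S3 holds for an object (the key name here is just an example):

aws s3api head-object \
  --bucket www.courseguide.info \
  --key css/style.css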