Containerisation as a technology has sparked a new trend of decoupled services in cloud computing. To keep up with this trend, this blog was converted to run as Docker containers.
The Docker template used to run my websites (with some sanitisation for environment-specific commands) can be found here:
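As a rough illustration of its shape, here is a minimal Dockerfile sketch; the base image tag and paths are my assumptions for this example, not the actual template:

```dockerfile
# Minimal sketch of a php-fpm website image (illustrative tag and paths).
FROM php:8.2-fpm-alpine

# Copy only the website code into the image; configuration files are
# deliberately excluded and mounted at run-time instead.
COPY ./src /var/www/html

# php-fpm listens on port 9000 for the nginx proxy to connect to.
EXPOSE 9000
CMD ["php-fpm"]
```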
The flexibility Docker introduced into my deployment process greatly reduced the differences between my testing and production environments, as the entire stack has been reduced to php-fpm backends running behind nginx proxies.
Testing changes or platform upgrades is as easy as re-building the image and running the container, as the code-base is guaranteed to be identical in both development and production.
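A typical iteration looks something like this (the image name is illustrative):

```sh
# Rebuild the image from the current code-base...
docker build -t example/blog:latest .

# ...and run it locally to test; this is the same image that will
# eventually run in production.
docker run --rm -p 9000:9000 example/blog:latest
```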
Promoting changes to production is as easy as pushing the image to my private container registry, which enables the application servers to securely pull the latest version of the website.
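Sketched out, the promotion flow is roughly the following (registry.example.com is a placeholder for my actual registry):

```sh
# Tag the tested image and push it to the private registry.
docker tag example/blog:latest registry.example.com/blog:latest
docker push registry.example.com/blog:latest

# On an application server: authenticate and pull the new version.
docker login registry.example.com
docker pull registry.example.com/blog:latest
```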
The website code is built into the Docker image without configuration files, which are instead mounted at run-time. This avoids environment credential mix-ups or leaks if the registry service is accidentally exposed.
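In practice that means a run command along these lines, with the host paths being hypothetical:

```sh
# Mount the environment-specific configuration read-only at run-time;
# the image itself stays credential-free.
docker run -d --name blog \
  -v /etc/blog/config.php:/var/www/html/config.php:ro \
  registry.example.com/blog:latest
```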
As the Docker image contains only the php-fpm service and website code, static assets need to be served from elsewhere. They are stored on AWS S3 with CloudFront in front of it, acting as my CDN for all website assets (images, JavaScript, CSS, fonts etc.).
Part of the build process ensures that all assets on S3 are up-to-date with the latest changes and that the CloudFront cache is invalidated. A small benefit of this design decision is that the web server does not have to process asset requests and can focus solely on application requests. This optimisation is not required on a small site such as mine, but it does help with larger-traffic websites where any performance increase is greatly noticed.
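With the AWS CLI, that step can be sketched as follows; the bucket name and distribution ID are placeholders:

```sh
# Sync built assets to the S3 bucket, removing any stale files...
aws s3 sync ./public/assets s3://example-assets-bucket/assets --delete

# ...then invalidate the CloudFront cache so the new versions are served.
aws cloudfront create-invalidation \
  --distribution-id EXAMPLEDISTID \
  --paths "/assets/*"
```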