Chalice development and deployments

I’ve been playing with AWS Chalice, rebuilding a bookmarking web app I had earlier written in PHP Laravel. I wanted to learn serverless development, and this looked like the simplest route to get started in a language I already know, Python. I also threw in some LLM support, using the free version of ChatGPT as an assistant while learning the new environment.

Developing and running it locally was easy: there is a Docker image available to run your own DynamoDB as the backend database, and Chalice itself is a simple install. The hard part came once development was finished and I had to get it running on the actual AWS infrastructure. It all looked fairly easy, but then the errors started appearing. I’m documenting my findings here so others can find the working solution more easily than I did.
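
For reference, the local setup comes down to a few commands (a sketch; the project name is just an example):

docker run -p 8000:8000 amazon/dynamodb-local
pip install chalice
chalice new-project bookmarks
cd bookmarks
chalice local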

The first error I ran into is architecture-related. Because I develop on an M1 Mac, everything was built for ARM, and Chalice has no feature yet to deploy to AWS’s ARM infrastructure; it simply defaults everything to the x86 infrastructure. There is a feature request to handle this, but development on Chalice seems to have slowed down. My first thought was that my Mac supports x86 through Rosetta 2, so I would just need to spin up an x86 Ubuntu VM to build the Chalice package. Next issue: Multipass does not support this yet. Luckily I found OrbStack, where the architecture of your VM is selectable; the downside is that you can’t set the amount of CPU or memory per VM. It does support cloud-init, just like Multipass, so my setup scripts are reusable.
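
The cloud-init user data for the build VM can stay minimal; something along these lines is all it needs (a sketch, not my exact script):

#cloud-config
packages:
  - python3-pip
  - python3-venv
runcmd:
  - pip3 install chalice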

Second error: permissions. By default, Chalice creates the IAM roles it needs to execute the Lambda functions, but it leaves out the access rights for the DynamoDB table I needed. ChatGPT went off the rails here, proposing a series of different suggestions, each one arriving after I responded that the previous one didn’t work. First I had to include a policy.json in the root of the project; the next suggestion was to include the policy in the config.json, which also didn’t work. A classic web search led me to this and this article, which showed me how it is done: in your config.json add "autogen_policy": false to the required stage, and then create a policy-<stage>.json with the details in the .chalice directory. Later I found the official documentation, which describes the same solution.
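
Put together, the two files look something like this (the app and table names are just examples, and with autogen_policy off you also have to grant the CloudWatch Logs permissions yourself):

.chalice/config.json:

{
  "version": "2.0",
  "app_name": "bookmarks",
  "stages": {
    "dev": {
      "autogen_policy": false
    }
  }
}

.chalice/policy-dev.json:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:Query",
        "dynamodb:Scan"
      ],
      "Resource": "arn:aws:dynamodb:*:*:table/bookmarks"
    }
  ]
}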

A related problem was using different AWS access keys for each stage of development, i.e. different identities for dev, acceptance and production. Again ChatGPT gave different answers depending on the question, but eventually I found a solution that worked. You store your credentials in ~/.aws, where you have two files: config for your region and credentials for your access key and secret key combos. For credentials it is straightforward: just use [stage] to define the applicable stage. For config it is different: you should use [profile stage]. See the following example for credentials:

[default]
aws_access_key_id = yourkey
aws_secret_access_key = yoursecret
[acceptance]
aws_access_key_id = yourkey
aws_secret_access_key = yoursecret

and this for config:

[default]
region = eu-west-1
[profile acceptance]
region = eu-west-1
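
With the profiles in place, you can point Chalice (via boto3) at the right identity per stage using the standard AWS_PROFILE environment variable, for example:

AWS_PROFILE=acceptance chalice deploy --stage acceptance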

Third and last error: packaging static files. As I’m using Jinja templates to create the web pages in Python, they need to be included in the deployment package. Again ChatGPT gave non-working solutions, and the web had no definite answer either. Some sources suggest using the vendor directory, where you can store specific non-standard Python packages; others suggest using the chalicelib directory. The official documentation is not quite clear on which route to take. I’m currently using the vendor route but will try the chalicelib option as well.
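
For when I do try the chalicelib route, the template loading would look something like this (a sketch; the chalicelib/templates path is just my own naming):

import os
from jinja2 import Environment, FileSystemLoader

# app.py and the chalicelib directory sit side by side in the deployment
# package, so resolve the template directory relative to this file.
TEMPLATE_DIR = os.path.join(os.path.dirname(__file__), 'chalicelib', 'templates')
env = Environment(loader=FileSystemLoader(TEMPLATE_DIR))

def render(name, **context):
    return env.get_template(name).render(**context)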