Written in Django Rest Framework. Based on battlecode19/api.
You can run docker-compose up --build in the root directory of battlecode20 to run the entire website stack. If for some reason you want to run Django outside of Docker, follow the instructions below.
For a nice interface to test the backend, go to localhost:8000/docs/.
Create a virtual environment by following the instructions below.
```
pip3 install virtualenv    # or pip, if your default Python installation is 3
virtualenv venv -p python3
source venv/bin/activate
pip install -r requirements.txt    # pip3 is not necessary, since the default Python version within the virtualenv is 3
```
The `pip install` might fail, since psycopg2 requires a working build environment for its included C code. On Mac, this StackOverflow answer has a solution (the command is `env LDFLAGS="-I/usr/local/opt/openssl/include -L/usr/local/opt/openssl/lib" pip install psycopg2==2.8.3 --upgrade`). If you still have problems with psycopg2 on Mac after this, try `brew reinstall openssl` and `brew install postgresql`.
Any time you start the backend, there must be a Postgres instance up on localhost:5432 (or whatever credentials are used in battlecode/dev_settings.py) with a database named battlecode. It is easy to run Postgres in Docker:
docker run -p 5432:5432 -e POSTGRES_USER=battlecode -e POSTGRES_PASSWORD=mysecretpassword -e POSTGRES_DB=battlecode --name bc20db -d postgres
To stop or start the database container, run `docker stop bc20db` or `docker start bc20db`. Postico and pgAdmin are two useful tools for managing the database.
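Before starting the backend, you can sanity-check that the container is actually accepting connections. This is a quick sketch using `pg_isready`, which ships with the official Postgres image:

```shell
# Ask the Postgres container whether it is ready to accept connections.
# Exit code 0 means ready; nonzero means it is still starting or down.
docker exec bc20db pg_isready -U battlecode -d battlecode
```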
Run the following to set up the database:
python manage.py migrate
This must be run the first time the database is set up, and any time the models are changed. When models are changed, you also need to run `python manage.py makemigrations` and then commit the resulting migration files. If running through docker-compose, you need to first `docker exec -it battlecode20_backend_1 /bin/bash` into the backend container and then perform the `python manage.py makemigrations` there.
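Putting these steps together, a typical model-change workflow under docker-compose looks roughly like the sketch below (the container name is taken from above; the `git` step is just one way to commit the generated files):

```shell
# Open a shell inside the running backend container
docker exec -it battlecode20_backend_1 /bin/bash

# Inside the container: generate and apply the migrations
python manage.py makemigrations
python manage.py migrate
exit

# Back on the host: commit the generated migration files
git add '*/migrations/*.py'
git commit -m "Add migrations for model changes"
```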
This will automatically create a new league with league ID 0. This is something we might want to change in the future, if we decide to make use of the multiple leagues.
Make sure you work in your virtual environment, make sure all packages are up to date, start the database, and set the necessary environment variables (only needed once per terminal session):
source venv/bin/activate
pip install -r requirements.txt
python manage.py migrate
docker start bc20db
export DJANGO_SETTINGS_MODULE="dev_settings"
export EMAIL_PASS="passwordtobattlecodegmail"
Then, start the server:
python manage.py runserver
The backend should now be running on localhost:8000. You can open http://localhost:8000/docs in your browser to verify that it works.
You can also test with uWSGI (which is what is used in production) by running uwsgi --ini uwsgi-dev.ini.
Run the test suite and view a coverage report:

```
coverage run --source='.' manage.py test
coverage report
```
To use the Python debugger: from nose.tools import set_trace; set_trace() (Note that Nose breaks the regular pdb.)
Django 2.0.3 and Webpack 4.0.1 are both very recently released. You may run into backwards-compatibility issues with third-party packages, or deprecated functions. Try to refer to the most up-to-date documentation when possible. You may also run into problems if, for example, you have different versions of node modules installed globally instead of locally.
When installing a new Python package:
pip install <package>
pip freeze > requirements.txt
Always commit the most recent requirements.txt.
- Push to master.
- Go to Cloud Build Triggers on Google Cloud.
- Click "Run trigger" under the `battlecode/battlecode20` trigger.
- Go to Cloud Build History. You should see a spinning blue icon next to a recently started build, which should reference the most recent commit ID on master in this repo. Wait until the spinning icon turns into a green checkmark (this usually takes 2-3 minutes).
- Go to the `battlecode20-backend-true` instance group in Compute Engine. Press "Rolling Restart/Replace".
- Change the operation from "Restart" to "Replace". Let maximum surge be 1 and maximum unavailable be 0 (we don't want our server to go down).
- Wait until all spinning blue icons have turned into green checkmarks (this takes about 10 minutes).
This procedure is currently very long and requires too much manual intervention. We should write a script that does all of this for us (which shouldn't be too hard).
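A starting point for such a script is sketched below. It assumes an authenticated `gcloud` CLI; the trigger and instance-group names are taken from the steps above, but the exact trigger name and flags should be double-checked against the gcloud docs before relying on this.

```shell
#!/bin/bash
set -euo pipefail

# 1. Kick off the Cloud Build trigger for master.
#    (Trigger name is an assumption based on the console steps above.)
gcloud beta builds triggers run battlecode20trigger --branch=master

# 2. Check on the build; the most recent entry should reference the
#    latest commit on master. Wait for it to report SUCCESS.
gcloud builds list --limit=1

# 3. Rolling-replace the instance group with max surge 1 and
#    max unavailable 0, so the server never goes down.
gcloud compute instance-groups managed rolling-action replace \
    battlecode20-backend-true \
    --max-surge=1 --max-unavailable=0
```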
A production database must also be created before the backend can run.
We currently have continuous builds triggered by pushes to master, so make sure that everything actually works before pushing. Also make sure that any new database migrations are applied to the production server before deploying. A good way to ensure this is to always test locally against the production database before committing and pushing to master.
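One way to run that check, assuming the repo has a production settings module alongside `dev_settings` (the module name `settings` below is an assumption):

```shell
# Point Django at the production settings (and hence production database);
# "settings" is an assumed module name, analogous to dev_settings.
export DJANGO_SETTINGS_MODULE="settings"

# See which migrations are still unapplied before touching anything
python manage.py showmigrations

# Apply them once the list looks right
python manage.py migrate
```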
The images are then deployed as an instance group on GCE. To update the instances to use the newly built image, perform a rolling update of the instance group.
Please, please use SHA256 digests in the Dockerfile. Otherwise, the image might be rebuilt from the same commit tag as before but no longer work (this happened, and it was not fun).
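Concretely, this means pinning base images by digest instead of by floating tag. The digest below is a placeholder, not a real value; get the actual digest with `docker images --digests` or from the registry:

```dockerfile
# Floating tag: can silently resolve to a different image over time
# FROM python:3.7-slim

# Pinned digest: always resolves to the exact same image
FROM python:3.7-slim@sha256:<digest-goes-here>
```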
Ideally, we would like to move to Kubernetes for everything, as that would make everything much easier, but it doesn't currently support having a load balancer that also points to storage buckets. This is a deal-breaker, since the frontend is static.