Development
This chapter describes how to set up a development environment to run and develop Open Pectus.
Ecosystem
Source code is managed at Github
Issues are tracked at Github
Continuous Integration tasks are performed using Github Actions
Docker images are published to the Github Container Registry
Documentation is built by Read the Docs
Python packages are published to the Python Packaging Index
Frontend Setup
Prerequisites: Node 22 (LTS) must be installed.
Follow the steps below to install Node 22 on Ubuntu using Node Version Manager.
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.3/install.sh | bash
source ~/.bashrc
nvm install v22.11.0
On Windows it is possible to download node as a standalone binary. Extract the ZIP archive, navigate to the folder containing the binaries and add that folder to PATH:
SET PATH=%PATH%;C:\Users\User\Downloads\node-vxx.yy.zz-win-x64\node-vxx.yy.zz-win-x64
Follow the steps below to install packages and build the frontend.
cd Open-Pectus/frontend
npm ci
npm run build
Backend Setup
Prerequisites:
(Optional) A conda installation is highly recommended, although it is possible to do without one. Download Miniconda.
(Optional) A Java SE JDK is needed for parser generation when updating the P-code grammar; openjdk-21.0.2 is known to work.
(Optional) The simplest way to get going using VS Code is this:
Install Java:
conda install -c conda-forge openjdk
Install the VS Code extension ANTLR4 grammar syntax support. This causes the ANTLR plugin to automatically regenerate the parser code whenever pcode.g4 is modified.
All the following commands can only be run from within the conda prompt, and from the Open-Pectus folder.
Create a new conda environment and install all dependencies:
conda env create --name openpectus --file=environment.yml
or
conda env create --prefix=./conda --file=environment.yml
Activate the created Open Pectus conda environment:
conda activate openpectus
or
conda activate ./conda
Install open pectus in the environment:
pip install -e ".[development]"
(Optional) To enable the Sentry logger, set the SENTRY_DSN environment variable. Save the value as an environment variable on your developer PC:
setx SENTRY_DSN value
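How the application might pick up this variable can be sketched with a minimal helper (a sketch only; the actual Open Pectus Sentry wiring may differ):

```python
import os

def get_sentry_dsn():
    """Return the Sentry DSN from the environment, or None when Sentry logging is disabled."""
    dsn = os.environ.get("SENTRY_DSN")
    # Treat an empty string the same as unset.
    return dsn or None
```

If `get_sentry_dsn()` returns None, Sentry initialization is simply skipped.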
Documentation Setup
Create a new conda environment and install all dependencies:
conda env create --name openpectus --file=environment.yml
or
conda env create --prefix=./conda --file=environment.yml
Activate the created Open Pectus conda environment:
conda activate openpectus
or
conda activate ./conda
Install open pectus in the environment:
pip install -e ".[docs]"
Change directory to the docs directory
cd docs
Generate the openapi.yml specification:
python generate_openapi_yml.py
(Optional) Spell check:
make.bat spelling on Windows
make spelling on Linux
Build the documentation:
make.bat html on Windows
make html on Linux
The built documentation is in docs/html.
Build status for pull requests and pushes to the main branch on Github can be monitored at https://app.readthedocs.org/projects/open-pectus/builds/.
Other Commands
Update conda environment
To update an existing conda environment with all dependencies (e.g. when requirements.txt
has changed):
conda env update -p=./conda --file=environment.yml --prune
Build Distribution
Docker and Pypi distributions are normally built via a Github Actions workflow. To build the Python distribution in the development environment:
python -m build -o openpectus/dist
Note
To include the frontend in the build, copy the contents of openpectus/frontend/dist
to openpectus/aggregator/frontend-dist
before building.
Alembic Database Migrations
The following describes how to create a new migration script.
Change the database model(s) in openpectus/aggregator/data/models.py
first, then run:
cd openpectus/aggregator
alembic revision --autogenerate -m "<migration script name>"
This will create a new migration script in aggregator/data/alembic/versions/
based on the model changes.
You must check that the changes within are acceptable, and change them if they are not.
It is a good idea to ensure the downgrade step will leave data as it was.
See SQLAlchemy documentation for what autogenerate will and will not detect.
You can then test your migration with alembic upgrade head and alembic downgrade -1.
alembic upgrade head is run automatically when the aggregator starts, in the main() function of openpectus/aggregator/main.py.
Currently, automatic tests touching the database do not use the migration scripts, so you can’t trust those to verify the migrations.
SQLite has some severe limitations on which schema changes it supports. For example, it does not support altering a column beyond renaming it. To alter e.g. a column type, you need to create a new table, copy the data over, and then drop the old one. Alembic supports this with "batch" migrations, and the autogenerate feature has been configured to emit batch migrations as described at https://alembic.sqlalchemy.org/en/latest/batch.html#batch-mode-with-autogenerate
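The table-recreate dance that batch mode automates can be sketched with the standard library sqlite3 module (table and column names are illustrative, not the Open Pectus schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Original table; we want to change the duration column from TEXT to INTEGER,
# which SQLite's ALTER TABLE cannot do directly.
conn.execute("CREATE TABLE runs (id INTEGER PRIMARY KEY, duration TEXT)")
conn.execute("INSERT INTO runs (duration) VALUES ('42')")

# 1. Create a new table with the desired schema.
conn.execute("CREATE TABLE runs_new (id INTEGER PRIMARY KEY, duration INTEGER)")
# 2. Copy the data over, converting values as needed.
conn.execute(
    "INSERT INTO runs_new (id, duration) SELECT id, CAST(duration AS INTEGER) FROM runs"
)
# 3. Drop the old table and rename the new one into place.
conn.execute("DROP TABLE runs")
conn.execute("ALTER TABLE runs_new RENAME TO runs")

row = conn.execute("SELECT duration FROM runs").fetchone()
```

Alembic's batch migrations generate essentially this sequence inside op.batch_alter_table() blocks.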
The Python driver for SQLite (pysqlite) does NOT support transactional DDL, i.e. running schema changes in a transaction so that a failure during a schema change rolls all the changes back.
Alembic runs each migration separately, so if something fails, only the last change will require cleanup.
There is possibly a workaround for this, but Alembic would likely still not use it correctly, as its behavior in alembic/runtime/migration.py depends on the transactional_ddl flag being set to False in alembic/ddl/sqlite.py.
Even though the autogenerated migrations will include foreign key constraints, they are not enforced by SQLite by default, and while enabling them is possible in SQLAlchemy, it has some severe downsides.
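This default can be observed directly with the standard library sqlite3 module (a sketch, unrelated to the Open Pectus schema):

```python
import sqlite3

# isolation_level=None gives autocommit mode; PRAGMA foreign_keys is a no-op
# while a transaction is open, so autocommit keeps the example honest.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE child (id INTEGER PRIMARY KEY, parent_id INTEGER REFERENCES parent(id))"
)

# Enforcement is off by default: inserting an orphan row succeeds silently.
conn.execute("INSERT INTO child (parent_id) VALUES (999)")

# After enabling the pragma, the same insert is rejected.
conn.execute("PRAGMA foreign_keys = ON")
try:
    conn.execute("INSERT INTO child (parent_id) VALUES (999)")
    enforced = False
except sqlite3.IntegrityError:
    enforced = True
```

Note that the pragma is per-connection, which is part of why enabling it consistently from SQLAlchemy requires extra event wiring.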
Even though Mapped[] Python enum types produce Alembic Enums in the autogenerated migrations, they will not actually be enforced at database level without manually writing CHECK constraints, or foreign keys to an enum table. It is unclear whether this would be worth the added complexity and management.
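A manual CHECK constraint of the kind mentioned above could look like this (illustrative table and state names, not the actual Open Pectus schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Enforce the enum's legal values at database level with a CHECK constraint.
conn.execute(
    """
    CREATE TABLE batch_job (
        id INTEGER PRIMARY KEY,
        state TEXT NOT NULL CHECK (state IN ('Pending', 'Running', 'Completed'))
    )
    """
)
conn.execute("INSERT INTO batch_job (state) VALUES ('Running')")

# A value outside the allowed set violates the constraint.
try:
    conn.execute("INSERT INTO batch_job (state) VALUES ('Paused')")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
```

The downside is that every change to the Python enum then also requires a (batch) migration to rewrite the constraint.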
Running Open Pectus
It is possible to run the aggregator as-is or in a Docker container. The engine can only be run as-is.
Aggregator
Run Aggregator to serve the frontend from its default build directory. This also starts the WebSocket protocol, allowing Engines to connect.
cd Open-Pectus
pectus-aggregator -fdd .\openpectus\frontend\dist\
When Aggregator is running, the aggregator services are available, including:
Frontend: http://localhost:9800/
OpenAPI UI: http://localhost:9800/docs
OpenAPI spec: http://localhost:9800/openapi.json
To start aggregator services in Docker, run the following commands:
Note
This depends on the frontend and backend builds being up-to-date.
cd Open-Pectus/openpectus
docker compose up --build
pectus-aggregator Command Reference
usage: pectus-aggregator [-h] [-host HOST] [-p PORT] [-fdd FRONTEND_DIST_DIR]
[-sev {DEBUG,INFO,WARNING,ERROR}] [-db DATABASE]
[-secret SECRET]
Named Arguments
- -host, --host
Host address to bind frontend and WebSocket to. Default: 127.0.0.1
Default:
'127.0.0.1'
- -p, --port
Host port to bind frontend and WebSocket to. Default: 9800
Default:
9800
- -fdd, --frontend_dist_dir
Frontend distribution directory. Default: the frontend-dist directory of the installed openpectus/aggregator package.
Default:
'<site-packages>/openpectus/aggregator/frontend-dist'
- -sev, --sentry_event_level
Possible choices: DEBUG, INFO, WARNING, ERROR
Minimum log level to send as sentry events. Default: ‘WARNING’
Default:
'WARNING'
- -db, --database
Path to Sqlite3 database. Default: ./open_pectus_aggregator.sqlite3
Default:
'./open_pectus_aggregator.sqlite3'
- -secret, --secret
Engines must know this secret to connect to the aggregator
Default:
''
Engine
Run Engine to connect a local engine to the Aggregator above:
cd Open-Pectus
pectus-engine --aggregator_host localhost --aggregator_port 9800
When the Docker container is running, the aggregator services are available, including:
Frontend: http://localhost:8300/
OpenAPI UI: http://localhost:8300/docs
OpenAPI spec: http://localhost:8300/openapi.json
pectus-engine Command Reference
usage: pectus-engine [-h] [-ahn AGGREGATOR_HOSTNAME] [-ap AGGREGATOR_PORT]
[-s | --secure | --no-secure] [-uod UOD]
[-validate | --validate | --no-validate | -rd | --show_register_details | --no-show_register_details]
[-sev {DEBUG,INFO,WARNING,ERROR}] [-secret SECRET]
Named Arguments
- -ahn, --aggregator_hostname
Aggregator websocket host name. Default is 127.0.0.1
Default:
'127.0.0.1'
- -ap, --aggregator_port
Aggregator websocket port number. Default is 9800, or 443 if using --secure
- -s, --secure, --no-secure
Access aggregator using https/wss rather than http/ws
- -uod, --uod
Filename of the UOD
Default:
'openpectus/engine/configuration/demo_uod.py'
- -validate, --validate, --no-validate
Run Uod validation and exit. Cannot be used with -rd
- -rd, --show_register_details, --no-show_register_details
Show register details for UOD authoring and exit. Cannot be used with -validate
- -sev, --sentry_event_level
Possible choices: DEBUG, INFO, WARNING, ERROR
Minimum log level to send as sentry events. Default is ‘WARNING’
Default:
'WARNING'
- -secret, --secret
Secret used to get access to aggregator
Default:
''
Build Validation
Linting and type checking is configured for Open Pectus.
Linting
Open Pectus python code is linted using flake8, which is configured in openpectus/.flake8:
cd Open-Pectus/openpectus
flake8
Type Checking
Python code is type checked using pyright, which is configured in pyproject.toml:
cd Open-Pectus/openpectus
pyright
# If pyright complains about being out of date:
# pip install -U pyright
Code generation from API Specification
The frontend generates and uses typescript skeleton interfaces from the aggregator API specification.
To ensure that the implemented backend, the API specification file and the typescript interfaces all match, the flow for modification is as follows:
A change is made in the Aggregator API implementation.
The script generate_openapi_spec_and_typescript_interfaces.sh must be manually invoked. This updates the API spec file and generates updated typescript interfaces from it.
The frontend build must be run to check the updated interfaces. If the frontend build fails, the CI build will also fail. This indicates an integration error caused by an incompatible API change, which should be fixed before the branch is merged, either by updating the frontend to support the API change or by reworking the API change to be compatible with the frontend.
Steps 1-3 must be repeated until both frontend and backend build successfully.
All changes must be committed to Git.
To ensure that step 2 is not forgotten, the aggregator test suite contains a test that generates a new API specification file and checks that it matches the specification file last generated by the script. If it doesn’t, the test fails and with it the backend build.
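The idea behind that guard test can be sketched as follows (a toy stand-in; the real test generates the full aggregator OpenAPI spec and compares it to the committed file):

```python
import json

def generate_spec():
    # Stand-in for the aggregator's real OpenAPI spec generation.
    return {"openapi": "3.1.0", "info": {"title": "demo", "version": "1"}}

def spec_is_up_to_date(committed_text):
    """True when the committed spec file still matches a freshly generated spec."""
    return json.loads(committed_text) == generate_spec()

# Simulates the content of the spec file committed by the generation script.
committed = json.dumps(generate_spec())
```

If the API implementation changes without re-running the generation script, the comparison fails and the backend build fails with it.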
Version Numbering
Open Pectus adopts the major-minor-patch version number format.
A new package is published to Pypi on each push to the main branch, with the least significant version digit set to the Github Actions run number.
In the source code, the least significant digit is dev, to distinguish development code from releases.
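The resulting version strings can be illustrated with a small sketch (the helper and the run number are illustrative, not the actual Open Pectus code):

```python
def build_version(major, minor, patch):
    """Compose a major-minor-patch version string.

    In the source tree the least significant digit is the literal 'dev';
    on publishing, CI substitutes the Github Actions run number.
    """
    return f"{major}.{minor}.{patch}"

dev_version = build_version(0, 1, "dev")    # in-repo version
release_version = build_version(0, 1, 512)  # published; 512 is an illustrative run number
```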
If relevant, the major and minor digits must be updated manually in the following file:
openpectus/__init__.py
Afterwards, run the following script to update the version number in the OpenAPI specification:
python openpectus/aggregator/generate_openapi_spec_and_typescript_interfaces.py