LLM containers¶
In NICHD’s Bioinformatics and Scientific Programming Core, we wanted to use multiple LLM agent tools securely on multiple systems – specifically, in isolated containers to reduce risk – but no existing tool met our particular requirements.
So, this repo now supports:
Multiple agent harnesses
Codex CLI, using OpenAI-hosted models with ChatGPT Enterprise authentication
Claude Code CLI, using Amazon Bedrock-hosted models with AWS SSO authentication
Pi coding agent, also using Amazon Bedrock-hosted models with AWS SSO authentication
Multiple container runtimes
Podman images, for running containers on a local Mac
Singularity images, for running containers on a Linux HPC system
Tools
refresh.py to refresh your credentials and optionally push them to a remote system
launch.py to launch a container running the LLM tool
build.py to build container images (only required if you want to build your own; you can use our hosted images)
Additional features
Handle enterprise SSL/TLS interception
Mount existing conda environments and prepend them to the PATH so agents can use them
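To make the last two features concrete, here is a minimal sketch of the kind of environment setup a launcher might perform inside the container. The paths (corp-ca.pem, the conda environment location) and variable names CA_BUNDLE and CONDA_ENV are illustrative assumptions, not this tool’s actual defaults; the exported variables themselves (SSL_CERT_FILE, REQUESTS_CA_BUNDLE, NODE_EXTRA_CA_CERTS, PATH) are the standard ones TLS-aware tools and shells consult.

```shell
# Hypothetical example paths -- substitute your site's CA bundle and conda env
CA_BUNDLE=/etc/ssl/certs/corp-ca.pem
CONDA_ENV="$HOME/miniconda3/envs/analysis"

# Point common TLS-aware tools (OpenSSL, Python requests, Node) at the
# corporate CA so enterprise SSL/TLS interception doesn't break HTTPS
export SSL_CERT_FILE="$CA_BUNDLE"
export REQUESTS_CA_BUNDLE="$CA_BUNDLE"
export NODE_EXTRA_CA_CERTS="$CA_BUNDLE"

# Prepend the mounted conda environment so its binaries win on PATH
# and agents can call the tools installed there
export PATH="$CONDA_ENV/bin:$PATH"
```

Prepending (rather than appending) the conda bin directory means the environment’s interpreters and tools shadow the container’s own copies.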
When everything is set up, usage looks like this:
refresh.py # refresh credentials if needed
launch.py codex # run Codex in a container
launch.py claude # or Claude Code
launch.py pi # or pi
Or, to use on a remote machine:
# on local machine, refresh and push credentials to the
# right place on the remote. For Bedrock on systems where
# direct SSO refresh inside the container is unreliable,
# export short-lived AWS credentials into ~/.aws instead:
refresh.py --remote $REMOTE_HOST --export-creds
# then log in to the remote host and run:
launch.py claude # or pi
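For reference, exported short-lived AWS credentials conventionally live in the standard ~/.aws/credentials INI format, which Claude Code and pi can then read inside the container without needing an SSO refresh. The profile name and placeholder values below are illustrative only:

```ini
; ~/.aws/credentials (placeholder values, not real keys)
[default]
aws_access_key_id     = ASIA...EXAMPLE
aws_secret_access_key = EXAMPLESECRETKEY
aws_session_token     = EXAMPLESESSIONTOKEN
```

Because these credentials are short-lived, rerun refresh.py when they expire.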
See https://nichd-bspc.github.io/llm for documentation.
See https://github.com/nichd-bspc/llm for code.
Note
While most of this documentation is generally applicable, some components apply only to NIH and are marked NIH-specific.
Contents¶
Getting started
Next steps
Details