Getting started: Claude Code¶
This guide assumes you have already completed the Getting started: Codex section and can run Codex in a Podman container locally as well as on a remote system.
NIH-specific At NIH, setting up Claude Code is more complicated than Codex because of the hosting mechanism and the login mechanism. Here, we describe how to host Anthropic models on Amazon Bedrock and authenticate with AWS SSO. These models are hosted in the STRIDES environment. It is possible to use Azure Foundry or Google Vertex AI hosted models in a similar fashion on STRIDES; that is not yet documented here.
NIH-specific At NIH, you will need an AWS STRIDES account. See STRIDES enrollment.
Step 1. Initial AWS SSO setup¶
This first section needs to be done once to make sure accounts are connected and you can authenticate.
Set up AWS SSO. See NIH-specific Setting up AWS STRIDES Single Sign-On for the full walkthrough. This includes setting up your group for access, installing AWS CLI v2, and authenticating. You should be able to log in successfully with `aws sso login`.
Step 2. Export env vars¶
Export these environment variables, for example in ~/.bashrc:
```bash
# Env vars that will be passed to Claude Code
export CLAUDE_CODE_USE_BEDROCK=1                  # Tells Claude to expect Bedrock
export CLAUDE_CODE_NO_FLICKER=1                   # Improves interface
export CLAUDE_CODE_DISABLE_AUTOUPDATER=1          # Don't autoupdate
export CLAUDE_CODE_DISABLE_INSTALLATION_CHECKS=1  # Don't check installation

# Env vars to configure which Amazon Bedrock models to use.
#
# Otherwise we get the message: "Sonnet: Sonnet 4.5 not available — using
# Sonnet 4 for this session; Haiku: Haiku 4.5 not available — using Claude
# 3.5 Haiku for this session."
#
# In v2.1.119, it seems OK to not set the Opus default model; selecting
# it with the /model command sets it to the value we use below.
export ANTHROPIC_DEFAULT_OPUS_MODEL="us.anthropic.claude-opus-4-6-v1"
export ANTHROPIC_DEFAULT_SONNET_MODEL="us.anthropic.claude-sonnet-4-6"
export ANTHROPIC_DEFAULT_HAIKU_MODEL="us.anthropic.claude-haiku-4-5-20251001-v1:0"

# These should have been exported already during the previous step
export AWS_PROFILE="AWSPowerUserAccess-00001"  # use your own account here
export AWS_REGION=us-east-1
```
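Before launching, you can sanity-check that the required variables made it into your environment. The helper below is a hypothetical sketch (it is not part of this repo); the variable names come from the export block above:

```python
import os

# Variables the launcher expects (from the export block above).
REQUIRED_VARS = [
    "CLAUDE_CODE_USE_BEDROCK",
    "ANTHROPIC_DEFAULT_SONNET_MODEL",
    "ANTHROPIC_DEFAULT_HAIKU_MODEL",
    "AWS_PROFILE",
    "AWS_REGION",
]

def missing_vars(env):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

if __name__ == "__main__":
    missing = missing_vars(os.environ)
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("All required variables are set.")
```

Running this after sourcing `~/.bashrc` should report that all variables are set.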
Tip
This phase is complete when `aws sso login` opens the browser and you get the
confirmation, and running `echo $CLAUDE_CODE_USE_BEDROCK` prints `1`.
Note
Although the Claude Code on Amazon Bedrock docs describe adding
`"awsAuthRefresh": "aws sso login --profile myprofile"` to your config,
this only applies when you run Claude Code without a container. The
`refresh.py` script will take care of this for us.
Step 3. Claude Code locally (Podman container)¶
Since we’re using AWS SSO to authenticate, we do not need to install Claude Code locally. This is in contrast to Codex, which we had to install locally in order to be able to use `codex login`.
On a local Mac, ensure you have Podman Desktop installed and running and that you are in a directory you are comfortable giving Claude access to.
Run:

```bash
launch.py claude
```

This will prompt you to set up the color scheme and allow permissions in the directory. Submit a prompt like “testing” to confirm that the model responds.
What did this do?
- `launch.py` detected that you’re running on a Mac and that Podman is the right container runtime.
- If you didn’t have any previous Claude Code config, it created a `~/.claude.json` file with an empty JSON object (`{}`) and/or an empty `~/.claude` directory.
- The default Podman image was downloaded if needed, and a container was created.
- The `~/.claude.json` file and any existing `~/.claude` directory were mounted into the container.
- Host variables starting with `CLAUDE_CODE` were passed through to the container.
- Because `CLAUDE_CODE_USE_BEDROCK=1` was set, `~/.aws` and relevant host `AWS_*` settings were also passed through so Claude could use AWS credentials. If `AWS_PROFILE` is used, the launcher avoids passing host session-key variables that would override the mounted profile.
Step 4. Claude Code remote (Singularity)¶
Run the following locally (this example uses the NIH-specific host, biowulf.nih.gov):
```bash
refresh.py --remote biowulf.nih.gov
```
Log in to the remote system. If using NIH’s Biowulf, get an interactive node and load the Singularity module:

```bash
ssh biowulf.nih.gov      # log in
sinteractive             # allocate interactive node
module load singularity  # make Singularity available
```
If you don’t already have it available, download the `launch.py` script from the repo to the remote.
Run the following:
```bash
launch.py claude
```
What did this do?
- `refresh.py` ran `aws sso login` if needed, then exported the current short-lived AWS session credentials to `~/.aws/credentials.json` on the remote and configured the `llm-export` profile in `~/.aws/config` to read them via `credential_process`.
- `launch.py` detected that you’re running on Linux, so Singularity is the appropriate container runtime.
- The default Singularity image was downloaded.
- Similar to running locally in a Podman container, the appropriate configs were mounted into the running Singularity container. With `CLAUDE_CODE_USE_BEDROCK=1`, that includes `~/.aws`; if no `AWS_PROFILE` is set on the remote host, `launch.py` will automatically use the `llm-export` profile when `~/.aws/credentials.json` is present.
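For context on the `credential_process` hookup: the AWS CLI runs the configured program and expects a JSON object with `Version`, `AccessKeyId`, `SecretAccessKey`, `SessionToken`, and `Expiration` on stdout. A hypothetical minimal transform from exported session credentials into that shape might look like this (the input field names are assumptions; the actual format `refresh.py` writes may differ):

```python
import json

def to_credential_process(exported):
    """Convert exported session credentials into the JSON object that the
    AWS CLI expects a credential_process program to print on stdout."""
    return {
        "Version": 1,  # required by the credential_process contract
        "AccessKeyId": exported["AccessKeyId"],
        "SecretAccessKey": exported["SecretAccessKey"],
        "SessionToken": exported["SessionToken"],
        "Expiration": exported["Expiration"],
    }

# Demo with placeholder values (not real credentials).
demo = to_credential_process({
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "example-secret",
    "SessionToken": "example-token",
    "Expiration": "2025-01-01T00:00:00Z",
})
print(json.dumps(demo, indent=2))
```

Because the `Expiration` field is included, the AWS CLI knows when to re-invoke the program rather than caching stale credentials indefinitely.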
Step 5. Configure Claude Code¶
See Configure Claude Code for details.
Step 6. Routine usage¶
Your AWS credentials will eventually time out, and when this happens Claude Code
will have connection issues. See Credentials expired or missing for how to
diagnose this. If this happens mid-session, you can run `refresh.py` on
your local machine, with the `--remote` argument if your session is on
a remote system.
This will update the credentials files in place, and since they are mounted “live” into the container, the running Claude Code session will see the update, and will be able to connect on the next prompt submission.
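If you want to check whether the credentials have expired without waiting for a failed request, you can compare the expiration timestamp against the current time. A hypothetical helper (the ISO-8601 `Expiration` field is an assumption about the exported file's contents):

```python
from datetime import datetime, timezone

def needs_refresh(expiration_iso, now=None):
    """True if the credential expiration timestamp is in the past."""
    now = now or datetime.now(timezone.utc)
    # datetime.fromisoformat only accepts a trailing "Z" on Python 3.11+,
    # so normalize it to an explicit UTC offset first.
    expires = datetime.fromisoformat(expiration_iso.replace("Z", "+00:00"))
    return expires <= now
```

When this returns `True`, run `refresh.py` again as described above.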
Each time you start the container, you will use the latest built image from
this repo: `ghcr.io/nichd-bspc/llm:latest` for Podman or
`oras://ghcr.io/nichd-bspc/llm-sif:latest` for Singularity.