Building a GitHub CLI Extension for Opinionated Repository Creation
Introduction
Every engineering team eventually arrives at the same problem: spinning up a new repository is tedious, inconsistent, and error-prone. Developers copy-paste from old repos, forget to enable branch protection, ship projects without a .devcontainer, or miss the standard GitHub Actions workflow that enforces CI. A few months later the codebase is a patchwork of different conventions, and onboarding a new engineer means a long tour of “we do it this way here, but not there.”
GitHub CLI extensions let you ship an opinionated gh new-repo command that belongs to your team—one command that creates the repository, applies the right settings, and seeds it with the exact set of starting files the project type needs. Because the starter file sets live inside the extension’s own repository, updating them is a normal pull request; once the team upgrades the extension, every new repo picks up the changes automatically.
This post walks through the architecture of such an extension and shows two concrete starters: one for a Python API deployed to Kubernetes on Amazon EKS, and one for a Python API deployed as an AWS Lambda function.
How GitHub CLI Extensions Work
A GitHub CLI extension is an executable that lives in a repository whose name starts with gh-. Once installed with gh extension install <owner>/<repo>, the extension is available as a top-level gh sub-command matching the suffix of the repo name. An extension named gh-new-repo is invoked as gh new-repo.
Extensions can be written in any language. Shell scripts are the simplest starting point; Go is a popular choice for cross-platform binaries. The examples in this post are written in Go and distributed as precompiled binaries, but the architecture is the same regardless of language.
Extension Architecture
The extension is structured around three responsibilities that are deliberately kept separate:
gh-new-repo/
├── cmd/
│   └── root.go              # CLI flag parsing and orchestration
├── internal/
│   ├── repo/
│   │   └── create.go        # Step 1 – create the GitHub repo via API
│   ├── settings/
│   │   └── configure.go     # Step 2 – configure repo settings
│   └── starters/
│       └── apply.go         # Step 3 – seed starter files
├── starters/
│   ├── starter-api-python-eks/
│   │   ├── manifest.yaml
│   │   └── files/
│   │       ├── .devcontainer/
│   │       ├── .github/
│   │       ├── Dockerfile
│   │       └── helm/
│   └── starter-api-python-lambda/
│       ├── manifest.yaml
│       └── files/
│           ├── .devcontainer/
│           ├── .github/
│           ├── Dockerfile
│           └── sam/
└── go.mod
Step 1 – Create the Repository
The repo/create.go module wraps the GitHub REST API (POST /user/repos for personal repositories, or the organisation endpoint when --owner is set) to create the repository with the requested visibility, description, and initial configuration.
// internal/repo/create.go
package repo

import (
	"bytes"
	"encoding/json"

	"github.com/cli/go-gh/v2/pkg/api"
)

type CreateOptions struct {
	Owner       string
	Name        string
	Description string
	Private     bool
	AutoInit    bool
}

// Create creates the repository and returns its HTML URL.
func Create(opts CreateOptions) (string, error) {
	client, err := api.DefaultRESTClient()
	if err != nil {
		return "", err
	}
	// go-gh's REST client takes the request body as an io.Reader,
	// so marshal the payload to JSON first.
	payload, err := json.Marshal(map[string]interface{}{
		"name":        opts.Name,
		"description": opts.Description,
		"private":     opts.Private,
		"auto_init":   opts.AutoInit,
	})
	if err != nil {
		return "", err
	}
	var response struct {
		HTMLURL string `json:"html_url"`
	}
	// user/repos for the authenticated user, orgs/{org}/repos otherwise
	path := "user/repos"
	if opts.Owner != "" {
		path = "orgs/" + opts.Owner + "/repos"
	}
	if err := client.Post(path, bytes.NewReader(payload), &response); err != nil {
		return "", err
	}
	return response.HTMLURL, nil
}
Step 2 – Configure Repository Settings
Rather than exposing every GitHub repository option as a CLI flag, the settings step applies a sensible default configuration and lets teams override individual values through flags.
// internal/settings/configure.go
package settings

import (
	"bytes"
	"encoding/json"

	"github.com/cli/go-gh/v2/pkg/api"
)

type RepoSettings struct {
	Owner               string
	Repo                string
	DeleteBranchOnMerge bool
	AllowSquashMerge    bool
	AllowMergeCommit    bool
	AllowRebaseMerge    bool
}

// Configure applies merge and branch-cleanup settings to an existing repo.
func Configure(s RepoSettings) error {
	client, err := api.DefaultRESTClient()
	if err != nil {
		return err
	}
	payload, err := json.Marshal(map[string]interface{}{
		"delete_branch_on_merge": s.DeleteBranchOnMerge,
		"allow_squash_merge":     s.AllowSquashMerge,
		"allow_merge_commit":     s.AllowMergeCommit,
		"allow_rebase_merge":     s.AllowRebaseMerge,
	})
	if err != nil {
		return err
	}
	return client.Patch("repos/"+s.Owner+"/"+s.Repo, bytes.NewReader(payload), nil)
}

// ApplyBranchProtection requires the "ci" status check and one approving
// review on the given branch.
func ApplyBranchProtection(owner, repo, branch string) error {
	client, err := api.DefaultRESTClient()
	if err != nil {
		return err
	}
	payload, err := json.Marshal(map[string]interface{}{
		"required_status_checks": map[string]interface{}{
			"strict":   true,
			"contexts": []string{"ci"},
		},
		"enforce_admins": true,
		"required_pull_request_reviews": map[string]interface{}{
			"required_approving_review_count": 1,
		},
		"restrictions": nil,
	})
	if err != nil {
		return err
	}
	path := "repos/" + owner + "/" + repo + "/branches/" + branch + "/protection"
	return client.Put(path, bytes.NewReader(payload), nil)
}
Step 3 – Apply Starter Files
The starters/apply.go module reads the requested starter directory from the filesystem embedded in the binary, clones the newly created (still nearly empty) repository into a temporary directory, copies the starter files in, and pushes them as an initial commit.
// internal/starters/apply.go
package starters

import (
	"fmt"
	"io/fs"
	"os"
	"os/exec"
	"path"
)

// StarterFS holds the embedded starter files. A //go:embed pattern may not
// contain "..", so the directive lives in a file at the module root:
//
//	//go:embed all:starters
//	var starterFS embed.FS
//
// (the all: prefix is required so dotfiles such as .github/ are included),
// and that file assigns starters.StarterFS = starterFS in an init function.
var StarterFS fs.FS

func Apply(starter, repoURL, tmpDir string) error {
	// Embedded paths always use forward slashes: path.Join, not filepath.Join.
	starterPath := path.Join("starters", starter, "files")
	sub, err := fs.Sub(StarterFS, starterPath)
	if err != nil {
		return err
	}
	if _, err := fs.ReadDir(sub, "."); err != nil {
		return fmt.Errorf("unknown starter %q: %w", starter, err)
	}
	// Clone the freshly created repo
	if err := run("git", "clone", repoURL, tmpDir); err != nil {
		return err
	}
	// Copy starter files into the clone. copyFS walks the embedded subtree
	// and writes each file, overwriting the README.md that auto_init created.
	if err := copyFS(sub, tmpDir); err != nil {
		return err
	}
	// Commit and push
	cmds := [][]string{
		{"git", "-C", tmpDir, "add", "."},
		{"git", "-C", tmpDir, "commit", "-m", "chore: apply " + starter + " starter"},
		{"git", "-C", tmpDir, "push"},
	}
	for _, c := range cmds {
		if err := run(c[0], c[1:]...); err != nil {
			return err
		}
	}
	return nil
}

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}
The key design decision here is using Go’s embed directive. The starter files are compiled directly into the extension binary, so users do not need network access to the extension repository at runtime. When a starter changes, a new release is cut and users run gh extension upgrade gh-new-repo to pull in the updated binary.
The Root Command
// cmd/root.go
package cmd

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"

	"github.com/yourorg/gh-new-repo/internal/repo"
	"github.com/yourorg/gh-new-repo/internal/settings"
	"github.com/yourorg/gh-new-repo/internal/starters"
)

var rootCmd = &cobra.Command{
	Use:   "new-repo <name>",
	Short: "Create an opinionated GitHub repository",
	Args:  cobra.ExactArgs(1),
	RunE:  run,
}

var (
	owner   string
	desc    string
	private bool
	starter string
)

func init() {
	rootCmd.Flags().StringVarP(&owner, "owner", "o", "", "Organization or user (defaults to authenticated user)")
	rootCmd.Flags().StringVarP(&desc, "description", "d", "", "Repository description")
	rootCmd.Flags().BoolVar(&private, "private", true, "Create a private repository")
	rootCmd.Flags().StringVarP(&starter, "starter", "s", "", "Starter file set to apply (e.g. starter-api-python-eks)")
}

func Execute() {
	if err := rootCmd.Execute(); err != nil {
		os.Exit(1)
	}
}
func run(cmd *cobra.Command, args []string) error {
	name := args[0]

	// 1. Create repository
	fmt.Printf("Creating repository %s…\n", name)
	repoURL, err := repo.Create(repo.CreateOptions{
		Owner:       owner,
		Name:        name,
		Description: desc,
		Private:     private,
		AutoInit:    true,
	})
	if err != nil {
		return fmt.Errorf("creating repository: %w", err)
	}
	fmt.Printf("Repository created: %s\n", repoURL)

	// 2. Configure settings
	fmt.Println("Configuring repository settings…")
	if err := settings.Configure(settings.RepoSettings{
		Owner:               owner,
		Repo:                name,
		DeleteBranchOnMerge: true,
		AllowSquashMerge:    true,
		AllowMergeCommit:    false,
		AllowRebaseMerge:    false,
	}); err != nil {
		return fmt.Errorf("configuring settings: %w", err)
	}

	// 3. Apply starter files
	if starter != "" {
		fmt.Printf("Applying starter %q…\n", starter)
		tmp, err := os.MkdirTemp("", "gh-new-repo-*")
		if err != nil {
			return fmt.Errorf("creating temp directory: %w", err)
		}
		defer os.RemoveAll(tmp)
		if err := starters.Apply(starter, repoURL, tmp); err != nil {
			return fmt.Errorf("applying starter: %w", err)
		}
	}

	// 4. Protect main last, so the starter push is not rejected by the
	// required "ci" status check.
	if err := settings.ApplyBranchProtection(owner, name, "main"); err != nil {
		return fmt.Errorf("applying branch protection: %w", err)
	}

	fmt.Println("Done! 🎉")
	return nil
}
Usage
# Create a private repo with no starter
gh new-repo my-service --owner myorg
# Create a repo and seed it with the EKS starter
gh new-repo payment-api \
--owner myorg \
--description "Payment processing API" \
--starter starter-api-python-eks
# Create a repo and seed it with the Lambda starter
gh new-repo notification-api \
--owner myorg \
--description "Notification dispatch Lambda" \
--starter starter-api-python-lambda
Starter: starter-api-python-eks
This starter targets teams shipping a Python web API as a container to Amazon EKS. It provides everything needed to develop, containerise, and deploy the service on day one.
File Tree
starters/starter-api-python-eks/files/
├── .devcontainer/
│   ├── devcontainer.json
│   └── Dockerfile
├── .github/
│   └── workflows/
│       ├── ci.yaml
│       └── deploy.yaml
├── helm/
│   ├── Chart.yaml
│   ├── values.yaml
│   └── templates/
│       ├── deployment.yaml
│       ├── service.yaml
│       ├── ingress.yaml
│       └── hpa.yaml
├── app/
│   ├── __init__.py
│   └── main.py
├── Dockerfile
├── pyproject.toml
└── README.md
Key Files
Dockerfile – multi-stage build that keeps the final image small:
# syntax=docker/dockerfile:1
FROM python:3.12-slim AS builder
WORKDIR /app
# Copy the project metadata and source so the wheel can actually be built
COPY pyproject.toml .
COPY app/ app/
RUN pip install --no-cache-dir build && python -m build --wheel
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /app/dist/*.whl .
RUN pip install --no-cache-dir *.whl
USER 1000
ENTRYPOINT ["python", "-m", "app.main"]
.github/workflows/ci.yaml – lint, test, and build on every pull request:
name: CI
on:
  pull_request:
    branches: [main]
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
          cache: pip
      - run: pip install .[dev]
      - run: ruff check .
      - run: pytest --cov=app
  build:
    runs-on: ubuntu-latest
    needs: ci
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/build-push-action@v6
        with:
          push: false
          tags: ${{ github.repository }}:${{ github.sha }}
.github/workflows/deploy.yaml – build, push to ECR, and roll out to EKS on merge to main:
name: Deploy to EKS
on:
  push:
    branches: [main]
permissions:
  id-token: write
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_DEPLOY_ROLE_ARN }}
          aws-region: ${{ secrets.AWS_REGION }}
      - name: Login to ECR
        id: ecr-login
        uses: aws-actions/amazon-ecr-login@v2
      - name: Build and push image
        uses: docker/build-push-action@v6
        with:
          push: true
          tags: ${{ steps.ecr-login.outputs.registry }}/${{ github.event.repository.name }}:${{ github.sha }}
      - name: Deploy with Helm
        run: |
          aws eks update-kubeconfig \
            --name ${{ secrets.EKS_CLUSTER_NAME }} \
            --region ${{ secrets.AWS_REGION }}
          helm upgrade --install ${{ github.event.repository.name }} ./helm \
            --set image.tag=${{ github.sha }} \
            --set image.repository=${{ steps.ecr-login.outputs.registry }}/${{ github.event.repository.name }} \
            --atomic --timeout 5m
helm/Chart.yaml:
apiVersion: v2
name: api
version: 0.1.0
appVersion: "0.1.0"
description: Python API service deployed on EKS
.devcontainer/devcontainer.json – a fully configured development container so the team can code in a consistent environment regardless of host OS:
{
  "name": "Python API (EKS)",
  "build": { "dockerfile": "Dockerfile" },
  "features": {
    "ghcr.io/devcontainers/features/docker-in-docker:2": {},
    "ghcr.io/devcontainers/features/aws-cli:1": {},
    "ghcr.io/devcontainers/features/kubectl-helm-minikube:1": {},
    "ghcr.io/devcontainers/features/github-cli:1": {}
  },
  "postCreateCommand": "pip install -e .[dev]",
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-python.python",
        "ms-python.ruff",
        "ms-azuretools.vscode-docker"
      ]
    }
  }
}
Starter: starter-api-python-lambda
This starter targets teams shipping a Python function as an AWS Lambda. Rather than Helm and EKS, it uses the AWS SAM CLI and the AWS CLI for deployment, and the CI pipeline lints the code, runs the tests, and validates the SAM template.
File Tree
starters/starter-api-python-lambda/files/
├── .devcontainer/
│   ├── devcontainer.json
│   └── Dockerfile
├── .github/
│   └── workflows/
│       ├── ci.yaml
│       └── deploy.yaml
├── src/
│   ├── __init__.py
│   └── handler.py
├── tests/
│   └── test_handler.py
├── Dockerfile
├── template.yaml
├── pyproject.toml
└── README.md
Key Files
template.yaml – the SAM template that describes the Lambda function and its API Gateway trigger:
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Description: Python API Lambda function
Globals:
  Function:
    Timeout: 30
    MemorySize: 512
    Runtime: python3.12
    Architectures: [arm64]
    Environment:
      Variables:
        LOG_LEVEL: INFO
Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: handler.lambda_handler
      Events:
        ApiGateway:
          Type: HttpApi
          Properties:
            Path: /{proxy+}
            Method: ANY
Outputs:
  ApiUrl:
    Description: API Gateway endpoint URL
    Value: !Sub "https://${ServerlessHttpApi}.execute-api.${AWS::Region}.amazonaws.com/"
src/handler.py – a minimal handler ready to extend:
import json
import logging
import os

logger = logging.getLogger()
logger.setLevel(os.environ.get("LOG_LEVEL", "INFO"))


def lambda_handler(event: dict, context) -> dict:
    logger.info("Event: %s", json.dumps(event))
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "Hello from Lambda"}),
    }
Dockerfile – builds a Lambda-compatible container image from the AWS base image, which bundles the runtime interface emulator so the function can be exercised locally with docker run:
FROM public.ecr.aws/lambda/python:3.12
# Copy the project source alongside pyproject.toml so pip can build it
COPY pyproject.toml .
COPY src/ src/
RUN pip install --no-cache-dir .
# Handler modules must sit at the task root
COPY src/ ${LAMBDA_TASK_ROOT}/
CMD ["handler.lambda_handler"]
.github/workflows/ci.yaml – lint, test, and validate the SAM template:
name: CI
on:
  pull_request:
    branches: [main]
jobs:
  ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
          cache: pip
      - run: pip install .[dev]
      - run: ruff check .
      - run: pytest --cov=src
  validate-sam:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/setup-sam@v2
        with:
          use-installer: true
      - run: sam validate --lint
.github/workflows/deploy.yaml – build the function with the SAM CLI and deploy the stack on merge to main (the template above uses zip packaging, so no container registry is involved):
name: Deploy Lambda
on:
  push:
    branches: [main]
permissions:
  id-token: write
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_DEPLOY_ROLE_ARN }}
          aws-region: ${{ secrets.AWS_REGION }}
      - name: Set up SAM CLI
        uses: aws-actions/setup-sam@v2
        with:
          use-installer: true
      - name: SAM build
        run: sam build
      - name: SAM deploy
        run: |
          sam deploy \
            --stack-name ${{ github.event.repository.name }} \
            --resolve-s3 \
            --capabilities CAPABILITY_IAM \
            --no-confirm-changeset \
            --no-fail-on-empty-changeset \
            --region ${{ secrets.AWS_REGION }}
      - name: Print API URL
        run: |
          aws cloudformation describe-stacks \
            --stack-name ${{ github.event.repository.name }} \
            --query "Stacks[0].Outputs[?OutputKey=='ApiUrl'].OutputValue" \
            --output text
.devcontainer/devcontainer.json:
{
  "name": "Python API (Lambda)",
  "build": { "dockerfile": "Dockerfile" },
  "features": {
    "ghcr.io/devcontainers/features/docker-in-docker:2": {},
    "ghcr.io/devcontainers/features/aws-cli:1": {},
    "ghcr.io/devcontainers/features/github-cli:1": {}
  },
  "postCreateCommand": "pip install -e .[dev] && pip install aws-sam-cli",
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-python.python",
        "ms-python.ruff",
        "AmazonWebServices.aws-toolkit-vscode"
      ]
    }
  }
}
Adding a New Starter
Because the starter files are embedded in the binary, adding a new starter is a three-step pull request:
1. Create the directory under starters/<your-starter-name>/files/ and add all the template files.
2. Add a manifest.yaml that documents what the starter includes and any required GitHub Actions secrets:
# starters/starter-api-python-eks/manifest.yaml
name: starter-api-python-eks
description: Python API service containerised and deployed to Amazon EKS with Helm
required_secrets:
- AWS_DEPLOY_ROLE_ARN
- AWS_REGION
- EKS_CLUSTER_NAME
3. Cut a release. GitHub Actions builds the cross-platform binaries and attaches them to the release. Users upgrade with gh extension upgrade gh-new-repo.
No changes to the extension code are required for a new starter—the embed glob picks up anything placed under starters/.
Installing and Using the Extension
# Install
gh extension install myorg/gh-new-repo
# List available starters (reads the embedded manifest files)
gh new-repo --list-starters
# Create a repo with a starter
gh new-repo payment-api \
--owner myorg \
--description "Payment processing API deployed on EKS" \
--starter starter-api-python-eks
# Upgrade to pick up new starters and file changes
gh extension upgrade gh-new-repo
Design Decisions and Trade-offs
Why embed starters in the binary?
An alternative is to fetch starters at runtime from the extension repository. This means users always get the latest files without upgrading, but it requires network access and introduces a runtime dependency on the GitHub API. Embedding ensures hermetic, offline-capable operation and makes the exact set of files auditable in the release binary.
Why not use gh repo create with a template repository?
GitHub’s template repositories are a solid choice for simple cases, but they have limits: you cannot apply branch protection or merge settings automatically, you cannot choose among multiple starters with a single flag, and the template must itself be a GitHub repository that every user has read access to. The CLI extension approach keeps all of this logic in code, testable, and versionable.
Keeping starter files generic
Starter files use placeholder values (e.g. ${{ github.event.repository.name }} in workflows) so they work correctly in any repository without post-processing. For values that cannot be expressed as GitHub Actions context variables, the manifest.yaml can declare template_vars that the extension substitutes at apply time using Go’s text/template package.
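That substitution step can be sketched with text/template. This is illustrative, not the extension's actual implementation: the renderStarterFile name and the [[ ]] delimiters are assumptions, the latter chosen so literal ${{ }} GitHub Actions expressions in workflow files pass through untouched:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderStarterFile substitutes manifest-declared template_vars into a
// starter file's contents. Custom [[ ]] delimiters keep Go's templating
// from colliding with GitHub Actions' own ${{ }} syntax.
func renderStarterFile(content string, vars map[string]string) (string, error) {
	tmpl, err := template.New("starter").Delims("[[", "]]").Parse(content)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, vars); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	chart := "name: [[.ServiceName]]\nversion: 0.1.0\n"
	out, err := renderStarterFile(chart, map[string]string{"ServiceName": "payment-api"})
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```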
Conclusion
A GitHub CLI extension for repository creation solves a real team-scale problem: ensuring every new repository starts from the same baseline, no matter who creates it or how late it is on a Friday afternoon. By separating the three concerns—creation, configuration, and file seeding—into distinct modules, the extension stays easy to maintain and extend. And by embedding starter file sets directly in the binary, updates are a simple gh extension upgrade away.
The two starters shown here—starter-api-python-eks and starter-api-python-lambda—demonstrate how different deployment targets can share a common Python project structure while diverging cleanly at the infrastructure layer. Adding a third starter for, say, a Go gRPC service or a React frontend is just a pull request to the starters/ directory.