# Managing the API

## Packages

Like the other services, the API is shipped as a Docker image with port 8080 exposed.

```bash
$ docker run -d -p 9000:8080 screwdrivercd/screwdriver:stable
$ open http://localhost:9000
```

Our images are tagged by version (e.g., `1.2.3`) as well as with floating `latest` and `stable` tags. Most installations should use `stable` or a fixed version tag.

## Configuration

Screwdriver provides sensible defaults for most configuration, but you can override them with a `config/local.yaml` file or with environment variables. All the possible environment variables are defined here.

### Authentication / Authorization

Configure who can access the API and how users authenticate.

| Key | Required | Description |
|-----|----------|-------------|
| `JWT_ENVIRONMENT` | No | Environment to generate the JWT for, e.g. `prod` or `beta`. If you don't want the JWT to contain an environment, leave this variable unset (do not set it to `''`). |
| `SECRET_JWT_PRIVATE_KEY` | Yes | A private key used for signing JWTs. Generate one by running `$ openssl genrsa -out jwt.pem 2048` |
| `SECRET_JWT_PUBLIC_KEY` | Yes | The public key used for verifying the signature. Generate one by running `$ openssl rsa -in jwt.pem -pubout -out jwt.pub` |
| `SECRET_COOKIE_PASSWORD` | Yes | A password used for encrypting session data; must be at least 32 characters |
| `SECRET_PASSWORD` | Yes | A password used for encrypting stored secrets; must be at least 32 characters |
| `IS_HTTPS` | No | A flag indicating whether the server is running over HTTPS; used as a flag for the OAuth flow (defaults to `false`) |
| `SECRET_WHITELIST` | No | Whitelist of users able to authenticate against the system. If empty, everyone is allowed. (JSON array format) |
| `SECRET_ADMINS` | No | List of admins with elevated access to the cluster. If empty, everyone is allowed. (JSON array format) |
```yaml
# config/local.yaml
auth:
    jwtPrivateKey: |
        PRIVATE KEY HERE
    jwtPublicKey: |
        PUBLIC KEY HERE
    cookiePassword: 975452d6554228b581bf34197bcb4e0a08622e24
    encryptionPassword: 5c6d9edc3a951cda763f650235cfc41a3fc23fe8
    https: false
    whitelist:
        - github:batman
        - github:robin
    admins:
        - github:batman
```
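Putting the requirements above together, one way to generate suitable values is sketched below, reusing the `openssl` commands from the table (file names and the 40-character password length are just examples; anything at or above 32 characters works):

```shell
# Generate an RSA keypair for signing and verifying JWTs
openssl genrsa -out jwt.pem 2048
openssl rsa -in jwt.pem -pubout -out jwt.pub

# Generate random passwords; 40 hex characters comfortably
# exceeds the 32-character minimum
export SECRET_COOKIE_PASSWORD="$(openssl rand -hex 20)"
export SECRET_PASSWORD="$(openssl rand -hex 20)"
```

The contents of `jwt.pem` and `jwt.pub` map to `jwtPrivateKey` and `jwtPublicKey` in `config/local.yaml`, or to the `SECRET_JWT_PRIVATE_KEY` and `SECRET_JWT_PUBLIC_KEY` environment variables.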

### Bookend Plugins

You can globally configure which built-in bookend plugins will be used during a build. By default, the `scm` plugin is enabled so that builds begin with an SCM checkout command.

If you’re looking to include a custom bookend in the API, please refer here.

| Key | Default | Description |
|-----|---------|-------------|
| `BOOKENDS_SETUP` | None | The ordered list of plugins to execute at the beginning of every build. Takes the form `'["first", "second", ...]'` |
| `BOOKENDS_TEARDOWN` | None | The ordered list of plugins to execute at the end of every build. Takes the form `'["first", "second", ...]'` |
```yaml
# config/local.yaml
bookends:
    setup:
        - scm
        - my-custom-bookend
```

### Coverage bookends

We currently support SonarQube for coverage bookends.

#### Sonar

In order to use Sonar in your cluster, set up a Sonar server (see the example in our sonar pipeline), then configure the following environment variables:

| Key | Required | Description |
|-----|----------|-------------|
| `COVERAGE_PLUGIN` | Yes | Should be `sonar` |
| `URI` | Yes | Screwdriver API URL |
| `COVERAGE_SONAR_HOST` | Yes | Sonar host URL |
| `COVERAGE_SONAR_ADMIN_TOKEN` | Yes | Sonar admin token |

You’ll also need to add the screwdriver-coverage-bookend along with the screwdriver-artifact-bookend as teardown bookends by setting the BOOKENDS_TEARDOWN variable (in JSON format). See the Bookend Plugins section above for more details.
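For example, the two teardown bookends can be registered in the JSON array format like this (a sketch; the array is ordered, and any other teardown plugins your cluster relies on should be included as well):

```shell
# Run the artifact bookend, then the coverage bookend, at the end of every build
export BOOKENDS_TEARDOWN='["screwdriver-artifact-bookend", "screwdriver-coverage-bookend"]'
```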

### Serving

Configure how the service listens for traffic.

| Key | Default | Description |
|-----|---------|-------------|
| `PORT` | 80 | Port to listen on |
| `HOST` | 0.0.0.0 | Host to listen on (set to `localhost` to only accept connections from this machine) |
| `URI` | http://localhost:80 | Externally routable URI (usually your load balancer or CNAME) |
| `HTTPD_TLS` | false | SSL support; for SSL, replace `false` with a JSON object that provides the options required by `tls.createServer` |
```yaml
# config/local.yaml
httpd:
    port: 443
    host: 0.0.0.0
    uri: https://localhost
    tls:
        key: |
            PRIVATE KEY HERE
        cert: |
            YOUR CERT HERE
```
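For trying out the TLS configuration above, one way to produce a key and certificate is a self-signed pair (a sketch for testing only, not production; the file names and subject are examples):

```shell
# Generate a self-signed certificate and private key, valid for one year
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout tls.key -out tls.crt -days 365 -subj "/CN=localhost"
```

The contents of `tls.key` and `tls.crt` then go into the `key` and `cert` fields of the `tls` object.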

### Ecosystem

Specify externally routable URLs for your UI, Artifact Store, and Badge service.

| Key | Default | Description |
|-----|---------|-------------|
| `ECOSYSTEM_UI` | https://cd.screwdriver.cd | URL for the User Interface |
| `ECOSYSTEM_STORE` | https://store.screwdriver.cd | URL for the Artifact Store |
| `ECOSYSTEM_BADGES` | https://img.shields.io/badge/build--.svg | URL with templates for status text and color |
```yaml
# config/local.yaml
ecosystem:
    # Externally routable URL for the User Interface
    ui: https://cd.screwdriver.cd
    # Externally routable URL for the Artifact Store
    store: https://store.screwdriver.cd
    # Badge service (needs to add a status and color)
    badges: https://img.shields.io/badge/build--.svg
```

### Datastore Plugin

To use Postgres, MySQL, or SQLite, use the `sequelize` plugin.

#### Sequelize

Set these environment variables:

| Environment name | Required | Default Value | Description |
|------------------|----------|---------------|-------------|
| `DATASTORE_PLUGIN` | Yes | | Set to `sequelize` |
| `DATASTORE_SEQUELIZE_DIALECT` | No | mysql | Can be `sqlite`, `postgres`, `mysql`, or `mssql` |
| `DATASTORE_SEQUELIZE_DATABASE` | No | screwdriver | Database name |
| `DATASTORE_SEQUELIZE_USERNAME` | No for sqlite | | Login username |
| `DATASTORE_SEQUELIZE_PASSWORD` | No for sqlite | | Login password |
| `DATASTORE_SEQUELIZE_STORAGE` | Yes for sqlite | | Storage location for sqlite |
| `DATASTORE_SEQUELIZE_HOST` | No | | Network host |
| `DATASTORE_SEQUELIZE_PORT` | No | | Network port |
```yaml
# config/local.yaml
datastore:
    plugin: sequelize
    sequelize:
        dialect: TYPE-OF-SERVER
        storage: STORAGE-LOCATION
        database: DATABASE-NAME
        username: DATABASE-USERNAME
        password: DATABASE-PASSWORD
        host: NETWORK-HOST
        port: NETWORK-PORT
```
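The same settings can be supplied purely through environment variables. As a sketch, a hypothetical Postgres setup might look like this (the host, credentials, and database name are placeholders, not defaults):

```shell
export DATASTORE_PLUGIN=sequelize
export DATASTORE_SEQUELIZE_DIALECT=postgres
export DATASTORE_SEQUELIZE_DATABASE=screwdriver
export DATASTORE_SEQUELIZE_USERNAME=sd_admin       # placeholder
export DATASTORE_SEQUELIZE_PASSWORD=CHANGE-ME      # placeholder
export DATASTORE_SEQUELIZE_HOST=db.example.com     # placeholder
export DATASTORE_SEQUELIZE_PORT=5432
```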

### Executor Plugin

We currently support Kubernetes, Docker, VMs in Kubernetes, Nomad, Jenkins, and queue executors. See the custom-environment-variables file for more details.

#### Kubernetes (k8s)

If you use this executor, builds will run in pods in Kubernetes.

| Environment name | Default Value | Description |
|------------------|---------------|-------------|
| `EXECUTOR_PLUGIN` | k8s | Default executor (e.g. `k8s`, `docker`, `k8s-vm`, `nomad`, `jenkins`, or `queue`) |
| `LAUNCH_VERSION` | stable | Launcher version to use |
| `EXECUTOR_PREFIX` | | Prefix to append to pod names |
| `EXECUTOR_K8S_ENABLED` | true | Flag to enable the Kubernetes executor |
| `K8S_HOST` | kubernetes.default | Kubernetes host |
| `K8S_TOKEN` | Loaded from `/var/run/secrets/kubernetes.io/serviceaccount/token` by default | JWT for authenticating Kubernetes requests |
| `K8S_JOBS_NAMESPACE` | default | Jobs namespace for Kubernetes jobs URL |
| `K8S_CPU_MICRO` | 0.5 | Number of CPU cores for `micro` |
| `K8S_CPU_LOW` | 2 | Number of CPU cores for `low` |
| `K8S_CPU_HIGH` | 6 | Number of CPU cores for `high` |
| `K8S_MEMORY_MICRO` | 1 | Memory in GB for `micro` |
| `K8S_MEMORY_LOW` | 2 | Memory in GB for `low` |
| `K8S_MEMORY_HIGH` | 12 | Memory in GB for `high` |
| `K8S_BUILD_TIMEOUT` | 90 | Default build timeout for all builds in this cluster (in minutes) |
| `K8S_MAX_BUILD_TIMEOUT` | 120 | Maximum user-configurable build timeout for all builds in this cluster (in minutes) |
| `K8S_NODE_SELECTORS` | {} | K8s node selectors for pod scheduling (format `{ label: 'value' }`); see https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#step-one-attach-label-to-the-node |
| `K8S_PREFERRED_NODE_SELECTORS` | {} | K8s node selectors for pod scheduling (format `{ label: 'value' }`); see https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature |
```yaml
# config/local.yaml
executor:
    plugin: k8s
    k8s:
        options:
            kubernetes:
                host: YOUR-KUBERNETES-HOST
                token: JWT-FOR-AUTHENTICATING-KUBERNETES-REQUEST
                jobsNamespace: default
            launchVersion: stable
```
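Equivalently, the Kubernetes executor can be selected entirely through environment variables (a sketch; the token value is a placeholder, and on a pod with a mounted service account it is normally loaded from the default token path instead):

```shell
export EXECUTOR_PLUGIN=k8s
export LAUNCH_VERSION=stable
export K8S_HOST=kubernetes.default
export K8S_TOKEN=YOUR-JWT-HERE   # placeholder
export K8S_JOBS_NAMESPACE=default
```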

#### VMs in Kubernetes (k8s-vm)

If you use the k8s-vm executor, builds will run in VMs in pods in Kubernetes.

| Environment name | Default Value | Description |
|------------------|---------------|-------------|
| `EXECUTOR_PLUGIN` | k8s | Default executor (set to `k8s-vm`) |
| `LAUNCH_VERSION` | stable | Launcher version to use |
| `EXECUTOR_PREFIX` | | Prefix to append to pod names |
| `EXECUTOR_K8SVM_ENABLED` | true | Flag to enable the Kubernetes VM executor |
| `K8S_HOST` | kubernetes.default | Kubernetes host |
| `K8S_TOKEN` | Loaded from `/var/run/secrets/kubernetes.io/serviceaccount/token` by default | JWT for authenticating Kubernetes requests |
| `K8S_JOBS_NAMESPACE` | default | Jobs namespace for Kubernetes jobs URL |
| `K8S_BASE_IMAGE` | | Kubernetes VM base image |
| `K8S_CPU_MICRO` | 1 | Number of CPU cores for `micro` |
| `K8S_CPU_LOW` | 2 | Number of CPU cores for `low` |
| `K8S_CPU_HIGH` | 6 | Number of CPU cores for `high` |
| `K8S_MEMORY_MICRO` | 1 | Memory in GB for `micro` |
| `K8S_MEMORY_LOW` | 2 | Memory in GB for `low` |
| `K8S_MEMORY_HIGH` | 12 | Memory in GB for `high` |
| `K8S_VM_BUILD_TIMEOUT` | 90 | Default build timeout for all builds in this cluster (in minutes) |
| `K8S_VM_MAX_BUILD_TIMEOUT` | 120 | Maximum user-configurable build timeout for all builds in this cluster (in minutes) |
| `K8S_VM_NODE_SELECTORS` | {} | K8s node selectors for pod scheduling (format `{ label: 'value' }`); see https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#step-one-attach-label-to-the-node |
| `K8S_VM_PREFERRED_NODE_SELECTORS` | {} | K8s node selectors for pod scheduling (format `{ label: 'value' }`); see https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature |
```yaml
# config/local.yaml
executor:
    plugin: k8s-vm
    k8s-vm:
        options:
            kubernetes:
                host: YOUR-KUBERNETES-HOST
                token: JWT-FOR-AUTHENTICATING-KUBERNETES-REQUEST
            launchVersion: stable
```

#### Jenkins (jenkins)

If you use the jenkins executor, builds will run using Jenkins.

| Environment name | Default Value | Description |
|------------------|---------------|-------------|
| `EXECUTOR_PLUGIN` | k8s | Default executor; set to `jenkins` |
| `LAUNCH_VERSION` | stable | Launcher version to use |
| `EXECUTOR_JENKINS_ENABLED` | true | Flag to enable the Jenkins executor |
| `EXECUTOR_JENKINS_HOST` | | Jenkins host |
| `EXECUTOR_JENKINS_PORT` | 8080 | Jenkins port |
| `EXECUTOR_JENKINS_USERNAME` | screwdriver | Jenkins username |
| `EXECUTOR_JENKINS_PASSWORD` | | Jenkins password/token used for authenticating Jenkins requests |
| `EXECUTOR_JENKINS_NODE_LABEL` | screwdriver | Node labels of Jenkins slaves |
| `EXECUTOR_JENKINS_DOCKER_COMPOSE_COMMAND` | docker-compose | Path to the docker-compose command |
| `EXECUTOR_JENKINS_DOCKER_PREFIX` | '' | Prefix for the container |
| `EXECUTOR_JENKINS_LAUNCH_VERSION` | stable | Launcher container tag to use |
| `EXECUTOR_JENKINS_DOCKER_MEMORY` | 4g | Memory limit (`docker run --memory` option) |
| `EXECUTOR_JENKINS_DOCKER_MEMORY_LIMIT` | 6g | Memory limit including swap (`docker run --memory-swap` option) |
| `EXECUTOR_JENKINS_BUILD_SCRIPT` | '' | The command to start a build with |
| `EXECUTOR_JENKINS_CLEANUP_SCRIPT` | '' | The command to clean up the build system with |
| `EXECUTOR_JENKINS_CLEANUP_TIME_LIMIT` | 20 | Time to destroy the job (in seconds) |
| `EXECUTOR_JENKINS_CLEANUP_WATCH_INTERVAL` | 2 | Interval for detecting the stopped job (in seconds) |
```yaml
# config/local.yaml
executor:
    plugin: jenkins
    jenkins:
        options:
            jenkins:
                host: jenkins.default
                port: 8080
                username: screwdriver
                password: YOUR-PASSWORD
            launchVersion: stable
```

#### Docker (docker)

Use the docker executor to run builds in Docker. sd-in-a-box also runs using Docker.

| Environment name | Default Value | Description |
|------------------|---------------|-------------|
| `EXECUTOR_PLUGIN` | k8s | Default executor; set to `docker` |
| `LAUNCH_VERSION` | stable | Launcher version to use |
| `EXECUTOR_DOCKER_ENABLED` | true | Flag to enable the Docker executor |
| `EXECUTOR_DOCKER_DOCKER` | {} | Dockerode configuration (JSON object) |
| `EXECUTOR_PREFIX` | | Prefix to append to pod names |
```yaml
# config/local.yaml
executor:
    plugin: docker
    docker:
        options:
            docker:
                socketPath: /var/lib/docker.sock
            launchVersion: stable
```

#### Queue (queue)

Using the queue executor allows builds to be queued in a Redis instance using Resque.

| Environment name | Default Value | Description |
|------------------|---------------|-------------|
| `EXECUTOR_PLUGIN` | k8s | Default executor; set to `queue` |
| `QUEUE_REDIS_HOST` | 127.0.0.1 | Redis host |
| `QUEUE_REDIS_PORT` | 9999 | Redis port |
| `QUEUE_REDIS_PASSWORD` | "THIS-IS-A-PASSWORD" | Redis password |
| `QUEUE_REDIS_TLS_ENABLED` | false | TLS enabled flag |
| `QUEUE_REDIS_DATABASE` | 0 | Redis database |
```yaml
# config/local.yaml
executor:
    plugin: queue
    queue:
        options:
            redisConnection:
                host: "127.0.0.1"
                port: 9999
                options:
                    password: "THIS-IS-A-PASSWORD"
                    tls: false
                database: 0
```
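The same Redis connection can be configured via environment variables; the values below mirror the local.yaml example above (the password is a placeholder):

```shell
export EXECUTOR_PLUGIN=queue
export QUEUE_REDIS_HOST=127.0.0.1
export QUEUE_REDIS_PORT=9999
export QUEUE_REDIS_PASSWORD=THIS-IS-A-PASSWORD   # placeholder
export QUEUE_REDIS_TLS_ENABLED=false
export QUEUE_REDIS_DATABASE=0
```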

#### Nomad (nomad)

Set these environment variables:

| Environment name | Default Value | Description |
|------------------|---------------|-------------|
| `EXECUTOR_PLUGIN` | nomad | Nomad executor |
| `LAUNCH_VERSION` | latest | Launcher version to use |
| `EXECUTOR_NOMAD_ENABLED` | true | Flag to enable the Nomad executor |
| `NOMAD_HOST` | nomad.default | Nomad host (e.g. http://192.168.30.30:4646) |
| `NOMAD_CPU` | 600 | Nomad CPU resource in MHz |
| `NOMAD_MEMORY` | 4096 | Nomad memory resource in MB |
| `EXECUTOR_PREFIX` | sd-build- | Nomad job name prefix |
```yaml
# config/local.yaml
executor:
    plugin: nomad
    nomad:
        options:
            nomad:
                host: http://192.168.30.30:4646
            resources:
                cpu:
                    high: 600
                memory:
                    high: 4096
            launchVersion: latest
            prefix: 'sd-build-'
```

### Notifications Plugin

We currently support Email notifications and Slack notifications.

#### Email Notifications

Configure the SMTP server and sender address that email notifications will be sent from.

```yaml
# config/local.yaml
notifications:
    email:
        host: smtp.yourhost.com
        port: 25
        from: example@email.com
```

Configurable authentication settings have not yet been built, but can easily be added. We’re using the nodemailer package to power emails, so authentication features will be similar to any typical nodemailer setup. Contribute at: screwdriver-cd/notifications-email

#### Slack Notifications

Create a screwdriver-bot Slack bot user in your Slack instance. Generate a Slack token for the bot and set the token field with it in your Slack notifications settings.

```yaml
# config/local.yaml
notifications:
    slack:
        token: 'YOUR-SLACK-USER-TOKEN-HERE'
```

#### Custom Notifications

You can create custom notification packages by extending `notifications-base`. The format of the package name must be `screwdriver-notifications-<your-notification>`.

The following is an example snippet of local.yaml configuration when you use email notification and your custom notification:

```yaml
# config/local.yaml
notifications:
    email:
        host: smtp.yourhost.com
        port: 25
        from: example@email.com
    your-notification:
        foo: bar
        abc: 123
```

If you want to use a scoped package, configure it as below:

```yaml
# config/local.yaml
notifications:
    your-notification:
        config:
            foo: bar
            abc: 123
        scopedPackage: '@scope/screwdriver-notifications-your-notification'
```

### Source Control Plugin

We currently support GitHub (including GitHub Enterprise), Bitbucket.org, and GitLab.

Note: GitLab support is a user-created plugin.

#### Step 1: Set up your OAuth Application

You will need to set up an OAuth Application and retrieve your OAuth Client ID and Secret.

GitHub:

1. Navigate to the GitHub OAuth applications page.
2. Click on the application you created to get your OAuth Client ID and Secret.
3. Fill out the Homepage URL and Authorization callback URL to be the IP address of where your API is running.

Bitbucket.org:

1. Navigate to the Bitbucket OAuth applications page: https://bitbucket.org/account/user/{your-username}/api
2. Click on Add Consumer.
3. Fill out the URL and Callback URL to be the IP address of where your API is running.

#### Step 2: Configure your SCM plugin

Set these environment variables:

| Environment name | Required | Default Value | Description |
|------------------|----------|---------------|-------------|
| `SCM_SETTINGS` | Yes | {} | JSON object with SCM settings |
GitHub:

```yaml
# config/local.yaml
scms:
    github:
        plugin: github
        config:
            oauthClientId: YOU-PROBABLY-WANT-SOMETHING-HERE # The client id used for OAuth with GitHub (https://developer.github.com/v3/oauth/)
            oauthClientSecret: AGAIN-SOMETHING-HERE-IS-USEFUL # The client secret used for OAuth with GitHub
            secret: SUPER-SECRET-SIGNING-THING # Secret added to GitHub webhooks so that we can validate them
            gheHost: github.screwdriver.cd # [Optional] GitHub Enterprise host
            username: sd-buildbot # [Optional] Username for code checkout
            email: dev-null@screwdriver.cd # [Optional] Email for code checkout
            privateRepo: false # [Optional] Set to true to support private repos; needs read and write access to public and private repos (https://developer.github.com/v3/oauth/#scopes)
```

If users want to use a private repo, they also need to set up `SCM_USERNAME` and `SCM_ACCESS_TOKEN` as secrets in their `screwdriver.yaml`.

Bitbucket.org:

```yaml
# config/local.yaml
scms:
    bitbucket:
        plugin: bitbucket
        config:
            oauthClientId: YOUR-APP-KEY
            oauthClientSecret: YOUR-APP-SECRET
```

## Extending the Docker container

There are some scenarios where you would prefer to extend the Screwdriver.cd Docker image, such as using custom Bookend plugins. This section is not meant to be exhaustive or complete, but will provide insight into some of the fundamental cases.

### Using a custom bookend

Using a custom bookend is a common case where you would extend the Screwdriver.cd Docker image.

In this example, we want our bookend to execute before `scm` (which checks out the code from the configured SCM). Although the bookend plugins can be configured with environment variables, we will show how to accomplish the same task with a `local.yaml` file.

This is shown in the following local.yaml snippet:

```yaml
# local.yaml
---
  ...
bookends:
  setup:
    - my-custom-bookend
    - scm
```

For building our extended Docker image, we will need to create a Dockerfile that will have our extra dependencies installed. If you would prefer to save the local.yaml configuration file in the Docker image instead of mounting it in later, you may do so in the Dockerfile as well.

```dockerfile
# Dockerfile
FROM screwdrivercd/screwdriver:stable

# Install additional NPM bookend plugin
RUN cd /usr/src/app && /usr/local/bin/npm install my-custom-bookend

# Optionally save the configuration file in the image
ADD local.yaml /config/local.yaml
```

Once you build the Docker image, you will need to deploy it to your Screwdriver.cd cluster. For instance, if you’re using Kubernetes, you would replace the screwdrivercd/api:stable image with your custom Docker image.

The following is an example snippet of an updated Kubernetes deployment configuration:

```yaml
# partial Kubernetes configuration
  ...
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: screwdriver-api
        # The image name is the one you specified when built
        # The tag name is the tag you specified when built
        image: my_extended_docker_image_name:tag_name
```