V2? V3? 👷

We're in a period of transition from HOPS V2 to HOPS V3. The documentation might be outdated or self-contradictory.

Read more about HOPS V3

What is HOPS?

Headless Operations (HOPS), previously known as Iterapp, is a platform that runs your applications reliably and easily on the internet. Headless Operations connects to the tools you already use, such as GitHub and Slack, and reacts as new versions are checked in, continuously packaging and deploying your code.

We're passionate about ease of use. Headless Operations was built to support venture builders at Iterate. While they've been creating their products, we've been handling the complexity of database management and backups, load balancing, certificate management, logging, server upgrades, networking, deployment pipelines, secret management, and more.

Configuring your application should be a breeze. Here's a web application backed by Postgres, deployed to awesome-product.dev:

# iterapp.toml
port=80
domains=["awesome-product.dev"]

[postgres]

That's it! 🚀

Get started

To create your next app, head over to our getting started guide. 🧙‍♂️

To get support, check our troubleshooting guide, or talk to a human in the #hops-support channel.

To report bugs with our code, create an issue. 🕵️ Feel free to improve our documentation!

Good luck and have fun using HOPS!

Getting started

Tip

As always, if you encounter any problems or if you are stuck at any point, please check out the how to get help section.

The aim of this tutorial is to take you through the bare minimum of what your app needs in order for Iterapp to build, deploy and manage it.

Minimum setup to deploy app

We will be working on our cats example app, but feel free to adapt the concepts to your own app.

Remember to change cats to the name of your app.

  1. Add your app's repository to the Iterate organisation

    Go ahead and create your repo at:

    https://github.com/organizations/iterate/repositories/new
    
  2. Register your app

    V2✨ V3 ✨

    The primary means of interacting with Iterapp is through the Slack bot. To register your application from step 1, open the Slack channel #iterapp-logs, change cats to your app's name and write the command:

    /iterapp register cats
    

    The primary means of interacting with HOPS is through the CLI. To register your application from step 1:

    1. Install the CLI: (see CLI documentation for more)

      curl -SsLf https://cli.headless-operations.no/install.sh | sh
      
    2. Log in:

      hops v3 login
      
    3. Change cats to your app's name and run the command:

      hops v3 register --cluster iterapp iterate/cats
      
  3. Add a Dockerfile

    Your repo must have a Dockerfile which Iterapp will use to create a container to run your application. This file must be in the root folder of your app. The content of the Dockerfile depends on the language your app is written in. See examples of apps using a Dockerfile.

    Our example app has this content:

    FROM nginx:1.15.12-alpine
    
    # Copy the landing page to /usr/share/nginx/html.
    COPY index.html /usr/share/nginx/html
    
  4. Add iterapp.toml

    Your repo must have an iterapp.toml file which instructs Iterapp on how to deploy the app to Kubernetes. This file must be in the root folder of your app. Use it to override only the settings that differ from the defaults. See iterapp.toml for a full overview.

    Example:

    port = 80
    readiness_path = "/health"
    

    Read more about readiness_path.

    Commit and push iterapp.toml and Dockerfile to the master branch and your build will start automatically. You can see more details about your build by visiting your repo and clicking the yellow build indicator at your commit.

  5. The build

    V2✨ V3 ✨

    When your build has finished, a Slack message will appear in #iterapp-logs. Iterapp will automatically start a deploy of your app to the test environment.

    When this has finished, a Slack message will appear with a URL for your app in the test environment. It will look similar to this: Cats example app. The SSL certificate will be ready after a minute or two.

    When your build starts, it will appear on the website.

    When the build finishes, a deployment will be created for the test environment, and it will also appear on the website.

    Your app will be public on a URL similar to https://cats.test.iterate.no. The SSL certificate will be ready after a minute or two.

  6. Deploy your app to prod

    V2✨ V3 ✨

    You must use #iterapp-logs whenever you want to deploy your app. If you are ready to deploy your app to production, write this command directly in the #iterapp-logs channel.

    /iterapp deploy cats prod master
    

    /iterapp deploy appname env branch instructs Iterapp to deploy appname to a given env (prod, test, snap0 to snap9), with the branch name specifying which code is deployed.

    You must use the CLI whenever you want to deploy your app. If you are ready to deploy your app to production, you can write this command:

    hops v3 deploy -a cats -e prod -r master
    

    hops v3 deploy -a appname -e env -r branch instructs HOPS to deploy appname to a given env (prod, test, snap0 to snap9), with the branch name specifying which code is deployed.


Congratulations!

Hopefully you will now have an app deployed to production using Iterapp.

But Iterapp offers more! Go ahead and pick your next read from the Get started section.

Getting started with Go in HOPS

Go is a small, statically typed, compiled, garbage collected, C-like language that is great for concurrent work. Go compiles to a single binary, is easy to use with containers and is fun to work with. It's a perfect match for HOPS!

Building a web server

We'll build a small web server that counts how many visitors have been to our site. Everything we need is provided by HOPS and Go's fantastic standard library.

The code for the tutorial can be found on GitHub, and is deployed with HOPS!

In HOPS, applications are configured using environment variables. Environment variables are pretty much magical global variables that an executable can see. HOPS sets a bunch of these variables (you can too!), so that your application can read them to get information about how it should behave.

We'll listen to the port specified in the environment variable PORT. Additionally, we need to respond to health checks. For now, optimistically responding 200 to everyone seems sufficient. The health checks are performed against a path conveniently specified in HOPS_READINESS_PATH.

// main.go
package main

import (
	"database/sql"
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	db := migratedDB()
	_ = db // the visitor counter below will use this

	// Count visitors.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {panic("todo!")})

	// Handle health checks.
	http.HandleFunc(os.Getenv("HOPS_READINESS_PATH"), func(w http.ResponseWriter, r *http.Request) { w.WriteHeader(http.StatusOK) })

	log.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", os.Getenv("PORT")), nil))
}

// migratedDB returns a database connection that is ready for use.
func migratedDB() *sql.DB {
	panic("todo!")
}

Let's implement the visitor counter. Nothing magical here, we just increment a counter.

In HOPS, we collect logs that your application prints to stdout, so we don't need to think about log files.

// main.go

const update = `UPDATE visits SET visits = visits + 1 WHERE id = 'hits' RETURNING visits`

// Count visitors.
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
    // Don't count requests to other paths; we don't want to count favicons and robots.txt.
    if r.URL.Path != "/" {
        http.NotFound(w, r)
        return
    }

    // Increment and return counter
    var hits int
    if err := db.QueryRowContext(r.Context(), update).Scan(&hits); err != nil {
        log.Printf("Could not update number of visitors: %v", err)
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }

    w.Header().Set("Content-Type", "text/plain; charset=utf-8")
    w.WriteHeader(200)
    fmt.Fprintf(w, "Welcome, visitor number %d!\n", hits)
})

Finally, we'll create the database connection and create the schema. For simplicity, we just try this each time the app starts. When you request a Postgres database, you automatically get the DATABASE_URL environment variable added to your app! You'll see how we request a database later.

// main.go

// migratedDB returns a database connection that is ready for use.
func migratedDB() *sql.DB {
	// The "pgx" driver name assumes a driver is registered, e.g. via a
	// blank import like _ "github.com/jackc/pgx/v5/stdlib".
	db, err := sql.Open("pgx", os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatalf("could not connect to database: %v", err)
	}

	const migrate = `CREATE TABLE IF NOT EXISTS visits (id TEXT PRIMARY KEY, visits int4 NOT NULL);
INSERT INTO visits (id, visits) VALUES ('hits', 0) ON CONFLICT (id) DO NOTHING;`

	if _, err := db.Exec(migrate); err != nil {
		log.Fatalf("could not migrate database: %v", err)
	}
	return db
}

All done! 🚀 The next step is getting the app online.

Creating a Dockerfile

HOPS deploys containers, which makes it easy to build, ship and run your code anywhere. You can build the container and run it locally, and if it runs there, you can be pretty confident it will run on our servers as well.
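
For example, a quick local smoke test might look like this (image tag and port are arbitrary; PORT and HOPS_READINESS_PATH are the variables our app reads):

$ docker build -t my-app .
$ docker run --rm -e PORT=8080 -e HOPS_READINESS_PATH=/health -p 8080:8080 my-app
$ curl localhost:8080/health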

We'll create a simple multi-stage Dockerfile.1 This builds the program in a context that has the heavy Go compiler, then creates a new, blank slate and copies just the finished program into it. This gives us a very small image that takes only seconds to move around.

Save the following as Dockerfile in the root of your project folder. HOPS finds this and automatically builds it.

# Dockerfile
# Build in Alpine, a small Linux distribution.
FROM golang:1-alpine AS build
COPY . /build
RUN CGO_ENABLED=0 go build -C /build -o /usr/local/bin/visitcounterd .

# Copy the compiled application into a Distroless container.
FROM gcr.io/distroless/static-debian11:nonroot
COPY --from=build /usr/local/bin/visitcounterd /usr/local/bin/visitcounterd
CMD ["/usr/local/bin/visitcounterd"]

Creating the iterapp.toml file

We need a configuration file. As we don't need a test environment, we can specify that we want the main branch to deploy to production. HOPS needs to know that our app requires a PostgreSQL database; adding the [postgres] section is all it takes to make a Postgres database available. You can read more about our Postgres setup here.

In HOPS, your app describes your entire application, and can have one or more applings, which are different containers running at the same time. In our example, we only have one appling, so we configure it directly in iterapp.toml.2

We'll leave the rest of the settings blank – we've already told our app how to handle the critical pieces, the port and the health checks.

# iterapp.toml
default_environment="prod"

[postgres]

Deploying our app

Create a repository on GitHub under the iterate organization, and push your code to it.

V2✨ V3 ✨

Register

In Slack, go to #iterapp-logs. Type /iterapp register {repository}, where {repository} is the part after iterate/. This configures HOPS to listen to your repository.

Deploy

To deploy your app for the first time, go to #iterapp-logs. Type /iterapp deploy {repository} prod main. This should deploy your application. Check it out at https://{repository}.app.iterate.no!

Register

The primary means of interacting with HOPS is through the CLI. To register your application from step 1:

  1. Install the CLI: (see CLI documentation for more)

    curl -SsLf https://cli.headless-operations.no/install.sh | sh
    
  2. Log in:

    hops v3 login
    
  3. Register:

    hops v3 register --cluster iterapp {repository}
    

    (Change {repository} to the repository name including the organization, for example: iterate/dogs)

Deploy

To deploy your app for the first time, run

hops v3 deploy -a {repository} -e prod -r main

This should deploy your application. Check it out at https://{repository}.app.iterate.no!

1

TODO: This step is a bit too complex.

2

TODO: This is confusing. Explain why it is important.

Starting with a React app

React is a great library for building websites, and really shines when building single-page applications. Create React App makes it really easy to get started.

Many services can host static websites. HOPS also manages the build process, connects to GitHub, and makes the release process smooth and fast.

This example uses create-react-app@^5.0 with TypeScript and the pnpm package manager. We use Caddy to serve the packaged app.

Creating our application

First, initialize a new project directory, my-hops-app, with the create-react-app tool.

$ npx "create-react-app@~5.0" my-hops-app --template typescript

# Output (partial):
# Success! Created my-hops-app at /home/mirg/projects/my-hops-app (...)

$ cd my-hops-app
$ pnpm import && rm -rf node_modules package-lock.json && pnpm install

Unfortunately, there's a bug in one of the 11891 transitive dependencies used by create-react-app. We'll update .npmrc to resolve this before proceeding:

# https://github.com/pnpm/pnpm/issues/4920#issuecomment-1226724790
cat <<EOF > .npmrc
public-hoist-pattern[]=*eslint*
public-hoist-pattern[]=*prettier*
public-hoist-pattern[]=@types*
EOF

That's it, we're good to go!

Creating a Dockerfile

A Dockerfile is a recipe for how to create a container image, which is what we use to run applications in HOPS.

In this example, we use a multi-stage build to create a very small image containing just a web server and the packaged code.

Save the following as Dockerfile in the root of your project folder. HOPS finds this and automatically builds it whenever you commit to your repository.

# syntax=docker/dockerfile:1
# This line describes the capabilities of our builder. Do not remove it, it has
# to be the first line in our Dockerfile.

# This is a multi-stage build. A container image is a stack of layers. Each
# action creates a layer. For projects like this, we can end up with images
# that contain a lot of files that are not necessary, like the Node runtime,
# the entire node_modules directory, etc. We can work around this by building
# in one "stage", and then moving only the required files for running the
# application into a new, clean and minimal image.
# See: https://docs.docker.com/build/building/multi-stage/

# You can use variables in Dockerfiles!
ARG app_name="my-react-app"

# Build in the Node 18 runtime environment.
FROM node:18.15.0-alpine as build
ARG app_name

# Install the pnpm package manager.
# Set pnpm's home directory explicitly to get consistent behaviour in case we
# change something, like the build OS or user.
ENV PNPM_HOME /var/lib/hops/pnpm
# See: https://pnpm.io/installation#using-npm
RUN npm install -g pnpm@7.33.1

# Copy in our source code.
COPY . /opt/${app_name}

# Fetches our dependencies.
# - The --mount=type=cache flag makes our builders store cached files
#   (downloaded packages) between builds. The location is configured by the
#   PNPM_HOME variable above.
# - The --prefer-offline flag prefers packages already in PNPM_HOME.
# - The --package-import-method=copy flag fixes a filesystem issue with caches.
# - The --frozen-lockfile flag ensures no surprise package updates occur.
RUN --mount=type=cache,id=hops_cra,target=/var/lib/hops/pnpm/store \
    pnpm install \
      --dir /opt/${app_name} \
      --package-import-method copy \
      --prefer-offline \
      --frozen-lockfile

# Run the create-react-app build script. This outputs the finished application
# into the /opt/${app_name}/build directory. We don't specify NODE_ENV here,
# because create-react-app does this for us automatically.
RUN pnpm run --dir /opt/${app_name} build


# create-react-app's output only requires JavaScript in the browser, not on the
# server, so we can serve the project with a regular web server, like Caddy.
# We only need to copy in our build directory and update the configuration.
# See: https://caddyserver.com/
FROM caddy:2.6.4-alpine
ARG app_name

# Copy our build directory from the build step.
COPY --from=build /opt/${app_name}/build /usr/share/${app_name}

# Write our custom Caddy config to /etc/caddy/Caddyfile
COPY <<EOF /etc/caddy/Caddyfile
{
	# We don't need the admin endpoint.
	admin off
	servers {
		# Enable metrics
		metrics
	}
}
:{\$PORT:3000} {
	# https://caddyserver.com/docs/caddyfile/patterns#single-page-apps-spas
	encode gzip
	root * /usr/share/${app_name}

	# HOPS requires an endpoint for health checks.
	handle {\$HOPS_READINESS_PATH:/health} {
		respond OK 200
	}

	handle {
		# If there's no file at {path}, use /index.html.
		try_files {path} /index.html
		file_server
	}
}
:{\$HOPS_METRICS_PORT:3001} {
	# Expose metrics on this internal port and path
	metrics {\$HOPS_METRICS_PATH:/metrics}
}
EOF

Creating the iterapp.toml file

The iterapp.toml file describes how to run our app, and which resources we need to do that. Because the app is configured to listen to the PORT and HOPS_READINESS_PATH variables, HOPS can manage those through the environment.

We prefer to release our apps straight to production when the main branch is updated, so we'll specify that. In addition, we need to opt in to collecting metrics. We'll expose them on an internal port, 3001.

This is our iterapp.toml file for this project:

# iterapp.toml
default_environment="prod"

[metrics]
port = 3001
path = "/metrics"

Deploying our app

Create a repository on GitHub under the iterate organization, and push your code to it.

V2✨ V3 ✨

Register

In Slack, go to #iterapp-logs. Type /iterapp register {repository}, where {repository} is the part after iterate/. This configures HOPS to listen to your repository.

Deploy

To deploy your app for the first time, go to #iterapp-logs. Type /iterapp deploy {repository} prod main. This should deploy your application. Check it out at https://{repository}.app.iterate.no!

Register

The primary means of interacting with HOPS is through the CLI. To register your application from step 1:

  1. Install the CLI: (see CLI documentation for more)

    curl -SsLf https://cli.headless-operations.no/install.sh | sh
    
  2. Log in:

    hops v3 login
    
  3. Register:

    hops v3 register --cluster iterapp {repository}
    

    (Change {repository} to the repository name including the organization, for example: iterate/dogs)

Deploy

To deploy your app for the first time, run

hops v3 deploy -a {repository} -e prod -r main

This should deploy your application. Check it out at https://{repository}.app.iterate.no!

🚀 Happy hacking!

Resources

1

As of Thursday, March 16th 2023.

Applings

Applings are an Iterapp feature which makes it possible to have different apps within one GitHub repository, and thus lets applings share code.

Say you would like one app for the frontend and a different app for the backend. This can easily be done with the applings feature. All your code will live in one repository with one commit log.


            | Frontend                                   | Backend
GitHub repo | github.com/iterate/yourapp                 | github.com/iterate/yourapp
Folder      | ~/dev/iterate/yourapp/frontend             | ~/dev/iterate/yourapp/backend/
URL in test | https://frontend.yourapp.test.iterate.no/  | https://backend.yourapp.test.iterate.no/
URL in prod | https://frontend.yourapp.app.iterate.no/   | https://backend.yourapp.app.iterate.no/

Applings are independent applications. They can have their own database and their own specific properties. Every time the app is deployed, all applings will be deployed.

V2✨ V3 ✨

So for instance all changes in all applings within github.com/iterate/yourapp will be deployed when running: /iterapp deploy yourapp prod main

So for instance all changes in all applings within github.com/iterate/yourapp will be deployed when running: hops v3 deploy -a yourapp -e prod -r main

Examples of what an appling can be:

  1. An appling for hosting documentation
  2. An appling as an admin panel

How to create applings?

Create a subdirectory where each appling will live. Each appling needs its own iterapp.toml file! Then create an iterapp.toml file at the root level with the content:

applings = ["appling-a", "appling-b"]

Remember to update your Dockerfile

The Docker build is run from the root of the repository. Remember to update any paths within the Dockerfile; paths must be given from the repository root.

Tip

See this repo for an example: Dockerfile for testapp-applings/appling-a

FROM nginx:alpine
ADD appling-a/public /usr/share/nginx/html

EXPOSE 80

Note

You must remember to use ADD appling-a/public and not ADD public.

Testing the build locally

Run the command from the appling-folder you would like to build.

  • -f "$(pwd)/Dockerfile" will use the Dockerfile in the current folder.
  • ../ will set the build context one level up (root level) to make all applings' code available.
docker build -f "$(pwd)/Dockerfile" ../

And if the build is a success, then that should be it. Commit, push, deploy and enjoy!

Use the Command Line Interface (CLI)

V2✨ V3 ✨

Do you want to deploy from the command line? Headless Operations / Iterapp has a CLI that can be used to deploy without using Slack. Slack is easier to get started with, but at some point you might want a CLI.

HOPS has a CLI that can be used to deploy.

Install

Using magic pipe-to-bash

curl -SsLf https://cli.headless-operations.no/install.sh | sh

Using magic pipe-to-bash, but with more control

Run the following to get help and see the parameters:

curl -SsLf https://cli.headless-operations.no/install.sh | sh -s -- -h

Download yourself

Get the URL from one of the following links

Upgrade

Run

hops self-update

(or download using one of the above links and replace the binary)

Uninstall

rm $HOME/bin/hops

Use the CLI

Login

Find the name of your cluster (for iterapp, the name is iterapp)

Run

V2✨ V3 ✨
hops login --cluster=CLUSTER_NAME
hops v3 login

and follow the instructions. You will be asked to create an auth token in the frontend for your cluster.

Switch the current cluster

V2✨ V3 ✨

If you have logged in to multiple clusters, you can change the current cluster by running:

hops config set-cluster CLUSTER_NAME

List all clusters you are logged in to by running:

hops config list-clusters

In V3, clusters are selected per-app when they are registered.

Get help

V2✨ V3 ✨
hops help
hops v3 help

or get help with a command

V2✨ V3 ✨
hops deploy --help
hops v3 deploy --help

Deploy an app

If you are in the git-folder of the app, and want to deploy main to prod, you can run

V2✨ V3 ✨
hops deploy
hops v3 deploy

If you want to specify app, branch or environment, you can do that using flags

V2✨ V3 ✨
hops deploy -a my_app -e snap0 -r my-cool-branch
hops v3 deploy -a iterate/my_app -e snap0 -r my-cool-branch

List all your apps

V2✨ V3 ✨
hops list-apps
hops v3 list-apps

(Re)build a branch

Builds normally start automatically; this is only required if the previous build failed.

V2✨ V3 ✨
hops build -a my_app -r my-cool-branch
hops v3 build create -a iterate/my_app my-cool-branch

Connecting with Cloud SQL Proxy

It is possible to connect to Iterapp database instances using Cloud SQL Proxy. It provides a secure connection and easier connection management (according to the documentation). An in-depth explanation can be found here and installation instructions can be found here.

After installation, credentials must be set up with gcloud to gain access to the Iterapp instances; gcloud can be installed by following this link. When gcloud is installed, start up the Cloud SDK Shell and run gcloud init. The Cloud SDK Shell will prompt a login, where you use your iterate.no email. When logged in, choose app-iterate as your cloud project and run gcloud auth application-default login in the SDK Shell (which might prompt a new login). This sets the chosen account as your default auth credentials for the SQL Proxy.

Now that the credentials are set, run the cloud-sql-proxy executable with the following command: ./cloud-sql-proxy app-iterate:europe-west1:app-iterate --port 5432. With this up and running, open a new command line tab and connect to your chosen instance. For example, when connecting to a Postgres database you can type the following command in the new tab: psql -U myInstanceUser --host 127.0.0.1 --port 5432.

Note

One important thing to note here is the port number used to run the Cloud SQL Proxy. You might not be able to connect to instances if the port is already in use. In the example of connecting to a Postgres database, you might get connection errors if you already have a local Postgres database running. This can be fixed by running the Cloud SQL Proxy on a different port, like so: ./cloud-sql-proxy app-iterate:europe-west1:app-iterate --port 5433, and connecting to the database on that port: psql -U myInstanceUser --host 127.0.0.1 --port 5433.

Note

Upgrading the Google Cloud CLI (for example from 372.0.0 to 382.0.0) might break the cloud proxy script.

Following steps 1 and 2 at https://cloud.google.com/sql/docs/mysql/connect-admin-proxy might fix that.

Scheduled tasks

If you need to do something every once in a while, such as sending e-mail updates, you can set up scheduled tasks using cron syntax. These "jobs" are triggered by sending HTTP requests to your application.

[cronjobs.update-thing]
    schedule="13 * * * *"
    path="/api/v1/run_scheduled_task"
    method = "POST"

    [cronjobs.update-thing.headers]
        Authorization = "Bearer asdf123"

This will execute a POST HTTP-request to /api/v1/run_scheduled_task 13 minutes after every hour. Timestamps for cron jobs are specified in UTC, so if you want to run a task 8 AM GMT summer time, you'll need to convert that to 6 AM UTC, and then to 0 6 * * * in cron syntax.
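
For instance, a daily e-mail update following that conversion could be configured like this (the path is hypothetical):

[cronjobs.daily-email]
    schedule="0 6 * * *"
    path="/api/v1/send_daily_email"
    method = "POST"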

Your app must respond to the request within 10 seconds. For long-running jobs, you should return 202 early and then start the job in the background, as sketched below.
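
Here's a minimal sketch of that pattern in Go (runScheduledTask is a hypothetical job function; net/http and context imports are assumed):

// Respond 202 before the 10-second deadline, run the job in the background.
http.HandleFunc("/api/v1/run_scheduled_task", func(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusAccepted)
	// r.Context() is cancelled when the handler returns, so give the
	// background job its own context.
	go runScheduledTask(context.Background())
})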

The HTTP requests will come from within the cluster. Each environment has its own set of independent jobs.

Retries and guarantees

If your application fails to handle the request, it is retried two times at 10 second intervals. A request is considered to have failed if it times out or if the status code is 400 or higher, except 401, 403, 429 and 501.

Your app can occasionally receive multiple requests for the same scheduled run. This might be fine if you're using jobs to remove expired data, but if your job sends a daily e-mail update, it could cause issues. One way to resolve this is to store the time when the job last started in a database, and check whether the previous run happened too recently, as in the sketch below.
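
One possible shape of that check in Go, assuming a hypothetical Postgres table job_runs (name TEXT PRIMARY KEY, started_at TIMESTAMPTZ NOT NULL) and a made-up 23-hour window:

// startRun records a run and reports whether it should proceed. The
// conditional update only fires if the previous run is old enough, so
// duplicate triggers within the window are skipped.
func startRun(ctx context.Context, db *sql.DB, name string) (bool, error) {
	const q = `INSERT INTO job_runs (name, started_at) VALUES ($1, now())
ON CONFLICT (name) DO UPDATE SET started_at = now()
WHERE job_runs.started_at < now() - interval '23 hours'
RETURNING started_at`
	var startedAt time.Time
	err := db.QueryRowContext(ctx, q, name).Scan(&startedAt)
	if err == sql.ErrNoRows {
		return false, nil // a recent run exists, skip this one
	}
	return err == nil, err
}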

When deploying a new version of your app, the request can be delivered to both the new and the old pods.

Mission-critical work or jobs that require at-least once or at-most once guarantees should not use scheduled jobs.

Postgres

Iterapp supports PostgreSQL 14 out of the box. Opt in by adding an empty section tag to your iterapp.toml.

[postgres]

Iterapp will then create a fresh database and schema for your app on Iterapp's Postgres cluster.

Warning

Any values beneath a section tag belong to that section. So move properties that do not belong to any section to the top of the file.

For instance:

[postgres]

applings = ["app1", "app2"]

will not work, as the applings property semantically ends up under the [postgres] section, which is incorrect. The easiest solution is to move [postgres] to the end of the file.

Connect to your database

The following environment variables are available when you want to connect to your database:

PGHOST
PGPORT
PGDATABASE
PGUSER
PGPASSWORD
DB_DATABASE
DB_PASSWORD
DB_HOST
DB_PORT
DB_USERNAME
DATABASE_URL

Graceful shutdown

When your application receives SIGTERM, database connections are given 10 seconds to deregister before being dropped.
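
A sketch of cooperating with this from a Go app (db is the *sql.DB from earlier; imports of context, net/http, os, os/signal, syscall and time are assumed):

// Shut down cleanly on SIGTERM so in-flight requests finish and
// database connections are released before the 10-second deadline.
ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
defer stop()

srv := &http.Server{Addr: ":" + os.Getenv("PORT")}
go srv.ListenAndServe()

<-ctx.Done() // SIGTERM received
shutdownCtx, cancel := context.WithTimeout(context.Background(), 8*time.Second)
defer cancel()
srv.Shutdown(shutdownCtx) // stop accepting new requests, drain in-flight ones
db.Close()                // close the Postgres connection pool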

Connect to your database locally

To run your app against a local database, set the DATABASE_URL environment variable to match your local setup. Just change user, password, host, port and database to what corresponds to your local database:

DATABASE_URL = postgres://user:password@host:port/database

Create tables and add data

Your database is initially empty. Your app is responsible for both creating tables and adding data to them. There are several ways of doing this. One approach is to run migration SQL when the app starts, as in the Go getting-started example above.

Environments

You will have one database per environment, which means that production and test each have their own database.

Postgres Extensions

You can enable PostgreSQL extensions in your app by adding something like this to iterapp.toml:

[postgres]
extensions=["uuid-ossp"]

We support all PostgreSQL extensions supported by Google Cloud SQL.

Note that extensions removed from the list are not yet removed from Postgres if they are already deployed. This will probably be fixed at some later time.

Connect to your database on Google Cloud

In addition to the above, you can also connect to your own database on Google Cloud. To get started:

First, create your Cloud SQL database in Google Cloud. Note your connection string (it looks something like app-iterate:europe-west1:app-iterate).

You need to give the service account cloud-sql-connect@app-iterate.iam.gserviceaccount.com access to connect to the database. To do that, add it to IAM with the Cloud SQL Client role.

Then add the following to iterapp.toml

[cloud_sql_postgres_instances]
5432="iterate-vake:europe-west1:vake-cloud-sql"

You will then be able to connect to your Cloud SQL instance on localhost, at the port you chose (here 5432).

You also need credentials to log in to the database; the above only creates the network-level connection. You should encrypt the credentials and add them as environment variables (see secrets).

Use both your own and iterapp db

If you want to connect both to the Iterapp database and to your own Cloud SQL instance, you can add both [postgres] and [cloud_sql_postgres_instances]. Iterapp Postgres always listens on port 5432, so if you try to use that as a port number in cloud_sql_postgres_instances, you will get an error.

Using your own domain

When you are going to launch your new cat venture, you need a separate domain. The domain needs to be set up so it points to APPNAME.app.iterate.no or 35.195.169.101. You can either buy the domain yourself or ask in the #ops Slack channel.

After the DNS is updated, you must add it to iterapp.toml:

domains=["cats.cool"]

The first domain in the list will be the main domain.

After you have deployed to test and prod, the prod version will be available on cats.cool. SSL is automatically set up thanks to Let's Encrypt.

Your configured domains are also available to your app as environment variables:

  • HOPS_DOMAIN: The main domain the app listens to
  • HOPS_DOMAINS: Comma separated list of all the domains the app listens to

WWW redirect

Iterapp supports redirects both from and to www:

  1. Make sure both DNS alternatives are set up, so both www.cats.cool and cats.cool point to 35.195.169.101
  2. Add the domain you will use in domains to iterapp.toml.
    1. Do not add the domain you are redirecting from
  3. Add from_to_www_redirect=true to iterapp.toml

So if you like www

from_to_www_redirect=true
domains=["www.cats.cool"]

And if you do not want www

from_to_www_redirect=true
domains=["cats.cool"]

DNS and from_to_www_redirect

If you use from_to_www_redirect, Iterapp will ask for an SSL certificate for all redirect domains.

The certificate lookup will fail if not all domains are setup correctly.

That means that for each domain you add, you must set up DNS for www.DOMAIN. So if your cats app also has my.cats.cool, then iterapp.toml must have these properties:

from_to_www_redirect=true
domains=["cats.cool", "my.cats.cool"]

And you must set up DNS for

  • cats.cool
  • www.cats.cool
  • my.cats.cool
  • www.my.cats.cool

Or do it by adding a wildcard DNS record: *.cats.cool

Use DNS for verification

If you are moving a domain to another location and do not want any downtime, you can use dns01 verification.

Hold off changing the A record for your domain, but set up the following if your domain is cats.cool:

_acme-challenge.cats.cool CNAME acme-auth.iterate.no
_acme-challenge.www.cats.cool CNAME acme-auth.iterate.no

Note

Remove www if you do not want the www redirect.

Update iterapp.toml and deploy to prod.

from_to_www_redirect=true
domains=["cats.cool"]
dns01_for_acme = true

Once the certificate is in place, you can update the A records as mentioned above. Check with #ops if you do not know how to verify it yourself.

Direct Kubernetes access with kubectl

kubectl is the CLI for directly inspecting and editing your workloads in Kubernetes.

Danger zone

Do not rely on being able to access your apps using kubectl. kubectl access is provided as a workaround while we improve serviceability through the CLI, API and website.

One of the main motivations for building HOPS was to let you avoid dealing with Kubernetes's complexity. But here we are: you're about to do just that. You can read more about our Kubernetes environment here.

We wish you did not have to follow this how-to. If you're in the unfortunate position where this is the only way to do what you're trying to accomplish, reach out to us, we're interested in hearing about your use case.

Returning users: Jump to Logging in again

Get access to the Kubernetes cluster

HOPS' access controls are configured in the access.toml file in the iterapp-config repository.

  1. Add an entry for the project you wish to access in the access.toml file. The entry has the following format:

    [namespaces.{{ Namespace }}]
    edit = ["{{ iterate.no Google account}}"]
    

    The namespace format is described in our Kubernetes reference.

  2. Ask someone in #ops on Slack to apply the changes. (The documentation for syncing is in the same repo).

    (If you have a problem getting access to the app-iterate project, you are probably not added to the all-no@iterate.no email list. #ops can fix that.)

Install and configure the gcloud CLI

kubectl uses credentials on your computer to authenticate with the Kubernetes cluster. You need to set up these credentials. In Google Cloud, Kubernetes credentials are created using the gcloud CLI.

  1. Install gcloud, either:

    a. by using brew on macOS:

    $ brew install --cask google-cloud-sdk
    

    NB: You might have to add gcloud to your PATH. Run brew info google-cloud-sdk and follow the caveats section.

    b. by following the instructions on Google Cloud's developer documentation site.

  2. Verify that gcloud works, and that it returns version information:

    $ gcloud version
    
  3. We highly recommend you create a configuration for this purpose. In a bash or zsh shell:

    # Set this first:
    export ITERAPP_USERNAME=SET_YOUR_USERNAME@iterate.no
    
    # Copy this entire thing:
    sh -i -s <<'EOF'
        set -ux;
        # This creates a new configuration
        gcloud config configurations create hops-iterate || true;
    
        # The following configures the configuration.
        # Project does not refer to your app. It refers to which Google Cloud project the
        # cluster belongs to. Do not change this value.
        gcloud config set project "app-iterate" || true;
        gcloud config set account "$ITERAPP_USERNAME" || true;
        gcloud config set compute/zone "europe-west1-d" || true;
        gcloud config set compute/region "europe-west1" || true;
        gcloud config set container/cluster "iterapp-gke" || true;
    EOF
    
  4. We'll use gcloud later to finish configuring kubectl.

Install kubectl and the credential helpers

  1. Install kubectl. This is the primary Kubernetes interface.

  2. Verify that kubectl works, and that it returns version information:

    $ kubectl version
    
  3. Install gke-gcloud-auth-plugin. This is an authentication helper which provides kubectl with valid credentials for the Kubernetes cluster. We'll create these credentials using the gcloud CLI later on.

    Install the plugin using gcloud:

    $ gcloud components install gke-gcloud-auth-plugin
    
  4. Verify that gke-gcloud-auth-plugin works, and that it returns version information:

    $ gke-gcloud-auth-plugin --version
    

Configuring kubectl for the HOPS cluster

  1. Create a context for the HOPS cluster ("iterapp-gke") for kubectl using gcloud:

    $ gcloud --project=app-iterate \
        container clusters --zone=europe-west1-d \
        get-credentials \
        iterapp-gke
    
    # Output:
    # kubeconfig entry generated for iterapp-gke.
    
  2. Optionally, change the name of the kubectl context that was created:

    $ kubectl config rename-context gke_app-iterate_europe-west1-d_iterapp-gke hops-iterate
    
    # Output:
    # Context "gke_app-iterate_europe-west1-d_iterapp-gke" renamed to "hops-iterate".
    
  3. Finally, verify that your context is present:

    $ kubectl config get-contexts
    
    # Output (truncated):
    # CURRENT   NAME           CLUSTER ...
    # *         hops-iterate   gke_app-iterate_europe-west1-d_iterapp-gke
    

Logging in again

Your authentication runs out after 24 hours, after which you need to log back in.

  1. Activate the correct gcloud configuration and log in:

    $ gcloud config configurations activate hops-iterate && gcloud auth login
    
    # Output:
    # (...)
    # You are now logged in as my.name@iterate.no
    
  2. Ensure you're in the correct kubectl context:

    $ kubectl config use-context hops-iterate
    
    # Output:
    # Switched to context "hops-iterate".
    

    Errors? Go to Configuring kubectl for the HOPS cluster.

  3. Check if you have access by running a command that lists namespaces.

    $ kubectl get ns
    

Debugging with kubectl

Make sure you have access to the namespace for your app, and that you have the required tools installed. See the how-to article on installing kubectl.

Make sure you know your namespace name, it's described in our Kubernetes reference.

Finding your pods

Taking a look at your pods can be a quick way to figure out what's wrong with your app.

To get the pods in your application:

$ kubectl get -n MY_NAMESPACE pods

# Output: (truncated)
# NAME           READY   STATUS
# my-app-ntfq5   1/1     Running

To get information about that pod:

$ kubectl describe -n MY_NAMESPACE pods/my-app-ntfq5 | less

# Output:
# /* Fills the screen */

View logs with kubectl access

An exciting thing you can do with kubectl is to view logs.

Logs come from containers, so we need some way of selecting containers. To find containers, we must first determine which namespace to use. Let's say we're interested in an app named olas-create-react-app, which is crashing in test:

$ kubectl get namespaces | grep olas-create-react-app

# Output:
# apps-olas-create-react-app-prod          Active   30d
# apps-olas-create-react-app-snap1         Active   1d
# apps-olas-create-react-app-test          Active   31d

We now know that our namespace is "apps-olas-create-react-app-test".

Read the docs!

Depending on how you installed kubectl, you might have access to the man pages.

$ man kubectl-logs

Listing logs per container

Find all containers in the namespace:

$ kubectl get pods --namespace apps-olas-create-react-app-test

# Output:
# NAME                                    READY   STATUS             RESTARTS        AGE
# olas-create-react-app-5c4b8d64-ftm5k     1/1     Running            0               2d
# olas-create-react-app-687b84cf85-mkl46   0/1     CrashLoopBackOff   9 (3m22s ago)   24m

We're interested in olas-create-react-app-687b84cf85-mkl46:

$ kubectl logs pods/olas-create-react-app-687b84cf85-mkl46 \
    --namespace apps-olas-create-react-app-test

# Output:
# panic: http: invalid pattern
#
# goroutine 1 [running]:
# net/http.(*ServeMux).Handle(0xc00007a200, {0x0, 0x0}, {0x6f2060?, 0x6b1f28})
#         /usr/local/go/src/net/http/server.go:2510 +0x25f
# net/http.(*ServeMux).HandleFunc(...)
#         /usr/local/go/src/net/http/server.go:2553
# /* snip */

Fetching logs for a deployment

Find the deployment name, and then fetch the logs for that deployment:

# Setting the namespace for all requests:
$ kubectl config set-context --current \
    --namespace=apps-olas-create-react-app-test

# What's the deployment name?
$ kubectl get deployments -o name

# Output:
# deployments.apps/olas-create-react-app

# Get the logs using black magic:
# - Get the "selector", which describes how a deployment identifies resources that belong to it
# - Turn the selector from JSON to a key=value,key=value list
# - Get pods selected by this selector
# - Get logs for those pods

$ kubectl get deployments.apps/olas-create-react-app --output json \
    | jq '[ .spec.selector.matchLabels | to_entries | .[] | "\(.key)==\(.value)" ] | join(",") | @sh' -r \
    | xargs -n1 -I "{}" kubectl get pods --selector="{}" --output name \
    | xargs -n1 kubectl logs --prefix --tail=100

Streaming logs

If you want streaming logs, you can install stern.

Stern allows you to tail multiple pods on Kubernetes and multiple containers within the pod.

Stern matches pods and log lines using regular expressions.

# Set the default namespace, so we don't need to pass --namespace all the time:
$ kubectl config set-context --current \
    --namespace=apps-olas-create-react-app-test


# stern pod-query [flags]

# All pods, all containers
$ stern '.*'

# All pods, but not one particular container
$ stern '.*' --exclude-container '^cloudsql-proxy$'

# Ignore health checks
$ stern '.*' --exclude '/health'

# Only "waiting" (failing?) "backend" pods (backend is part of the appling name)
$ stern 'backend' --container-state "waiting"

Nats (Message Bus / PubSub)

NATS is a message bus which can optionally be enabled in Iterapp. It makes communicating between your pods easier. More importantly, it supports pub/sub, which means that one of your pods can broadcast a message to all of your other pods. This is essential for real-time updates, like collaboration or chat. For instance, this is what makes collaborative editing possible in Icecalc, and powers the chat in Anywhere.

When enabled, your app gets its own NATS account for each of its environments. Since a NATS account also functions as a namespace, apps cannot communicate with each other, and cannot communicate between environments.

This means that if you, for instance, broadcast anything to a subject called test, only subscribers in pods for the same app and same environment will receive the broadcast.

NATS is at-most-once delivery. That is, it gives no guarantees of message delivery. In most cases the message will be delivered, but if, for instance, your pod was reconnecting to NATS the second the message was sent, it will not receive it. If you need guarantees, you can achieve that in a number of ways, for instance by sending acks when a message is received, or by using sequence numbers to discover missed messages, as sketched below.
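
For illustration, here's a sketch of the sequence-number idea using the Go client (github.com/nats-io/nats.go); the subject and payload format are made up:

// Publisher: number every message.
var seq uint64
publish := func(nc *nats.Conn, payload []byte) error {
	seq++
	return nc.Publish("test", []byte(fmt.Sprintf("%d:%s", seq, payload)))
}

// Subscriber: a jump in the sequence number means a message was missed.
var last uint64
nc.Subscribe("test", func(m *nats.Msg) {
	var n uint64
	fmt.Sscanf(string(m.Data), "%d:", &n)
	if last != 0 && n != last+1 {
		log.Printf("missed %d message(s), trigger a resync", n-last-1)
	}
	last = n
})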

In the future, we might want to add JetStream to the NATS cluster in Iterapp. JetStream gives at-least-once delivery. However, right now that technology is a bit too immature.

Tip

For a basic example of an Iterapp app with NATS, take a look at iterate/example-nats

Enabling

Add the following to your iterapp.toml

[nats]

When deploying your app, an account will be created for you, and the app will get the environment variables needed to connect.

Environment Variables

These are the environment variables your app will receive.

Name                  | Description
NATS_URL              | URL used to connect to the NATS server
NATS_CREDENTIALS      | The credentials required to connect to the NATS server
NATS_CREDENTIALS_FILE | Path to a file with the same content as NATS_CREDENTIALS
NATS_CA_FILE          | Path to a file with the CA certificate used to sign the TLS certificate used by NATS

Note

NATS uses a self-signed certificate, so you'll need to add the CA to the certificate store when connecting, otherwise you'll get an error.

Running locally

When developing an application that uses NATS, you should have a NATS server running locally that your app can connect to. You don't need to configure anything, just start a local NATS instance.

Using docker

> docker run -p 4222:4222 -ti nats:latest

Using homebrew

> brew install nats-server
> nats-server

Downloading a release

Download the latest release from the release-page for NATS.

Run it without any configuration

nats-server

Build from source

> GO111MODULE=on go get github.com/nats-io/nats-server/v2
> nats-server

Connecting to NATS

Here are examples of how to connect to the Iterapp NATS server in different languages.

NATS has clients in most languages, and it should be relatively straightforward to take the ideas from here and write them in other languages. If you do, please update this documentation.

NODE.js (Javascript)

Note that we only use credentials if they are defined. This means that when developing locally you can connect to a local nats server that does not require authentication, nor SSL, which is the default nats-server config.

(If someone converts this to TypeScript, please add the example here :) )

const { connect, StringCodec, credsAuthenticator } = require('nats');

let authenticator;
// NATS_CREDENTIALS is undefined in development, but defined in production
if (process.env.NATS_CREDENTIALS) {
  authenticator = credsAuthenticator(
    new TextEncoder().encode(process.env.NATS_CREDENTIALS)
  );
}

const nc = await connect({
  servers: process.env.NATS_URL,
  // NATS_CA_FILE is undefined in development, but defined in production
  tls: process.env.NATS_CA_FILE && { caFile: process.env.NATS_CA_FILE },
  authenticator,
});

Rust

Note that we only use credentials if they are defined. This means that when developing locally you can connect to a local nats server that does not require authentication, nor SSL, which is the default nats-server config.

This uses the async nats, it should be relatively straightforward to convert to using the sync nats packages, since the API is similar.

// We use anyhow for easier error-management
use anyhow::Result;
use async_nats::Connection;
use std::time::Duration;


pub async fn create_connection() -> Connection {
    loop {
        match try_create_connection().await {
            Ok(conn) => return conn,
            Err(err) => {
                println!("Error connecting to nats: {}. Retrying...", err);
                tokio::time::sleep(Duration::from_secs(2)).await;
            }
        }
    }
}

pub async fn try_create_connection() -> Result<Connection> {
    let nats_system_account_cred_file = std::env::var("NATS_CREDENTIALS_FILE").ok();
    let nats_ca_file = std::env::var("NATS_CA_FILE").ok();
    let nats_url = std::env::var("NATS_URL")?;

    let mut options = match nats_system_account_cred_file {
        Some(cred_file) => async_nats::Options::with_credentials(&cred_file),
        None => async_nats::Options::new(),
    };

    if let Some(ca_root) = nats_ca_file {
        options = options.add_root_certificate(&ca_root);
    }

    let nc = options.max_reconnects(None).connect(&nats_url).await?;

    Ok(nc)
}
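
Go

A sketch of the same pattern in Go, using the github.com/nats-io/nats.go client. As above, credentials and the CA file are only used when they are defined, so the same code also connects to an unauthenticated local nats-server.

package main

import (
	"os"

	"github.com/nats-io/nats.go"
)

// connectNats connects using the HOPS-provided environment variables when
// present, and falls back to a plain local connection otherwise.
func connectNats() (*nats.Conn, error) {
	opts := []nats.Option{}
	// NATS_CREDENTIALS_FILE is undefined in development, but defined in production.
	if creds := os.Getenv("NATS_CREDENTIALS_FILE"); creds != "" {
		opts = append(opts, nats.UserCredentials(creds))
	}
	// NATS_CA_FILE is undefined in development, but defined in production.
	if ca := os.Getenv("NATS_CA_FILE"); ca != "" {
		opts = append(opts, nats.RootCAs(ca))
	}
	return nats.Connect(os.Getenv("NATS_URL"), opts...)
}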

Register a new Organization

The first time you use a GitHub organization with Iterapp, you need to connect the two.

Register this app on your GitHub organization: https://github.com/apps/headless-operations. If you choose to install the app for only a subset of repos, those will be the only ones you can deploy.

You can use /iterapp as for an internal app, except that you always need to use the full name of the app (i.e. yourorg/yourapp) and that you will get a domain on iterapp.no instead of app.iterate.no/test.iterate.no.

Secrets

Security

Please use [env.prod] and [env.common] sections in iterapp.toml actively when dealing with secrets. This adds an extra layer of security as the app will have different secrets for production and other environments.

Remember not to:

  • Print the secret value, this will expose it in the logs.
  • Move the secret file to another location. This will add the secret to the final Docker image.

Intro

This page goes through the different types of secrets supported by Iterapp.

Secrets should not be put directly into the repository, neither in iterapp.toml nor in any other files. Instead, we encrypt the secrets used in iterapp, and include the encrypted secrets in iterapp.toml. This ensures that the secrets are not easily stolen, even if someone gains access to the repo or containers of the application.

Encryption of secrets is done here: https://apps.iterapp.no/encrypt_secret.

  • Runtime secrets are used when the app is running.
  • Buildtime secrets are used when building the app in the Dockerfile.
  • Direct secrets in Kubernetes.
  • Secret files mounted in Kubernetes.

Jump to the relevant section depending on your needs.

Runtime secrets

Runtime secrets are values your app needs when running, such as API keys to remote APIs (Firebase, Sanity, etc.).

How to use runtime secrets

Encrypt the runtime value and add it to iterapp.toml. You can either add it as an environment variable which the app can use, or as a file which will be available in the app's filesystem.

Example: environment variable

[env.prod]
ENV_VAR = { encrypted = "MBZ53sHc3dNOd9KhArzTy..." }

[env.common]
ENV_VAR = { encrypted = "O1H6jkrLdPxrORgdnNa3e..." }

[env.prod] overrides values in [env.common]; read more on how overrides work.

Buildtime secrets

Before going into how to use your environment variables and secret files for Docker builds, you should know that using secrets with Docker can result in your image containing sensitive information. Although we store your images securely, Docker registries should be treated like code repositories: it’s best practice to not store secrets in them. You should avoid using secrets in your Docker builds to eliminate the chance of accidentally storing sensitive material.

The best way to use secrets in your Docker build is with secret files. Unlike build arguments, secret mounts aren’t persisted in your built image. Docker services can access environment variables and secret files like other kinds of services at run time. However, because of the way that Docker builds work, you won’t have access to environment variables and secret files as usual at build time.

Build secrets are used when you want to have access to a secret value in the build-process of the app. This can for instance be an access key to a repo to install extra packages, or a git token to fetch common components.

How to use build-secrets

Encrypt your secret (remember to select Build-secrets and not All environments) and add it to your iterapp.toml as shown below:

[build_secrets]
"your-build-secret-id" = { encrypted = "nb-DGpWdtc9-m0N8BMV7F-SX5Yksa53y7KRBjox1TFEPjHIV4w_Nb8KxVl4xhh3jdButhUiN7W681z5uNWngemIsibbya-8aLa8bNaf7xYppHpFDhBaVwpvPL5rufaLeddBrtt4OgDVLYUgPl6tU6IqgC3oPopIOYLDc9UERSA" }

Note

The secret value is specific for building your app. It cannot be used for another app, or as a runtime-secret for your app.

your-build-secret-id is an identifier for the secret. You will use this ID in the Dockerfile.

You will have a command like this in your Dockerfile:

RUN --mount=type=secret,id=$1,dst=/secret-file-rename \
    $1=$(cat /secret-file-rename) \
    && export $1 \
    && $2

Replace $1 with the secret you want to be available (e.g. your-build-secret-id). Then replace $2 with the command you want to run with the secret available to it (e.g. yarn build).
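
For example, with a hypothetical build secret id npm_token and yarn build as the command, the substituted command would look like this:

RUN --mount=type=secret,id=npm_token,dst=/secret-file-rename \
    npm_token=$(cat /secret-file-rename) \
    && export npm_token \
    && yarn build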

So what does this command do? Good question!

The --mount flag will mount the secret file into the Docker container, so the file will be available in the Dockerfile when building the image.

id is the identifier to the secret file which we set in iterapp.toml. Docker does not use the filename of where the secret is kept outside of the Dockerfile, since this may be sensitive information.

dst specifies where to mount the secret file. The Dockerfile RUN command will have the file available at that location.

For more information about the syntax, take a look at the buildkit docs

Making the build work locally

You still want your app to work locally when running docker build. To make this work, you need to enable BuildKit and point to the secret value. If you have the secret in a file called $HOME/.secrets/my_secret.txt, you can build like this:

DOCKER_BUILDKIT=1 docker build --secret id=your-build-secret-id,src=$HOME/.secrets/my_secret.txt .

Note

You need a relatively recent version of docker.

Use kubernetes-secrets directly

There might be cases where you need to use secrets that already are in the kubernetes namespace of the app.

If so, this is how to do it:

  1. Get access to your app's namespace (see https://ops.iter.at/iterapp/kubectl-access.html)

  2. Make a secret in kubernetes

kubectl -n apps-myapp-test create secret generic db-keys --from-literal=password=asdf1234password
kubectl -n apps-myapp-prod create secret generic db-keys --from-literal=password=asdf1234password
  3. Update the iterapp.toml file:
[env.common]
DB_PASSWORD= { secret = "db-keys", key = "password" }

Use kubernetes-secrets as files

This is how you mount a secret file and make it available to your app in Docker:

  1. Make a secret in kubernetes (notice how a secret can have several files!)
kubectl -n apps-myapp-test create secret generic my-secret --from-file=./my-file.json --from-file=./my-other-file.json
kubectl -n apps-myapp-prod create secret generic my-secret --from-file=./my-file.json --from-file=./my-other-file.json
  2. Add the following to iterapp.toml:
[[files.common]]
mount_path = "/app/secrets/"
secret = "my-secret"

Warning

Everything under mount_path will be replaced with the content of the secret. Therefore, use an empty folder.

Iterapp.toml

The iterapp.toml file is the single entry point describing the features your app needs to live in the Iterapp universe. It must be in your repository's root directory. When Iterapp discovers that your app has this file, it will use it to set up the application.

Note

An empty file is a perfectly valid configuration if all the defaults work for you.

Features

Here is an overview of everything you can put in iterapp.toml, with defaults.

# Change to deploy multiple applings. See the appling-docs.
applings=[]
# Which environment to auto-deploy. Set to `none` to disable auto-deploy. (It will still build, and
# you can deploy from Slack or manually.)
default_environment="test"
# Which port is the app listening
port = 3000
# How many instances of your app to run in prod (default is 2)
replicas = 2
# How many instances of your app to run in test/snap (default is 1)
replicas_test = 1
readiness_path = "/health"

# Liveness probes. You probably only want this if your app sometimes grinds to a halt without crashing.
# See https://play.sindre.me/kubernetes-liveness
liveness_path = "/liveness"
# If you use liveness, it can be a good idea to have a separate http server (in the same process) for that
# one. If you have that, you can specify the port here.
liveness_port = 3001


domains=[]
# See Bruk ditt eget domene
from_to_www_redirect = false
# See Bruk ditt eget domene
dns01_for_acme = false

[env.common]
# Port is set to the value of port.
PORT="3000"

[env.prod]
# Default is empty
ENV_NAME="ENV_VALUE"

[env.test]
# Default is empty
ENV_NAME="ENV_VALUE"

[build_secrets]
# Default is empty. Generate secrets on https://apps.iterapp.no/encrypt_secret
"my-secret" = { encrypted = "FSXRT02ouBlR4edBprBiUxgP1ii5_nWLwYQycy0OP1wK0z51ZeclZSIRCEtSAwp3nrqBGh9ckemqb9MYrnAdi6_NxQOoyji1dtZn1qNWQUuf6" }

[postgres]
# Just adding the postgres-header is enough to add postgres and get a database
# But remember that any entries following it must be sections with properties, not bare properties,
# otherwise Iterapp will think the properties belong to the [postgres] section

[redis_lfu_cache]
# Deploy a redis instance configured as a lfu cache as part of the app. See `redis`.
enabled = true

[cloud_sql_postgres_instances]
# See the doc for `postgres` for how to use this.
5432="iterate-vake:europe-west1:vake-cloud-sql"

[ingress]
# Set to `true` to disable access logs
disable_access_log = false

# Max size of bodies. Default is "1m" (I think). See
# http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size
proxy_max_body_size="8m"

# Whether to disable the ingress for this app or appling.
disable = false

# Whether to allow large headers in responses.
# Possible values are "normal", "large" and "huge".
response_max_header_size = "large"

# Iterapp is able to set domains for other environments than prod. This is probably not
# needed, but might be needed for CORS-reasons when you are serving the frontend
# elsewhere
[domains_env]
test = ["test-api.ting.no"]

# Create a cronjob
[cronjobs.update-thing]
    schedule="0 * * * *"
    path="/api/v1/run_scheduled_task"

    # The default method is GET, but will be changed to POST in a future version.
    # Valid methods are GET, POST, DELETE, HEAD.
    method = "GET"

    [cronjobs.update-thing.headers]
        Authorization = "Bearer asdf123"

[nats]
# Just adding the nats header is enough. You'll get a nats account. See more under `nats` in the menu

# You can specify requests for cpu and memory, overriding the defaults
# (25m cpu + 100Mi memory). This is an advanced feature, but might be required
# for apps using lots of memory or cpu.
#
# NOTE: This is _not_ the max amount of CPU/memory the pods can use. This is a
# note to kubernetes saying how large your pods are, and is used for calculating
# how many nodes we need in the cluster. The scheduler uses this information
# to decide which node to place the Pod on.
#
# Note: This is _per pod/replica_.
#
# See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
# Note that we don't set limits for any pods
[requests]
cpu = "25m"
memory = "100Mi"

Overriding values

Sometimes your app will need to override values; for example, you might want one database in production and another in all other environments. To make this work, put the production database properties under [env.prod] and the test database properties under [env.common]. Properties in [env.prod] override those in [env.common] when the app runs in production.
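
A sketch of this pattern, using a hypothetical variable name:

[env.common]
# Hypothetical variable; applies to every environment...
API_BASE_URL = "https://test-api.example.com"

[env.prod]
# ...except prod, where this value wins
API_BASE_URL = "https://api.example.com"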

Default Environment Variables

Iterapp sets the following environment variables for all running apps.

| Environment variable | Description |
| --- | --- |
| PORT | The port the app should listen on for HTTP. Same as port in iterapp.toml (default 3000) |
| HOPS_BUILD_NUMBER | The build number for the build that built the docker image that is running |
| HOPS_DEPLOYMENT_ID | The deployment id for the current deployment |
| HOPS_ENV | The environment the app is running in, i.e. prod, test, snap3 |
| HOPS_GIT_SHA | The git SHA hash used for building |

In addition, apps with a database have environment variables used to connect to it. See postgres.
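
For example, with node and the pg library (an assumption; any Postgres client that reads the standard PG* variables works the same way):

const { Pool } = require('pg');

// pg picks up PGHOST, PGPORT, PGUSER, PGPASSWORD and PGDATABASE
// from the environment, so no explicit configuration is needed.
const pool = new Pool();

async function main() {
  const { rows } = await pool.query('SELECT 1 AS ok');
  console.log(rows[0].ok);
  await pool.end();
}

main().catch(console.error);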

Environment variables

These are environment variables available to your app at runtime provided by HOPS.

| var | what | see |
| --- | --- | --- |
| DATABASE_URL | Full URL/connection string of the provisioned database: postgres://user:password@host:port/database | postgres |
| DB_DATABASE | database of the provisioned database, alias of PGDATABASE | postgres |
| DB_HOST | host of the provisioned database, alias of PGHOST | postgres |
| DB_PORT | port of the provisioned database, alias of PGPORT | postgres |
| DB_PASSWORD | password of the provisioned database, alias of PGPASSWORD | postgres |
| DB_USERNAME | user of the provisioned database, alias of PGUSER | postgres |
| HOPS_BUILD_NUMBER | Deprecated in favour of HOPS_BUILD_ID | |
| HOPS_BUILD_ID | The build id associated with the running container. | |
| HOPS_DEPLOYMENT_ID | The deployment id for the deploy that is currently running. | iterapp.toml |
| HOPS_DOMAIN | The main domain the app listens to. | domains |
| HOPS_DOMAINS | Comma separated list of all domains the app listens to. | domains |
| HOPS_ENV | The environment the app is running in. (E.g. prod, test, snap3, etc.) | environments |
| HOPS_GIT_SHA | The git SHA hash used for building | iterapp.toml |
| ITERAPP_BUILD_NUMBER | Deprecated in favour of HOPS_BUILD_ID | |
| ITERAPP_DEPLOYMENT_ID | Deprecated in favour of HOPS_DEPLOYMENT_ID | |
| ITERAPP_GIT_SHA | Deprecated in favour of HOPS_GIT_SHA | |
| NATS_CA_FILE | Path to a file with the CA certificate used to sign the TLS certificate used by NATS | NATS |
| NATS_CREDENTIALS | The credentials required to connect to the nats-server | NATS |
| NATS_CREDENTIALS_FILE | Path of a file with the same content as NATS_CREDENTIALS | NATS |
| NATS_URL | URL used to connect to the nats-server | NATS |
| PGDATABASE | database of the provisioned database, alias of DB_DATABASE | postgres |
| PGHOST | host of the provisioned database, alias of DB_HOST | postgres |
| PGPASSWORD | password of the provisioned database, alias of DB_PASSWORD | postgres |
| PGPORT | port of the provisioned database, alias of DB_PORT | postgres |
| PGUSER | user of the provisioned database, alias of DB_USERNAME | postgres |
| PORT | set by the port property in iterapp.toml (defaults to 3000) | iterapp.toml |
| REDIS_HOST | host for provisioned Redis. | redis |
| REDIS_PASSWORD | password for provisioned Redis. | redis |
| REDIS_PORT | port for provisioned Redis. | redis |
| REDIS_URL | URL/connection string for provisioned Redis: redis://user:password@host:port | redis |
| REDIS_USER | user for provisioned Redis. | redis |

Custom variables

In addition to the variables mentioned above, all environment variables you specify in iterapp.toml will be available to the app in the appropriate environment.
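
For instance, a hypothetical FEATURE_FLAG defined under [env.prod] in iterapp.toml:

[env.prod]
# Hypothetical example variable
FEATURE_FLAG = "enabled"

In node, your app would read this as process.env.FEATURE_FLAG when running in prod.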

System variables

In addition to the variables mentioned above, the OS/distro/runtime of your app (for example Ubuntu or zsh) will set environment variables available to your app.

Redis

Iterapp apps (and applings) can opt in to receive a redis instance configured for LFU (Least Frequently Used) caching.

[redis_lfu_cache]
enabled = true

Each appling or app that opts in to redis gets its own redis instance for every environment. If you use applings and have the above in the root-level iterapp.toml, all applings will get a shared instance. If the configuration is in an appling's iterapp.toml, it will get its own instance.

About the configuration

This redis instance is configured as an LFU cache. It does not have persistence, which means that data will be lost when the instance is restarted, and that will happen pretty regularly.

The least used keys will be deleted if the instance uses more than 100MB of memory. This means that you can cache as much data as you want, and only the most used will be kept. See also the redis reference, especially the section about LFU caching.

Your instance's redis user has some commands disabled, mostly to avoid accidental configuration changes. If you want to add some commands that are left out, let us know.

These are the important lines from the configuration generated for your instance:

maxmemory 100mb
maxmemory-policy allkeys-lfu
save ""
user default +@all -@dangerous +keys +info +sort +flushall +flushdb allkeys allchannels on >INSTANCE_PASSWORD

Connecting to the redis instance

Your app will receive an environment variable, REDIS_URL, which contains all the configuration required to connect to the instance. For instance, if you are using node and ioredis:

const Redis = require('ioredis');

// REDIS_URL contains everything needed to connect.
const redis = new Redis(process.env.REDIS_URL);

async function main() {
  await redis.set('my-key', '42');

  let value = await redis.get('my-key');
  console.log(value);
}

main().catch(console.error);

Environment variables

If you use a client library that does not support REDIS_URL, the parameters are also exported as individual environment variables. These are all the variables available to your app(ling):

REDIS_URL
REDIS_PASSWORD
REDIS_HOST
REDIS_USER
REDIS_PORT
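
A sketch of configuring a client from the individual variables instead of the URL, again assuming ioredis:

const Redis = require('ioredis');

// Build the connection from the individual variables provided by Iterapp.
const redis = new Redis({
  host: process.env.REDIS_HOST,
  port: Number(process.env.REDIS_PORT),
  username: process.env.REDIS_USER,
  password: process.env.REDIS_PASSWORD,
});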

Environments

Let's talk about the different environments that Iterapp offers.

| Environment | Description |
| --- | --- |
| Prod | The environment where the latest working version of your application is installed and made available to end users. It must therefore always be in working condition and bug-free |
| Test | The environment where testing of an application is performed and quality control is done before deploying it to production |
| Snap(0-9) | The environment where developers can test their code in a production-like environment |
| Local | The developer's local environment |

Your Iterapp application should run in at least two environments:

  • On your local machine (i.e., development).
  • Deployed to the Iterapp platform (i.e., production)

Ideally, your app should run in two additional environments:

  • Snap, for testing deployment and getting early feedback before promoting it to Test
  • Test, for deploying the app in a production-like environment. Changes in master will automatically be deployed to Test.

Deploy to an environment

Deploy your app to the environment with

V2✨ V3 ✨

/iterapp deploy <appname> <environment> <branch>.

/iterapp deploy <appname> <environment> <branch>.

The iterapp.toml-file has a property, default_environment, which is set to test by default.

This means that the branch that is set to default in the github repository, most likely master or main, will be auto-deployed to test whenever a merge to it results in a successful build.

Tip

V2✨ V3 ✨

See how to deploy with slack for detailed information.

See CLI for detailed information.

V2✨ V3 ✨

Verify your app is up and running in the desired environment. A link to the environment can be found in the #iterapp-logs slack channel.

Verify your app is up and running in the desired environment. The url of the environment is something like https://<environment>.<appname>.app.iterate.no.

Health / Readiness

Iterapp uses health checks to verify that a container in a pod is healthy and ready to serve traffic. Health checks, or probes as they are called in Kubernetes, are carried out by Kubernetes to determine when to restart a container, and are used by services and deployments to determine if a pod should receive traffic.

The health endpoint for the application is set with the property readiness_path in iterapp.toml.

Note

It is the responsibility of the application developer to expose a URL that the kubelet can use to determine if the container is healthy. If this is not added, Kubernetes will assume your app is not responding and stop directing traffic to it.

The default value of the endpoint is /health.

readiness_path = "/health"

Kubernetes will use the health endpoint exposed in iterapp.toml and make an HTTP request to it. A 200 OK response means that the app is ready.

The health check is used to reduce downtime when switching builds. If the health endpoint stops responding, Kubernetes will stop directing traffic to the new pod.

Why not use / as health endpoint?

As Kubernetes checks the health endpoint every 2 seconds, having / as the endpoint can potentially trigger data loading, requests to external services, cache nuking, etc.

A dedicated health check endpoint is therefore the safer approach. It is advised to turn off logging for this endpoint to avoid too many log lines.

If you want to check external services from your health endpoint, don't do it directly, since the endpoint is called every other second. Instead, have a separate task that updates some state, and check that state in the health endpoint.
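
A minimal sketch of that pattern, assuming node and express; the probe itself only reads cached state:

const express = require('express');
const app = express();

// Updated by a background task so the probe never calls external services.
let healthy = true;

// Hypothetical dependency check, run every 30 seconds instead of on every probe.
setInterval(async () => {
  try {
    // await checkMyDependencies(); // replace with a real check
    healthy = true;
  } catch (err) {
    healthy = false;
  }
}, 30000);

app.get('/health', (req, res) => {
  // Kubernetes treats 2xx/3xx as ready; anything else is a failure.
  res.status(healthy ? 200 : 503).end();
});

app.listen(process.env.PORT || 3000);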

Read more

https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes

Commands cheat sheet

This page contains a list of commonly used commands and flags. Not all of the kubectl-commands can be run without having the correct access.

Tip

Check this cheat sheet page for a more comprehensive guide.

Slack channels

| Description | Channel |
| --- | --- |
| Deployment | #iterapp-logs |
| Support | #iterapp-support |
| Iterapp dev | #iterapp-development |

Slack commands

| Description | Command |
| --- | --- |
| Register app | /iterapp register appname |
| Deregister app | /iterapp deregister appname |
| Build app | /iterapp build appname environment branch |
| Deploy app | /iterapp deploy appname environment branch |

Kubectl commands

| Description | Command |
| --- | --- |
| List all secrets within the namespace | kubectl -n app-namespace get secrets |
| List all pods within the namespace | kubectl -n app-namespace get pods |
| Show detailed information about a resource | kubectl -n app-namespace describe pod pod-name |

Daily routines

View logs

You have two options: look up your app at...

V2✨ V3 ✨

...Iterapp Apps, or...

...HOPS Web, or...

...under the hood through kubectl access.


Metrics

If your app exposes metrics in the Prometheus format (also known as OpenMetrics), Iterapp can gather them using our metrics infrastructure.

Send custom metrics to Iterapp

You should expose metrics in your app. How you do that depends on how your app is coded, but there is probably a library for your tech of choice.
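
For example, in node you might use the prom-client library (an assumption; use whatever fits your stack):

const http = require('http');
const client = require('prom-client');

// Collect default process metrics (CPU, memory, event loop lag, ...).
client.collectDefaultMetrics();

// A hypothetical counter for handled requests.
const requests = new client.Counter({
  name: 'myapp_requests_total',
  help: 'Total number of handled requests',
});

http.createServer(async (req, res) => {
  if (req.url === '/metrics') {
    res.setHeader('Content-Type', client.register.contentType);
    res.end(await client.register.metrics());
    return;
  }
  requests.inc();
  res.end('ok');
}).listen(process.env.PORT || 3000);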

Then add the following to your iterapp.toml:

[metrics]
# The path where your metrics are exposed
path = "/metrics"
# The default is to scrape the http port defined for your app, but we can also scrape another port
# (omit port to use the default)
port = 3000

Query metrics

Right now the only way to query metrics is using our grafana instance at https://grafana.iterapp.no. This is only accessible to Iterate employees (members of the github organization iterate); in the near future we want metrics to be available to users outside of Iterate.

Retention

Metrics are deleted after 1 month, so they are not meant for data you need in the long term.

Disable cache for a build step

In some edge cases you might want to disable the cache for a build step in the Dockerfile, for instance if you are using gatsby and fetching external data which might have changed since you last built the image. This is not supported by Docker, but we have enabled a hack that makes it possible. Note that this will disable the cache for that and all subsequent build steps.

When building your Dockerfile, we set a build argument, BUILD_DATE_USED_FOR_CACHE_BUSTING, to the current ISO-8601 timestamp. This can be used in a build step, and since it changes for each build, the cache will never be reused from that step on.

Example Dockerfile:

# This is the build-container, we need node to build
FROM node:16 as build

# Set NODE_ENV to production to create a production-build of react
ENV NODE_ENV production

# Create a directory to use for building
RUN mkdir /app
# Set the build-directory as the working (and current) directory
WORKDIR /app

# We start by building just dependencies, this means that we can use cached dependencies
# if these files are not changed
COPY gatsby-site/package.json .
# We are using yarn. If you use npm, this would be package-lock.json
COPY gatsby-site/yarn.lock .
# Remove this if you are not using typescript. (but you should use typescript)
COPY gatsby-site/tsconfig.json .
COPY gatsby-site/create-env.js .
# Install dependencies
RUN yarn install --pure-lockfile

# Copy the actual code and public (static files)
COPY gatsby-site /app
# Run the create-env script before building
RUN node /app/create-env.js

# This is set by iterapp to the current datetime
ARG BUILD_DATE_USED_FOR_CACHE_BUSTING=not_set

# This one is required to bust the cache. Required because yarn build fetches data from sanity, and
# Docker does not know if that has changed
RUN echo "Build date: $BUILD_DATE_USED_FOR_CACHE_BUSTING"

# This will never be cached
RUN yarn build


# We don't need the node-container in production, we just need something that can serve the static
# files to the user. Nginx is really good at this. `FROM` starts a new container
FROM nginx:1.21.1-alpine

# We copy the built files from the build-container. These files are in `/app/public` after the
# build-step above.
COPY --from=build /app/public /usr/share/nginx/html

How HOPS deploys your applications to Kubernetes

This page has a high-level overview of how your applications are deployed in our Kubernetes environment. We don't provide any guarantees about the information in this page. As we work to make HOPS less tied to Kubernetes, we might use the Kubernetes internals differently to accomplish our goals, and provide better interfaces for you to build, run and fix your applications.

Until then, you might need to pop up the hood and interact with some parts of the Kubernetes engine.

Apps and namespaces

Each app in HOPS is deployed to one or more Namespaces, one for each environment, such as prod or test. We currently construct namespaces in the format apps-{app}-{environment}. Note that underscores (_) are replaced with dashes (-).
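
For example, a hypothetical app named my_app ends up in the namespace apps-my-app-test for the test environment:

kubectl -n apps-my-app-test get pods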

Each appling in your app is deployed as a Deployment, each with its own Service, making it reachable internally by other applings in the same namespace. The appling's service is discoverable as {appling} for apps with only one appling, and {app}-{appling} for projects with more than one appling.
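
A sketch of calling a sibling appling through its Service, assuming a hypothetical app myapp with applings web and api (node 18+ for the built-in fetch):

// From the "web" appling, the "api" appling is reachable by its service name.
async function getApiStatus() {
  const res = await fetch('http://myapp-api/health');
  return res.status;
}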

Your applings' containers, built from their Dockerfile(s), run as Pods. Pods are workload descriptions that let Kubernetes know how to run one or more containers. We might run your appling's container along with sidecar containers1, such as the SQL proxy container. This usually has no impact on your applications.

Your deployments are configured with multiple replicas of your pods, meaning that we run multiple instances of your pods concurrently.

Builds and deployments

We build your applings using BuildKit builders. We build for the x86-64 architecture, and run our clusters on Linux.

Whenever you publish a new version of your application, we build all your applings and update their deployments. This causes Kubernetes to roll out new ReplicaSets, which gradually roll out the new versions of your applings. We do not guarantee that you will not have multiple versions running during a deployment. Applings may individually succeed or fail to deploy.

As your pods become healthy, they receive traffic, and old pods are shut down using the SIGTERM signal.
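
Your app should handle SIGTERM and finish in-flight requests before exiting; a minimal sketch in node:

const http = require('http');

const server = http.createServer((req, res) => res.end('ok'));
server.listen(process.env.PORT || 3000);

// On SIGTERM, stop accepting new connections, let in-flight requests
// finish, then exit cleanly.
process.on('SIGTERM', () => {
  server.close(() => process.exit(0));
});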

How we route traffic to your applings

Traffic to your applings is routed based on the Host header in incoming HTTP requests, matched against the domains field in iterapp.toml. We route requests using the nginx ingress controller, terminating TLS in the ingress and passing unencrypted HTTP traffic to your pod.

How we keep your pods alive2

We check for readiness, not liveness, by default (see the iterapp.toml reference)3. We use HTTP checks, and you must respond within a reasonable time with a status greater than or equal to 200 and less than 400. Any other code indicates failure.

Readiness probes simply cause the load balancers to leave your pods alone on failure. If your appling is permanently unhealthy and you need to restart it, you may shut down using any exit code. When pods stop, for any reason, they are restarted.

1

See Pods that run multiple containers that need to work together. We use containers like this to provide storage and network access.

2

We're currently evaluating our health check system.

What is HOPS V3?

Note

Help us squash bugs! Report them, and see our list of known issues.

Normally HOPS is continually developed and delivered without release versions, downtime, or breaking changes. But we have recently made some big moves that brought big changes to the inner workings of HOPS and a couple of breaking changes.

We don't want to break your applications, so we let Version 2 (V2) and Version 3 (V3) exist at the same time. We (soon) recommend that everyone migrate over to V3. All new development happens on V3. The two versions will coexist for a period while people migrate their applications. After that period we will remove V2, and there will be only one version of HOPS again.

What is new in HOPS V3?

New Web Frontend!

| V2 | V3 |
| --- | --- |
| https://iterapp.no | https://headless-operations.no |
| React | Svelte |
| Designed by a programmer | Designed by a designer |
| Boring light mode | Exciting light mode |

New CLI!

A new CLI with better ergonomics and more sense! The V3 CLI currently exists as a subcommand for the V2 CLI. (hops v3 ...)

New Backend!

Most of the backend parts are rewritten. The way we interact with Kubernetes is turned slightly inside out.

Most of the backend changes are only detectable to our end users through a general feeling of calm satisfaction.

Next steps

Be an early adopter and migrate to V3!

Migrate from V2 to V3

Should you migrate from V2 to V3?

"the tide abides for, tarrieth for no man, stays no man, tide nor time tarrieth no man"

Yes, you should migrate from V2 to V3 today unless...

You are dependent on the Slack integration

The Slack integration we all use and love for V2 is not compatible with V3. And although a V3 version exists, it is not currently working.

We recommend the CLI as an alternative for telling HOPS what to do (build, deploy, etc), and the CLI and Web Interface as an alternative to keeping track of what is happening.

If you still prefer to use Slack, sit tight until we've built the Slack integration. (And also let us know that you're waiting, please.)

You're somehow tightly coupled to V2

  • Maybe you have command line scripts referring to the V2 CLI
  • Maybe you have internal documentation referring to the V2 CLI/web/slack bot
  • Maybe you have a big team, and coordinating a change like this requires a little planning

How do I migrate from V2 to V3?

1. Update CLI

The V3 CLI exists as a subcommand on the V2 CLI. If you're keeping your tools updated, you might already have the V3 CLI on your computer!

  1. Install HOPS CLI, see how to install cli!
  2. Run hops self-update to make sure you have the latest and greatest
  3. Confirm that all is well by running hops v3 version and observing that the version is hops_cli/v0.2.0 or greater. Great!

2. Log in

We have to log in to V3.

  1. Run hops v3 login. You'll be directed to the new web interface and asked to log in with your GitHub account.
  2. After you've logged in, go to https://hops.run/org/none/tokens and generate a new token.
  3. Paste the token into the terminal.
  4. Confirm that you have access to apps by running hops v3 list-apps. If you don't see any apps listed, ask for help in #hops-support.

You are now able to use V3!

3. Deregister V2 (per app)

To get your app on to V3, you first need1 to get it off V2.

In #iterapp-logs, on the Iterate Slack, write /iterapp deregister APPNAME where APPNAME obviously is the name of the app you intend to deregister.

Note that this does not undeploy or remove your app from the internet. It only turns off the build/deploy automation from V2.

1

You can probably, technically, have the app registered in both V2 and V3, but messages from the system might make less sense, you might get a lot of weird noise, and the HOPS Team neither supports nor condones dual registration.

4. Register V3 (per app)

In your terminal, write hops v3 register --cluster iterapp iterate/APPNAME where APPNAME obviously is the name of your app. If the GitHub repository is not under the iterate organization, replace iterate with the correct organization.

5. Use V3

The world is your oyster, run hops v3 for a list of exciting opportunities! Visit https://headless-operations.no to witness the endless percolation of builds, deploys, logs.

Go to #hops-support and tell us how to improve this migration guide.

How do I unmigrate back to V2?

  1. Run hops v3 unregister to learn how to deregister from V3.

  2. In #iterapp-logs write /iterapp register iterate/APPNAME where APPNAME obviously is the name of the app you want to reregister.

Create a new project on V3

Note

If your project already runs on V2, you should follow the migration guide instead of this guide.

Application

Firstly you need a somewhat functional web server. At a minimum it must be able to somehow return a 200 OK.

Repository

If you don't already have a GitHub repository for your project, create one.

It should live in the Iterate Organization.

Put your code here.

Dockerfile

For HOPS to build container images for your app, you need to provide a Dockerfile.

iterapp.toml

You need to provide an iterapp.toml, with the configuration for your project.

health

You need to set up your application to respond to health checks as described in the health/readiness reference.

Register

You need to register your project with HOPS. This is done with the HOPS CLI.

CLI

Warning

When interacting with projects running on V3, you must use the v3 subcommand in the HOPS CLI.

Like hops v3 status.

Install

If you don't already have the HOPS CLI installed, install it as described in the cli howto.

Login

You need to log in (again).

hops v3 login

Register

hops v3 register --cluster <CLUSTER> <owner/name>

If you don't know which <CLUSTER> to use, it is probably iterapp. <owner/name> means the GitHub organization (iterate) and the GitHub repo (your new app).

Now what?

If the registration goes ok, you are now registered. Good job.

You can try to confirm that it really worked by heading to https://headless-operations.no and seeing if your project is listed there.

Create React App with Node Backend

Some apps need both a frontend and a backend, where the backend might be written in node.js and the frontend is Create React App. These apps can use this configuration. It is a variant of the Create React App setup.

Many of our ventures use Create React App directly and have no backend. For those, you can use these files.

Note

This assumes you are using Create React App with typescript and yarn. If that is not the case, read the comments to make the necessary changes.

.dockerignore

Ignore some files that we don't want to include in the docker build. This makes it faster to test locally with docker build ., and it also makes the local build more similar to what happens in iterapp.

.dockerignore
Dockerfile
node_modules

Dockerfile

# This is the build-container, we need node to build
FROM node:10 as build

# Set NODE_ENV to production to create a production-build of react
ENV NODE_ENV production

# Create a directory to use for building
RUN mkdir /app
# Set the build-directory as the working (and current) directory
WORKDIR /app

# We start by building only dependencies, this means that we can use cached dependencies
# if these files are not changed
COPY package.json .
# We are using yarn. If you use npm, this would be package-lock.json
COPY yarn.lock .
# Remove this if you are not using typescript. (but you should use typescript)
COPY tsconfig.json .
# Install dependencies
RUN yarn install --pure-lockfile

# Copy the actual code and public (static files)
COPY src src
COPY public public
# Build the app. This will output to `/app/build` the static files which need to be served
# to the user
RUN yarn build


# We don't need the node-container in production, we just need something that can serve the static
# files to the user. Nginx is really good at this. `FROM` starts a new container
FROM nginx:1.15.12-alpine

# We copy the built files from the build-container. These files are in `/app/build` after the
# build-step above.
COPY --from=build /app/build /usr/share/nginx/html

iterapp.toml

readiness_path = "/"

Troubleshooting and Support

This article lists the most common questions and errors that you might encounter. Please let us know, or even add a PR to improve the docs, if you have other types of issues.

My application is deployed but responds with an HTTP 503?

This means that the application is deployed but there is something Iterapp is not happy with.

Possible errors:

  1. HEALTH ENDPOINT

    All applications have a health endpoint which needs to respond with HTTP 200. It might be that Iterapp does not get a correct response from your application's health endpoint. The iterapp.toml file sets a default health endpoint path, but you are responsible for implementing the endpoint in your app.

  2. CreateContainerConfigError

    This means that there is an error in the configuration. The error message is more specific about what the configuration error is.

    • For instance, the error message Error: secret "iterapp-api-token" not found means that a secret is missing in the environment.

    Run kubectl -n <app-namespace> get secrets to list secrets.

I've been hitting the re-run button to start a build from github, but nothing happens?

Correct, rerunning a build does not work from the web page.

V2✨ V3 ✨

Rerun the build by using the build command in slack

Rerun the build by using the CLI.