
How does it work?

Overview

There are 3 major components in the Cykubed platform:

  • Our website
  • Our API servers
  • The agent, installed by Helm into the target Kubernetes cluster

The first two are owned and managed by Cykubed. The agent is intended to be installed by the customer in their own cluster, although we also run a number of agents in our own clusters for the fully hosted service. The agent uses a local Redis installation to store persistent state, and communicates with the API servers over websockets.

Anatomy of a test run

Everything is done using standard Kubernetes jobs, persistent volumes and volume snapshots.

The best way to explain this is to follow a workflow from the initial Git push. Note that there are two different schemes, depending on whether the ReadOnlyMany access mode is supported by the Kubernetes platform.

The following assumes this mode is available (which is the case for GKE):

  1. Code is pushed to a branch (i.e. a git push).
  2. The main Cykubed servers are notified via webhook.
  3. A message is sent to the relevant agent via its websocket connection to start a test run.
  4. The agent creates a read-write Persistent Volume Claim (PVC) and a build Job to build the application under test (see the manifest sketches after this list). The Cypress spec files are parsed and added to a queue of file names in the local Redis database.
  5. Once the build is complete, the agent takes a Volume Snapshot of the PVC.
  6. The agent creates a read-only PVC from this snapshot.
  7. The agent creates a runner Job which mounts the RO PVC and runs Cypress. Each pod in the job pulls a file at a time from the queue in Redis until there are no more files to be tested.
  8. In the meantime, the agent has created another Job to prepare the node cache: it simply deletes everything except the node_modules folder and then takes another VolumeSnapshot.
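
To make steps 4 to 7 more concrete, the manifests the agent creates look roughly like the sketches below. All names, sizes, images and parallelism values are illustrative assumptions, not the agent's actual generated manifests.

First, the read-write build volume, and the snapshot taken from it once the build Job has finished (steps 4 and 5):

    # Read-write volume used by the build Job (illustrative name and size)
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: build-pvc
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
    ---
    # Snapshot of the built workspace, taken once the build completes
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: build-snapshot
    spec:
      volumeSnapshotClassName: csi-snapclass   # assumes a CSI driver with snapshot support
      source:
        persistentVolumeClaimName: build-pvc

Steps 6 and 7 then restore that snapshot as a ReadOnlyMany PVC and fan the runner pods out across it:

    # Read-only PVC restored from the snapshot
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: build-ro
    spec:
      accessModes: ["ReadOnlyMany"]
      dataSource:
        apiGroup: snapshot.storage.k8s.io
        kind: VolumeSnapshot
        name: build-snapshot
      resources:
        requests:
          storage: 10Gi
    ---
    # Runner Job: every pod mounts the same read-only volume and pulls spec
    # file names from the Redis queue until the queue is empty
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: runner
    spec:
      parallelism: 4        # one pod per parallel runner (illustrative)
      completions: 4
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: runner
              image: cykubed-runner:example   # placeholder image name
              volumeMounts:
                - name: build
                  mountPath: /build
                  readOnly: true
          volumes:
            - name: build
              persistentVolumeClaim:
                claimName: build-ro
                readOnly: true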

If ReadOnlyMany isn't available, we instead mount the snapshot into an ephemeral volume in the runner Job: step 6 is omitted and step 7 is modified accordingly.
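
In that case the runner pod spec can source a generic ephemeral volume directly from the snapshot. A minimal sketch, reusing the illustrative names from above:

    # Pod spec fragment for the runner Job when ReadOnlyMany is unavailable:
    # each runner pod gets its own ephemeral volume restored from the snapshot
    volumes:
      - name: build
        ephemeral:
          volumeClaimTemplate:
            spec:
              accessModes: ["ReadWriteOnce"]
              dataSource:
                apiGroup: snapshot.storage.k8s.io
                kind: VolumeSnapshot
                name: build-snapshot
              resources:
                requests:
                  storage: 10Gi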