Backstage Recipes
1. Get started
In your terminal, run the following commands one-by-one:
git clone https://github.com/platform-recipes/backstage.git [YOUR_APP_NAME]
cd [YOUR_APP_NAME]
git remote remove origin
yarn install
yarn dev
The Backstage build process requires Yarn and Node 18 or greater. Run node -v
in your terminal to check your Node version.
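As a small sketch, the prerequisite check can be scripted so it fails loudly when the Node version is too old (assumes a POSIX shell and that node, if installed, is on the PATH):

```shell
# Check that the local Node satisfies the Node 18+ requirement
if ! command -v node >/dev/null 2>&1; then
  echo "node not found - install Node 18 or greater" >&2
else
  # Extract the major version from `node -v`, e.g. v18.17.0 -> 18
  NODE_MAJOR=$(node -v | sed 's/^v\([0-9][0-9]*\).*/\1/')
  if [ "$NODE_MAJOR" -ge 18 ]; then
    echo "Node $(node -v) satisfies the Node 18+ requirement"
  else
    echo "Node 18 or greater is required (found $(node -v))" >&2
  fi
fi
```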
After building the application, we are ready to start it. To do this, we need Docker and Tilt installed; then we can simply run:
tilt up
2. Deployment
- Deploy to AWS
- With Github Actions
3. Reference catalog
- Initially, this will just be the official one.
4. Software templates
...
5. Entity onboarding
These catalog generation recipes provide ways to bootstrap meaningful data into your Backstage instance very quickly. This gives you a huge leg up, as many organisations can spend weeks or even months getting this part right. Because we provide automated tools to discover what you already have that would be useful to feed into Backstage, teams get the benefits from day one. Whether you want to quickly find items for your Backstage catalog from your existing source code, or simply want an easier way to handcraft your catalog information from spreadsheets or our configuration data file, we've got you covered.
To get started, you'll need node/npm installed, then run the following:
npm install -g recipes4backstage
The above will install the recipes to your machine.
5.1. Generation from source code
About this recipe
This recipe provides an automated way to discover candidate catalog entries for Backstage from existing source code. It takes a little bit of initial setup, but once you've got it in place, all you ever need to do is point it at a Github organisation that you have access to, and it will introspect all the source code to discover catalog items for Backstage automatically.
This is an excellent approach for getting started quickly on Backstage given a large codebase. It will save you days of effort (maybe weeks if it's an entire IT department's Github organisation). The script saves you time by introspecting the code and then suggesting catalog entries that you can include in Backstage. Without this approach, you would have to register each item manually, usually by having team members handcraft catalog YAML files from scratch.
Clone your desired repositories locally
We will create a local script called clone-all.sh and use it to download our desired repositories. You can skip this step if you already have all the desired repositories downloaded locally and just want to run the catalog generation against them.
This script provides a fast way to download your entire code repository, or any given organisation's repositories, to your local machine. We will do this so that we can run our generation utility against the code in order to discover catalog items automatically.
Step 1: Save the following into a file called clone-all.sh.
#!/bin/bash
#
# Clone (or update) all repositories for the given organisation
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
FILE="$SCRIPT_DIR/.env"
if test -f "$FILE"; then
  . "$FILE"
fi
echo "Enter Github organisation"
read -r USER
if [ -z "${TOKEN}" ]; then
  echo "Enter a personal access token for $USER"
  read -r -s TOKEN
fi
LOCATION=$USER
REPO_DIR="$HOME/work/repositories/${LOCATION}"
if [ -n "${1+x}" ]; then
  REPO_DIR="$1"
fi
mkdir -p "$REPO_DIR"
ORIGINAL_DIR=$(pwd)
cd "$REPO_DIR" || exit 1
# Note: the Github API caps per_page at 100; organisations with more
# repositories than that will need to paginate.
url="https://api.github.com/orgs/${USER}/repos?per_page=100"
RESPONSE=$(curl -u "${TOKEN}:x-oauth-basic" -s "${url}")
REPOS=$(echo "$RESPONSE" | jq -r '.[].full_name')
for repo in $REPOS
do
  echo "$repo"
  DIR=$(basename "$repo")
  if [ -d "$DIR" ]; then
    echo "Repository exists at ${DIR}"
    PREVIOUS_DIR=$(pwd)
    cd "${DIR}" || continue
    git remote set-url origin "https://oauth2:${TOKEN}@github.com/${repo}.git"
    git pull
    cd "$PREVIOUS_DIR" || exit 1
  else
    echo "Cloning $repo"
    cd "$REPO_DIR" || exit 1
    git clone "https://oauth2:${TOKEN}@github.com/${repo}.git"
  fi
done
cd "$ORIGINAL_DIR" || exit 1
Step 2: Create a Github Personal Access Token (PAT). We will use it in the next step.
Step 3: Run TOKEN=<PAT> ./clone-all.sh. When prompted, enter your Github user or organisation. Provided your PAT is valid for the user or organisation that you select, the script will immediately start downloading every available private and public repository for that user or organisation.
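Rather than pasting the token on every run, you can store it in a .env file next to the script; clone-all.sh sources .env from its own directory on startup, so a TOKEN defined there is picked up automatically. The token value below is a placeholder.

```shell
# Store the PAT in .env beside clone-all.sh so the script picks it up
cat > .env <<'EOF'
TOKEN=ghp_your_personal_access_token
EOF
chmod 600 .env   # keep the token readable only by you
```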
Run the generation
Go to the root directory that contains all of your source code repositories.
Kick off the generation process by running
recipes4backstage catalog generate
Once the generation has completed, you will be able to review all of the discovered catalog items.
If you are happy with all the items, you have the option to save the desired catalog-info.yaml files to your Github repositories.
Commit and push your changes
Once your catalog-info.yaml files are as desired within your local git clones, you can commit and push them for each repository. Assuming your Backstage instance is running and set up with Github Discovery (as will be the case if you've followed one of our provisioning recipes), your catalog items will start appearing in Backstage automatically.
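For reference, a minimal catalog-info.yaml committed to a repository might look like the following sketch; the component name, owner and project slug are placeholders you would replace with your own values:

```yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: my-service                               # placeholder component name
  annotations:
    github.com/project-slug: my-org/my-service   # placeholder org/repo
spec:
  type: service
  lifecycle: production
  owner: team-a                                  # placeholder owning team
```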
5.2. Generation from a data file
About this recipe
In the previous section we looked at a recipe for generating catalog items from your existing source code. In this section, we look at a completely different strategy, which is ideal when your source code doesn't yield very meaningful catalog items (as can sometimes be the case if your code is fairly unconventional, uses less common languages, or is not well structured from an architectural or naming standpoint). With this recipe, we instead give you a way to model your architecture simply through a minimalistic data file. The data file will then produce many catalog items, which you can commit alongside your code.
Run the generation
...
Commit and push your changes
...
5.3. Generation from a spreadsheet
About this recipe
This approach is similar to the data file approach. However, it can be more useful for collaboration involving multiple actors. Instead of trying to edit a data file collaboratively, it allows multiple people to come together and agree on ownership and the catalog items that are desired. The sheet can then be used not only to seed catalog items from scratch, but also to continually maintain their data, as an alternative to maintaining changes directly against catalog-info.yaml files.
Establish your Google sheet
...
Run the generation
...
Rinse and repeat
...
6. Kubernetes onboarding
The Kubernetes tab is available in the catalog for all service entities. Backstage will connect to any Kubernetes clusters that you have defined in your app-config.yaml. Then, any components in Kubernetes that match a certain annotation specified in the catalog will appear under the Kubernetes tab. From here, the user can drill down into each cluster and see the health status of deployments, view metrics and even access the logs.
6.1. Connecting to the local cluster
If you are running Backstage inside Kubernetes, you can connect it to this cluster with the following inside the app-config.yaml:
kubernetes:
  clusterLocatorMethods:
    - type: 'config'
      clusters:
        - url: https://kubernetes.default.svc.cluster.local
          name: local
          authProvider: 'serviceAccount'
          skipTLSVerify: true
          skipMetricsLookup: false
          serviceAccountToken:
            $file: /var/run/secrets/kubernetes.io/serviceaccount/token
          caData:
            $file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
6.2. Connecting to other accounts
When we deployed our Backstage instance to EKS, we associated it with a ServiceAccount called backstage. This service account references a role in AWS.
apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT-ID>:role/backstage
  name: backstage
  namespace: backstage
If we go to this role in AWS and look at Trust relationships, we should see that it associates the Kubernetes service account with the cluster. The line system:serviceaccount:backstage:backstage maps to the service account backstage in the namespace backstage, whereas the OIDC principal maps to the EKS cluster. If we go to the EKS cluster in the AWS console, we will see that the OpenID Connect provider URL has an ID that matches the ID in our role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<ACCOUNT-ID>:oidc-provider/oidc.eks.ap-southeast-2.amazonaws.com/id/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.ap-southeast-2.amazonaws.com/id/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:aud": "sts.amazonaws.com",
          "oidc.eks.ap-southeast-2.amazonaws.com/id/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:sub": "system:serviceaccount:backstage:backstage"
        }
      }
    }
  ]
}
We can add permissions to give the role access to any resources within our AWS account. Additionally, if we want to access EKS clusters in another AWS account, we can set a policy under Permissions referencing a role in the other account. This indicates that the role in this account can assume the role in the other account, and that it has full access to EKS. We can restrict this further if we like.
{
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Effect": "Allow",
      "Resource": "arn:aws:iam::<TARGET-AWS-ACCOUNT-ID>:role/backstage"
    },
    {
      "Action": [
        "eks:*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ],
  "Version": "2012-10-17"
}
Take note of the ARN of the given role; we will need to provide it to the role in the other account in order to indicate that it is allowed to assume that role. In the role in the other account, we would have the following policy under Permissions.
{
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Effect": "Allow",
      "Resource": "arn:aws:iam::<SOURCE-AWS-ACCOUNT-ID>:role/backstage"
    },
    {
      "Action": [
        "eks:*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ],
  "Version": "2012-10-17"
}
Finally, in that role we need to indicate that we trust the source account by adding the following under Trust relationships:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<SOURCE-AWS-ACCOUNT-ID>:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Then, we should connect to the Kubernetes cluster and add the role to the aws-auth configuration map. We can do this with kubectl edit cm -n kube-system aws-auth. We would add something like this:
apiVersion: v1
data:
  mapAccounts: |
    []
  mapRoles: |
    - "groups":
        - "system:masters"
      "rolearn": "arn:aws:iam::<TARGET-AWS-ACCOUNT-ID>:role/backstage"
      "username": "eks-backstage"
We can set the username to anything as it won't be referenced anywhere else.
We would repeat this process for every AWS account that we want to be able to connect to. After this has been set up, we just need to register each cluster in our app-config.yaml. The role that we assume should be the target role. The URL will match the OpenID Connect provider URL shown in the EKS cluster summary in the AWS console.
kubernetes:
  clusterLocatorMethods:
    - type: 'config'
      clusters:
        - url: https://XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.gr7.ap-southeast-2.eks.amazonaws.com
          skipTLSVerify: true
          name: my-cluster
          authProvider: aws
          authMetadata:
            kubernetes.io/aws-assume-role: arn:aws:iam::<TARGET-ACCOUNT-ID>:role/backstage
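Before pointing Backstage at the cluster, you can sanity-check the role chain from a shell with the AWS CLI and kubectl. This is only a sketch; the account ID, cluster name and region below are placeholders matching the examples above.

```shell
# 1. From the source account, confirm the target-account role can be assumed
aws sts assume-role \
  --role-arn arn:aws:iam::<TARGET-ACCOUNT-ID>:role/backstage \
  --role-session-name backstage-test

# 2. Confirm the aws-auth mapping took effect inside the target cluster
aws eks update-kubeconfig --name my-cluster --region ap-southeast-2
kubectl get configmap aws-auth -n kube-system -o yaml
```

If step 1 fails, revisit the trust relationships; if step 2 shows no entry for the role, revisit the aws-auth edit.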
6.3. Kubernetes discovery
Let's observe the following snippet of a Kubernetes deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-app
    component: front-end
It has the labels app: my-app and component: front-end defined. If we want a given component in Backstage to associate this Kubernetes resource with it, we would specify the annotation backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'.
For example, our catalog might look like this
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  annotations:
    backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'
This will work for any resources in Kubernetes that have those same labels, such as Pod, Service and Ingress.
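To preview which resources the annotation will match, you can run the same label selector through kubectl against your cluster; the label values are the illustrative ones from above:

```shell
# Everything carrying both labels - these are the objects the
# Backstage Kubernetes tab will pick up for the component
kubectl get pods,services,ingresses -l app=my-app,component=front-end
```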
7. Quality metrics onboarding
...
8. Docs onboarding
8.1. TechDocs onboarding for Github
- ...
8.2. OpenAPI generation from Javascript
- ...
- ...
8.3. OpenAPI generation from Typescript
- ...