Manual Operator Installation
In case you do not want to use astartectl to manage the Operator, this guide will walk you through all the steps needed to set up the Astarte Kubernetes Operator with no external tool other than kubectl.
Note: Please be aware that this method should be used only if you have very specific reasons not to use astartectl, for example: you're running a fork of the Operator, you're running the Operator outside of the cluster, or you're on the very bleeding edge.
astartectl automates all the steps in this guide internally, and should be your main choice in production.
Clone the Operator Repository
First of all, you will need to clone the Operator repository, as it contains some of the resources needed by the Operator. Ensure you're cloning the right branch for the Operator version you'd like to install. For example, if you want to deploy an Operator in the 0.11 series, you would run:
git clone -b v0.11.4 https://github.com/astarte-platform/astarte-kubernetes-operator.git
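If you're unsure which release tags exist, you can list them directly from the remote with plain git (nothing assumed here beyond the repository URL above):
git ls-remote --tags https://github.com/astarte-platform/astarte-kubernetes-operator.git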
Install RBACs and CRDs
The Operator requires a number of RBAC roles to run, and will also require Astarte CRDs to be installed.
Navigate into the deploy directory of your local clone, and install the service account:
kubectl apply -f service_account.yaml
kubectl get ServiceAccount -n kube-system astarte-operator
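For reference, the manifest you just applied boils down to a plain ServiceAccount along these lines. This is only a sketch inferred from the verification command above; service_account.yaml in your checkout is the authoritative source:
# Sketch of the Operator's ServiceAccount (names inferred from the verification command above)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: astarte-operator
  namespace: kube-system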
Then, install the Cluster Role:
kubectl apply -f role.yaml
kubectl get ClusterRole astarte-operator
Last but not least, install the Cluster Role Binding:
kubectl apply -f role_binding.yaml
kubectl get ClusterRoleBinding astarte-operator
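Again for reference, the binding simply ties the Cluster Role to the Service Account created earlier. The following is a sketch inferred from the resource names above; role_binding.yaml in your checkout remains the authoritative source:
# Sketch of the ClusterRoleBinding (resource names inferred from the verification commands above)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: astarte-operator
subjects:
  - kind: ServiceAccount
    name: astarte-operator
    namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: astarte-operator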
Once done, navigate into the deploy/crds directory of your local clone, and install all Astarte Custom Resource Definitions:
kubectl create -f api.astarte-platform.org_astartes_crd.yaml
kubectl create -f api.astarte-platform.org_astartevoyageringresses_crd.yaml
kubectl get CustomResourceDefinition
Caveats for Astarte CRDs
Astarte CRDs are automatically generated and embed the OpenAPIv3 schema of the Custom Resource. For this reason, they're quite big in size. As a consequence, using kubectl apply on these resources will always fail, as the annotations generated by kubectl would exceed Kubernetes' character limit for annotations.
To work around this, you should always install CRDs with kubectl create and update them with kubectl replace.
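For instance, when upgrading the Operator, updating the CRDs installed above looks like this:
kubectl replace -f api.astarte-platform.org_astartes_crd.yaml
kubectl replace -f api.astarte-platform.org_astartevoyageringresses_crd.yaml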
Running the Operator inside the Cluster
Navigate into the deploy directory of your local clone. The Operator Deployment template can be found in operator.yaml. At this point, you might want to tweak the Deployment, especially the image tag, as sketched below. Once you're ready to go, apply the Deployment to your Kubernetes cluster, and wait until it becomes ready.
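The image tweak usually boils down to a line like the following in operator.yaml. This is an illustrative fragment only: the image repository and tag below are assumptions, and the file in your checkout is authoritative.
# Illustrative Deployment fragment; adjust the tag to the Operator version you checked out
spec:
  template:
    spec:
      containers:
        - name: astarte-operator
          image: astarte/astarte-kubernetes-operator:v0.11.4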
kubectl apply -f operator.yaml
kubectl get deployment -n kube-system astarte-operator
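If you prefer to block until the Deployment is fully rolled out, standard kubectl tooling is enough:
kubectl rollout status deployment/astarte-operator -n kube-system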
Running the Operator outside the Cluster
Note: Running the Operator outside the cluster is not advised in production. Usually, you need such a deployment if you plan on developing the Operator itself. However, this scenario is tested in the e2e tests, and as such provides the very same features as the in-cluster Deployment, which remains the go-to scenario for production.
To run the Operator outside the cluster, you will need the operator-sdk command-line tool. Please refer to the operator-sdk installation guide to install it. Also, please make sure that the version of operator-sdk matches, or is compatible with, the version of the operator-sdk module in the Operator's go.mod file.
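A quick way to compare the two, assuming a standard Go module layout with operator-sdk pinned in go.mod:
# CLI version you have installed
operator-sdk version
# operator-sdk module version the Operator is built against
grep operator-sdk go.mod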
Navigate to the root directory of your clone, and run:
operator-sdk run --local
This will bring up the Operator and connect it to your current Kubernetes context.
Caveats
When running the Operator locally, you're bound to a single namespace, and to all the limitations of operator-sdk run. Covering them is out of the scope of this guide; make sure you're familiar with operator-sdk's User Guide if you plan on running the Operator outside the Cluster.