Getting Started with Crossplane

ArunKumar
12 min readMar 3, 2022

As you all know, Crossplane is a buzzword in the market right now. In this GitOps era, companies are trying to evolve and follow the best practices available instead of writing, maintaining, and managing loads of templates. Crossplane helps by allowing a developer to create, modify, update, and delete cloud resources from the cluster itself. This lets you create cloud resources in a cloud-native way, much like how you deploy apps. Say you are deploying an application in your cluster and want to create an S3 bucket in AWS from that cluster to store its data: all you have to do is write a Crossplane manifest (a Managed Resource, or MR) to create the S3 bucket as part of the app deployment process. This might look overwhelming now, but not to worry: by the end of this blog you will see how useful it can be.

What are we going to see in this Blog ?

Picture speaks a thousand words

Intent of this Blog :

Even though you can create AWS (or any cloud) resources from the cluster via Crossplane's cloud providers directly, I additionally wanted to show that if you already have Terraform templates for all your resources, you can leverage those modules from Crossplane and get the resources created through the GitOps model, like the workflow above.

What is the Endgame here? :

We are going to see how to start afresh with Crossplane and how to achieve the workflow above: creating an S3 bucket in AWS with the GitOps model.

NOTE : The S3 bucket can be created directly via the Crossplane AWS provider itself.

However, I have used it as a common resource to demonstrate how it can be created using Crossplane with both the AWS and Terraform providers, so that if you already have a Terraform template, you can leverage that as well and inject it into the Crossplane manifest.

Crossplane supports around 124 AWS resources as of 9th March 2022.
Ref : https://doc.crds.dev/github.com/crossplane/provider-aws

Ref : https://github.com/crossplane-contrib/provider-terraform,

Crossplane VS Terraform :

Crossplane is often compared to HashiCorp’s Terraform. You can find the key differences between Crossplane and Terraform at https://blog.crossplane.io/crossplane-vs-terraform. Instead of reiterating the same points here, I recommend you go through that post before moving on, to understand why Crossplane rather than Terraform, and how we can leverage Crossplane to create and manage resources in the cloud.

Image Credits: Crossplane

What is Crossplane ?

Crossplane is an open source Kubernetes add-on that transforms your cluster into a universal control plane. Crossplane enables platform teams to assemble infrastructure from multiple vendors, and expose higher level self-service APIs for application teams to consume, without having to write any code.

Crossplane extends your Kubernetes cluster to support orchestrating any infrastructure or managed service. Compose Crossplane’s granular resources into higher level abstractions that can be versioned, managed, deployed and consumed using your favorite tools and existing processes.

How do you restrict access to Crossplane actions ?

In Crossplane, every piece of infrastructure is an API endpoint that supports create, read, update, and delete operations.

Note : A good thing to note here is that you can restrict certain actions for developers while still allowing them to create or claim resources from the cluster itself, using Role-Based Access Control (RBAC). This is an added advantage.
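As a sketch of what that restriction can look like (the namespace, API group, and resource names below are hypothetical placeholders, not from this setup), a namespaced Role that only permits working with claims might be:

```yaml
# Hypothetical example: allow developers to manage only their claims.
# The group and resource names depend on your own XRD definitions.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: claim-editor
  namespace: dev-team
rules:
  - apiGroups: ["example.org"]   # the group defined in your XRD
    resources: ["s3buckets"]     # the claim's plural name
    verbs: ["get", "list", "watch", "create", "update", "delete"]
```

Bound to a developer via a RoleBinding, this lets them claim resources without granting access to cluster-scoped Crossplane objects like Providers or Compositions.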

What are Crossplane Terms/Objects ?

XRD — Composite Resource Definition :

This is a cluster-scoped CRD. It follows the OpenAPI v3 schema and is used to define the inputs that are read by a Composition.

Composition :

A Composition lets Crossplane know what to do when someone creates a Composite Resource. This is where integration logic and credentials injection logic is defined.

For this blog, we are using the Crossplane “provider-terraform” in the backend to execute Terraform templates that create the S3 resource.

XRC (Claim) — Composite Resource Claim :

Crossplane uses Composite Resource Claims (or just claims, for short) to allow application operators to provision and manage XRs.

Terraform Analogy of Crossplane Terms:

An XRD is like the variable blocks of a Terraform module.

A Composition is the rest of the module’s HCL code (main.tf) that describes how to use those variables to create a set of resources.

An XRC, or claim, is a little like a tfvars or locals.tf file providing inputs to the module.

The primary focus of this blog is to help you get started with writing the manifests to build your infrastructure. Instead of boring you with more definitions and comparisons, let’s get to the main event:

Prerequisites :

  1. The base EKS cluster from which you are going to create resources using Crossplane should already exist, along with its VPC, security groups, etc. If you have the VPC and security groups in place, you can create a cluster using the eksctl command or through the console itself: https://eksctl.io/usage/creating-and-managing-clusters/
  2. kubectl installed.

How do I install Crossplane in my EKS Cluster?

Assuming you already have your VPC, security groups, and EKS cluster created, follow the steps below to install Crossplane in your EKS cluster using its Helm chart.

```
# Create the namespace and install the components using Helm
kubectl create namespace crossplane-system

helm repo add crossplane-stable https://charts.crossplane.io/stable
helm repo update

helm install crossplane --namespace crossplane-system crossplane-stable/crossplane
```

What is required to create an AWS resource using Crossplane?

  1. Provider manifest (AWS, Terraform, etc.)
  2. Secrets manifest (AWS creds and GitHub keys)
  3. ProviderConfig manifest (injecting the AWS and GitHub secrets for Crossplane to create resources)
  4. Resource manifest (template for creating a specific resource)

Now, there are a few ways with Crossplane to achieve the end goal, which is creating an S3 bucket in AWS from your EKS cluster.

1.0 Using a simple Crossplane manifest.

2.0 Injecting an existing Terraform S3 module into the Crossplane manifest.

3.0 Using a Composite Resource Definition tailored to our needs, basically a module for different teams that can be claimed through an XRC (Claim).

Now that we have seen the ways to achieve our goal, let’s go through them one by one.

NOTE : You can use the following command to describe a resource, or to see the events of the resources we are going to create from here on:

kubectl describe <object-kind> <name-of-the-resource>

1.0. What is required to create an AWS resource using a simple/direct Crossplane manifest?

  • AWS Provider manifest: This installs all the CRDs (Custom Resource Definitions) required to create resources in the cloud, e.g. rdsinstances.database.aws.crossplane.io, ec2.aws.crossplane.io/v1alpha1, etc.
    You only have to apply the following manifest to install the AWS and Terraform providers:

Run the “kubectl apply -f aws-terraform-provider.yaml” command.
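For reference, an aws-terraform-provider.yaml generally looks like the sketch below. The package versions here are examples only; check the provider release pages for current tags.

```yaml
# Installs the AWS and Terraform provider packages (versions are examples).
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws
spec:
  package: crossplane/provider-aws:v0.24.1
---
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-terraform
spec:
  package: crossplane/provider-terraform:v0.2.0
```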

  • Secrets: We should create a secret for the AWS credentials and any other required credentials, such as GitHub, through a manifest.

For the AWS and Git secrets:

```
# Generate the credentials file from your local AWS profile.
AWS_PROFILE=default && echo -e "[default]\naws_access_key_id = $(aws configure get aws_access_key_id --profile $AWS_PROFILE)\naws_secret_access_key = $(aws configure get aws_secret_access_key --profile $AWS_PROFILE)" > aws-creds.ini

# Create a Kubernetes secret with the generated configuration file.
kubectl create secret generic aws-secret-creds -n crossplane-system --from-file=creds=./aws-creds.ini
```

Or use a manifest to create the secrets (run “kubectl apply -f secret.yaml”):

```
---
apiVersion: v1
kind: Secret
metadata:
  name: aws-creds
  namespace: opm-pi
type: Opaque
stringData:
  credentials: |
    [default]
    aws_access_key_id = <access_key>
    aws_secret_access_key = <secret_key>
---
apiVersion: v1
kind: Secret
metadata:
  name: git-credentials
  namespace: opm-pi
type: Opaque
stringData:
  creds: |
    https://<username>:<password>@github.com
```
  • ProviderConfig manifest: This injects the secrets Crossplane needs to create resources in the cloud.

Once the secret is created, create the provider config: run “kubectl apply -f providerconfig.yaml”.
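A providerconfig.yaml along these lines wires the secrets into both providers. The namespace and secret names must match the ones you created; the Terraform provider fields follow the provider-terraform README and may differ between versions, so treat this as a sketch.

```yaml
# ProviderConfig for the AWS provider, reading the AWS creds secret.
apiVersion: aws.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: aws-provider-config
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: opm-pi
      name: aws-creds
      key: credentials
---
# ProviderConfig for the Terraform provider, mounting the same creds
# as a file that the injected provider block points at.
apiVersion: tf.crossplane.io/v1alpha1
kind: ProviderConfig
metadata:
  name: terraform-provider-config
spec:
  credentials:
    - filename: aws-credentials.ini
      source: Secret
      secretRef:
        namespace: opm-pi
        name: aws-creds
        key: credentials
  configuration: |
    provider "aws" {
      shared_credentials_file = "${path.module}/aws-credentials.ini"
    }
```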
  • Resource manifest: For creating the S3 resource in the cloud using Crossplane.

Run:

```
kubectl apply -f s3.yaml
```
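An s3.yaml using the AWS provider’s Bucket managed resource can be as small as the sketch below (the bucket name and region are placeholders):

```yaml
# A minimal S3 bucket created via the Crossplane AWS provider.
apiVersion: s3.aws.crossplane.io/v1beta1
kind: Bucket
metadata:
  name: my-crossplane-bucket    # bucket names must be globally unique
spec:
  forProvider:
    locationConstraint: us-east-1
    acl: private
  providerConfigRef:
    name: aws-provider-config   # must match your AWS ProviderConfig name
```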

Tada ………You should now be able to see the S3 bucket in the AWS console, created via Crossplane from your EKS cluster.

2.0. Injecting an existing Terraform S3 module into the Crossplane manifest.

There are two ways to do this:
1. By injecting the Terraform S3 module inline into the Crossplane manifest.

2. By injecting the Terraform S3 module from a GitHub repo as a remote source into the Crossplane manifest.

  • By injecting the Terraform S3 module inline into the Crossplane manifest.

Since we already installed the required components (the AWS and Terraform providers, secrets, and ProviderConfig) in the previous steps, we can now directly apply the inline template with the “kubectl apply -f inline.yaml” command.
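An inline.yaml generally follows the shape below: a provider-terraform Workspace whose Terraform module is embedded directly in the manifest. The bucket, secret, and config names here are placeholders.

```yaml
# Inline Terraform module executed by provider-terraform.
apiVersion: tf.crossplane.io/v1alpha1
kind: Workspace
metadata:
  name: s3-inline
spec:
  forProvider:
    source: Inline
    module: |
      resource "aws_s3_bucket" "example" {
        bucket = "my-crossplane-inline-bucket"
      }
      output "bucket_arn" {
        value = aws_s3_bucket.example.arn
      }
  writeConnectionSecretToRef:
    namespace: opm-pi
    name: s3-inline-conn        # Terraform outputs land in this secret
  providerConfigRef:
    name: terraform-provider-config
```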

Now you should see the resources in the Console.

Note : In the above manifest, you can see two things:

  • writeConnectionSecretToRef : Some Crossplane resources support writing connection details (things like URLs, usernames, endpoints, and passwords) to a Kubernetes Secret. You specify the secret to write by setting the spec.writeConnectionSecretToRef field. Note that while all managed resources have a writeConnectionSecretToRef field, not all managed resources actually have connection details to write; many will write an empty Secret. Which managed resources have connection details, and what those details are, is currently undocumented.

  • providerConfigRef : This name should match the name given in providerconfig.yaml.

2. By injecting the Terraform S3 module from a GitHub repo as a remote source into the Crossplane manifest.

We have already installed the required components (the AWS and Terraform providers, secrets, and ProviderConfig) in the previous steps.

Note : The Terraform S3 module must live under a /tf directory, with a main.tf inside that /tf directory, for Crossplane to pick up the module. The folder name /tf should not be changed, but it can be nested under another directory.

Once the module is in place under the /tf directory and pushed to the GitHub repo, we can reference it as a remote source and apply the template with the “kubectl apply -f remote.yaml” command. Make sure the /tf path is included in the source so Crossplane can pick it up.
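A remote.yaml points the Workspace at the Git repo instead. The repo URL below is a placeholder, and the //tf suffix (Terraform’s subdirectory syntax) marks the module path; exact module-source syntax can vary by provider-terraform version.

```yaml
# Remote Terraform module pulled from a Git repo's /tf directory.
apiVersion: tf.crossplane.io/v1alpha1
kind: Workspace
metadata:
  name: s3-remote
spec:
  forProvider:
    source: Remote
    module: https://github.com/<username>/<repo>.git//tf
  providerConfigRef:
    name: terraform-provider-config
```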

Now you should see the resources in the Console.

I hope by now you have an idea of how to create a resource using a plain Crossplane manifest and using a remote Terraform module from a GitHub repository.

Now there is one last way: writing a module using a Composition, a Definition, and a Claim.

3.0 Using a Composite Resource Definition tailored to our needs, basically a module for different teams that can be claimed through an XRC (Claim).

This is a challenging one to understand on the first go, but you can do it.

Repo :

I hope you recall the Crossplane terms/objects we discussed initially; we are going to use them now.

This is useful if the DevOps/SRE team needs to handle the creation of resources instead of the developers themselves.

We can basically create a Definition (XRD), a composite resource definition, and share it with the customer/developer so they can claim the resource. We show them the definition and ask them to provide the required input values for the variables it declares via an XRC (Claim), and the resources get created.

We can create resources the inline way and also the remote way, through GitHub repos.

For inline, you can refer this :

https://crossplane.io/docs/v1.6/concepts/composition.html

Now, we shall go with a remote GitHub repo that has the Terraform templates in place under /tf:

In the Definition.yaml :

You basically write a schema/definition of the inputs the Composition expects in order to create the resource: essentially the required variables for the Terraform module that creates the S3 bucket, in the format mentioned above.
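A definition.yaml for our S3 case can look like the sketch below. The group example.org and the bucketName parameter are assumptions for illustration, not values from this setup.

```yaml
# XRD: declares the composite type, its claim, and the input schema.
apiVersion: apiextensions.crossplane.io/v1
kind: CompositeResourceDefinition
metadata:
  name: xs3buckets.example.org   # must be <plural>.<group>
spec:
  group: example.org
  names:
    kind: XS3Bucket
    plural: xs3buckets
  claimNames:                    # enables the namespaced claim (XRC)
    kind: S3Bucket
    plural: s3buckets
  versions:
    - name: v1alpha1
      served: true
      referenceable: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                parameters:
                  type: object
                  properties:
                    bucketName:
                      type: string
                  required:
                    - bucketName
```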

Once that is done, you can create a composition.yaml file, which contains the logic for how to use the variables from the definition to create the resource when it is claimed by the developer. It uses Patches to patch the variables from the claim into the composition.

In the Composition.yaml :

Note :

The labels and compositeTypeRef parameters in the composition file should match the name and kind parameters in definition.yaml; that is how the composition is referenced when creating a resource with this method.

The label format must be the plural name + group name as in definition.yaml, and for compositeTypeRef the kind and apiVersion (group name + version name) parameters must match the definition file.
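Putting that into practice, a composition.yaml might look like the sketch below, assuming a hypothetical definition with group example.org and kind XS3Bucket; all names, the repo URL, and the bucket_name variable are placeholders.

```yaml
# Composition: maps claim inputs onto a provider-terraform Workspace.
apiVersion: apiextensions.crossplane.io/v1
kind: Composition
metadata:
  name: xs3buckets.example.org
  labels:
    crossplane.io/xrd: xs3buckets.example.org  # plural + group, per the note
spec:
  compositeTypeRef:              # must match the XRD's apiVersion and kind
    apiVersion: example.org/v1alpha1
    kind: XS3Bucket
  resources:
    - name: s3-workspace
      base:
        apiVersion: tf.crossplane.io/v1alpha1
        kind: Workspace
        spec:
          forProvider:
            source: Remote
            module: https://github.com/<username>/<repo>.git//tf
            vars:
              - key: bucket_name # variable expected by the Terraform module
                value: placeholder
          providerConfigRef:
            name: terraform-provider-config
      patches:
        - fromFieldPath: spec.parameters.bucketName
          toFieldPath: spec.forProvider.vars[0].value
```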

Now that we have definition.yaml and composition.yaml ready, we can apply them using the

kubectl apply -f <filename> command.

Once these are created, you can use claim.yaml, with the input variables and values required by the definition, to claim the resource; claiming means creating those resources based on your claim values.
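The claim.yaml itself is then tiny: the developer only supplies the inputs declared in the definition. Assuming a hypothetical claim kind of S3Bucket under an example.org group with a bucketName parameter:

```yaml
# Claim (XRC): the namespaced, developer-facing request for the resource.
apiVersion: example.org/v1alpha1
kind: S3Bucket
metadata:
  name: team-a-bucket
  namespace: opm-pi
spec:
  parameters:
    bucketName: my-claimed-bucket
```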

Once you have claimed it, you can check the Terraform workspace to verify that the resource was created successfully:

kubectl get workspace <workspace-name> -n <namespace-name>

Tada ………Now you should see your buckets in the S3 console in AWS.

Finally, integrating this with Argo CD :

Install Argo CD in your EKS cluster by following the steps below:

Accessing the Web UI
The Helm chart doesn’t install an Ingress by default, so to access the Argo CD Web UI we have to port-forward to the service:

kubectl port-forward svc/argo-cd-argocd-server 8080:443

We can then visit https://localhost:8080 to access it.

If you don’t want to use localhost, you can expose the Argo CD server pod using the following command:

kubectl expose pod argocd-server --type=LoadBalancer --name=argocd-test

This will create an ELB, and you can access the load balancer URL to reach the Argo CD UI.

The default username is “admin”. The password is auto-generated and defaults to the pod name of the Argo CD server pod. We can get it with:

kubectl get pods -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2

Once you have access, you can create a new app in the UI and add the URL and path as per the manifest below.

OR

You can apply the following manifest to run the inline and remote Crossplane manifests to create an S3 bucket.

https://github.com/arun12cool/tf/blob/main/crossplane-argocd/argo.yaml
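For reference, the Argo CD Application in that file generally follows the shape below. The path and destination namespace here are assumptions; check the linked manifest for the real values.

```yaml
# Argo CD Application syncing the Crossplane manifests from Git.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: crossplane-s3
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/arun12cool/tf.git
    path: crossplane-terraform   # directory holding inline.yaml/remote.yaml
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: opm-pi
  syncPolicy:
    automated:
      prune: true                # delete resources removed from Git
      selfHeal: true             # revert manual drift in the cluster
```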
Argocd-UI

You should see the above in the Argo CD UI, which shows that the resources have been created; you can verify the same in the S3 dashboard in the AWS console.

Now you can change the name of the bucket in https://github.com/arun12cool/tf/blob/main/crossplane-terraform/inline.yaml and then wait a few seconds, or hit refresh, to see the updated bucket.

You can also manage Argo CD as an app so that it upgrades itself automatically, instead of you managing Argo CD manually: https://github.com/arun12cool/tf/blob/main/crossplane-argocd/argocd-app.yaml

Haa… That was a lengthy blog, but I am hopeful it was worth it, that you now have an understanding of Crossplane, and that it helps you get started with it.

I have not covered the issues and errors I faced, as this is already a big read. However, you can always reach out to me via LinkedIn or chat, and I will be happy to help.
