Setting up MGC in Existing OCM Clusters¶
This guide will show you how to install and configure the Multi-Cluster Gateway Controller (MGC) in clusters that are already configured with Open Cluster Management (OCM).
Prerequisites¶
- A hub cluster running the OCM control plane (>= v0.11.0)
- Open Cluster Management add-ons enabled:
clusteradm install hub-addon --names application-manager
- Any number of additional spoke clusters that have been configured as OCM ManagedClusters (a quick verification check is shown after this list)
- Kubectl (>= v1.14.0)
- Either a pre-existing cert-manager (>= v1.12.2) installation or the Kustomize and Helm CLIs installed
- Amazon Web Services (AWS) and/or Google Cloud Platform (GCP) credentials. See the DNS Provider guide for obtaining these credentials.
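To confirm that your spoke clusters have been registered with the hub, you can list the ManagedCluster resources against the hub cluster's context (a quick check; the context name below is a placeholder for your own hub context):
kubectl get managedclusters --context <HUB_CLUSTER>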
Configure OCM with RawFeedbackJsonString Feature Gate¶
All OCM spoke clusters must be configured with the RawFeedbackJsonString
feature gate enabled.
Patch each spoke cluster's klusterlet
in an existing OCM install:
kubectl patch klusterlet klusterlet --type merge --patch '{"spec": {"workConfiguration": {"featureGates": [{"feature": "RawFeedbackJsonString", "mode": "Enable"}]}}}' --context <EACH_SPOKE_CLUSTER>
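To confirm the feature gate is set on a spoke cluster, you can inspect the klusterlet's work configuration (a quick check; adjust the context name for each spoke cluster):
kubectl get klusterlet klusterlet -o jsonpath='{.spec.workConfiguration.featureGates}' --context <EACH_SPOKE_CLUSTER>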
Setup for hub commands¶
Many of the commands in this document should be run in the context of your hub cluster. Begin by setting a HUB_CLUSTER environment variable, which will be used in the commands below.
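For example (the value is a placeholder; substitute the kubeconfig context name of your hub cluster):
export HUB_CLUSTER=<hub-cluster-context-name>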
Install Cert-Manager¶
Cert-manager first needs to be installed on your hub cluster. If this has not previously been installed on the cluster, see the documentation for installation instructions here.
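If it is not yet installed, one straightforward option is to apply the static manifests from the cert-manager release matching the version in the prerequisites (an illustrative command; follow the linked cert-manager documentation for the method recommended for your environment):
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.2/cert-manager.yaml --context $HUB_CLUSTER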
Installing MGC¶
First, run the following command in the context of your hub cluster to install the Gateway API CRDs:
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml --context $HUB_CLUSTER
Verify the CRDs have been established:
kubectl wait --timeout=5m crd/gatewayclasses.gateway.networking.k8s.io crd/gateways.gateway.networking.k8s.io crd/httproutes.gateway.networking.k8s.io --for=condition=Established --context $HUB_CLUSTER
customresourcedefinition.apiextensions.k8s.io/gatewayclasses.gateway.networking.k8s.io condition met
customresourcedefinition.apiextensions.k8s.io/gateways.gateway.networking.k8s.io condition met
customresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.io condition met
Then run the following command to install the MGC:
kubectl apply -k "github.com/kuadrant/multicluster-gateway-controller.git/config/mgc-install-guide?ref=release-0.2" --context $HUB_CLUSTER
In addition to the MGC, this will also install the Kuadrant add-on manager and a GatewayClass
from which MGC-managed Gateways
can be instantiated.
Verify that the MGC and add-on manager have been installed and are running:
kubectl wait --timeout=5m -n multicluster-gateway-controller-system deployment/mgc-controller-manager --for=condition=Available --context $HUB_CLUSTER
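Since the add-on manager is installed into the same namespace, you can also list the deployments there to confirm both components report as available (deployment names may vary between releases):
kubectl get deployments -n multicluster-gateway-controller-system --context $HUB_CLUSTER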
Verify that the GatewayClass
has been accepted by the MGC:
kubectl wait --timeout=5m gatewayclass/kuadrant-multi-cluster-gateway-instance-per-cluster --for=condition=Accepted --context $HUB_CLUSTER
gatewayclass.gateway.networking.k8s.io/kuadrant-multi-cluster-gateway-instance-per-cluster condition met
Creating a ManagedZone¶
Note: To manage the creation of DNS records, MGC uses ManagedZone resources. A ManagedZone
can be configured to use DNS Zones on both AWS (Route53) and GCP (Cloud DNS). Commands to create each are provided below.
First, depending on the provider you would like to use, export the environment variables detailed here in a terminal session.
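For example, for AWS the Secret and ManagedZone below expect these variables (all values here are placeholders; substitute your own credentials and zone details):
export KUADRANT_AWS_ACCESS_KEY_ID=<aws-access-key-id>
export KUADRANT_AWS_SECRET_ACCESS_KEY=<aws-secret-access-key>
export KUADRANT_AWS_REGION=<aws-region>
export KUADRANT_AWS_DNS_PUBLIC_ZONE_ID=<route53-hosted-zone-id>
export KUADRANT_ZONE_ROOT_DOMAIN=<zone-root-domain>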
Next, create a secret containing either the AWS or GCP credentials. We'll also create a namespace for your MGC configs:
AWS:¶
cat <<EOF | kubectl apply -f - --context $HUB_CLUSTER
apiVersion: v1
kind: Namespace
metadata:
  name: multi-cluster-gateways
---
apiVersion: v1
kind: Secret
metadata:
  name: mgc-aws-credentials
  namespace: multi-cluster-gateways
type: "kuadrant.io/aws"
stringData:
  AWS_ACCESS_KEY_ID: ${KUADRANT_AWS_ACCESS_KEY_ID}
  AWS_SECRET_ACCESS_KEY: ${KUADRANT_AWS_SECRET_ACCESS_KEY}
  AWS_REGION: ${KUADRANT_AWS_REGION}
EOF
GCP:¶
cat <<EOF | kubectl apply -f - --context $HUB_CLUSTER
apiVersion: v1
kind: Namespace
metadata:
  name: multi-cluster-gateways
---
apiVersion: v1
kind: Secret
metadata:
  name: mgc-gcp-credentials
  namespace: multi-cluster-gateways
type: "kuadrant.io/gcp"
stringData:
  GOOGLE: ${GOOGLE}
  PROJECT_ID: ${PROJECT_ID}
EOF
Create a ManagedZone
using the commands below:
AWS:¶
cat <<EOF | kubectl apply -f - --context $HUB_CLUSTER
apiVersion: kuadrant.io/v1alpha1
kind: ManagedZone
metadata:
  name: mgc-dev-mz
  namespace: multi-cluster-gateways
spec:
  id: ${KUADRANT_AWS_DNS_PUBLIC_ZONE_ID}
  domainName: ${KUADRANT_ZONE_ROOT_DOMAIN}
  description: "Dev Managed Zone"
  dnsProviderSecretRef:
    name: mgc-aws-credentials
EOF
GCP:¶
cat <<EOF | kubectl apply -f - --context $HUB_CLUSTER
apiVersion: kuadrant.io/v1alpha1
kind: ManagedZone
metadata:
  name: mgc-dev-mz
  namespace: multi-cluster-gateways
spec:
  id: ${ZONE_NAME}
  domainName: ${ZONE_DNS_NAME}
  description: "Dev Managed Zone"
  dnsProviderSecretRef:
    name: mgc-gcp-credentials
EOF
Verify that the ManagedZone
has been created and is in a ready state:
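For example, using the ManagedZone created above:
kubectl get managedzones mgc-dev-mz -n multi-cluster-gateways --context $HUB_CLUSTER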
NAME DOMAIN NAME ID RECORD COUNT NAMESERVERS READY
mgc-dev-mz ef.hcpapps.net /hostedzone/Z06419551EM30QQYMZN7F 2 ["ns-1547.awsdns-01.co.uk","ns-533.awsdns-02.net","ns-200.awsdns-25.com","ns-1369.awsdns-43.org"] True
Creating a Cert Issuer¶
Create a ClusterIssuer
to be used with cert-manager
. For simplicity, we will create a self-signed cert issuer here, but other issuers can also be configured.
cat <<EOF | kubectl apply -f - --context $HUB_CLUSTER
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: mgc-ca
  namespace: cert-manager
spec:
  selfSigned: {}
EOF
Verify that the ClusterIssuer
is ready:
kubectl wait --timeout=5m -n cert-manager clusterissuer/mgc-ca --for=condition=Ready --context $HUB_CLUSTER
Next Steps¶
Now that you have MGC installed and configured in your hub cluster, you can continue with any of these follow-on guides:
- Installing the Kuadrant Service Protection components