In Kubernetes, you can create a service with an external load balancer that provides access to it from public networks. The load balancer will receive a publicly accessible IP address and route incoming requests to the correct port on the Kubernetes cluster nodes.
To create a service with an external load balancer:
- Access the Kubernetes cluster via the dashboard. Click Kubernetes access for instructions.
- On the Kubernetes dashboard, create a deployment and a service of the LoadBalancer type. To do so, click + Create and specify a YAML file that defines these objects. For example:
If you have deployed the Kubernetes cluster in a shared physical network, specify the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: load-balancer
  annotations:
    service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
The manifest above describes the nginx deployment with a replica set of two pods and a service of the LoadBalancer type. The annotation used for the service indicates that the load balancer will be internal.
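To make the role of the annotation explicit, the Service object above can also be sketched programmatically. The following is a minimal Python sketch (standard library only; the build_service helper and its parameters are illustrative, not part of any Kubernetes API) that builds the manifest as a dictionary and toggles the internal-load-balancer annotation. The JSON it prints is also accepted by kubectl:

```python
import json

def build_service(name, app_label, port, internal=True):
    """Build a LoadBalancer Service manifest as a plain dict.

    When internal=True, the OpenStack cloud provider annotation is added,
    which makes the created load balancer internal (no floating IP).
    This helper is illustrative only.
    """
    svc = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "selector": {"app": app_label},
            "type": "LoadBalancer",
            "ports": [{"port": port, "targetPort": port, "protocol": "TCP"}],
        },
    }
    if internal:
        svc["metadata"]["annotations"] = {
            "service.beta.kubernetes.io/openstack-internal-load-balancer": "true"
        }
    return svc

# Internal load balancer, as in the manifest above:
manifest = build_service("load-balancer", "nginx", 80)
print(json.dumps(manifest, indent=2))
```

Calling build_service with internal=False simply omits the annotations section, which corresponds to the virtual-network case described below.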
Once the load balancer is created, it will be allocated an IP address from the shared physical network and can be accessed at this external endpoint.
If you have deployed the Kubernetes cluster in a virtual network linked to a physical one via a virtual router, you can use the YAML file above without the annotations section of the load-balancer service. The created load balancer will receive a floating IP address from the physical network and can be accessed at this external endpoint.
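In both cases, the allocated address appears in the Service's status.loadBalancer.ingress field once provisioning finishes. The following is a minimal Python sketch (standard library only) for extracting it from the JSON that `kubectl get service load-balancer -o json` returns; the sample status string is hypothetical:

```python
import json

def lb_endpoint(service_json):
    """Return the load balancer's external IP from a Service object
    (as serialized by `kubectl get service <name> -o json`),
    or None if the address has not been allocated yet."""
    svc = json.loads(service_json)
    ingress = svc.get("status", {}).get("loadBalancer", {}).get("ingress", [])
    return ingress[0].get("ip") if ingress else None

# Hypothetical status of an already-provisioned Service:
sample = '{"status": {"loadBalancer": {"ingress": [{"ip": "10.10.10.5"}]}}}'
print(lb_endpoint(sample))  # 10.10.10.5
```

Until the cloud provider finishes provisioning, the ingress list is empty and the sketch returns None; kubectl shows the same state as `<pending>` in the EXTERNAL-IP column.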
The load balancer will also appear in the self-service panel, where you can monitor its performance and health. For example: