How to Add Load Balancers to Kubernetes Clusters

DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service that lets you deploy Kubernetes clusters without the complexities of handling the control plane and containerized infrastructure. Clusters are compatible with standard Kubernetes toolchains and integrate natively with DigitalOcean Load Balancers and volumes.

The DigitalOcean Cloud Controller supports provisioning DigitalOcean Load Balancers in a cluster’s resource configuration file. Load balancers created in the control panel or via the API cannot be used by your Kubernetes clusters. The DigitalOcean Load Balancer Service routes load balancer traffic to all worker nodes on the cluster. Only nodes configured to accept the traffic will pass health checks. Any other nodes will fail and show as unhealthy, but this is expected. Our community article, How to Set Up an Nginx Ingress with Cert-Manager on DigitalOcean Kubernetes, provides a detailed, practical example.

The example configuration below defines a load balancer and creates it if one with the same name does not already exist. Additional configuration examples are available in the DigitalOcean Cloud Controller Manager repository.

Create a Configuration File

You can add an external load balancer to a cluster by creating a new configuration file or adding the following lines to your existing service config file. Note that both the type and ports values are required for type: LoadBalancer:

spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
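If you save the definition to a file (the filename sample-load-balancer.yaml below is only an example), you can apply it to the cluster with kubectl:

```shell
# Apply the service definition; the cloud controller then provisions
# a DigitalOcean Load Balancer if one does not already exist.
kubectl apply -f sample-load-balancer.yaml
```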


You can configure how many nodes a load balancer contains at creation time by setting the service.beta.kubernetes.io/do-loadbalancer-size-unit annotation. In the context of a service file, this looks like:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    kubernetes.digitalocean.com/load-balancer-id: "your-load-balancer-id"
    service.beta.kubernetes.io/do-loadbalancer-size-unit: "3"
    service.beta.kubernetes.io/do-loadbalancer-disable-lets-encrypt-dns-records: "false"
spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80


This is the minimum definition required to trigger creation of a DigitalOcean Load Balancer on your account and billing begins once the creation is completed. Currently, you cannot assign a reserved IP address to a DigitalOcean Load Balancer.

The number of nodes a load balancer contains determines how many connections it can maintain at once. Load balancers with more nodes can maintain more connections, making them more highly available. The number of nodes can be an integer between 1 and 100. The default size is 1 node. You can resize the load balancer after creation once per minute.
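One way to resize an existing load balancer is to update the size unit annotation on its service; the service name nginx below comes from the earlier example, and the target size of 5 is arbitrary:

```shell
# Resize the load balancer to 5 nodes by updating the annotation.
# (The service name "nginx" matches the example manifest above.)
kubectl annotate service nginx \
  service.beta.kubernetes.io/do-loadbalancer-size-unit="5" --overwrite
```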

The do-loadbalancer-size-unit annotation is not available in the following regions: AMS2, NYC2, SFO1. To specify a size for your load balancer in these regions, use the legacy annotation do-loadbalancer-size-slug. You can only use the lb-small value, which equates to a load balancer with one node.
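In those regions, the annotations section of the service metadata would instead look like the following sketch (lb-small is the only accepted value):

```yaml
metadata:
  name: nginx
  annotations:
    # Legacy sizing annotation for AMS2, NYC2, and SFO1.
    service.beta.kubernetes.io/do-loadbalancer-size-slug: "lb-small"
```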

Show Load Balancers

Once you apply the config file to a deployment, you can see the load balancer in the Resources tab of your cluster in the control panel.

Alternatively, use kubectl get services to see its status:

kubectl get services
NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes             ClusterIP                   <none>        443/TCP        2h
sample-load-balancer   LoadBalancer                <pending>     80:32490/TCP   6s

When the load balancer creation is complete, the EXTERNAL-IP value changes from <pending> to the load balancer's external IP address. In the PORT(S) column, the first port is the incoming port (80), and the second port is the node port (32490), not the container port supplied in the targetPort parameter.
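To wait for the external IP to be assigned, you can watch the service until its status updates (sample-load-balancer is the service name from the example output above):

```shell
# Poll the service until EXTERNAL-IP changes from <pending> to an address.
kubectl get service sample-load-balancer --watch
```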

In addition to the cluster’s Resources tab, cluster resources (worker nodes, load balancers, and volumes) are also listed outside the Kubernetes page in the DigitalOcean Control Panel. If you rename or otherwise modify these resources in the control panel, you may render them unusable to the cluster or cause the reconciler to provision replacement resources. To avoid this, manage your cluster resources exclusively with kubectl or from the control panel’s Kubernetes page.

Show Details for One Load Balancer

If the provisioning process for the load balancer is unsuccessful, you can access the service’s event stream to troubleshoot any errors. The event stream includes information on provisioning status and reconciliation errors.

To get detailed information about the load balancer configuration of a single load balancer, including the event stream at the bottom, use kubectl’s describe service command:

kubectl describe service <LB-NAME>
Name:                     sample-load-balancer
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"sample-load-balancer","namespace":"default"},"spec":{"ports":[{"name":"https",...
Selector:                 <none>
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
LoadBalancer Ingress:
Port:                     https  80/TCP
TargetPort:               443/TCP
NodePort:                 https  32490/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age               From                Message
  ----    ------                ----              ----                -------
  Normal  EnsuringLoadBalancer  3m (x2 over 38m)  service-controller  Ensuring load balancer
  Normal  EnsuredLoadBalancer   1m (x2 over 37m)  service-controller  Ensured load balancer


For more about managing load balancers, see: