How to Configure Advanced Load Balancer Settings in Kubernetes Clusters

DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service that lets you deploy Kubernetes clusters without the complexities of handling the control plane and containerized infrastructure. Clusters are compatible with standard Kubernetes toolchains and integrate natively with DigitalOcean Load Balancers and block storage volumes.

The DigitalOcean Cloud Controller supports provisioning DigitalOcean Load Balancers through a cluster’s Service configuration file.

Warning
In addition to the cluster’s Resources tab, cluster resources (worker nodes, load balancers, and block storage volumes) are also listed outside the Kubernetes page in the DigitalOcean Control Panel. If you rename or otherwise modify these resources in the control panel, you may render them unusable to the cluster or cause the reconciler to provision replacement resources. To avoid this, manage your cluster resources exclusively with kubectl or from the control panel’s Kubernetes page.

You can specify the following advanced settings in the metadata stanza of your configuration file under annotations.

Name

Available in: 1.14.10-do.3, 1.15.11-do.0, 1.16.8-do.0, 1.17.5-do.0 and later

This setting lets you specify a custom name for the load balancer or rename an existing DigitalOcean Load Balancer. The name must:

  • Be less than or equal to 255 characters.
  • Start with an alphanumeric character.
  • Consist of alphanumeric characters, dots (.), or dashes (-), and must not end with a dash.

If you do not specify a custom name, the load balancer defaults to a name consisting of the character a followed by the Service UID.

The following example creates a load balancer with the name my.example.com:

. . .
metadata:
  name: name-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-name: "my.example.com"
. . .
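To confirm the name that was applied, you can list your load balancers with doctl. This is a sketch assuming doctl is installed and authenticated; the --format columns shown are a subset of those available:

```shell
# List load balancer names and IPs to verify the custom name took effect
doctl compute load-balancer list --format Name,IP,Status
```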


Protocol

Available in: 1.11.x and later

This setting lets you specify the protocol for DigitalOcean Load Balancers. Options are tcp, http, https, and http2. Defaults to tcp.

If https or http2 is specified, then you must also specify either service.beta.kubernetes.io/do-loadbalancer-certificate-id or service.beta.kubernetes.io/do-loadbalancer-tls-passthrough.

The following example shows how to specify https as the load balancer protocol:

. . .
metadata:
  name: https-protocol-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "1234-5678-9012-3456"
    service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
. . .


Sticky Sessions

Available in: 1.11.x and later

Sticky sessions send subsequent requests from the same client to the same node by setting a cookie with a configurable name and TTL (Time-To-Live) duration. The TTL parameter defines the duration the cookie remains valid in the client’s browser. This option is useful for application sessions that rely on connecting to the same node for each request.

  • Sticky sessions will route consistently to the same nodes, not pods, so you should avoid having more than one pod per node serving requests.
  • Sticky sessions require your service to configure externalTrafficPolicy: Local to preserve the client source IP addresses when incoming traffic is forwarded to other nodes.

Use the do-loadbalancer-sticky-sessions-type annotation to explicitly enable (cookies) or disable (none) sticky sessions. If you do not set the annotation, the load balancer disables sticky sessions by default:

metadata:
  name: sticky-session-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-name: "example"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-ttl: "60"


See a full configuration example for sticky sessions.
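Combining the annotations above with the required traffic policy, a minimal Service manifest might look like the following sketch (the name and app selector are illustrative):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: sticky-session-example          # illustrative name
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-type: "cookies"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-name: "example"
    service.beta.kubernetes.io/do-loadbalancer-sticky-sessions-cookie-ttl: "60"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local          # required for sticky sessions
  selector:
    app: sticky-app-example             # illustrative selector
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
```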

Health Checks

Available in: 1.11.x and later (basic functionality)

Health checks verify that your nodes are online and meet any customized health criteria. Load balancers will only forward requests to nodes that pass health checks.

The load balancer performs health checks against a port on your service. By default, it checks the first node port on the worker nodes, as defined in the service.

You can configure most health check settings in the metadata stanza’s annotations section.

metadata:
  name: health-check-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-port: "80"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-path: "/health"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-check-interval-seconds: "3"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-response-timeout-seconds: "5"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-unhealthy-threshold: "3"
    service.beta.kubernetes.io/do-loadbalancer-healthcheck-healthy-threshold: "5"


See full configuration examples for the health check annotations.

External Traffic Policies and Health Checks

Load balancers managed by DOKS assess the health of the endpoints for the LoadBalancer service that provisioned them.

A health check’s behavior depends on the service’s externalTrafficPolicy, which can be set to either Local or Cluster. With a Local policy, a node passes health checks only if it hosts a pod for the service; with a Cluster policy, nodes can forward requests to pods on other nodes within the cluster.

Services with a Local policy assess nodes without any local endpoints for the service as unhealthy.

Services with a Cluster policy can assess nodes as healthy even if they do not contain pods hosting that service. To change this setting for a service, run the following command with your desired policy:

kubectl patch svc myservice -p '{"spec":{"externalTrafficPolicy":"Local"}}'
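You can also set the policy declaratively in the Service manifest rather than patching it afterwards; the fragment below shows only the relevant field:

```yaml
# Declarative equivalent of the kubectl patch command above
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
```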
Note
Because DigitalOcean load balancers terminate client connections and proxy requests to the nodes, setting externalTrafficPolicy to Local does not preserve the client source IP address. If your service requires the request’s original IP address, see Preserving Client Source IP Address.

SSL Certificates

Available in: 1.11.x and later

You can encrypt traffic to your Kubernetes cluster by using an SSL certificate with the load balancer. You’ll have to create the SSL certificate or upload it first, then reference the certificate’s ID in the load balancer’s configuration file. To use the certificate, you must also specify HTTPS as the load balancer protocol using either the service.beta.kubernetes.io/do-loadbalancer-protocol or the service.beta.kubernetes.io/do-loadbalancer-tls-ports annotation. You can obtain the IDs of uploaded SSL certificates using doctl or the API.

Additionally, you can specify whether to disable automatic DNS record creation for the certificate upon the load balancer’s creation using the do-loadbalancer-disable-lets-encrypt-dns-records annotation. If you specify true, we will not automatically create a DNS A record at the apex of your domain to support the SSL certificate. This setting is available in versions 1.21.5, 1.20.11, and 1.19.15.

The example below creates a load balancer using an SSL certificate:

---
kind: Service
apiVersion: v1
metadata:
  name: https-with-cert
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"
    service.beta.kubernetes.io/do-loadbalancer-disable-lets-encrypt-dns-records: "false"
spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
. . .


See the full configuration example.

Note

When you renew a Let’s Encrypt certificate, DOKS gives it a new UUID and automatically updates all annotations in the certificate’s cluster to use the new UUID. However, you must manually update any external configuration files and tools that reference the UUID.

For further troubleshooting, examine your certificates and their details with the doctl compute certificate list command, or contact our support team.
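For example, assuming doctl is installed and authenticated, the following prints each certificate’s ID and details:

```shell
doctl compute certificate list
```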

Forced SSL Connections

Available in: 1.11.x and later

The SSL option redirects HTTP requests on port 80 to HTTPS on port 443. When you enable this option, HTTP URLs are forwarded to HTTPS with a 307 redirect. To redirect traffic, you need to set up at least one HTTP forwarding rule and one HTTPS forwarding rule.

The example below contains the configuration settings required for the redirect to work.

. . .
metadata:
  name: https-with-redirect-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "your-certificate-id"
    service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
spec:
  type: LoadBalancer
  selector:
    app: nginx-example
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
. . .


See the full configuration example for forced SSL connections.

PROXY Protocol

Available in: 1.11.x and later

Enabling the PROXY protocol allows the load balancer to forward client connection information (such as client IP addresses) to your nodes. The software running on the nodes must be properly configured to accept the connection information from the load balancer.

Options are true or false. Defaults to false.

---
. . .
metadata:
  name: proxy-protocol
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
. . .

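The receiving software must be configured to expect the PROXY protocol header. For example, if you run the ingress-nginx controller behind the load balancer, its ConfigMap exposes a use-proxy-protocol key; the name and namespace below assume a default ingress-nginx installation:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace assume a default ingress-nginx install
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"       # parse the PROXY protocol header from the load balancer
```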

Preserving Client Source IP Address

DigitalOcean load balancers do not automatically retain the client source IP address when forwarding requests. To preserve the source IP address, do one of the following:

  • Enable PROXY protocol - This option works with all protocols. This requires the receiving application or ingress provider to be able to parse the PROXY protocol header.

  • Use the X-Forwarded-For HTTP header - DigitalOcean load balancers automatically add this header. This option works only when the entry and target protocols are HTTP or HTTP/2 (except for TLS passthrough).

For more information, see Cross-platform support in the Kubernetes documentation.

Backend Keepalive

Available in: 1.14.10-do.3, 1.15.11-do.0, 1.16.8-do.0, 1.17.5-do.0 and later

By default, DigitalOcean Load Balancers ignore the Connection: keep-alive header of HTTP responses from Droplets and close the connection upon completion. When you enable backend keepalive, the load balancer honors the Connection: keep-alive header and keeps connections open for reuse, allowing it to use fewer active TCP connections to send and receive HTTP requests between itself and your target Droplets.

Enabling this option generally improves performance (requests per second and latency) and is more resource efficient. For many use cases, such as serving web sites and APIs, this can improve the performance the client experiences. However, it is not guaranteed to improve performance in all situations, and can increase latency in certain scenarios.

The option applies to all forwarding rules whose target protocol is HTTP or HTTPS. It does not apply to forwarding rules that use TCP, or that pass HTTPS or HTTP/2 traffic through to the nodes.

There are no hard limits to the number of connections between the load balancer and each server. However, if the target servers are undersized, they may not be able to handle incoming traffic and may lose packets. See Best Practices for Performance on DigitalOcean Load Balancers.

Options are true or false. Defaults to false.

---
. . .
metadata:
  name: backend-keepalive
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-backend-keepalive: "true"
. . .


Accessing by Hostname

Available in: 1.12.10-do.2, 1.13.9-do.0, 1.14.5-do.0 and later

Because of an existing limitation in upstream Kubernetes, pods cannot talk to other pods via the IP address of an external load balancer set up through a LoadBalancer-type service.

As a workaround, you can set up a DNS record for a custom hostname (at a provider of your choice) and have it point to the external IP address of the load balancer. Then, instruct the service to return the custom hostname by specifying the hostname in the service.beta.kubernetes.io/do-loadbalancer-hostname annotation and retrieving the service’s status.Hostname field afterwards.

The workflow for setting up the service.beta.kubernetes.io/do-loadbalancer-hostname annotation is generally:

  1. Deploy the manifest with your service (example below).
  2. Wait for the service’s external IP to become available.
  3. Add an A or AAAA DNS record for your hostname pointing to the external IP.
  4. Add the hostname annotation to your manifest (example below), then deploy it again.

kind: Service
apiVersion: v1
metadata:
  name: hello
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-certificate-id: "1234-5678-9012-3456"
    service.beta.kubernetes.io/do-loadbalancer-protocol: "https"
    service.beta.kubernetes.io/do-loadbalancer-hostname: "hello.example.com"
spec:
  type: LoadBalancer
  selector:
    app: my-app-example
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 80
. . .


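The four steps above can be sketched as a sequence of commands. The service name hello and hostname hello.example.com come from the example; the manifest filename is assumed, and the DNS step uses doctl and assumes your domain is managed on DigitalOcean:

```shell
# 1. Deploy the service (initially without the hostname annotation)
kubectl apply -f hello-service.yaml

# 2. Wait for the external IP to appear
kubectl get service hello -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# 3. Point a DNS A record at that IP (domain managed on DigitalOcean assumed)
doctl compute domain records create example.com \
  --record-type A --record-name hello --record-data <external-ip>

# 4. Add the hostname annotation so the service reports the hostname
kubectl annotate service hello \
  "service.beta.kubernetes.io/do-loadbalancer-hostname=hello.example.com"
```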
Ports

You can specify which ports of the load balancer should use the HTTP, HTTP2, or TLS protocol.

Note
Ports must not be shared between HTTP, TLS, and HTTP2 port annotations.

HTTP Ports

Available in: 1.14.10-do.3, 1.15.11-do.0, 1.16.8-do.0, 1.17.5-do.0 and later

Use this annotation to specify which ports of the load balancer should use the HTTP protocol.

Values are a comma separated list of ports (for example, 80, 8080).

The following example shows how to specify an HTTP port:

. . .
metadata:
  name: http-ports-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-http-ports: "80"
. . .


HTTP2 Ports

Available in: 1.12.10-do.2, 1.13.9-do.0, 1.14.5-do.0 and later

Use this annotation to specify which ports of the load balancer should use the HTTP2 protocol.

Values are a comma separated list of ports (for example, 443, 6443, 7443). If specified, you must also specify either service.beta.kubernetes.io/do-loadbalancer-tls-passthrough or service.beta.kubernetes.io/do-loadbalancer-certificate-id.

If service.beta.kubernetes.io/do-loadbalancer-protocol is not set to http2, then this annotation is required for implicit HTTP2 usage. Unlike service.beta.kubernetes.io/do-loadbalancer-tls-ports, no default port is assumed for HTTP2 to retain compatibility with the semantics of implicit HTTPS usage.

The following example shows how to specify an HTTP2 port:

. . .
metadata:
  name: http2-ports-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-http2-ports: "443,80"
. . .


TLS Ports

Available in: 1.11.x and later

Use this annotation to specify which ports of the load balancer should use the HTTPS protocol:

Values are a comma separated list of ports (for example, 443, 6443, 7443). If specified, you must also specify one of the following:

  • service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: Specifies whether the load balancer should pass encrypted data through to backend Droplets. Options are true or false. Defaults to false.
  • service.beta.kubernetes.io/do-loadbalancer-certificate-id: Specifies the certificate ID used for the HTTPS protocol. To list available certificates and their IDs, install doctl and run doctl compute certificate list.

If no HTTPS port is specified but either service.beta.kubernetes.io/do-loadbalancer-tls-passthrough or service.beta.kubernetes.io/do-loadbalancer-certificate-id is, then port 443 is assumed to be used for HTTPS, except if service.beta.kubernetes.io/do-loadbalancer-http2-ports already specifies 443.

The following example shows how to specify a TLS port with passthrough:

. . .
metadata:
  name: tls-ports-snippet
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: "true"
. . .


Size Unit

Available in: 1.19.15-do.0, 1.20.11-do.0, 1.21.5-do.0 and later

This setting lets you specify how many nodes the load balancer is created with. The more nodes a load balancer has, the more simultaneous connections it can manage.

The value can be an integer between 1 and 100. Defaults to 1.

You can resize the load balancer after creation but not more than once per hour.
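Under that constraint, an existing load balancer can be resized in place by updating the annotation on its Service (the service name and size below are illustrative):

```shell
# Resize the load balancer to 3 nodes (at most one resize per hour)
kubectl annotate service myservice --overwrite \
  "service.beta.kubernetes.io/do-loadbalancer-size-unit=3"
```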

The following example shows how to specify the number of nodes a load balancer contains:

. . .
metadata:
  name: nginx
  annotations:
    kubernetes.digitalocean.com/load-balancer-id: "your-load-balancer-id"
    service.beta.kubernetes.io/do-loadbalancer-size-unit: "3"
    service.beta.kubernetes.io/do-loadbalancer-disable-lets-encrypt-dns-records: "false"
. . .

Note
The do-loadbalancer-size-unit annotation is not available in the following regions: AMS2, NYC2, SFO1. To specify a size for your load balancer in these regions, use the legacy annotation do-loadbalancer-size-slug. You can only use the lb-small value, which equates to a load balancer with one node.

Disown

Available in: 1.14.10-do.4, 1.15.12-do.0, 1.16.10-do.0, 1.17.6-do.0 and later

This setting lets you specify whether to disown a managed load balancer. The cluster no longer mutates disowned load balancers in any way, including creating, updating, or deleting them. You can use this setting to transfer ownership of a load balancer from one Service to another, including a Service in another cluster. For more information, see Changing ownership of a load-balancer.

Options are true or false. Defaults to false. You must supply the value as a string; otherwise, you may run into a Kubernetes bug that discards all annotations on your Service resource.

Warning
Disowned load balancers may not work correctly because necessary updates, such as changes to target nodes or configuration annotations, are no longer propagated to them. Similarly, the Service status field may no longer reflect the load balancer’s current state. Consequently, you should assign a disowned load balancer to a new Service as soon as possible.

The following example shows how to disown a load balancer:

. . .
metadata:
  name: disown-snippet
  annotations:
    kubernetes.digitalocean.com/load-balancer-id: "your-load-balancer-id"
    service.kubernetes.io/do-loadbalancer-disown: "true"
. . .


References

For more about managing load balancers, see: