DigitalOcean Load Balancers are a fully managed, highly available network load balancing service. Load balancers distribute traffic to groups of Droplets, which decouples the overall health of a backend service from the health of a single server and ensures that your services stay online.
After you create a load balancer and add Droplets to it, you can manage and modify it on its detail page.
First, click Networking in the main navigation, and then click Load Balancers to go to the load balancer index page. Click on an individual load balancer’s name to go to its detail page, which has three tabs:
Droplets, where you can view the Droplets currently attached to the load balancer and modify the backend Droplet pool.
Graphs, where you can view graphs of traffic patterns and infrastructure health.
Settings, where you can set or customize the forwarding rules, sticky sessions, health checks, SSL redirection, and PROXY protocol.
To start sending traffic from your hostname to your load balancer, you need to create an A record on your DNS provider that points your hostname at the load balancer’s IP address.
If your DNS provider is DigitalOcean, reference Create and Delete DNS Records to see how to do this. If you do not use DigitalOcean as a DNS provider, reference your current provider’s documentation to see how this is done.
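If you manage DNS with DigitalOcean, you can also create the record through the API. The sketch below is illustrative only: it assumes an API token is available in the DIGITALOCEAN_TOKEN environment variable, and example.com and 203.0.113.10 are placeholders for your own domain and the load balancer's IP address.

```python
import os

import requests

DOMAIN = "example.com"             # placeholder: your domain
LOAD_BALANCER_IP = "203.0.113.10"  # placeholder: your load balancer's IP

# Assumes a DigitalOcean API token with write access is set in the environment.
token = os.environ["DIGITALOCEAN_TOKEN"]

resp = requests.post(
    f"https://api.digitalocean.com/v2/domains/{DOMAIN}/records",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "type": "A",
        "name": "@",               # "@" points the bare domain at the IP
        "data": LOAD_BALANCER_IP,
        "ttl": 3600,
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["domain_record"]["id"])
```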
Load balancers automatically connect to Droplets that reside in the same VPC network as the load balancer.
To validate that private networking has been enabled on a Droplet from the control panel, click Droplets in the main navigation, then click the Droplet you want to check in the list of Droplets.
From the Droplet’s page, click Networking in the left menu. If the private network interface is enabled, the Private Network section populates with the Droplet’s private IPv4 address and VPC network name. If the private network interface has not been enabled, you see an option to enable it instead.
In the Droplets tab, you can view and modify the load balancer’s backend Droplet pool.
This page displays information about the status of each Droplet, its downtime, and other health metrics. Clicking on a Droplet name will take you to the Droplet’s detail page.
If you are managing backend Droplets by name, you can add more Droplets by clicking the Add Droplets button on this page. If you are managing them by tag, you see an Edit Tag button instead.
When you add Droplets to a load balancer, the Droplets start in a DOWN state and remain in a DOWN state until they pass the load balancer’s health check. Once the backends have passed the health check the required number of times, they will be marked healthy and the load balancer will begin forwarding requests to them.
Click the Graphs tab to get a visual representation of traffic patterns and infrastructure health.
The Frontend section displays graphs related to requests to the load balancer itself:
The Droplets section displays graphs related to the backend Droplet pool:
Click the Settings tab to modify the way that the load balancer functions.
The load balancer’s scaling configuration allows you to adjust the number of nodes in the load balancer. The number of nodes determines how much traffic the load balancer can handle at once.
The load balancer must have at least one node, and can have up to 100 nodes. You can add or remove nodes at any time to meet your traffic needs.
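The node count can also be changed through the API. The sketch below is an assumption-heavy illustration: it treats size_unit as the field that controls the number of nodes and resubmits the rest of the configuration unchanged, since the update endpoint replaces the whole load balancer object; the ID and the list of read-only fields are placeholders you may need to adjust.

```python
import os

import requests

LB_ID = "YOUR-LOAD-BALANCER-ID"  # placeholder
token = os.environ["DIGITALOCEAN_TOKEN"]
headers = {"Authorization": f"Bearer {token}"}
url = f"https://api.digitalocean.com/v2/load_balancers/{LB_ID}"

# Fetch the current configuration so it can be resubmitted with only
# the node count changed.
lb = requests.get(url, headers=headers, timeout=10).json()["load_balancer"]

# Drop fields that are read-only in the API (assumed list; adjust as needed).
for field in ("id", "ip", "status", "created_at"):
    lb.pop(field, None)

lb["size_unit"] = 3  # assumed field for the number of nodes (1-100)

requests.put(url, headers=headers, json=lb, timeout=10).raise_for_status()
```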
Forwarding rules define how traffic is routed from the load balancer to its backend Droplets. The left side of each rule defines the listening port and protocol on the load balancer itself, and the right side defines where and how requests are routed to the backends.
You can change the protocols using the drop-down menus. If you use HTTPS or HTTP/2, you need either an SSL certificate or SSL passthrough.
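For reference, this is roughly the shape forwarding rules take in the DigitalOcean API; the sketch shows one plain HTTP rule and one HTTPS rule terminated at the load balancer, and the certificate ID is a placeholder.

```python
# A sketch of forwarding rules as submitted to the DigitalOcean API: each rule
# pairs an entry (load balancer) side with a target (Droplet) side.
forwarding_rules = [
    {
        # HTTP on the load balancer, forwarded to HTTP on the Droplets.
        "entry_protocol": "http",
        "entry_port": 80,
        "target_protocol": "http",
        "target_port": 80,
    },
    {
        # HTTPS terminated at the load balancer with a managed certificate,
        # then forwarded to the Droplets over HTTP.
        "entry_protocol": "https",
        "entry_port": 443,
        "target_protocol": "http",
        "target_port": 80,
        "certificate_id": "YOUR-CERTIFICATE-ID",  # placeholder
        # Set "tls_passthrough": True instead to pass encrypted traffic
        # straight through to the Droplets.
    },
]
```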
Health checks verify that your Droplets are online and meet any customized health criteria. Load balancers only forward requests to Droplets that pass health checks. If your load balancer uses UDP in its forwarding rules, you must set up a health check on a port that uses TCP, HTTP, or HTTPS for the load balancer to work properly.
In the Target section, you choose the Protocol (HTTP, HTTPS, or TCP), Port (80 by default), and Path (/ by default) that Droplets should respond on.
In the Additional Settings section, you choose how often the load balancer performs the health check, how long it waits for a response, and how many consecutive successes or failures change a Droplet’s health status.
The success criterion for HTTP and HTTPS health checks is a response with a status code in the range 200–399. The success criterion for TCP health checks is successfully completing a TCP handshake to connect.
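A backend only needs to answer the configured path with a status code in that range to pass an HTTP health check. Below is a minimal sketch using Python's standard library; the /health path and port 8080 are examples, not required values.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthCheckHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer the health check path with a 200 so the load balancer marks
        # this Droplet healthy; any status outside 200-399 fails the check.
        if self.path == "/health":
            body = b"OK"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Listen on the port configured in the health check's Target section.
    HTTPServer(("0.0.0.0", 8080), HealthCheckHandler).serve_forever()
```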
Sticky sessions send subsequent requests from the same client to the same Droplet by setting a cookie with a configurable name and TTL (Time-To-Live) duration. The TTL parameter defines the duration the cookie remains valid in the client’s browser. This option is useful for application sessions that rely on connecting to the same Droplet for each request.
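As an illustration of how this behaves from the client side, any HTTP client that stores and resends the cookie keeps landing on the same Droplet. A short sketch with Python's requests library (the URL is a placeholder):

```python
import requests

# A Session stores cookies between requests, including the load balancer's
# sticky session cookie, so repeated requests are routed to the same Droplet
# until the cookie's TTL expires.
session = requests.Session()

for _ in range(3):
    resp = session.get("https://example.com/")  # placeholder URL
    print(resp.status_code, session.cookies.get_dict())
```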
The SSL option redirects HTTP requests on port 80 to HTTPS on port 443. When you enable this option, HTTP URLs are forwarded to HTTPS with a 307 redirect. To redirect traffic, you need to set up at least one HTTP forwarding rule and one HTTPS forwarding rule.
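One way to verify the redirect is to request the HTTP URL without following redirects and inspect the response; a sketch with a placeholder hostname:

```python
import requests

# Placeholder hostname; use the hostname that points at your load balancer.
resp = requests.get("http://example.com/", allow_redirects=False, timeout=10)

# With the SSL redirect option enabled, the load balancer should answer with
# a 307 and a Location header pointing at the HTTPS version of the same URL.
print(resp.status_code)              # expected: 307
print(resp.headers.get("Location"))  # expected: https://example.com/
```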
Enabling the PROXY protocol allows the load balancer to forward client connection information (such as client IP addresses) to your Droplets. The software running on the Droplets must be properly configured to accept the connection information from the load balancer.
Backend services need to accept PROXY protocol headers or the Droplets will fail the load balancer’s health checks.
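If the software on your Droplets supports the PROXY protocol (for example, through a configuration option), enable that support. If you are writing a custom service, it must read and strip the header itself before handling the request; the sketch below parses a version 1 (text) header line and is illustrative only.

```python
def parse_proxy_v1_header(line: bytes) -> dict:
    """Parse a PROXY protocol v1 header line, e.g.
    b'PROXY TCP4 203.0.113.7 10.0.0.5 51234 80' followed by CRLF."""
    parts = line.rstrip(b"\r\n").split(b" ")
    if not parts or parts[0] != b"PROXY":
        raise ValueError("not a PROXY protocol v1 header")
    if parts[1] == b"UNKNOWN":
        # The proxy could not determine the original connection details.
        return {"protocol": "UNKNOWN"}
    protocol, src_ip, dst_ip, src_port, dst_port = parts[1:6]
    return {
        "protocol": protocol.decode(),
        "client_ip": src_ip.decode(),   # the original client's IP address
        "server_ip": dst_ip.decode(),
        "client_port": int(src_port),
        "server_port": int(dst_port),
    }

print(parse_proxy_v1_header(b"PROXY TCP4 203.0.113.7 10.0.0.5 51234 80\r\n"))
```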
By default, DigitalOcean Load Balancers ignore the Connection: keep-alive header of HTTP responses from Droplets to load balancers and close the connection upon completion. When you enable backend keepalive, the load balancer honors the Connection: keep-alive header and keeps the connection open for reuse. This allows the load balancer to use fewer active TCP connections to send and receive HTTP requests between the load balancer and your target Droplets.
Enabling this option generally improves performance (requests per second and latency) and is more resource efficient. For many use cases, such as serving web sites and APIs, this can improve the performance the client experiences. However, it is not guaranteed to improve performance in all situations, and can increase latency in certain scenarios.
The option applies to all forwarding rules whose target protocol is HTTP or HTTPS. It does not apply to forwarding rules that use TCP, or that pass HTTPS or HTTP/2 traffic through to the Droplets.
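For the option to matter, the HTTP server on the Droplets must itself support persistent connections. Most production servers do by default; as a minimal illustration with Python's standard library, http.server only keeps connections open when the handler speaks HTTP/1.1 and sends a Content-Length header:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class KeepAliveHandler(BaseHTTPRequestHandler):
    # HTTP/1.1 connections are persistent by default, which lets the load
    # balancer reuse them when backend keepalive is enabled.
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        body = b"hello\n"
        self.send_response(200)
        # A Content-Length header is required for the connection to be reused.
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), KeepAliveHandler).serve_forever()
```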
There is no hard limit on the number of connections between the load balancer and each server. However, if the target servers are undersized, they may not be able to handle incoming traffic and may lose packets. See Best Practices for Performance on DigitalOcean Load Balancers.