The Managed Elements of DigitalOcean Kubernetes

DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service that lets you deploy Kubernetes clusters without the complexities of handling the control plane and containerized infrastructure. Clusters are compatible with standard Kubernetes toolchains and integrate natively with DigitalOcean Load Balancers and volumes.

DigitalOcean’s Managed Kubernetes provides users with administrator access to the cluster and full access to the Kubernetes API through kubectl and doctl. There are no restrictions on the API objects users can create as long as the underlying Kubernetes version supports the object(s).

We simplify the Kubernetes experience by managing key services and settings on your behalf that you cannot or should not modify.

Warning
Besides the methods in this guide, do not modify any managed components pre-installed in your DigitalOcean Kubernetes cluster, such as workloads, policies, Cilium, and CoreDNS. Modifying these services can cause your cluster’s operations to temporarily or permanently fail, and we may revert these changes at any time to maintain the functionality of your cluster.

Managed Elements of the Worker Nodes

Worker Node Configuration

You can add more worker nodes and recycle existing ones using the control panel, the API, or doctl. Once you’ve added them, we manage their configuration, including the:

  • Operating system
  • Installed packages
  • File system
  • Local storage
  • Container daemon configuration
  • Machine size

While it is technically possible to access and alter the worker nodes at this time, your changes will be overwritten by the reconciler and will not persist. In the future, you may not be able to change them at all.

Automatic Application of Labels to Nodes

DigitalOcean applies the following labels to nodes, and the reconciler enforces their presence:

  • doks.digitalocean.com/node-pool
  • doks.digitalocean.com/node-id
  • doks.digitalocean.com/node-pool-id
  • doks.digitalocean.com/version
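For example, you can reference the managed doks.digitalocean.com/node-pool label in a nodeSelector to schedule a workload onto a specific pool. The pool name, app labels, and image below are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      # Schedule pods only onto nodes in the hypothetical "pool-apps" node pool
      nodeSelector:
        doks.digitalocean.com/node-pool: pool-apps
      containers:
        - name: web
          image: nginx:1.25
```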

Custom node pool labels can be set through the DigitalOcean API.
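As a sketch, a node pool update request to the DigitalOcean API can carry custom labels in its labels field. The pool name, count, and label values here are placeholders for your own:

```json
{
  "name": "pool-apps",
  "count": 3,
  "labels": {
    "environment": "staging",
    "team": "web"
  }
}
```

The labels set this way are applied to every node in the pool alongside the managed doks.digitalocean.com/* labels.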

Worker Node Firewalls

When you create a cluster on version 1.19 or later, we automatically provision two cloud firewalls for the cluster and open and close their NodePorts (ports 30000-32767) as services are added to and removed from the cluster. One firewall manages connections between the worker nodes and other resources in your VPC network, including the control plane; the other manages connections between the worker nodes and the public internet. Cluster firewalls are named k8s- concatenated with the cluster name.

Defining a NodePort in your service spec automatically opens the specified port on the firewall. When you remove the service from the cluster, the port closes automatically.
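For instance, a NodePort service like the following sketch causes the managed firewall to open the declared node port. The service name, selector, and port numbers are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80          # cluster-internal service port
      targetPort: 8080  # container port the service forwards to
      nodePort: 30080   # opened on the managed cloud firewall
```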

You cannot delete the cluster’s default firewalls or manually change their configuration in the control panel. Any changes made to the default firewalls through the control panel do not persist and are reverted. If you need to open ports outside of the NodePort range, such as port 80, manually create a new DigitalOcean Cloud Firewall and associate it with the cluster.

In some cases, you may not want firewall management for a particular service, such as when a NodePort should be reachable only from the VPC network. To exclude a service from firewall management, set the kubernetes.digitalocean.com/firewall-managed annotation to false. This disables public access to the NodePort, and no inbound rules are created for it.
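A minimal sketch of such a service, with illustrative names and ports, looks like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-nodeport
  annotations:
    # Exclude this service from managed firewall rules; the NodePort
    # remains reachable only over the VPC network.
    kubernetes.digitalocean.com/firewall-managed: "false"
spec:
  type: NodePort
  selector:
    app: internal
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30090
```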

Load balancers access the cluster using the cluster’s private network interface and don’t need a port explicitly provisioned for them.

Worker Node Maintenance

Nodes and control plane components in your Kubernetes cluster require routine maintenance, which usually takes place during your cluster’s weekly 4-hour maintenance window. For example, automatic and required upgrades take place during the window; however, you can still manually upgrade your clusters at any time.

As a managed Kubernetes service, we may run potentially disruptive jobs on your cluster during this maintenance window. Additionally, if your cluster is in a critical state, we may conduct necessary maintenance outside the window, resulting in potential API unavailability.

We recommend you reschedule your cluster’s maintenance window to the time of least activity for your workload.

DigitalOcean Infrastructure Components

Some DigitalOcean products, such as Load Balancers and block storage volumes, integrate natively with Kubernetes clusters directly from Kubernetes manifest files, and we manage their integration with the cluster.

You should not manage these DigitalOcean resources through the control panel or API because any changes you make to them outside the cluster’s configuration will be overwritten by the DOKS reconciler. For example, if you manually delete a cluster’s volume or load balancer from the control panel, it is recreated during the next reconciliation process and you are still billed for it.
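For example, declaring a Service of type LoadBalancer in a manifest provisions a DigitalOcean Load Balancer that DOKS then manages; deleting the Service (not the load balancer itself) is the correct way to remove it. The names and ports below are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer   # provisions a managed DigitalOcean Load Balancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```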

Managed Elements of the Control Plane

The Kubernetes control plane is fully managed and included in the price of the worker nodes. The default control plane runs a single replica of each component, so some downtime occurs during unexpected failures while components restart. If you enable high availability for a cluster, multiple replicas of each control plane component are created, ensuring that a redundant replica is available when a failure occurs and increasing uptime. DigitalOcean provides a 99.95% uptime SLA for control planes with high availability enabled.

You cannot modify the control plane’s configuration, including its enabled admission controllers. The default admission controllers may differ between Kubernetes versions; see the Kubernetes documentation for details on the available admission controllers.