DIRECT SERVER RETURN WITH KUBERNETES

kannanvr - Mar 17 '22 - Dev Community

What is DSR?

Direct server return (DSR) is a load-balancing technique in which traffic returning from a load-balanced server is routed asymmetrically: the response skips the load balancer and travels back to the client through the server's default gateway.

Traditional Way of Routing the Data:

Traditionally, when a client application sends a packet to a server, the packet first reaches the firewall, is then routed to the load balancer, and finally arrives at the server. Because of this, the server never sees the client's real source IP; it sees the masqueraded IP of the load balancer instead.
When the server responds to the client, the packets traverse the same chain in reverse.
The following diagram explains the packet flow of routing data in the traditional way:

[Diagram: traditional packet flow — client → firewall → load balancer → server, with the response following the same path in reverse]

DSR Packet Routing:

In the case of DSR, the server returns the response directly to the client. For this to work, the server must receive the client's real source IP address so that it can reach the client directly.

[Diagram: DSR packet flow — request arrives via the load balancer, response goes straight from server to client]

Business Use Case:

Following are the business use cases for direct server return:

  1. Tracking server usage by client IP
  2. Billing based on source IP
  3. Security policies based on the client source IP
  4. Video games, multimedia, content delivery networks, etc.

Streaming services such as Netflix, YouTube, and Prime Video need to send huge amounts of video data from their servers to mobile client applications, and they can't do that efficiently through the traditional path. For this kind of video server, the volume of request packets is tiny compared with the video responses from the server; if every response had to travel back through the load balancer, the load balancer would become heavily loaded.

How is DSR achieved with kube-proxy in Kubernetes?

IPVS (IP Virtual Server) has been supported by Kubernetes since version 1.11. By switching kube-proxy from iptables mode to IPVS mode, we can make use of IPVS, which completely replaces iptables within kube-proxy.
When you provision a LoadBalancer or NodePort service (methods of exposing traffic outside the cluster), you can add "externalTrafficPolicy: Local" to enable DSR. This is described in the Kubernetes documentation at https://kubernetes.io/docs/tutorials/services/source-ip/
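As a rough sketch, assuming a kubeadm-style cluster where kube-proxy is configured through a ConfigMap in kube-system and runs with the label k8s-app=kube-proxy, and a service named my-service (all of these names are assumptions), the two settings look like this:

```shell
# Switch kube-proxy to IPVS mode: set `mode: "ipvs"` in its config,
# then delete the kube-proxy pods so they restart with the new mode.
kubectl -n kube-system edit configmap kube-proxy          # set: mode: "ipvs"
kubectl -n kube-system delete pod -l k8s-app=kube-proxy   # pods are recreated

# Preserve the client source IP on an existing LoadBalancer/NodePort service.
kubectl patch svc my-service -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```

With externalTrafficPolicy set to Local, kube-proxy only routes to pods on the node that received the packet and skips the SNAT step, which is what lets the pod see the original client IP.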

Drawbacks of enabling the DSR provided by Kubernetes kube-proxy:

Following are the drawbacks of enabling DSR on Kubernetes kube-proxy directly:

  1. The administrator should disable node-to-node forwarding
  2. Clients must communicate directly with the Kubernetes nodes

How can we achieve DSR with a CNI?

Following is the architecture for enabling direct server return on Kubernetes, which also provides load balancing as a service for a Kubernetes cloud environment.

[Diagram: external BGP/ECMP router peering with Kubernetes nodes that advertise service external IPs]

The external router should have BGP and ECMP capability. It should be able to peer with the Kubernetes nodes as neighbors, and it should route packets from the internet to the Kubernetes nodes along the shortest path.
The Kubernetes nodes should run BGP, with the external router configured as a peer via the BIRD routing daemon. Every Kubernetes node should also be able to route packets with the help of ipvsadm, the userspace utility for the kernel's IPVS subsystem.
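As an illustrative sketch of the node-side peering (BIRD 1.x syntax; the router ID, ASN, and neighbor address are all assumptions to be replaced with your own values):

```shell
# Hypothetical BIRD 1.x configuration establishing a BGP session between
# this Kubernetes node and the external router.
cat > /etc/bird/bird.conf <<'EOF'
router id 10.0.0.11;             # this Kubernetes node (assumed address)

protocol kernel {                # sync BIRD's routes with the kernel table
  export all;
}

protocol device {                # required for interface discovery
}

protocol bgp upstream {
  local as 64512;                # node ASN (assumed)
  neighbor 10.0.0.1 as 64512;    # external router as an iBGP peer (assumed)
  import none;                   # accept no routes from the router
  export all;                    # advertise routes, e.g. service external IPs
}
EOF

birdc configure                  # reload the running BIRD daemon
```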
Now, when we configure the external IP of a service, the CNI should start advertising that external IP to the external router. This tells the router that packets whose destination IP is the external IP should be routed to whichever Kubernetes node advertised it.
When a Kubernetes node receives IP packets destined for the external IP, it routes them to an ipvsadm firewall mark.

We need to add the below iptables rule to mark the incoming packets with the firewall mark.

```
iptables -t mangle -A PREROUTING --destination 64.185.181.238 -j MARK --set-mark 11
```
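To check that the rule is in place and matching traffic (same example IP as above), one can list the mangle table with packet counters:

```shell
# Show the PREROUTING chain of the mangle table with per-rule counters;
# the MARK rule's counters increase as packets for the external IP arrive.
iptables -t mangle -L PREROUTING -n -v
```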

Once the incoming packets are received with the firewall mark, we need to add the following ipvsadm entries for IPIP (tunnel) mode:

```
ipvsadm -A -f <FWmark>
ipvsadm -a -f <FWmark> -r <Real POD IP> -i
```

The first command creates a virtual service keyed on the firewall mark; the second adds the pod as a real server, with lower-case -i selecting IPIP tunneling.

The ipvsadm configuration above forwards IP packets carrying the firewall mark to the pod IP in IP-in-IP mode (for IPv4).
Once a packet reaches the pod, it is handed to the application without any NAT of the source IP.
For the IP packets to reach the pod, we need to set up the following things on the pod:

  1. Create an alias on the pod's network interface and assign the virtual (external) IP to it.
  2. Add ARP ignore/announce settings for the interface holding the virtual IP, since that interface must not reply to ARP requests:

```
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
```

By setting the above on the pod, the pod receives the original client source IP and can respond to the client directly. CNIs such as kube-router, Calico, and Cilium support direct server return.

Drawbacks of DSR:

  1. ARP (Address Resolution Protocol) requests must be ignored by the backend servers.
  2. Cookie translation and port translation can't be implemented.
  3. Protocol vulnerabilities are not protected against.