Prevent Data Breaches and Unauthorized External Connections from Container Clusters with Egress Control
By Gary Duan
While more and more applications are moving to a microservices and container-based architecture, there are legacy applications that cannot be containerized. External egress from a container cluster to these applications needs to be secured with egress container security policies when containers are deployed with Kubernetes or Red Hat OpenShift. In addition, modern container applications are frequently built requiring API access to services running outside the cluster, even on the internet. Updates of open source software applications and operating systems also may require internet access. These modern and legacy applications include SaaS-based API services, internal database servers and applications developed with .NET frameworks. The cost and risk to migrate these applications to a microservice architecture is so high that many enterprises have a mixed environment where new containerized applications and legacy applications are running in parallel.
Application segmentation is a technique to apply access control policies between different services to reduce their attack surface. It is a well-accepted practice for applications running in virtualized environments. In a mixed environment, containerized applications need access to the internet and/or legacy servers. DevOps and security teams want to define egress control policies to limit the exposure of external connections to the internet and legacy applications. In Kubernetes, this can be achieved by Egress Rules of the Network Policy feature.
This article discusses several implementations of egress control policy in Kubernetes and Red Hat OpenShift and introduces the NeuVector approach for overcoming limitations with basic Network Policy configurations.
Egress Control for Container Security with Network Plugins
In Kubernetes 1.8+, an egress policy can be defined like this:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: app
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.10.0.0/16
    ports:
    - protocol: TCP
      port: 5432
This example defines a network policy that allows containers with the label ‘role=app’ to access TCP port 5432 of all servers in subnet 10.10.0.0/16. All other egress connections will be denied.
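Note that this default-deny behavior applies only to the pods the policy selects. A common companion pattern from the Kubernetes documentation is an empty-selector policy that denies all egress for every pod in the namespace; a minimal sketch:

```yaml
# Deny all egress traffic for every pod in the 'default' namespace.
# Pods then have only the egress explicitly granted by other policies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: default
spec:
  podSelector: {}    # empty selector matches all pods in the namespace
  policyTypes:
  - Egress           # no egress rules listed, so all egress is denied
```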
This achieves what we want to some extent, but not all network plugins support this feature. Calico, Canal and Weave Net are among the network plugins that support it.
However, using a subnet address and port to define external services is not ideal. If these services are PostgreSQL database clusters, you really want to be able to specify only those database instances using their DNS names, instead of exposing a range of IP addresses. Using port definitions is also obsolete in a dynamic cloud-native world. A better approach is to use application protocols, such as ‘postgresql,’ so only network connections using the proper PostgreSQL protocols can access these servers.
Here is a summary of the limitations of egress control with Network Policy.
- Allow rules only; no Deny rules for specific IPs (only a default deny-all)
- No concept of ‘external’ to the cluster; egress rules cover every destination outside the pod, whether in another namespace or outside the cluster
- No rule prioritization or ordering to control the sequence in which firewall rules are matched
- No hostname (DNS name) support; IP addresses only
Egress Controls with Red Hat OpenShift
OpenShift enhances the egress controls of native Kubernetes Network Policy by defining a custom resource that implements an egress firewall for traffic leaving the cluster. This CRD is called EgressNetworkPolicy and is deployed by default in OpenShift.
In OpenShift you can create one egress control policy per namespace (except the default namespace), and all egress rules for the namespace must be declared in that single policy. Here’s an example that allows egress to google.com, cnn.com and others but denies access to yahoo.com.
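The policy described above might look like the following sketch (the dnsName entries are illustrative; EgressNetworkPolicy rules are evaluated in order, and the first match wins):

```yaml
# Sketch of an OpenShift EgressNetworkPolicy; rules are evaluated top-down.
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default
  namespace: myproject          # illustrative namespace
spec:
  egress:
  - type: Allow
    to:
      dnsName: www.google.com
  - type: Allow
    to:
      dnsName: www.cnn.com
  - type: Deny
    to:
      dnsName: www.yahoo.com
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0   # deny all other egress destinations
```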
OpenShift egress control supports DNS names, IP addresses, rule ordering, and deny rules. Limitations of OpenShift include:
- Does not apply to routes (routers used for external access), so egress connections made through routes will bypass egress controls
- Namespace scope only; no pod selectors to further refine egress controls
- No application protocol verification (e.g. mysql, …) to further secure connections by Layer 7 application protocol (this is also a limitation of Network Policy)
- Limited rule management: rules are evaluated in the order they are defined in the yaml file and can’t be prioritized against other global rules
Egress Controls with Istio
Istio is an open-source project that creates a service mesh among microservices and layers onto Kubernetes or OpenShift deployments. It does this by “deploying a sidecar proxy throughout your environment.” The online documentation gives this example of an egress policy:
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
  name: googleapis
  namespace: default
spec:
  destination:
    service: "*.googleapis.com"
  ports:
  - port: 443
    protocol: https
This rule allows containers in the ‘default’ namespace to access subdomains of googleapis.com with https protocol on port 443.
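Note that EgressRule belongs to an early Istio API; in later Istio releases the equivalent resource is a ServiceEntry. A rough sketch, with illustrative field values:

```yaml
# Rough ServiceEntry equivalent of the EgressRule above (sketch only).
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: googleapis
  namespace: default
spec:
  hosts:
  - "*.googleapis.com"
  location: MESH_EXTERNAL   # the service lives outside the mesh
  resolution: NONE          # wildcard hosts cannot be resolved via DNS
  ports:
  - number: 443
    name: https
    protocol: TLS
```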
The Istio documentation cites two limitations:
- Because HTTPS connections are encrypted, the application has to be modified to use plain HTTP so that Istio can inspect the payload.
- This is “not a security feature,” because the Host header can be faked.
To understand these limitations, we should first examine what an HTTP request looks like.
GET /examples/apis.html HTTP/1.1
Host: www.googleapis.com
User-Agent: curl/7.35.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
In the above example, you can see that the hostname portion of the URL, ‘www.googleapis.com’, does not appear in the first line of the HTTP request, which contains only the path. It shows up as the Host header instead. This header is where Istio looks to match the domain defined in its network policy. If HTTPS is used, everything here is transmitted encrypted, so Istio cannot see it. A compromised container can replace the header and trick Istio into allowing traffic to a malicious IP.
In fact, a third limitation that the documentation doesn’t mention is that this approach only works with HTTP. We cannot define a policy to limit access to our PostgreSQL database cluster.
Istio Egress Gateway
An alternative method of egress control in Istio is to funnel all egress traffic through an egress gateway running within the cluster. While this is more secure than egress control through the sidecar proxy, it can still be bypassed and is prone to configuration errors. A caution in the Istio docs reads: “Istio cannot securely enforce that all egress traffic actually flows through the egress gateway…”
This means that additional controls are required to make sure all egress traffic actually flows through the gateway. For example, Kubernetes Network Policy, as discussed earlier, could be used to block all egress except through the gateway. This adds complexity and a potential for misconfiguration that could lead to connections leaking out of the cluster without the admin team knowing about it.
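One way to sketch that extra control is a Network Policy that allows egress only to the egress gateway’s namespace. This assumes the istio-system namespace has been labeled (e.g. istio=system, a label you would apply yourself), following the pattern in the Istio egress gateway documentation:

```yaml
# Allow egress from pods in 'default' only toward the istio-system
# namespace (where the egress gateway runs); everything else is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-istio-system-only
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          istio: system   # assumes istio-system is labeled istio=system
  # Caution: this also blocks DNS; egress to kube-dns in kube-system
  # typically needs to be allowed as well for pods to resolve names.
```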
Egress Controls for Container Security with NeuVector
Once the NeuVector Enforcer container is deployed on a container host, it automatically starts monitoring the container network connections on that host using deep packet inspection (DPI). DPI enables NeuVector to discover attacks in the network. It can also be used to identify applications and enforce a network policy at the application level when deployed as a complete Kubernetes security solution.
Once a network policy with the whitelisted DNS name(s) is pushed to the Enforcer, the Enforcer’s data path process starts to silently inspect the DNS requests issued by the containers. The Enforcer does not actively send any DNS request, so no additional network overhead is introduced by NeuVector. It parses DNS requests and responses, and correlates the resolved IP with the network connections made by the specified containers. Based on the policy defined, the Enforcer can allow, alert or deny the network connections.
This approach works on both unencrypted and encrypted connections, and it will not be tricked by any faked headers. It can be used to enforce policies on any protocol, not just HTTP or HTTPS. More importantly, because of the DPI technology, the Enforcer can inspect the connections and make sure only the proper PostgreSQL protocol is used to access the PostgreSQL databases.
Using DPI/DLP to Enforce Egress Control Through a Traditional Proxy
Another, more advanced use case is to route traffic through a proxy, such as a Squid proxy running outside the cluster. The challenge here is to distinguish connections which should be allowed from those which should be blocked based on the destination of the connection, while still enforcing the policy at the source container/pod.
In the example below, connections to external resources at morningstar.com should be allowed, while oracle.com should be blocked. However, all connections must go through a Squid proxy running outside the cluster.
The challenge is that we want to enforce egress control from within the cluster, at the source container/pod, so that we have the most flexibility to define which container sources should be allowed to access which egress destinations.
In order to accomplish this, we can use the NeuVector deep packet inspection (DPI) capability and data loss prevention (DLP) feature to inspect the HTTP headers in the outbound connection and allow morningstar.com while blocking oracle.com.
Secure, Scalable and Flexible Egress Control for Containers in Kubernetes and OpenShift
Kubernetes and OpenShift run-time container security should include control of ingress and egress connections to legacy and API-based services. While there are basic protections built into Kubernetes, OpenShift, and Istio, business-critical applications need the enhanced security provided by Layer 7 DPI filtering to enforce egress policies based on application protocols and DNS names.
Egress control policies, like other run-time security policies, should support automation and integration through resources like the Kubernetes Custom Resource Definition (CRD), which enables DevOps teams to declare policies through Security Policy as Code.
Watch the webinar recording of the topics in this post, including hands-on demos.