By Fei Huang
What is Serverless?
The nature of a serverless computing framework is to abstract applications at a much higher level to provide portability, better resource utilization, and cost benefits. For example, it can deliver API-level or function-level code that runs only when needed. Ideally, after developers have checked in code, a serverless computing framework takes over control of the pipeline from build to ship to run, enabling users to easily scale and manage applications. Compared to a typical container framework, serverless goes much further in abstraction: it automates and responds to everything needed to run these services. The same container build-ship-run concepts remain the core pieces supporting serverless, so serverless computing can be a perfect extension of container platforms for certain applications or functions.
Knative Serverless Platform
Knative is an open source framework for building, deploying, and managing modern serverless workloads on Kubernetes. It is empowered by Kubernetes and by service mesh technologies like Istio, and it has become a fast-growing, cross-cloud serverless platform: Google, Red Hat, IBM, and SAP have all announced support for Knative in their commercial offerings. Today AWS Lambda owns the majority of serverless deployments, but this may change given the rapid adoption of Kubernetes and service meshes. By addressing vendor lock-in concerns in an open source, Kubernetes-native way, a Knative-based serverless solution could become the de facto serverless framework of the future.
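To make the rest of the discussion concrete, a Knative workload is declared much like any other Kubernetes resource. This minimal sketch uses the public Knative "helloworld-go" sample image; the service name and environment variable are illustrative:

```yaml
# A minimal Knative Service. Knative builds the Deployment, Route,
# and autoscaling machinery around this single manifest.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello            # illustrative name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "World"
```

Everything else discussed below (gateways, autoscalers, activators) is infrastructure that Knative stands up around manifests like this one.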
Securing Serverless Platforms
So let’s look at serverless requirements from a run-time security angle. Serverless computing doesn’t change the fundamental requirements of application services. Whether the design is event-driven, stateless microservices, or stateful applications, production applications still use the same layers of computing resources: memory, CPU, network, storage, and so on must still be available. The main difference is who or what owns responsibility for resource management and workflow management. Serverless applications benefit from this shift because more and more intelligence is built into the platform as modern cloud platforms take ownership of application management functions.

Just as containers and Kubernetes are changing cloud application architectures, serverless brings more layers of abstraction, more new interfaces, and more complexity into platforms. From an overall system security viewpoint, the attack surface becomes even larger, especially for network-based attacks. The following screenshot of system containers and network connections shows how complex the network and infrastructure can get. For just one single Knative application container, there are more than 20 supporting containers surrounding it. If we add an Istio service mesh layer, even more internal east-west traffic is generated. Protecting a service mesh and serverless platform is a big challenge indeed.
To properly secure a Knative application at run-time, let’s follow the service behavior logic. The first, most straightforward security enforcement point is the north-south gateway for the cluster. Knative uses the Kubernetes-native ingress controller, an API gateway, or a service mesh for traffic routing, so it is critical to have network protection in place at the ingress and egress interfaces.
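One baseline control at this layer is a Kubernetes NetworkPolicy that only admits application traffic arriving through the mesh gateway. The sketch below assumes the gateway runs in the istio-system namespace and that application pods carry the serving.knative.dev/service label that Knative Serving applies; adjust both selectors to your environment:

```yaml
# Only allow ingress to the app's pods from the gateway namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-gateway-only   # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      serving.knative.dev/service: hello   # label Knative applies; "hello" is illustrative
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              # kubernetes.io/metadata.name is set automatically on namespaces
              # in Kubernetes 1.21+; older clusters need a manual label.
              kubernetes.io/metadata.name: istio-system
```

This enforces the routing topology at L3/L4; deep inspection of the traffic that does pass still requires a Layer 7 solution.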
In a Knative environment, as in many other cloud services, HTTP and DNS are the most widely used protocols for external communication. A true Layer 7 firewall can inspect all of this traffic and provide deep security at the packet level. For example, one popular network attack hides a malicious payload inside DNS traffic, and a typical L3/L4 firewall solution is blind to such attacks.
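A standard NetworkPolicy cannot see inside DNS payloads, but it can at least limit which endpoints a workload may talk DNS to, shrinking the surface that Layer 7 inspection then has to cover. A minimal sketch, assuming the cluster DNS runs as kube-dns in kube-system (the labels vary between distributions):

```yaml
# Restrict all pods in the namespace to DNS egress toward cluster DNS only.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-dns-egress   # illustrative name
  namespace: default
spec:
  podSelector: {}             # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns   # common CoreDNS label; verify on your cluster
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```

Note that this only controls who can send DNS traffic where; detecting a malicious payload hidden inside an allowed DNS flow still requires L7 inspection.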
Looking deeper into the Knative cluster, we see heavily used services that are the core components supporting serverless containers. These are the backbone of the service, and east-west network security is the most efficient method for protecting them at run-time. The Kubernetes API server is one of those in the critical path, and it is where a critical vulnerability (CVE-2018-1002105) was discovered in Kubernetes several months ago. CoreDNS is another critical one that almost all running containers need to communicate with. If there is a Layer 7 DDoS attack or other direct attack on one of these core services, the whole cluster is at high risk of crashing. That is why a distributed L7 firewall like NeuVector, designed to protect east-west traffic as well as ingress/egress connections, is so critical.
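Network controls aside, limiting what each workload can ask the API server to do also reduces the blast radius if a container in the critical path is compromised. A least-privilege RBAC sketch (the role name, namespace, and bound service account are illustrative):

```yaml
# Grant a workload read-only access to pods and nothing else.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader          # illustrative name
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding  # illustrative name
  namespace: default
subjects:
  - kind: ServiceAccount
    name: app-sa            # illustrative service account
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

RBAC like this limits what a stolen credential can do; it complements, rather than replaces, east-west network protection of the API server itself.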
The discovery service, autoscaler service, and activators are the critical components that provide scaling for Knative serverless functions at runtime. From a security perspective, we suggest locking down the behavior of these active containers with Layer 7 segmentation, which limits the scope of what these system services can do. Anomaly detection should also be provided, covering not only suspicious network behavior but also service behavior: process activity, file system activity, and even syscalls. Combined, these are strong protections that deliver defense in depth.
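Where Istio is present, L7 segmentation of this kind can be expressed declaratively. The sketch below restricts what may be sent to the Knative activator; it assumes Istio's security.istio.io/v1beta1 API and the app: activator label used by Knative Serving, and the allowed source namespaces are illustrative:

```yaml
# Only allow HTTP GET/POST to the activator, and only from mesh namespaces.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: activator-l7-policy   # illustrative name
  namespace: knative-serving
spec:
  selector:
    matchLabels:
      app: activator
  action: ALLOW
  rules:
    - from:
        - source:
            namespaces: ["knative-serving", "istio-system"]   # illustrative sources
      to:
        - operation:
            methods: ["GET", "POST"]
```

Because Istio policies match on HTTP attributes, this constrains the activator's exposure at Layer 7 rather than just at the IP/port level.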
There are many system containers which help to keep applications running but are hidden from the view of users. A true runtime security solution needs to have deep integration and visibility into the orchestration, service mesh, and serverless systems, and it needs to be able to inspect the network behavior together with the container behavior.
Protecting Serverless Workloads During Runtime
This protection should cover system containers as well as application workloads. From a security perspective, application containers should not be treated any differently than system containers; the same level of deep security is necessary to protect them. This is also true for the serverless functions running inside their host containers. Serverless functions may scale inside the same container or across multiple host containers, and a serverless security solution needs to fit this environment, automatically scaling not only across host containers but also inside each host container.
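Knative controls this scaling behavior through per-revision annotations, which is what makes the workload population so dynamic. The bounds below are illustrative; a min-scale of zero is what lets a function scale down to nothing when idle:

```yaml
# Autoscaling bounds for a Knative Service revision.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello   # illustrative name
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"    # scale to zero when idle
        autoscaling.knative.dev/max-scale: "10"   # upper bound on replicas
        autoscaling.knative.dev/target: "50"      # target concurrent requests per replica
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
```

A security layer has to track pods appearing and disappearing within these bounds in real time, which is why manual, per-instance configuration cannot keep up.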
Security automation is the key to keeping up with the speed of serverless scaling. That is also why the NeuVector solution monitors application behavior, serverless or not, in an automated fashion. This is the only way security technology can continue to provide visibility and protection for dynamic serverless workloads.
In the example below, NeuVector is protecting this Knative serverless environment. Malicious network attempts from the compromised system discovery service are immediately detected and alerted on.
In addition to run-time security, it is always good to build security into the whole CI/CD process to reduce risks as much as possible. Vulnerability scanning can start during the build phase, and admission controls should be used to prevent unauthorized or vulnerable images from being deployed. Security auditing and scanning are good practices, even though they are not the most efficient solution for runtime protection.
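Admission controls of this kind plug into the Kubernetes API via a validating webhook. The sketch below registers a hypothetical image-scanning service as a deployment gate; the webhook name, service name, namespace, and path are placeholders, not a real product API:

```yaml
# Register a validating webhook that vets pod and deployment creation.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: image-scan-gate            # hypothetical
webhooks:
  - name: scan.example.com         # hypothetical webhook name
    rules:
      - apiGroups: ["", "apps"]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods", "deployments"]
    clientConfig:
      service:
        namespace: security        # hypothetical namespace
        name: image-scan-webhook   # hypothetical scanning service
        path: /validate
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail            # reject deployments if the scanner is unreachable
```

With failurePolicy set to Fail, unscanned or vulnerable images are blocked before they ever become running workloads, complementing the runtime protections above.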
This article has focused on several good runtime security practices for Knative. The NeuVector run-time container security solution naturally protects your Knative serverless workloads with cloud-native integration. Whether your workloads are long-running containers or short-lived serverless functions, NeuVector provides the only true network-firewalled security mesh that is simple and delivers deep network protection. With no manual configuration needed, NeuVector starts protecting your serverless computing environment. This lets you focus on what really matters, keeping the hackers out, rather than getting lost in details such as analyzing individual CVEs.