Going serverless means departing from traditional, on-premises server infrastructure and migrating to third-party, cloud-hosted applications and services. Serverless is also known as backend as a service (BaaS). When an organization goes serverless, it takes advantage of tried-and-tested developer tools, including rich-client frameworks, which make component integration much easier.
The Big Three serverless cloud providers are:
- Amazon Web Services (AWS)
- Microsoft Azure
- Google Cloud Platform
Benefits of Going Serverless
One benefit of implementing a serverless framework is the ability to depart from the traditional “always-on” server architecture. Enterprises can scale up and down based on bandwidth needs and provision extra storage for peak demand. With reduced maintenance costs, architects can focus on driving business value. Other benefits include:
- Server cost savings
- Increased simplicity
- Less time spent on engineering, e.g. implementing, updating or scaling servers
- Independence from maintaining in-house servers and hardware
- Support from cloud providers, e.g., server management and infrastructure decision making
When Not to Go Serverless
There are instances where serverless may not be the best choice for an organization. Serverless does not afford the observability that traditional architecture provides; the insights a CISO relies on, for instance, may be lost when using a third-party provider. Additionally, serverless is still a young space, and the available tools for monitoring and observing serverless operations are still immature.
Developers are also still wrapping their heads around a heavy reliance on cloud provider ecosystems, e.g., server hardware and runtimes. Unless an enterprise’s architects can remain completely hands-on, control over the architecture shifts to the serverless provider, which calls the shots. In some scenarios, an organization may be kept in the dark about the environment variables its serverless functions run with.
In this environment, choreography is often favored over orchestration, with architects carefully managing information flow, control and security across the entire serverless ecosystem. Fortunately, the Cloud Native Computing Foundation (CNCF) is working on standardizing best practices to ease application cloud migration and reduce vendor lock-in.
Containerization Works Hand-In-Hand With Serverless
A developer may write server-side logic and run it in event-triggered containers. These containers run applications in their own isolated packages, each with its own characteristics and dependencies. Containers are also ephemeral: they are spun up on demand and killed once their work completes.
Containers host all the elements an application needs to run, e.g., libraries, dependencies and settings. Containers offer portability: a container will run the same no matter where it is hosted. Docker is the most widely recognized containerization technology, and Amazon ECS and EKS and Google Kubernetes Engine are examples of cloud containerization platforms.
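To make that lifecycle concrete, here is a minimal sketch of an ephemeral, task-scoped container using the Docker SDK for Python. The image tag and command are illustrative assumptions, and a local Docker daemon is assumed to be running.

```python
# Minimal sketch: run a short-lived task in an isolated container using the
# Docker SDK for Python (pip install docker). Assumes a local Docker daemon;
# the image tag and command below are illustrative only.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# The image packages the runtime, libraries and settings the task needs.
# With remove=True, the container is torn down as soon as the work completes,
# mirroring the ephemeral behavior described above.
logs = client.containers.run(
    "python:3.12-slim",
    ["python", "-c", "print('task complete')"],
    remove=True,
)
print(logs.decode())
```

Because everything the task needs ships inside the image, the same container runs identically on a laptop, an on-premises server or a cloud platform.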
Key Differences: Containers vs. Serverless
It is possible to use a hybrid structure consisting of containers and serverless. However, there are some differences between the two. Servers are still needed to run serverless computing infrastructure, but those servers reside in the cloud. A container, by contrast, resides on a single machine and relies on its operating system (OS), though it can be moved to another machine if desired.
The number of containers used in an application is determined early in the software development lifecycle (SDLC). Architects rarely change that number afterward unless a major overhaul is needed. Serverless solutions, on the other hand, scale with an enterprise’s backend needs.
Container setup can take longer, because configuration is necessary to get a container running. Serverless setup takes milliseconds, because there is no environment to configure. Since the backend environment can vary, however, serverless testing can take longer; containers behave the same way wherever they are set up.
Containers give developers more control over the application environment than serverless does. However, serverless lets teams release new packages faster than containers do, and iterations are accomplished more quickly.
FaaS Fast Emerging as a Viable Platform Solution
Containerized applications are similar in spirit to Functions as a Service (FaaS). FaaS is a relatively new approach, dating to 2014, that takes the complexities of traditional, monolithic architecture out of the picture. FaaS breaks an application down into modular functions that can be executed independently of one another. In terms of language and environment, FaaS functions run much like commonplace applications, and they are not limited to any single framework or library.
Cloud services such as AWS Lambda, Google Cloud Functions, IBM Cloud Functions and Microsoft Azure Functions have implemented FaaS, adding automation and scalability to the mix. The large cloud providers maintain portfolios that consist of BaaS and FaaS platforms. Depending on their needs, companies can build upon BaaS and FaaS serverless frameworks. Taken together, the framework becomes a unified serverless product.
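As a point of reference, a FaaS function can be as small as a single handler. The following is a minimal sketch of an AWS Lambda handler in Python; the event fields are illustrative assumptions, while the (event, context) signature is what Lambda itself expects.

```python
# Minimal sketch of an AWS Lambda handler. Lambda requires only the
# (event, context) signature; the event fields here are assumptions.
import json

def handler(event, context):
    # Each invocation processes one event independently -- the modular,
    # single-purpose unit that FaaS encourages.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```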
Benefits and Limitations of Using FaaS
There are several advantages to FaaS, including a decreased need for developer involvement in logistics, since the cloud provider handles them. Horizontal scaling is elastic and can occur automatically, giving developers more time to focus on innovation and quality. Other benefits include:
- Scalability of parts and not entire infrastructures
- Reduction of idle resources
- Heightened fault tolerance with less downtime due to system outages
- Compact, modular business logic
There are limits to how long a FaaS function may take to respond to a triggered event. For example, an AWS Lambda function times out after at most 15 minutes, and Microsoft’s and Google’s serverless clouds impose similar invocation timeouts. Long-running tasks are therefore not suited to FaaS without re-architecture: where a traditional environment can run one long task that handles both coordination and execution, FaaS may require several coordinated, consecutive functions, as the sketch below illustrates.
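One common workaround is to break the job into bounded chunks and have each function asynchronously invoke the next. The sketch below uses boto3 for illustration; the function name, event fields and do_partial_work helper are hypothetical, and a managed workflow service such as AWS Step Functions is the more robust way to coordinate such chains.

```python
# Hedged sketch: split a long-running job across consecutive Lambda
# invocations so no single function hits the 15-minute cap. The function
# name "process-chunk" and event fields are hypothetical.
import json
import boto3

lambda_client = boto3.client("lambda")

def handler(event, context):
    chunk = event["chunk"]            # which slice of the job to do now
    total = event["total_chunks"]

    do_partial_work(chunk)            # one bounded unit, well under the timeout

    if chunk + 1 < total:
        # Fire-and-forget async invocation of the next step in the chain.
        lambda_client.invoke(
            FunctionName="process-chunk",
            InvocationType="Event",
            Payload=json.dumps({"chunk": chunk + 1,
                                "total_chunks": total}).encode(),
        )

def do_partial_work(chunk):
    """Placeholder for one bounded unit of the long task."""
```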
There are also disadvantages to implementing FaaS from an infrastructure management standpoint. Control is handed over to the third-party cloud provider, which may not account for all of your architecture’s complexities. Fluctuating auto-scaling costs can make budgets difficult to control, particularly under turbulent operating conditions. It can also become difficult to monitor and manage all the functions an application needs. Still, all of these obstacles are manageable with the right diagnostics, debugging, scripting and visualization tools.
Serverless Microservices a Welcome Departure from Monolithic Servers
Monolithic applications layer their architectural components into a single modular application. They are straightforward and practical for simple system and software architectures. Microservices, by contrast, comprise a group of self-contained services, each with its own database schema. Each interconnected microservice is smaller than the layered architecture of a monolithic application.
Microservices can operate within serverless environments. While a serverless function may appear similar to a microservice, there is one inherent difference: a microservice may handle multiple functions, not just one, so a microservice is typically larger than a serverless function.
Microservices use a partitioned database architecture, and the services within it work together to operate a single application. Breaking the application into sets of manageable services affords architects speed, simplicity and ease of maintenance, and each service can be scaled and deployed independently.
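As a rough illustration, one such self-contained service might look like the following Flask sketch; the route and in-memory datastore are assumptions standing in for a real service with its own database schema.

```python
# Minimal sketch of one self-contained microservice (pip install flask).
# The route and the in-memory "datastore" are illustrative assumptions;
# in the partitioned design described above, each service owns its own schema.
from flask import Flask, jsonify

app = Flask(__name__)

# This service's private data -- no other service reads it directly.
_orders = {1: {"id": 1, "status": "shipped"}}

@app.route("/orders/<int:order_id>")
def get_order(order_id):
    order = _orders.get(order_id)
    if order is None:
        return jsonify(error="not found"), 404
    return jsonify(order)

if __name__ == "__main__":
    app.run(port=5001)  # deployed and scaled independently of other services
```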
The microservices ecosystem can be difficult to maintain due to its sheer complexity. Testing and implementing changes and updates are complex, since each microservice may depend on one or more other services to function. Careful planning and coordination are needed to ensure rollouts reach every interconnected component.
Serverless Security Needs to Be Addressed
In addition to setting physical security controls, it is important to restrict access to serverless applications, containers and microservices. Employ the rule of least privilege, which grants each user the minimum level of access needed to complete their work. The following tips will also help to make your serverless deployment more secure.
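In AWS terms, least privilege can be expressed as a narrowly scoped IAM policy. The sketch below uses boto3; the policy name, table ARN and allowed actions are hypothetical and would be replaced by whatever a given function actually needs.

```python
# Hedged sketch of least privilege: an IAM policy granting a function only
# the two actions it needs on one table. Names and ARN are hypothetical.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],  # nothing broader
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
    }],
}

iam.create_policy(
    PolicyName="orders-function-least-privilege",  # hypothetical name
    PolicyDocument=json.dumps(policy_document),
)
```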
Secure Function Dependencies
Developers are responsible for securing function dependencies within application libraries. Update to the latest application versions when they are released, and patch vulnerabilities as soon as they are disclosed.
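One lightweight way to enforce this is to fail the build when a bundled library falls behind a known patched release. The sketch below checks installed versions with the standard library and the packaging package; the package names and minimum versions are illustrative assumptions.

```python
# Minimal sketch: fail the build if a dependency predates a patched release.
# Package names and floor versions below are illustrative assumptions.
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version  # pip install packaging

MIN_PATCHED = {"requests": "2.31.0", "urllib3": "1.26.18"}

def check_dependencies():
    for pkg, minimum in MIN_PATCHED.items():
        try:
            installed = version(pkg)
        except PackageNotFoundError:
            raise SystemExit(f"{pkg} is not installed")
        if Version(installed) < Version(minimum):
            raise SystemExit(f"{pkg} {installed} predates patched release {minimum}")

if __name__ == "__main__":
    check_dependencies()
    print("All dependencies meet their minimum patched versions.")
```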
Add a Security Layer Over Workflows
An extra layer of security should be implemented over serverless workflows. API gateways act as a controlled point of entry and constrain access to groups of microservices. They enforce strict authentication before a request can trigger functions. In addition, API gateway end-to-end security offers monitoring and alerts, holding up a deployment whenever risk concerns are raised.
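For illustration, API Gateway can delegate that authentication step to a small authorizer function that allows or denies each request before any backend function is triggered. The token check below is a placeholder assumption; a real deployment would validate a signed token such as a JWT.

```python
# Hedged sketch of an API Gateway Lambda authorizer: the gateway calls this
# before a request may trigger backend functions. The token check is a
# placeholder; real code would verify a signed token's signature and expiry.
def handler(event, context):
    token = event.get("authorizationToken", "")
    effect = "Allow" if is_valid(token) else "Deny"
    return {
        "principalId": "caller",  # identifier for the authenticated caller
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],  # the API method being invoked
            }],
        },
    }

def is_valid(token):
    return token == "demo-token"  # stand-in only: validate a real JWT instead
```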
Use a VPN
Set up a virtual private network (VPN) to add a layer of security that is easier to configure than firewalls or physical networks. It is easiest to set up a VPN early in a serverless application’s life; adding one later is more difficult.
Build Threat Models to Identify Cyber Security Threats
Threat modeling uses process flow diagrams to identify and mitigate security issues. Irwin, an AWS DevOps Engineer, says, “Design threat-less, serverless architecture using ThreatModeler.” Architects can also customize ThreatModeler with their own threats and security requirements.
Out-of-the-box tools such as ThreatModeler let you threat model an entire serverless environment, such as the AWS or Microsoft Azure platform. ThreatModeler takes the entire serverless deployment environment into consideration, encompassing both backend and FaaS concepts.
How ThreatModeler Will Help to Secure Your Serverless Deployment
ThreatModeler Cloud Edition aids organizations with its award-winning, automated, scalable platform for cloud infrastructures. Map out and address potential threats to various AWS environments. CISOs can rest more easily knowing that ThreatModeler is helping them to manage risks more effectively.
ThreatModeler Cloud Edition offers seamless integration with the CI/CD pipeline, enabling DevSecOps teams to build a secure cloud architecture. To learn more about how ThreatModeler™ can help your organization build a scalable threat modeling process, book a demo to speak to a ThreatModeler expert today.
How to Threat Model an AWS Microservices Architecture
On Thursday, October 3, 2019, from 10 – 11 AM ET, ThreatModeler will host a webinar on How to Threat Model an AWS Microservice Architecture. ThreatModeler is proud to be an AWS Advanced Technology Partner, offering full integration with AWS microservices. Utilizing AWS microservices helps organizations achieve simplicity, flexibility and scalability. Explore how threat modeling can identify threats unique to your infrastructure or application architecture.