While we know the many benefits of going serverless -- reduced costs via pay-per-use pricing, less operational overhead, instant scalability, increased automation -- the challenges of going serverless are rarely addressed as comprehensively. Understandable concerns about migrating can stall architectural decisions and actions entirely, for fear of getting it wrong or not having the right resources.
This article discusses the common concerns around going serverless and offers advice to minimize their impact.
If you'd like to learn more about the challenges of going serverless and other concepts more in-depth, make sure to check out our Knowledge Base.
Security Risks Caused by Misconfiguration and Premature Deployment
Misconfiguration, and the premature deployment that often follows it, is a genuine issue in any technology. Even though serverless is a managed service with fewer configuration concerns to take into account, you are still in charge of securing your application, just as in a traditional server-based infrastructure. As teams migrate and adopt new cloud services without full insight into their deployments until it's too late, their infrastructure is exposed to data leaks, Distributed Denial of Service (DDoS) attacks, and man-in-the-middle attacks, to name a few.
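One concrete example of such a misconfiguration is an over-broad IAM role that grants a function far more access than it needs. A minimal sketch of a least-privilege policy, assuming a hypothetical function that only needs to read and write a single DynamoDB table (the account ID and table name below are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSingleTableAccess",
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders-table"
    }
  ]
}
```

Scoping the `Action` list and `Resource` ARN this tightly means that even a compromised function can't touch the rest of your account.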
There have been plenty of stories over the years of prominent organizations suffering data breaches, leaks, and other successful attacks on their infrastructure, leading their customer base to question their security and reputation, not to mention the huge financial repercussions. Serverless infrastructures, on the other hand, have proved comparatively resilient to security breaches.
Learning any new language or skill involves making mistakes; the key is to keep those mistakes from having any real impact. There are plenty of resources and platforms available that check whether your infrastructure follows security best practices. A simple approach is to deploy small and often into a test environment, let the changes run there for some time while using one of these platforms to cross-check, and only promote to production once everything proves safe.
Dashbird analyses the security posture, efficiency, and reliability of cloud-native applications and reports back. You can find out in seconds how your infrastructure benchmarks against industry best practices.
Cost Efficiency for Prolonged Computing
The longer your compute tasks run, the more you pay. For highly scaled applications processing data-intensive workloads, a detailed cost analysis of usage-based serverless services is required to ensure cost efficiency. Costs can spike unpredictably with an unforeseen increase in demand, and organizations may be forced to trade off the traffic they can accommodate against the IT expense.
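As a back-of-the-envelope sketch of why duration dominates, Lambda bills per request plus per GB-second of compute. The rates below are illustrative placeholders; check current AWS pricing for your region:

```python
# Rough Lambda cost model: a per-request fee plus a GB-second compute fee.
PER_MILLION_REQUESTS = 0.20    # USD per 1M invocations (assumed rate)
PER_GB_SECOND = 0.0000166667   # USD per GB-second (assumed rate)

def monthly_lambda_cost(invocations: int, avg_duration_s: float, memory_gb: float) -> float:
    request_cost = invocations / 1_000_000 * PER_MILLION_REQUESTS
    compute_cost = invocations * avg_duration_s * memory_gb * PER_GB_SECOND
    return request_cost + compute_cost

# Same invocation count, very different bills:
print(monthly_lambda_cost(5_000_000, 0.2, 0.5))   # short tasks: ~ $9
print(monthly_lambda_cost(5_000_000, 60.0, 1.0))  # long tasks: ~ $5,000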
Make sure to keep a close eye on AWS service costs.
There are multiple ways to address this; the easiest is simply not to use Lambda for long-running processes at all. Lambda's limits have been relaxed over the years, and the service is more flexible than ever before, but sometimes a virtual machine or container is still the better fit. AWS Fargate aims to be the middle ground between Functions as a Service and VMs: serverless containers, so to speak. It may be a better home for your long-running processes.
Learn more about AWS Lambda cost-saving strategies.
The more involved solution is to make your process more distributed where possible: divide and conquer. This can lead to shorter invocations and even less money spent. Sadly, this isn't always possible; some processes simply can't be parallelized, and in those cases Lambda stops being an option.
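A minimal sketch of that divide-and-conquer pattern, assuming a hypothetical worker function named `process-chunk` that handles one slice of the workload; a coordinator fans the slices out as asynchronous Lambda invocations:

```python
import json
import boto3

lambda_client = boto3.client("lambda")

def fan_out(items, chunk_size=100):
    """Split a large workload into chunks and invoke a worker Lambda per chunk."""
    for start in range(0, len(items), chunk_size):
        chunk = items[start:start + chunk_size]
        # InvocationType="Event" invokes asynchronously, so chunks run in parallel
        # and no single invocation approaches the execution time limit.
        lambda_client.invoke(
            FunctionName="process-chunk",  # hypothetical worker function
            InvocationType="Event",
            Payload=json.dumps({"items": chunk}),
        )
```

Each worker stays well under the timeout, and because Lambda bills per GB-second, many short parallel invocations often cost no more than one long one.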
Additional Function and Microservice Calls
Every function and microservice call introduces additional overhead, since the workloads may not reside on the same server instance. Factor this in when planning for an accurate total cost of ownership.
Furthermore, every function invocation has inherent latency that can impact app performance, and most serverless providers enforce time limits on how long a request can consume compute resources before it's terminated. For instance, Lambda's execution time limit is 15 minutes; if an API Gateway event invokes the Lambda function, the timeout drops to just 29 seconds.
AWS services keep getting better at integrating directly with each other, so try to cut out Lambda bridging functions wherever possible. API Gateway can write to DynamoDB directly, and with a VTL mapping template you can transform the request data to fit your table without needing a Lambda function. This saves money and speeds up the transfer.
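As a hedged sketch of such a direct integration, a request mapping template along these lines lets API Gateway call DynamoDB's PutItem with no Lambda in between (the table and attribute names are illustrative):

```
## API Gateway request mapping template (VTL) for a DynamoDB PutItem integration.
## Table name and attributes are placeholders for illustration.
{
  "TableName": "orders-table",
  "Item": {
    "id": { "S": "$context.requestId" },
    "body": { "S": "$util.escapeJavaScript($input.body)" }
  }
}
```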
The same goes for AWS Step Functions: it can already deliver its payload directly to many services, removing the need for costly Lambda glue functions in between.
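For example, a Step Functions state can write to DynamoDB through the optimized service integration, sketched below with illustrative names (the table name and input path are assumptions):

```json
{
  "WriteOrder": {
    "Type": "Task",
    "Resource": "arn:aws:states:::dynamodb:putItem",
    "Parameters": {
      "TableName": "orders-table",
      "Item": { "id": { "S.$": "$.orderId" } }
    },
    "End": true
  }
}
```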
Reduced Observability
As we already know, insight is key and the main driver for architectural changes and improvements. A common stumbling block for anyone new to serverless infrastructure is the lack of visibility, or rather the seemingly reduced visibility compared to what they were used to.
Serverless, by design, encourages event-based architecture and is often stateless, so having access to logs and application traces is the only way to understand any gaps in your infrastructure.
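Structured logs make those traces far easier to query. A minimal sketch of a Lambda handler emitting one JSON log line per invocation (the field names are illustrative):

```python
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    # One structured line per invocation, so log queries can filter on fields
    # instead of grepping free-form text.
    logger.info(json.dumps({
        "requestId": context.aws_request_id,
        "source": event.get("source", "unknown"),
        "message": "processing started",
    }))
    # ... business logic ...
    return {"statusCode": 200}
```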
All public cloud platforms offer services to increase the visibility and observability of your infrastructure; however, specialist monitoring platforms such as Dashbird can give further insight. Such services make observability easier by providing intuitive dashboards that drill down into detail when you need it, offering third-party integrations for automated alerts, and seamlessly staying up to date with any infrastructure changes.
These features offer full and comprehensive observability to a level that would be difficult to have in a default cloud-provider monitoring service such as AWS CloudWatch.
Vendor Lock-in
There is often a fear of losing control when it comes to serverless, as the vendor determines management and application specifics. The very conveniences of the cloud -- the vendor handling hardware choices and upgrades, runtimes, and resource parameters -- can also be read as over-reliance and inflexibility. And once the infrastructure is deployed and fully functioning, concerns arise around vendor lock-in and the limitations users would face should they want to migrate later down the line.
For developers working within agile organizations, architectural adaptability is crucial to meeting the needs of the business. While hardware choices are no longer down to the business, public cloud platforms and ways of working have come a long way toward enabling greater infrastructure autonomy.
Consider an application using AWS Lambda and AWS DynamoDB (DDB): hundreds of Lambda functions may interact with just a few DDB tables. If every Lambda queries using DDB's conventions directly -- programming to an implementation -- any database move will require an arduous change to each Lambda function. A useful workaround is to create an interface that translates generic Lambda requests into DDB queries. This is called programming to an interface.
With this change, when developers need to move away from DDB, they simply write a new implementation of the interface that understands the same requests and translates them into the new database's query language. The interface can even be deployed as a Lambda layer for an even greater level of decoupling.
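A minimal sketch of that pattern in Python, assuming a hypothetical `UserStore` interface; every Lambda depends on the interface, while only one concrete class knows DynamoDB's conventions:

```python
from abc import ABC, abstractmethod
import boto3

class UserStore(ABC):
    """The interface every Lambda programs against -- no DDB specifics leak out."""
    @abstractmethod
    def get_user(self, user_id: str) -> dict: ...

class DynamoDBUserStore(UserStore):
    """The only place that speaks DynamoDB; swap this class to change databases."""
    def __init__(self, table_name: str):
        self.table = boto3.resource("dynamodb").Table(table_name)

    def get_user(self, user_id: str) -> dict:
        return self.table.get_item(Key={"userId": user_id}).get("Item", {})

# Handlers receive a UserStore; moving off DDB means writing one new
# implementation rather than touching hundreds of functions.
store: UserStore = DynamoDBUserStore("users-table")
```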
Distributed Computing
With serverless comes the design for distributed computing, where components are spread across multiple machines for greater efficiency and performance. The challenge is striking a balance: functions granular enough for high performance, but not so numerous that the system becomes unmanageable in the long term, and not so high-level or broad that their very benefit is eliminated and you simply have multiple mini-monoliths to contend with.
When looking at large enterprises taking advantage of serverless and the distributed computing model, after the "I want that too" thought comes the "but how?!" It's important to remember that every such organization started small and scaled. This sounds like straightforward advice, but it can easily get lost in the noise when starting out or migrating entire builds into the cloud.
The second thing to keep in mind when designing how your functions communicate with one another is the actor model. Limiting each actor to a small set of behaviors -- creating other actors, sending messages, and deciding what to do with the next message -- helps avoid being overwhelmed and encourages a communicative design, as the sketch below shows.
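A toy sketch of that constraint in Python: each actor owns a private mailbox and private state, and the only way to interact with it is to send it a message:

```python
import queue

class Actor:
    """Minimal actor: a private mailbox, private state, and a receive loop."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0  # state is private to this actor; nothing is shared

    def send(self, message):
        self.mailbox.put(message)  # the only way to interact with an actor

    def process_next(self):
        message = self.mailbox.get()
        self.count += 1
        # on each message: update state, send to other actors, or spawn new ones
        print(f"handled {message!r} (total: {self.count})")

counter = Actor()
counter.send("order-created")
counter.process_next()
```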
Opportunities of Going Serverless
Now that we've gone through some of the challenges, we'd like to stress that the benefits of serverless are vast: the operational and financial rewards can significantly outweigh those of the older, clunkier alternatives, making the switch well worth it. Here are just some of the benefits of moving to serverless:
- Automated Infrastructure Management: An entire chain of manual infrastructure management processes is eliminated, as users no longer need to coordinate and manage resources for the increasingly distributed software components that constitute modern apps.
- Cost and Time Savings: Operational costs and time are reduced, as no system administration processes are required to package and deploy the apps.
- High Scalability and Optimized Resource Utilization: The cost barrier to scaling apps is also reduced, as serverless workloads don't require dedicated resources. Every application request scales independently and on demand, yet users are charged only for the period during which requests are served.
- Truly Agile Business Processes: Since application deployment is decoupled from the underlying infrastructure, Agile and DevOps-driven organizations can maintain flexible IT operations and business processes. Constraints from hardware complexity and infrastructure configuration play a smaller role in dictating IT-driven business operations or app functionality, so agile teams can aim for faster development sprints and deploy iterative improvements or changed app functionality with fewer constraints.
- Reliability and Performance: Services like AWS are inherently resilient and can guarantee performance SLAs most of the time. Serverless architecture also reduces the opportunities for the hardware misconfigurations that lead to performance degradation under traditional app deployment practices.
Conclusion
Going serverless comes with its challenges, and to deny any cause for confusion would be untruthful: there are plenty of unknowns in this now very accessible, fast-moving technology. When even large organizations with budgets for vast resources and teams are still subject to security breaches and architectural failures, the new serverless world can seem daunting and not worth the risk.
However, the many questions and concerns are matched, in equal measure, by solutions and answers. The most prominent advice we can give from our own experience: start small and stay small with configurations and deployments, use a dedicated serverless observability platform such as Dashbird to expand visibility and increase insight, continuously encourage and follow best practices, and keep things simple throughout to avoid overly complex systems.
It won't be perfect straight away, but Rome wasn't built in a day either!
Further reading:
Roadmap for Backend Developer on Serverless Infrastructures
Solving the Challenges of Serverless at Scale
Four Immediate Benefits You'll Gain from a Modern Monitoring Tool