Serverless computing is one of the fastest-growing paradigms in the cloud. What are its actual benefits? What differentiates it from other models? With the advent of serverless computing, developers and IT departments have had the opportunity in recent years to focus on strategic activities, leaving aside time-consuming tasks such as the planning, procurement, and maintenance of computing resources. It is worth clarifying immediately that we are in the field of cloud computing. As you know, there are three main cloud computing models, which differ in the level of control, flexibility, and resource management they offer:

- Infrastructure as a Service (IaaS), which provides virtualized computing resources such as servers, storage, and networking;
- Platform as a Service (PaaS), which adds a managed environment for building and deploying applications without administering the underlying infrastructure;
- Software as a Service (SaaS), which delivers complete, ready-to-use applications over the network.
In recent years we have witnessed the evolution of the models described above, and different paradigms for managing and consuming resources have emerged: one of these is the so-called Function as a Service (FaaS), also known as serverless computing. Serverless computing is a cloud computing paradigm that allows applications to run without the developer having to worry about the underlying infrastructure. The term “serverless” can be misleading: one might think that this model involves no servers at all. In reality, it indicates that the provisioning, scaling, and management of the servers on which applications run are handled automatically, in a way that is completely transparent to the developer.
All this is possible thanks to a new architecture model called serverless. The first FaaS offering was AWS Lambda, released by Amazon in 2014. Over time, alternatives to Amazon’s solution have been developed by other prominent vendors, such as Microsoft with Azure Functions, and IBM and Google, each with their own Cloud Functions. There are also solid open-source solutions: among the most widely used are Apache OpenWhisk, adopted by IBM itself on Bluemix for its serverless offering, and OpenLambda and IronFunctions, which are based on Docker container technology.
A function contains code that a developer wants to run in response to specific events. The developer configures this code and specifies its resource requirements in the console of the chosen vendor. Everything else, including resource sizing, is handled automatically by the provider, based on the workload.
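As a concrete illustration, here is a minimal sketch of what such a function might look like on AWS Lambda using Python. The `lambda_handler(event, context)` signature is the standard entry point of Lambda’s Python runtime; the event shape assumes an API Gateway trigger, and the memory and timeout values mentioned in the comments are hypothetical settings the developer would declare in the console rather than in code.

```python
import json

# A minimal AWS Lambda handler. The developer writes only this code and
# declares resource requirements (e.g. 128 MB of memory, a 10 s timeout)
# in the vendor's console; provisioning and scaling are automatic.
def lambda_handler(event, context):
    # 'event' carries the data of the triggering event; here we assume an
    # API Gateway request whose body is a JSON document with a "name" field.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")

    # 'context' exposes runtime metadata, e.g. the remaining execution time.
    print(f"Time remaining (ms): {context.get_remaining_time_in_millis()}")

    # The return value becomes the HTTP response when invoked via API Gateway.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Note that no server, container, or scaling policy appears anywhere in the code: the function is invoked once per event, and the provider runs as many parallel instances as the workload requires.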
The benefits deriving from serverless computing are many:

- No infrastructure management: provisioning, patching, and capacity planning are the provider’s responsibility.
- Automatic scalability: the provider runs as many instances of a function as the incoming events require.
- Pay-per-use pricing: you are billed only for the time your code actually runs, not for idle servers.
- Faster time to market: teams can concentrate on business logic instead of operations.
As always, not all that glitters is gold. There are cons to consider when evaluating the adoption of this paradigm:

- Vendor lock-in: functions are often written against provider-specific services and event formats, making migration costly.
- Cold starts: a function that has not been invoked recently may suffer additional startup latency.
- Execution limits: providers impose caps on execution time, memory, and payload size, which rule out long-running workloads.
- Harder observability: debugging and monitoring many short-lived, distributed functions is more complex than with a traditional application.
- Cost unpredictability: for complex or high-volume systems, per-invocation billing can become difficult to forecast.
Several companies already rely on serverless computing. AWS Lambda is used, for example, by Localytics to process billions of data points in real time, both historical data stored in S3 and data streamed from Kinesis. The Seattle Times uses AWS Lambda to resize the images of its online edition so that they display correctly on multiple devices, whether desktops, tablets, or smartphones.
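To make the image-resizing use case concrete, here is a hedged sketch of how such a function could be structured: a Lambda triggered by an S3 upload event that produces a thumbnail. The bucket name and target size are hypothetical, and the Pillow library is assumed to be bundled with the deployment package, since Lambda’s Python runtime does not include it by default.

```python
import io

import boto3
from PIL import Image  # Pillow must be bundled with the deployment package

s3 = boto3.client("s3")
THUMBNAIL_SIZE = (480, 320)        # hypothetical target size
OUTPUT_BUCKET = "resized-images"   # hypothetical destination bucket

def lambda_handler(event, context):
    # An S3 'ObjectCreated' event lists the uploaded objects in 'Records'.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]  # may be URL-encoded in real events

        # Download the original image into memory.
        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # Resize in place, preserving the aspect ratio.
        image = Image.open(io.BytesIO(original))
        image.thumbnail(THUMBNAIL_SIZE)

        # Write the thumbnail to the output bucket under the same key.
        buffer = io.BytesIO()
        image.save(buffer, format=image.format or "JPEG")
        buffer.seek(0)
        s3.put_object(Bucket=OUTPUT_BUCKET, Key=key, Body=buffer)
```

In a real deployment, one such function per device class (desktop, tablet, smartphone) or a single function producing several sizes would run automatically on every upload, with no resizing servers to operate.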
Spindox uses AWS Lambda for several purposes:
It also integrates Lambda functions with IoT projects to process and log requests coming from chatbots developed internally.
It therefore appears evident that the use of serverless computing is closely tied to the type of product being developed, and that not all applications are suited to this paradigm. The limitations become especially evident with legacy systems, which are not always easy to adapt to new technologies, or with overly complex systems, where costs risk growing out of control.
If used with due care, the advantages for new applications are evident in both the development process and the quality of the resulting product. Using resources only when they are needed makes serverless a very flexible and attractive model for companies, but what does it mean for development and operations teams? Could the lack of resource planning lead to under-engineered applications, lacking proper attention to both performance and the use of the resources themselves?