Serverless Event-Driven Applications
In an age where cloud computing is becoming ever more ubiquitous, companies are adopting both cloud-native and hybrid solutions. Moving to the cloud is cost-effective and eliminates much of the overhead involved in setting up and managing infrastructure and servers.
These cloud-based applications are designed and written as small chunks of functionality, or single functions, called microservices. Different microservices have different usage patterns; some are called frequently, others very rarely. Yet all of these services have to stay live on a server, consuming resources and computational power even when idle. While cloud economics are far more efficient than on-premise servers for handling variable peak loads, they can be optimized even further with serverless architecture for certain use cases.
Wikipedia defines serverless computing as:
“A cloud computing execution model in which the cloud provider dynamically manages the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity. It is a form of utility computing.
Serverless computing still requires servers. The name ‘serverless computing’ is used because the server management and capacity planning decisions are completely hidden from the developer or operator. Serverless code can be used in conjunction with code deployed in traditional styles, such as microservices. Alternatively, applications can be written to be purely serverless and use no provisioned services at all.”
So, in serverless computing, we bundle these services in containers, and the cloud infrastructure runs those containers in response to certain events. The idea is to package a set of computational resources in a container that can be brought up and torn down within milliseconds. Because of this, your service is only live while it is doing actual work. This means you aren't paying per hour for an instance; rather, you are paying per function call.
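The economics of per-call billing can be made concrete with a little arithmetic. The sketch below compares an always-on instance with per-invocation pricing; all of the prices, call volumes, and memory figures are hypothetical, chosen only to illustrate the shape of the calculation.

```python
# Illustrative cost comparison: always-on instance vs. per-invocation billing.
# All rates and workload numbers below are hypothetical.

HOURS_PER_MONTH = 730

def monthly_cost_always_on(hourly_rate):
    """Cost of keeping a server instance live for the whole month."""
    return hourly_rate * HOURS_PER_MONTH

def monthly_cost_serverless(invocations, seconds_per_call, rate_per_gb_second, memory_gb):
    """Cost when billed only for actual execution time (GB-seconds consumed)."""
    return invocations * seconds_per_call * memory_gb * rate_per_gb_second

always_on = monthly_cost_always_on(hourly_rate=0.10)
serverless = monthly_cost_serverless(
    invocations=100_000, seconds_per_call=0.5,
    rate_per_gb_second=0.000017, memory_gb=0.25)

print(f"always-on: ${always_on:.2f}, serverless: ${serverless:.2f}")
```

With these assumed numbers, 100,000 half-second calls cost a fraction of a dollar, while the idle-most-of-the-time instance costs tens of dollars; the gap narrows as the service's duty cycle approaches 100%.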
Container virtualization is core to cloud computing and serverless event-driven applications. Container technology is also known as operating-system-level virtualization. It is a new-age approach to virtualization that equips the application with only the bare minimum of resources required to run and function, using the host operating system as its base.
According to a Cloud Academy article by Vineet Badola: “Rather than virtualizing the hardware (which requires full virtualized operating system images for each guest), containers virtualize the OS itself, sharing the host OS kernel and its resources with both the host and other containers.”
One can often find a single executable service or microservice inside a container. Containers are often only tens of megabytes in size and can be provisioned within a matter of seconds. Docker is a famously lightweight implementation of container virtualization.
IBM OpenWhisk on Bluemix
OpenWhisk is IBM's implementation of a serverless, event-driven infrastructure framework. As an event-action platform, OpenWhisk lets you execute code in response to an event. This serverless operational and deployment model hides many of the complexities of infrastructure. Developers no longer have to worry about pre-provisioning servers or about operations; they can simply focus on code and business needs, allowing them to quickly build robust and scalable applications.
IBM OpenWhisk Concept and Architecture
Source: Medium
The OpenWhisk model is built around the following concepts:
- Triggers: A trigger is an event that fires when a specific condition is met. It can be linked to events fired by external services, such as a change to a table in Cloudant, a message arriving on a Message Hub queue, a commit in GitHub, or an IoT sensor sending data. A trigger can also be fired by a periodic alarm.
- Actions: An action is an event handler. It is the code snippet a developer writes that is invoked directly through an HTTP call or by a trigger. OpenWhisk supports Node.js, Python, Swift, and even arbitrary binaries packaged as Docker containers.
- Rules: A rule maps a trigger to an action: when the trigger fires, the associated action is invoked. Multiple rules can associate several triggers with a single action.
- Sequences: A sequence is the chaining of multiple actions.
- Packages: These describe external services in a uniform manner.
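To make the action concept concrete, here is a minimal Python action. OpenWhisk's Python convention is a `main()` function that receives the event's parameters as a dict and returns a JSON-serializable dict; the file name and parameter names below are our own choices for the sketch.

```python
# hello.py -- a minimal OpenWhisk action in Python.
# OpenWhisk invokes main() with the event's parameters as a dict
# and expects a JSON-serializable dict back as the action's result.

def main(params):
    # "name" would be supplied by the invoking trigger or HTTP call.
    name = params.get("name", "world")
    return {"greeting": f"Hello, {name}!"}
```

With the `wsk` CLI, an action like this is typically deployed with something along the lines of `wsk action create hello hello.py` and wired to a trigger with `wsk rule create myRule myTrigger hello` (exact flags depend on your CLI version).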
Additionally, with API Gateway support (which is included), you can expose an OpenWhisk action as an API. This provides the capability to later apply security and rate limiting policies, view API usage and response logs, and define API sharing policies.
Some of the advantages of OpenWhisk are:
- Low-level details such as scaling, load balancing, logging, fault tolerance, and message queues are taken care of.
- Support for multiple programming languages. With the inclusion of Docker actions, you can write your code in whichever language you want and bundle the binaries in a Docker container to be invoked as an action. This means companies are not forced to standardize on, or build skills in, any particular programming language.
- An open ecosystem that supports and allows sharing microservices via OpenWhisk packages.
- A rich ecosystem of building blocks from various domains (analytics, cognitive, data, IoT, etc.).
- It hides infrastructural complexities and enables developers to focus on business logic.
- It provides a pricing model in which you are charged per request, rather than per hour as in the traditional model.
OpenWhisk is a powerful, open-source serverless computing platform. It is changing the paradigm for building robust applications, with seamless integration and on-demand scaling.
Leveraging the Ability of IBM OpenWhisk and IBM Containers to Perform Complex Asynchronous Tasks
As mentioned above, OpenWhisk lets you run your application binary using Docker actions. This is a very useful feature, as it does not bind you to any specific programming language. A Docker action works as follows:
- OpenWhisk receives an event.
- The OpenWhisk infrastructure brings up the Docker container.
- The container runs the encapsulated application binary and, for synchronous two-way operations, returns a response.
- OpenWhisk brings down the Docker container.
A Docker container can run for a maximum of 300 seconds. Docker actions are therefore usually designed for synchronous or asynchronous short-lived tasks that complete within this timeframe. But there may be scenarios in which your asynchronous application or task takes more than 300 seconds to complete.
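One way to stay under the limit is to keep the action itself short-lived and hand long jobs off to a separate worker. The sketch below is not an OpenWhisk API; `launch_detached` and `run_inline` are hypothetical hooks injected to keep the example self-contained. In practice the launcher would call a container service to start a detached worker, which is exactly the pattern the DevOps scenario uses.

```python
# Sketch: keep the action short by delegating long work to a detached worker.
# `launch_detached` and `run_inline` are hypothetical callables; a real action
# would call the container service's API and the job logic directly.
import uuid

ACTION_TIME_LIMIT_SECS = 300  # per-invocation cap discussed above

def main(params, launch_detached, run_inline, estimated_secs):
    if estimated_secs < ACTION_TIME_LIMIT_SECS:
        # Short job: run it inline and return the result synchronously.
        return {"status": "done", "result": run_inline(params)}
    # Long job: fire-and-forget a worker container and return immediately,
    # so the action itself finishes well within the 300-second window.
    job_id = str(uuid.uuid4())
    launch_detached(job_id, params)
    return {"status": "accepted", "job_id": job_id}
```

The caller gets either a finished result or a job id it can use to correlate later progress updates (for example, the Slack notifications described below).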
Let’s look at the typical scenario of a DevOps build and deploy. Building and deploying apps to Bluemix can take a long time, depending on the number and type of applications. For this scenario, we can integrate IBM Containers with IBM OpenWhisk.
The diagram above shows the integration of OpenWhisk and IBM Containers to implement the DevOps build-and-deploy scenario.
It flows as follows:
- Git commit events are fired and received on the GitHub webhook trigger in OpenWhisk.
- The GitHub webhook is sequenced with a Docker action. This Docker action is created from a Linux image containing our entry script, which invokes and creates instances of a DevOps image in the IBM Container service.
- The Docker action creates a container for the DevOps image in detached mode, passing the required parameters, and exits. The DevOps image contains the actual scripts for building and deploying applications to Bluemix. Because the DevOps container is created in detached mode, it is automatically destroyed when the script finishes.
- DevOps containers use IBM shared volumes for the persistence of various application builds.
- The build and deployment scripts send progress updates by invoking OpenWhisk actions via HTTP. These actions are sequenced with the Slack webhook to send updates over a Slack channel.
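The core of the entry script can be sketched as two small helpers: one assembles the `docker run` invocation for the detached worker (step 3), and one builds the message body for a Slack incoming webhook (step 5). The image name, environment variable names, and volume name here are hypothetical placeholders, not part of OpenWhisk or the IBM Container service.

```python
# Sketch of the entry script's core pieces. Image, env-var, and volume
# names are hypothetical; a real script would run the command via
# subprocess and then exit, leaving the detached worker to finish.

def devops_run_command(image, repo_url, commit_sha, volume="builds-volume"):
    """Assemble a `docker run` invocation for a detached, self-removing worker."""
    return [
        "docker", "run",
        "-d",                            # detached: the entry script can exit at once
        "--rm",                          # remove the container when its script finishes
        "-v", f"{volume}:/builds",       # shared volume persists build artifacts
        "-e", f"REPO_URL={repo_url}",    # parameters passed to the build script
        "-e", f"COMMIT_SHA={commit_sha}",
        image,
    ]

def slack_payload(stage, status):
    """Body for a Slack incoming webhook, which accepts a JSON "text" field."""
    return {"text": f"Build {stage}: {status}"}
```

The worker would POST `slack_payload(...)` to an OpenWhisk action over HTTP, and the action's sequence would forward it to the Slack webhook, keeping the long-running build entirely outside the 300-second action window.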
A similar approach can be used for any use case that requires running long, loosely coupled asynchronous tasks in a cost-effective way.
Any questions? Drop us a note at firstname.lastname@example.org.
About the Author
Vinitesh is a consultant for Prolifics’ Smarter Process practice, specializing in integration and cloud computing. He has experience in implementing various digital transformation projects using business process automation and SOA technologies. He is also a certified IBM BPM Advanced integration developer and has a keen interest in the field of serverless computing and blockchain.