By Menaka Jayawardena, Associate Technical Lead at WSO2
Today, customers increasingly demand access to real-time information, such as stock prices or train times, and they expect to be informed automatically when something changes, without having to hit the refresh button. Delivering this critical information as it occurs is a challenging task for every business. Traditionally, applications had to repeatedly query backend servers to fetch the latest information; however, this proved inefficient, as it consumes a significant amount of resources.
Many of the APIs that make up the web today are synchronous: the client requests a resource, data, or service and receives the result within the time frame of a single request/response cycle. Polling is a common approach for keeping such data fresh, but making periodic requests to the backend and waiting for responses becomes inefficient over time.
APIs should be designed to allow users to receive a stream of events from the service, instead of polling it periodically. Event-driven APIs or asynchronous (async) APIs can be used to meet this requirement — with mission-critical information pushed to client applications at the time of the event. This provides a much better experience for users.
Async APIs vs. REST APIs
REST APIs for core management capabilities and end-user interactions have become essential for building both on-premises and cloud-based solutions. RESTful APIs are flexible, fast, popular, and scalable, and they are now favoured over SOAP APIs, which are becoming outdated.
Unlike conventional request/response APIs (e.g., REST and SOAP), asynchronous APIs can send multiple responses to a single request, and the communication can be unidirectional or bi-directional. Several protocols can be used for async APIs, such as WebSocket, Webhooks, MQTT, and Server-Sent Events (SSE). Most of these protocols use HTTP at the connection creation stage and then use a dedicated channel to transfer the subsequent messages between the client and the server. Also, conventional HTTP verbs (e.g., GET, POST, PUT) are not valid for these channels.
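As a concrete illustration of HTTP being used only at connection creation, the WebSocket protocol (RFC 6455) begins with an HTTP Upgrade request: the server proves it understood the handshake by hashing the client's `Sec-WebSocket-Key` with a fixed GUID and returning the result in `Sec-WebSocket-Accept`. A minimal sketch:

```python
import base64
import hashlib

# Fixed GUID defined by RFC 6455 for the WebSocket opening handshake.
WS_MAGIC_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key: str) -> str:
    """Compute the Sec-WebSocket-Accept header value the server must
    return to complete the HTTP -> WebSocket upgrade."""
    digest = hashlib.sha1((sec_websocket_key + WS_MAGIC_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Sample key from RFC 6455, section 1.3:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After this exchange, the underlying TCP connection stops speaking HTTP entirely and carries WebSocket frames in both directions.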
Another prominent difference between a REST API and an async API is the usage of an event backbone technology (a message broker such as Kafka or RabbitMQ) and topics. The backend services are registered as event publishers and they publish events on specific topics. Client applications are registered as event subscribers to respective topics, to receive those events published by the publisher services. Upon receiving the events, the client performs the required processing and displays it to the user.
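The publisher/topic/subscriber relationship described above can be sketched with a minimal in-memory broker. (A real deployment would use Kafka, RabbitMQ, or similar; the class and method names here are illustrative.)

```python
from collections import defaultdict
from typing import Any, Callable

class InMemoryBroker:
    """Toy event backbone: backend services publish events on topics,
    and client applications subscribe to topics to receive them."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, event: Any) -> None:
        # Every subscriber of the topic receives the published event.
        for callback in self._subscribers[topic]:
            callback(event)

# A backend service publishes stock prices; a client app subscribes.
broker = InMemoryBroker()
received = []
broker.subscribe("stock-prices", received.append)
broker.publish("stock-prices", {"symbol": "WSO2", "price": 42.0})
print(received)  # [{'symbol': 'WSO2', 'price': 42.0}]
```

Events published on other topics never reach this subscriber, which is what makes topics the unit of access control and throttling in the sections that follow.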
Since async APIs and REST APIs are conceptually different, several unique challenges arise when using a conventional system for asynchronous APIs. These include incompatibilities with existing security mechanisms and throttling policies, and problems around capturing analytics data. Handling these challenges via a proper API management solution that fully supports event-driven APIs is a must.
Are your Event-Driven APIs secure?
API security can be categorised into authentication and authorisation. Authentication verifies the identity of the user or application, while authorisation determines whether the authenticated user is permitted to perform a specific task on a given resource. In conventional REST APIs, users can be authenticated using user credentials, access tokens, certificate-based authentication, etc.; in addition, each resource can be protected with scopes and each API invocation can be secured individually. However, asynchronous APIs only have topics to which the clients and services subscribe, and the communication occurs through a dedicated messaging backbone, which makes securing these APIs a challenging task.
One possible approach to this challenge is by authenticating during the initial HTTP communication. For example, we can secure the initial WebSocket handshake (via HTTP) before creating the connection. It is also possible to enforce authorisation by defining whether the client can publish any events or not.
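For instance, a gateway could reject the WebSocket upgrade unless the handshake's HTTP headers carry a valid bearer token. This is a minimal sketch; the `valid_tokens` set and the header handling are illustrative, not any specific product's API:

```python
def authorize_handshake(headers: dict[str, str], valid_tokens: set[str]) -> bool:
    """Check the Authorization header of the initial HTTP upgrade request
    before allowing the WebSocket connection to be created."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    return auth.removeprefix("Bearer ") in valid_tokens

tokens = {"abc123"}
print(authorize_handshake({"Authorization": "Bearer abc123"}, tokens))  # True
print(authorize_handshake({}, tokens))                                  # False
```

Because the check happens before the connection is established, an unauthenticated client never gets a channel to subscribe or publish on; a real gateway would validate a signed token (e.g., a JWT) rather than look it up in a set.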
Depending on the use case, some endpoints may also be left open, requiring neither authentication nor authorisation.
Rate Limiting, Throttling and Monetisation
Security, rate limiting, throttling, monetisation and analytics are some of the important factors that an organisation should focus on when exposing its core business functions as APIs. To address these, an enterprise must select the right API management solution.
Much of the time, the end goal of any business that exposes APIs to external parties is to generate revenue. For this, the main requirement is the ability to limit the usage of the API (blocking access, reducing bandwidth, etc.). API management systems support rate limiting and monetisation for REST/SOAP APIs, using policies based on the request count (requests per second/minute, bandwidth, etc.). When a client exceeds the number of requests allowed, the client is blocked for some time.
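A request-count policy of this kind can be sketched as a fixed-window counter (the limit and window length below are arbitrary examples):

```python
import time

class FixedWindowLimiter:
    """Allow at most `limit` requests per `window` seconds; once the limit
    is exceeded, further requests are blocked until the window resets."""

    def __init__(self, limit: int, window: float) -> None:
        self.limit = limit
        self.window = window
        self.window_start = time.monotonic()
        self.count = 0

    def allow(self) -> bool:
        now = time.monotonic()
        if now - self.window_start >= self.window:
            # A new window begins: reset the counter.
            self.window_start = now
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False

limiter = FixedWindowLimiter(limit=3, window=60.0)
print([limiter.allow() for _ in range(4)])  # [True, True, True, False]
```

This model works because every REST invocation is a discrete client-initiated request; the next paragraph explains why it breaks down for async APIs.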
Protecting backend services from spikes of requests is also handled via these policies, by introducing a request rate limit. But when it comes to async APIs, the server publishes the events and the applications are the event subscribers. Conventional throttling policies therefore cannot be applied, because the flow of events from server to client needs to be considered instead.
The definition of throttling policies should also be changed. Consider the following:
Time-based throttling: A client can only be subscribed to the topic for a specific time. After that, the client is disconnected from the server.
Event count-based throttling: A client can only receive x number of total events. This can also be combined with time-based throttling to create composite policies (e.g., 10,000 events per day).
Backpressure-based throttling: When a client cannot handle the rate of events it receives, it puts stress on the gateway delivering the messages, since the gateway has to queue the messages and send them only when the client can accept them. In these situations, the client can be disconnected from the gateway to ensure that the gateway is not affected.
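The three policies above could be combined into a single per-subscriber check at the gateway. The time limits, event quotas, and queue sizes below are illustrative values, not recommendations:

```python
import time

class SubscriberPolicy:
    """Stop delivering to a subscriber when it exceeds its subscription
    time (time-based), its event quota (event count-based), or when its
    pending-message queue grows too large (backpressure-based)."""

    def __init__(self, max_seconds: float, max_events: int, max_queue: int) -> None:
        self.deadline = time.monotonic() + max_seconds  # time-based limit
        self.max_events = max_events                    # event-count limit
        self.max_queue = max_queue                      # backpressure limit
        self.events_sent = 0

    def may_deliver(self, pending_queue_size: int) -> bool:
        if time.monotonic() >= self.deadline:
            return False  # subscription window has expired
        if self.events_sent >= self.max_events:
            return False  # event quota exhausted
        if pending_queue_size > self.max_queue:
            return False  # client too slow; protect the gateway
        self.events_sent += 1
        return True

policy = SubscriberPolicy(max_seconds=3600, max_events=2, max_queue=100)
print(policy.may_deliver(0))  # True
print(policy.may_deliver(0))  # True
print(policy.may_deliver(0))  # False (event quota exhausted)
```

A gateway would run a check like this before pushing each event, and disconnect the subscriber on the first `False`.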
Why analytics plays a vital role in API management
Analytics plays a vital role in any API-driven business: it helps make informed decisions by providing details such as the number of API consumers, the most-accessed API resources, and latencies, and by identifying trends. It should be a mandatory capability of any API management product.
In traditional REST/SOAP APIs, an API gateway can capture information such as invoking API resources, backend latencies, geo-locations, etc. These are fetched from the request/response headers.
When it comes to async APIs, capturing this information becomes much more complex, since there are no HTTP requests or responses. What we do have is a set of topics and subscribers. All the messages are sent through a separate channel (server -> client or client -> server) and the gateway should be able to capture the required information. For each subscriber of the API, the gateway should capture:
- The number of messages being pushed
- The variation in TPS (transactions per second) over time
- The number of publishing errors
- Health details about the backend (endpoint)
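One way to capture these per-subscriber figures at the gateway is a small metrics accumulator. The names are illustrative; a production gateway would periodically export these counters to an analytics backend:

```python
import time

class SubscriberMetrics:
    """Track per-subscriber delivery stats: messages pushed, publishing
    errors, and throughput (TPS) over the elapsed subscription time."""

    def __init__(self) -> None:
        self.started = time.monotonic()
        self.messages_pushed = 0
        self.publish_errors = 0

    def record_push(self, ok: bool) -> None:
        if ok:
            self.messages_pushed += 1
        else:
            self.publish_errors += 1

    def tps(self) -> float:
        # Average messages per second since the subscription started.
        elapsed = max(time.monotonic() - self.started, 1e-9)
        return self.messages_pushed / elapsed

m = SubscriberMetrics()
for _ in range(5):
    m.record_push(ok=True)
m.record_push(ok=False)
print(m.messages_pushed, m.publish_errors)  # 5 1
```

Backend (endpoint) health would come from a separate probe rather than from the delivery path, so it is omitted from this sketch.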
Expanding business reach and adoption
Using event-driven APIs has become key to meeting customer demand and providing a better user experience. Since there are several fundamental differences between REST and async APIs, using a standard API management solution may be challenging. The right API management solution should combine traditional API management capabilities with an event-driven architecture. Moreover, vendors now provide integration software with plug-and-play and configuration-driven approaches to implement asynchronous messaging patterns. This will provide tremendous value, enabling an organisation to expand business reach and adoption.