Push vs Polling vs Long Polling

There are situations where we need to develop "event-based" applications, such as a chat application, a real-time cryptocurrency price display, or an event-driven microservices architecture. In these cases, beyond messaging semantics, it is crucial to choose a communication model that matches the application's performance and scalability needs. Essentially, we have three models to consider: push, polling, and long polling. Let's explore the conceptual differences between them.

Push

In this model, the client opens a connection to the server and waits for messages. The connection remains open, and the server synchronously pushes messages to the client as soon as it receives them from the sender.

This model provides the most real-time experience among the three approaches. Messages are delivered to the client immediately after the server receives them. However, it is worth noting that this model may not be the most scalable option. The server lacks knowledge of whether the client is ready to receive new messages, which can potentially lead to increased server load and impact application performance.

The pushing model is well-suited for scenarios with a limited number of clients/consumers and messages. For instance, WhatsApp uses it for chat, where each group or channel has a small number of clients. RabbitMQ and Redis Pub/Sub are other examples of technologies that employ the pushing model.
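The push model described above can be sketched with a toy in-memory broker (the class and method names here are illustrative, not any real library's API): the client registers itself once, and the server delivers every message immediately, without asking whether the client is ready.

```python
import threading

class PushBroker:
    """Toy broker: delivers each published message to every subscriber
    immediately, without asking whether the client is ready."""

    def __init__(self):
        self._subscribers = []
        self._lock = threading.Lock()

    def subscribe(self, callback):
        # The client hands the server a callback; conceptually, the
        # connection stays open from this point on.
        with self._lock:
            self._subscribers.append(callback)

    def publish(self, message):
        # The server pushes synchronously to all clients as soon as it
        # receives the message from the sender.
        with self._lock:
            subscribers = list(self._subscribers)
        for deliver in subscribers:
            deliver(message)

# A client simply collects whatever the server pushes at it.
received = []
broker = PushBroker()
broker.subscribe(received.append)
broker.publish("hello")
broker.publish("world")
print(received)  # ['hello', 'world']
```

Note that `publish` blocks until every subscriber's callback has run: with many slow clients, this is exactly where the scalability concern mentioned above shows up.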

Polling

Unlike the push model, polling follows a request/response approach. The client sends a request to the server and receives an immediate response, which may include new messages or no data at all. This model can be inefficient in terms of bandwidth usage when there are no new messages, as it results in empty responses. Additionally, opening and closing connections for each request can be costly in terms of resources.

Although the polling model may not scale well, it is the simplest one to implement and can be useful in scenarios where scalability and performance are not major concerns. It provides a straightforward way to retrieve updates from the server. However, it's important to consider the impact on bandwidth consumption and resource utilization when employing this model.
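A minimal sketch of the polling model (again with hypothetical names) makes the wasted round trips visible: the client asks on a fixed schedule, and most responses come back empty.

```python
class PollServer:
    """Toy server: answers every request immediately, even with no data."""

    def __init__(self):
        self._pending = []

    def publish(self, message):
        self._pending.append(message)

    def poll(self):
        # Request/response: return whatever is pending right now,
        # possibly an empty list (a wasted round trip).
        messages, self._pending = self._pending, []
        return messages

server = PollServer()
empty_polls = 0
# The client polls on a fixed schedule, regardless of whether data exists.
for tick in range(5):
    if tick == 3:
        server.publish("price update")
    batch = server.poll()
    if not batch:
        empty_polls += 1  # bandwidth spent on an empty response
print(empty_polls)  # 4 of the 5 round trips returned nothing
```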

Long polling

This model combines elements from the previous two models: the client sends a request to the server, and the connection remains open until new messages are returned. Long polling addresses the issue of empty responses that occur in polling, which can waste bandwidth and server resources. With long polling, the client informs the server, "Hey, let me know when there's a new message, I'll wait for it." This way, the server is aware that a specific client is ready to receive messages and can send them accordingly.

There are two interesting aspects to consider here. First, once a response is received, the client/server connection is closed, and the client is responsible for making another request to the server when it's ready to handle new messages. Second, server parameters can be configured to enhance performance. For example, the server could be set to send data to clients when there are at least three new messages or when the pending messages reach a size of 25KB or more. This optimization improves message handling efficiency and enhances the overall client/server communication. In a sense, long polling moves the polling loop to the server side: while the client's request waits, the server keeps checking locally for new messages to return.
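Both aspects can be sketched together with a toy server (the names and the `min_messages`/`timeout` parameters are illustrative, not a real API): the request blocks until enough messages are pending or a timeout expires, and each response closes the exchange, so the client must poll again.

```python
import threading
import time

class LongPollServer:
    """Toy server: holds the request open until enough messages arrive,
    instead of answering immediately with an empty body."""

    def __init__(self):
        self._pending = []
        self._cond = threading.Condition()

    def publish(self, message):
        with self._cond:
            self._pending.append(message)
            self._cond.notify_all()

    def poll(self, min_messages=1, timeout=5.0):
        # "Hey, let me know when there's a new message, I'll wait for it."
        deadline = time.monotonic() + timeout
        with self._cond:
            while len(self._pending) < min_messages:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    break  # time out rather than hold the request forever
                self._cond.wait(remaining)
            messages, self._pending = self._pending, []
            return messages

server = LongPollServer()

def producer():
    for i in range(3):
        time.sleep(0.05)
        server.publish(f"message {i}")

threading.Thread(target=producer).start()
# Respond only once at least three messages are pending (batching threshold).
batch = server.poll(min_messages=3, timeout=2.0)
print(batch)  # ['message 0', 'message 1', 'message 2']
```

The `min_messages` threshold plays the same role as the "at least three new messages" example above; a byte-size threshold would work the same way.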

Kafka serves as an example of a technology that implements the long polling model, which contributes to its scalability compared to RabbitMQ in scenarios involving a massive number of messages and clients. Server-Sent Events (SSE) is also worth mentioning: it starts like long polling, with the client opening an HTTP request, but the server keeps the connection open and streams events over it instead of closing after each response, which makes it behave more like push.

Although I mentioned specific tools that implement these three communication models (push, polling, and long polling), it's important to note that these models are protocol-agnostic and can be implemented without relying on any particular tool.

By understanding the differences between push, polling, and long polling, developers can make informed decisions and choose the most appropriate communication model to ensure optimal performance and scalability for their event-based applications.