High-efficiency apps — preventing unnecessary network requests

Rogério de Oliveira
13 min read · Jul 26, 2023


Juazeiro (Ziziphus joazeiro) during the dry season in Porteirinha — MG.

Motivation

It’s well known that server requests are among the most expensive tasks in web applications. Even today, they remain a significant challenge for developers and engineers seeking to build highly efficient apps.

Despite the latest breakthroughs in the communication field that have significantly improved networking, such as 5G, Wi-Fi 6, and so on, it’s still necessary to be careful about the server’s response time and resource consumption. Depending on the nature of the solution, it will always be a crucial requirement. Real-time apps, for example, can be extremely sensitive to network latency. Furthermore, there are other relevant factors that compel us to maximize network efficiency to the greatest extent possible, as outlined below:

  • Green energy and green coding
    Nowadays, more than ever, climate change is in focus, and yes, the way we code is a part of it. Performing unnecessary requests increases the load on the server as well as on the network infrastructure itself (routers, amplifiers, switches, etc.), which results in higher power consumption. Keep in mind that your code requires electrical energy to run; minimizing software energy consumption, starting with preventing unnecessary requests, limits its potential environmental impact.
  • Limited mobile data plans
    If someone is accessing your site on a limited mobile data plan, every unnecessary network request is a waste of their money.
  • Slow networks
    Currently, it’s not difficult to find people struggling with problematic connections in terms of speed and stability. A significant portion of the world’s population still relies on what would be considered “poor connections” (according to: A Global Overview of Internet Prices and Speed). In this case, it’s desirable to keep server requests as efficient as possible in terms of both quantity and payload size.
  • User experience
    A web page will not render until all of its essential resources have downloaded completely, which can lead to long wait periods and, consequently, the loss of the user’s attention.

These are just a few examples; other scenarios are also possible. Among all these possibilities, this article will focus on software tips for optimizing client-server communication. Specifically, it will address data transfer (size and speed) by avoiding unnecessary requests and optimizing existing ones. The idea is to keep the current network infrastructure unchanged and improve its overall performance and quality by following these tips.

HTTP: the usual and not so optimized way to connect client and server

Data transfer is undoubtedly an essential aspect when discussing connection efficiency, if not the most crucial one. It directly affects all the previous points discussed. To enhance the data transfer process, it’s generally necessary to act on HTTP — Hypertext Transfer Protocol, which is one of the most widely used standards for loading web pages and exchanging information between a client and a server.

In terms of HTTP, there are various approaches to optimizing server-client communication; the most common are:

  1. Resource minification
  2. File compression
  3. Reduction of the number of server calls and request optimization
  4. Image formats
  5. Lazy loading
  6. Caching
  7. CDN — Content Delivery Network

Each of these examples deserves a dedicated article to be properly covered. For that reason, as mentioned before, this content will be restricted to the third one, which provides some possible answers to the question: how can network efficiency be enhanced by reducing the number of server calls?

The main thing to keep an eye on when talking about HTTP is its Round Trip Time — RTT. Despite the successful adoption of HTTP/2 and even HTTP/3, RTT is inherent to their foundation. Therefore, it is advisable to avoid unnecessary requests whenever possible, as HTTP requests can be slow and expensive. The following diagram shows a simplified comparison of HTTP/1 (left side) and HTTP/2 (right side).

Figure 1 — The initial version of HTTP required several round trips before starting content delivery, which was solved with the introduction of its second version. Even so, the round trip time (RTT) is unavoidable.

Every HTTP call is composed of a request and a response. Before receiving any data, the client needs to negotiate the connection with the server through a process called a handshake, which is not depicted in the diagram above. Here is a fun explanation of how it works: “How HTTPS works”.

All of that ends up consuming a considerable portion of the total time, that is, the duration from the initial request until the start of data transmission. In addition, there is network latency associated with the infrastructure, which can directly impact the response time. The HTTP protocol is too complex to be fully covered in this article; check out MDN’s article for a more in-depth conceptual overview: “An Overview of HTTP.”

At this point, there is a clear need to cut out unnecessary requests and optimize the remaining ones. However, before getting into that, it’s worth talking a bit about the HTTP cache, one of the first lines of defense against unnecessary requests. In general, the HTTP cache is an effective way to improve load performance because it reduces unnecessary network requests. It is supported in all browsers and doesn’t take much effort to set up. On the other hand, it offers limited control over the lifespan of cached responses. As its name suggests, the browser looks up its cache before starting a new request in order to check whether there’s a valid cached response that can fulfill it. If there’s a match, the response is read from the cache, eliminating both the network latency and the data cost of the transfer.

Figure 2 — HTTP cache is the gateway for developers to improve application performance and end-user experience. Adapted from: What Is HTTP Caching and How Does It Work?.
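To make this concrete, a response opts into the HTTP cache through headers such as Cache-Control. Below is a minimal sketch assuming an Express-style Node.js server; the endpoint and the solarService.list call are illustrative assumptions, not part of the examples that follow.

// Sketch: opting a response into the HTTP cache via Cache-Control.
// Assumes an Express server; "solarService.list" is hypothetical.
import express from 'express';
import { solarService } from 'services';

const app = express();

app.get('/panel', async (req, res) => {
  const panels = await solarService.list();
  // Any cache (browser or intermediary) may store this response and
  // reuse it for up to 5 minutes before revalidating with the server.
  res.set('Cache-Control', 'public, max-age=300');
  res.json(panels);
});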

Honestly, from my perspective, the HTTP cache is just one part of the optimal solution, and it does not eliminate the need to design an efficient communication mechanism between server and client. Indeed, it is generally agreed that the most effective way to optimize an app in terms of HTTP requests is at the code level.

Going beyond HTTP cache

Server issues aside, perhaps the main reason for struggling with response time is the way developers design the calls themselves. From a code perspective, there are some simple design patterns and tricks that can be quite useful for improving execution time. Let’s explore a few of them.

The code-based examples use Node.js + Knex on the server side and JavaScript on the client side to demonstrate their practical application to solar panel data. However, the same concepts can also be applied to other technologies. The snippet below contains the data set used throughout the examples, which consists of three rows of solar panel data.

// Solar panel dataset. Fetched from my-host.com/panel
[
  {
    panelId: '3fb7b82a-b15c-4b96-ba8f-94c4059742b9',
    area: 2.00,
    manufacturer: 'Solar Edge',
    panelCode: 'SE-1255MF',
    panelType: 'monocrystalline',
    power: 12.55,
    warrantyTime: 30
  },
  {
    panelId: '47e27902-444a-456c-94e6-6de34d7e1dbf',
    area: 3.50,
    manufacturer: 'Solar Edge',
    panelCode: 'SE-2500MF',
    panelType: 'monocrystalline',
    power: 12.55,
    warrantyTime: 25
  },
  {
    panelId: '418713ca-7b40-4a1d-83a7-ce049cd66975',
    area: 4.00,
    manufacturer: 'Solar Edge',
    panelCode: 'SE-1255PF',
    panelType: 'polycrystalline',
    power: 12.55,
    warrantyTime: 30
  }
];

1. Reuse local data whenever possible

This principle tells us to reuse the available data whenever possible instead of performing a new request to get the same data. This becomes clear with edit and delete operations. For instance, let’s imagine that the second row was edited, changing its power value to 14.00, so the data set would look like this:

// Edited solar panel dataset. Fetched from my-host.com/panel
[
  {
    panelId: '3fb7b82a-b15c-4b96-ba8f-94c4059742b9',
    area: 2.00,
    manufacturer: 'Solar Edge',
    panelCode: 'SE-1255MF',
    panelType: 'monocrystalline',
    power: 12.55,
    warrantyTime: 30
  },
  {
    panelId: '47e27902-444a-456c-94e6-6de34d7e1dbf',
    area: 3.50,
    manufacturer: 'Solar Edge',
    panelCode: 'SE-2500MF',
    panelType: 'monocrystalline',
    power: 14.00,
    warrantyTime: 25
  },
  {
    panelId: '418713ca-7b40-4a1d-83a7-ce049cd66975',
    area: 4.00,
    manufacturer: 'Solar Edge',
    panelCode: 'SE-1255PF',
    panelType: 'polycrystalline',
    power: 12.55,
    warrantyTime: 30
  }
];

In this case, the client just needs to wait for a 204 status indicating that the edit operation was successful, because all the required data is already available on the client.

// Solar panel controller

import { solarService } from 'services';

const edit = async (req, res) => {
  try {
    const panel = req.body;

    await solarService.edit(panel);
    // If the operation succeeds on the server, in most cases there is
    // no need to send any data back to the client.
    return res.status(204).send();
  } catch (err) {
    res.status(500).send();
  }
};

By doing so on the server side, we achieve lighter and even faster HTTP responses. Finally, the client app simply needs to update its local data with the newer record.

// Solar panel component
import { solarService } from 'services';

// "panel" contains the data changed by the user, whereas "panels" is the data set.
const edit = async (panel) => {
  try {
    await solarService.edit(panel);
    // Once the operation succeeds, just override the old record with the new value.
    const index = panels.findIndex((p) => p.panelId === panel.panelId);
    panels[index] = panel;
  } catch (err) {
    // Error handling.
  }
};

The same principle can be applied to delete operations. In this case, the server would send a 204 status, and the client would then remove the specific record from its dataset. For most of the creation process, the required data is generated on the client side. Typically, only a small amount of information, such as the primary key and other necessary fields, is generated on the server. This way, most data can be reused directly by the client. For instance, let’s consider the creation of a new panel record. The service could be as follows:

// Returns [ { panelId: '3fb7b82a-b15c-4b96-ba8f-94c4059742b9' } ]
const create = (panel) => {
  return db('panel').returning('panelId').insert(panel);
};
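On the client, the freshly created record can then be completed with the server-generated key, with no follow-up fetch. A minimal sketch mirroring the earlier edit example ("panels" is the local data set):

// Solar panel component (sketch)
import { solarService } from 'services';

// "panel" holds the user-provided fields; only the primary key
// comes back from the server, so everything else is reused locally.
const create = async (panel) => {
  try {
    const [{ panelId }] = await solarService.create(panel);
    // Merge the server-generated key into the local record.
    panels.push({ ...panel, panelId });
  } catch (err) {
    // Error handling.
  }
};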

2. Create constant values on the client instead of looking for them on the server

This point might sound quite controversial, and in fact, it is. The idea is to keep constant or rarely changing values on the client instead of retrieving them from the server. The main downside of this approach is that when a value changes, it must be updated in two places, both on the client and on the server, to maintain synchronization. That might be unfeasible for certain applications, but when it isn’t, it can help reduce the overall request load. For instance, let’s assume there are different types of panels available:

// Types of solar panels
[
  {
    id: 1,
    panelType: 'monocrystalline'
  },
  {
    id: 2,
    panelType: 'polycrystalline'
  }
]

The data structure shown above could be replicated on the client, avoiding the need for an additional request to fetch it from the server.

// Types of solar panels defined on the client
export const PANEL_TYPES = [
  {
    id: 1,
    panelType: 'monocrystalline'
  },
  {
    id: 2,
    panelType: 'polycrystalline'
  }
];

Then, when presenting the panel types to the user, there is no longer any need to wait for them, as sketched below. This tip helps prevent unnecessary requests for small pieces of information.
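For instance, a form could fill its options synchronously from the constant. A minimal sketch, where the module path and the element id are illustrative assumptions:

// Fill a <select> from the local constant: no request, no waiting.
import { PANEL_TYPES } from 'constants/panelTypes';

const select = document.querySelector('#panel-type');
for (const { id, panelType } of PANEL_TYPES) {
  const option = document.createElement('option');
  option.value = id;
  option.textContent = panelType;
  select.appendChild(option);
}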

3. Grouping HTTP requests

This tip aims to speed up the overall app load time by grouping requests. As mentioned before, each HTTP request requires a round trip (Figure 1). Consequently, the more requests the app makes, the longer it will take. However, if it’s possible to wrap those requests into a single one, the required time can be significantly reduced. Let’s imagine a web page used to build solar kits, which are composed of inverters, transformers, optional items, and the panels themselves. The following resources are required to populate the kit-building page:

  • Inverters;
  • Transformers;
  • Additional items;
  • Panels.

Each of the above elements needs to be fetched from a specific endpoint, so initially the app would end up performing four requests:

Figure 3 — Mainly due to HTTP round trips, multiple independent requests can take more time than grouped or single ones.

In this hypothetical example, all requests took a total of 1100 ms. To improve this timing, a viable solution would be to consolidate all requests into a single endpoint, which we can call “kit-resource,” as depicted in Figure 4.

Figure 4 — Depending on the server’s architecture, grouped or single requests can save precious time.

This time, it only took 650 ms instead of the 1100 ms required by multiple requests. Keep in mind that the numbers used in these examples are not real measurements; they are used solely for illustration. What matters is the idea that, in general, grouping requests can save loading time.
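On the server, such a “kit-resource” endpoint could be sketched as follows. The individual service names are my assumptions; the four lookups still happen, but they run in parallel and travel behind a single HTTP round trip:

// "kit-resource" controller (sketch): one round trip instead of four.
import { inverterService, transformerService, itemService, solarService } from 'services';

const getKitResources = async (req, res) => {
  try {
    // Fetch the four resources in parallel on the server side.
    const [inverters, transformers, additionalItems, panels] = await Promise.all([
      inverterService.list(),
      transformerService.list(),
      itemService.list(),
      solarService.list()
    ]);
    return res.status(200).json({ inverters, transformers, additionalItems, panels });
  } catch (err) {
    res.status(500).send();
  }
};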

The idea of grouping requests works fine when we’re talking about a single API, but with microservices it’s not necessarily true. An analysis is required to determine whether it is worthwhile to group them in an API Gateway before sending them to the client; otherwise, multiple requests are inevitable. Some readers might point to GraphQL and similar technologies as alternatives to request grouping, but that is a completely different discussion. Since the goal of this article is to cover only RESTful services, discussing GraphQL would be out of scope.

4. Data pagination

Data pagination is not a new concept for most developers, and perhaps it’s already well defined in their minds. Still, it is worth mentioning. In fact, it’s unavoidable: sooner or later your server will need it to keep its operation healthy. Of course, it’s strongly recommended to design data-paginated apps from the beginning.

Figure 5 — Data pagination reduces considerably the overload for both server and client.

By paginating the data, requests become demand-driven. In other words, a chunk is only fetched from the server when it is truly required. This behavior brings numerous benefits to both the server and the client, preventing overload and reducing loading time. Note that this time we aren’t talking about cutting down the number of requests at all; on the contrary, we’re trying to enhance overall performance by evenly distributing the data over time according to the client’s needs.
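With Knex, a paginated service can be as small as the sketch below; the page size, the parameter, and the ordering column are assumptions of mine:

// Paginated solar panel service (sketch).
const PAGE_SIZE = 20;

const list = (page = 0) => {
  return db('panel')
    .select('*')
    .orderBy('panelCode') // a stable order keeps pages consistent
    .limit(PAGE_SIZE) // fetch only one chunk...
    .offset(page * PAGE_SIZE); // ...starting at the requested page
};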

5. Lazy loading

This technique is quite powerful and relies on the same basic principle as the previous one — data pagination. The idea here is to achieve lighter requests by cutting out unnecessary payload. A page may consist of various resources, such as images, and a visiting user may never scroll down to the bottom to see all of them, so triggering requests to retrieve that data would be a complete waste of time. Lazy loading solves that problem by “paginating” the page resources themselves, including CSS, HTML, and JavaScript. Instead of loading the entire app at once, it can be split into “pieces” to get speedy requests. The following two figures illustrate how it works in a simple manner.

Figure 6 — Without lazy loading, the app loads all its resources even when they aren’t necessary for the current context.

In Figure 6, when a user navigates directly to the home page using the “/home” path, the app loads all of its assets, including files from other pages. Even though the user never visits the “sales page,” its assets are still downloaded, adding overhead to the load. Let’s assume that the loading time for the “sales page” is 150 ms and 300 ms for the “home page.” Consequently, without lazy loading, the total loading time would be 450 ms.

Figure 7 — The lazy loading approach allows to avoid unnecessary and heavy requests by fetching only the required assets for the current context.

By contrast, when only the necessary assets are fetched, the response time improves significantly, as shown in Figure 7. A user navigating to the “sales page” would then wait only the 150 ms that page needs, instead of 450 ms.

Currently, lazy loading is widely supported in modern web frameworks such as Angular, Vue, React, Solid, and others.
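Under the hood, these frameworks typically build on the dynamic import(), which bundlers split into separately fetched chunks. A minimal framework-free sketch, where the module path and function name are illustrative assumptions:

// Load the sales page code only when the user actually navigates to it.
const openSalesPage = async () => {
  // Bundlers emit './pages/sales.js' as its own chunk, fetched over
  // the network only on the first call; later calls reuse the module.
  const { renderSalesPage } = await import('./pages/sales.js');
  renderSalesPage();
};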

6. Debouncing

Debouncing postpones work until the user has settled down. For example, let’s imagine a search field where it is desirable to hold off typing events until the user enters a minimum number of characters, or to trigger those events only after a certain amount of time has elapsed, to avoid unnecessary requests. This comes from the fact that the more information the user provides, the more effective the search will be, so waiting a bit is always welcome. To sum up, the client app holds onto the requests until its requirements are fulfilled, such as meeting a minimum input length or respecting a specific time period. Figures 8 and 9 illustrate how this type of debouncing works.

Figure 8 — Without debouncing, each input event triggers a new request to the server. Sending a new request while the user hasn’t finished filling out the form may result in a complete waste of resources.
Figure 9 — Debouncing restricts the number of requests by preventing them from being called again until a certain amount of time has passed.

Taking Figure 8 as an example, let’s assume that the user is looking for all solar panels whose manufacturer code starts with “SE-12”. To perform this search, an input is used to retrieve the results based on the provided keywords. Without a debouncer, a new input event would be raised for each letter, resulting in a new request being sent to the server: a waste of resources, wouldn’t it? It’d be much better if the app could wait a few milliseconds before triggering the request. Keep in mind that users are considerably slower than input events, so in this context a debouncer is always welcome.
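A trailing-mode debouncer fits in a few lines. Below is a minimal sketch; the 300 ms delay, the #search element, and the solarService.search call are illustrative assumptions:

// Trailing-mode debounce: "fn" runs only after the user has been
// idle for "delay" milliseconds; each new event resets the timer.
import { solarService } from 'services';

const debounce = (fn, delay) => {
  let timer;
  return (...args) => {
    clearTimeout(timer); // cancel the pending invocation...
    timer = setTimeout(() => fn(...args), delay); // ...and reschedule it
  };
};

// Hypothetical search field and service call.
const input = document.querySelector('#search');
const search = debounce((keywords) => solarService.search(keywords), 300);

// Each keystroke calls "search", but only the last one
// (after 300 ms of silence) actually reaches the server.
input.addEventListener('input', (event) => search(event.target.value));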

Besides traditional debouncing, there are other variations of the same concept. The example discussed here is known as “trailing mode,” in which the invocation occurs after a delay. There is also the opposite, called “leading mode,” in which the invocation occurs immediately, as soon as the first event happens. Great content about debouncing techniques can be found at: “Debouncing and Throttling Explained Through Examples”.

Conclusion

This brief text has demonstrated techniques and ideas for improving an app’s performance by reducing and optimizing client-server communication over the HTTP protocol. Some of them are widespread in the development culture, while others are not. For instance, in my opinion, the idea of reusing local data or constant values is rarely discussed in technical content.

As shown, mismanaged HTTP requests can cause problems, leading to delays and overloading for both the client and the server. In contexts with high performance requirements, that is simply unacceptable.

The intention here is to present some of the available possibilities so that, based on each scenario, the best approach can be determined. Some of the mentioned ideas may not be ideal due to specific app restrictions, so the developer needs to find the sweet spot between business constraints and performance.

Regardless of the adopted solution, the key message to remember is: always strive to build efficiency-first applications; the environment, the users, and your finances will thank you.

References

POSNICK, Jeff. Prevent unnecessary network requests with the HTTP Cache. Web.dev, 2020. Available at: https://web.dev/http-cache/. Accessed on: June 15th, 2023.

CORBACHO, David. Debouncing and Throttling Explained Through Examples. CSS Tricks, 2016. Available at: https://css-tricks.com/debouncing-throttling-explained-examples/. Accessed on: June 23rd, 2023.

Written by Rogério de Oliveira

Postgraduate in Software Architecture - PUC/MG | Computer Engineering - UNIFEI My LinkedIn: https://www.linkedin.com/in/rogerio-oliveirahs/
