How to Make HTTP Requests in Node.js With Fetch

Last edit: Dec 27, 2023

This article will discuss the basics of using the Fetch API, a simple and intuitive interface for making HTTP requests. It is essential for working with data from remote servers, scraping data for further processing, and providing a convenient and flexible way to interact with external resources.

This guide covers the fundamentals of using Fetch in Node.js, from installation to the essential methods such as GET and POST. We'll walk through practical examples of calling a web scraping API with the POST method and show the execution results. We'll also provide code snippets for all the frequently used HTTP methods and look at handling responses, logging requests, and sending files with the Fetch API.


Understanding the Fetch API

The Fetch API is a simple and intuitive interface for making asynchronous HTTP requests. It is essential for working with data from remote servers, providing a convenient and flexible way to interact with external resources. It is based on promises, making it a powerful tool for working with asynchronous code.

In contrast to other ways of making HTTP requests in Node.js, the Fetch API has several advantages. For example, the syntax of the Fetch API is concise and understandable, making the code more readable. Additionally, Fetch automatically parses JSON data, simplifying working with data in JSON format. Finally, Fetch supports streaming data, which is useful for working with large files.
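Because Fetch is promise-based, the same request can be written as a .then() chain or with async/await. Here is a minimal sketch of the async/await form; the fetch implementation is passed in as a parameter purely for illustration (on Node.js 18+ the global fetch works, otherwise import node-fetch):

```javascript
// A minimal sketch: a promise-based Fetch call written with async/await.
// fetchImpl defaults to the global fetch (Node.js 18+); with node-fetch,
// import it and pass it in instead.
async function getJson(url, fetchImpl = fetch) {
  const response = await fetchImpl(url);
  if (!response.ok) {
    throw new Error(`HTTP error! Status: ${response.status}`);
  }
  // Fetch parses the JSON body for you, no manual JSON.parse() needed.
  return response.json();
}

// Usage (placeholder URL):
// getJson('https://example.com/data.json').then(data => console.log(data));
```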

Basic Usage of Fetch in Node.js

To use the Fetch API, you should install the corresponding npm package. This requires having Node.js installed on your computer; we previously covered how to install Node.js in our introductory article on scraping using NodeJS. (Note that Node.js 18 and later also ship a built-in global fetch, but the examples in this article use the node-fetch package.)

To install the Fetch API, navigate to the folder of your project and run the following in the command prompt or terminal:

npm install node-fetch

Additionally, make sure your package.json declares the project as an ES module (so the import syntax works) and lists node-fetch as a dependency:

{
  "type": "module",
  "dependencies": {
    "node-fetch": "^3.3.1"
  }
}

After that, you can start using it in your project. Fetch API supports all HTTP methods, including GET, POST, PUT, and DELETE. Let's take a look at examples of using each method.

Fetch for GET Requests

GET requests are the simplest and most common type of HTTP request. They allow you to easily extract data from web pages, making them a popular choice for web scraping.

For example, let's use GET requests to fetch the HTML code of a page using the Fetch API. To make the examples more precise, we'll look at both a basic GET request and a GET request with additional parameters.

Making Basic GET Request with Fetch 

Now, create a file with the *.js extension and import the Fetch API:

import fetch from 'node-fetch';

Then, specify the URL of the page from which you want to get the data:

const url = 'https://demo.opencart.com/';

Finally, call fetch and chain the operations that read the page's HTML code and display the received data on the screen. Also add a handler that logs error information if an error occurs:

fetch(url)
  .then(response => response.text())
  .then(data => console.log('Your data:', data))
  .catch(error => console.error('Error:', error));

Save the changes in the project and run the script:

Screenshot: the page's HTML code retrieved with the response.text() method

As you can see, we got the necessary data in the form we expected. If you want to get not the page's HTML code but, for example, the JSON response of the request, then instead of response.text() you can use response.json(), which retrieves and parses the JSON response object.

Making GET Request with Additional Parameters using Fetch

Using additional parameters in a GET request is very simple. For this example, we will use Google SERP. First, we will import the module and define the base URL:

import fetch from 'node-fetch';

const baseUrl = 'https://www.google.com/search';

Next, we'll define the necessary parameters, including the query, the domain, the language, and the localization country:

const queryParams = '?q=Coffee&domain=google.com&gl=us&hl=en';

Then we will put together the entire link:

const url = `${baseUrl}${queryParams}`;
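Instead of concatenating the query string by hand, the same link can be built with the built-in URLSearchParams class, which also URL-encodes the values automatically. A quick sketch with the same parameters:

```javascript
// Alternative sketch: build the query string with the built-in
// URLSearchParams class, which URL-encodes values automatically.
const params = new URLSearchParams({
  q: 'Coffee',
  domain: 'google.com',
  gl: 'us',
  hl: 'en'
});

const searchUrl = `https://www.google.com/search?${params.toString()}`;
console.log(searchUrl);
// → https://www.google.com/search?q=Coffee&domain=google.com&gl=us&hl=en
```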

Keep the fetch and results output to the screen unchanged:

fetch(url)
  .then(response => response.text())
  .then(data => console.log('Your data:', data))
  .catch(error => console.error('Error:', error));

GET requests are the easiest to understand and process. Let's move on to more complex methods that support more parameters.


Fetch for POST Requests

POST requests send data, create new resources, or update existing ones on a server. They differ from GET requests, which are only used to retrieve data from a server.

In a POST request, data is sent in the request body, making it especially well-suited for sending large amounts of data. Additionally, data in a POST request can be sent in various formats, such as JSON, XML, or URL-encoded data, depending on the server's requirements.
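As a sketch of the URL-encoded variant mentioned above: a URLSearchParams object can serve directly as the request body, in which case Fetch sets the Content-Type header for you (the endpoint in the commented call is a placeholder):

```javascript
// Sketch: sending URL-encoded data instead of JSON. When a
// URLSearchParams object is used as the body, Fetch sets the
// Content-Type header to application/x-www-form-urlencoded itself.
const body = new URLSearchParams({ key1: 'value1', key2: 'value2' });

const requestOptions = {
  method: 'POST',
  body: body
};

// fetch('https://example.com/', requestOptions) ...
console.log(body.toString());
// → key1=value1&key2=value2
```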

Using Fetch for Basic POST Requests

Let's look at a basic example of how to make a POST request using the Fetch API. First, we'll import the module and declare the base URL for the request:

import fetch from 'node-fetch';

const url = 'https://example.com/';

Next, we'll define the parameters we need to pass in the request body:

const postData = {
    key1: 'value1',
    key2: 'value2'
  };

Finally, we'll assemble the entire request, specifying the HTTP method, request body, and headers object:

const requestOptions = {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify(postData)
};

Executing the request and displaying the data on the screen is almost identical. The only change is that in the fetch command, we need to specify not only the URL, but also additional parameters:

fetch(url, requestOptions)
  .then(response => response.json())
  .then(data => console.log('Your data:', data))
  .catch(error => console.error('Error:', error));

For example, you can use this approach to send the user's login and password using the POST method for authorization. In this case, the server will respond with a message indicating whether the authorization was successful.
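A quick sketch of that authorization scenario; the endpoint, field names, and credentials below are all hypothetical and only illustrate the shape of the request:

```javascript
// Hypothetical login endpoint and credential fields, for illustration only.
const loginOptions = {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ login: 'user', password: 'secret' })
};

// fetch('https://example.com/login', loginOptions)
//   .then(response => response.json())
//   .then(result => console.log(result));
```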

Sending Complex POST Request using Fetch

The previous example was very simple and more theoretical. Let's use the Web Scraping API with a POST request to get a list of all the titles on the demo site page. To do this, we import the fetch module:

import fetch from 'node-fetch';

Then, we specify the endpoint for the Web Scraping API and the unique API key:

const apiKey = "YOUR-API-KEY";
const url = "https://api.scrape-it.cloud/scrape";

Next, we specify the HTTP request headers, method, and body. In this case, we will use the extraction rules to extract only the product titles from the HTML page of the site:

const requestOptions = {
  method: 'POST',
  headers: {
    'x-api-key': apiKey,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    url: "https://demo.opencart.com/",
    js_rendering: false,
    extract_emails: false,
    extract_rules: {
      title: "h4"
    },
    proxy_type: "datacenter",
    proxy_country: "US"
  })
};

Finally, we execute the request and print the result of the extraction rules to the screen:

fetch(url, requestOptions)
  .then(response => response.json())
  .then(result => console.log(result.scrapingResult?.extractedData))
  .catch(error => console.log('error', error));

Running this code will output a list of all the product titles on the demo site page:

Screenshot: the list of product titles extracted from the demo site page

As you can see, we only got the necessary data using the Web Scraping API and a fairly simple POST request.

Fetch for Other HTTP Requests

As mentioned earlier, the Fetch API supports all major HTTP methods. In addition to GET and POST, PUT and DELETE are commonly used. To use these methods, create a new *.js file and import the node-fetch module. Then, specify the method type and execute the request.

Make PUT Request using Fetch

First, define the parameters you want to update with the PUT method.

const updatedData = { key: 'updatedValue' };

Then, simplify the previous code by specifying the URL and other parameters directly in the Fetch command.

fetch('https://example.com', {
  method: 'PUT',
  headers: {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify(updatedData)
})
  .then(response => response.json())
  .then(data => console.log(data))
  .catch(error => console.error('Error:', error));

As you can see, the request is not much different from previous examples.

Make DELETE Request using Fetch

The last method is used to delete data. Let's modify the previous example slightly:

fetch('https://api.example.com', {
  method: 'DELETE'
})
  .then(response => response.json())
  .then(data => console.log(data))
  .catch(error => console.error('Error:', error));

In addition to these methods, Fetch supports others, such as PATCH, HEAD, and OPTIONS. Their usage is similar to the methods we have already discussed.
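For instance, a HEAD request returns the same headers as a GET but no body, which makes it a cheap way to inspect a resource. A sketch, with the fetch implementation injectable so it is easy to test (the URL in the usage comment is a placeholder):

```javascript
// Sketch: use a HEAD request to read metadata without downloading the body.
// fetchImpl defaults to the global fetch (Node.js 18+).
async function checkResource(url, fetchImpl = fetch) {
  const response = await fetchImpl(url, { method: 'HEAD' });
  return {
    status: response.status,
    contentType: response.headers.get('Content-Type')
  };
}

// Usage: checkResource('https://example.com/').then(info => console.log(info));
```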


Handling Fetch API Responses

Previous examples have covered the basics of processing NodeJS Fetch API responses. However, there are a few additional things to consider. First, you should check the response status to ensure the request succeeded. Second, you should handle both JSON and text responses differently. Third, you can also process the response headers.

Processing the Response

When the response format is unknown (JSON or text), we can use a ternary operator to assign the correct processing logic: a successful response is parsed as JSON, while an error response is read as plain text.

fetch(url)
  .then(response => response.ok ? response.json() : response.text())
  .then(data => console.log('Data:', data))
  .catch(error => console.error('Error:', error));

As a result, we achieved dynamic processing that automatically adapts to the type of HTTP response.

Getting the Response Status Code

Handling status codes is an important part of writing good code. The status code indicates the outcome of a request. For example, a 200 status code indicates that the request was successful. A 500 status code indicates a server error. A 404 status code indicates that the requested page was not found. Now, let's put the above example into practice:

fetch(url)
  .then(response => {
    if (response.ok) {
      // Successful response
      return response.json();
    } else if (response.status === 500) {
      // Server error, so a retry could be attempted here
      return null;
    } else if (response.status === 404) {
      console.log('Page not found.');
      return null;
    } else {
      // Any other error
      throw new Error(`HTTP error! Status: ${response.status}`);
    }
  })
  .catch(error => {
    console.error('Error:', error);
  });

It is important to note that the .catch() at the end of the function handles HTTP request errors and any errors that may occur during processing. Following these best practices can make your code more resilient, flexible, and reliable.
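The 500 branch above stops at a comment; a minimal sketch of actually retrying on server errors could look like this (the fetch implementation is injectable here purely so the logic is testable):

```javascript
// Sketch: retry a request up to `retries` times when the server
// answers with a 5xx status. fetchImpl defaults to the global fetch.
async function fetchWithRetry(url, retries = 3, fetchImpl = fetch) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    const response = await fetchImpl(url);
    if (response.status < 500) {
      return response;
    }
    console.log(`Attempt ${attempt} failed with status ${response.status}, retrying...`);
  }
  throw new Error(`Request failed after ${retries} attempts`);
}
```

A production version would usually also wait between attempts (for example, with exponential backoff) rather than retrying immediately.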

Working with Response Headers

You may sometimes need to work with response HTTP headers. For example, let's get the value of the Content-Type header:

fetch(url)
  .then(response => {
    const contentType = response.headers.get('Content-Type');
    console.log('Content-Type:', contentType);

    return response.text();
  })
  .then(body => console.log(body))
  .catch(error => {
    console.error('Error:', error);
  });

The rest of the headers can be retrieved similarly.
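The headers object is also iterable, so you can walk over every header at once. For demonstration, the sketch below constructs a Headers object directly (a global on Node.js 18+); with a real request, response.headers supports the same entries() API:

```javascript
// Sketch: iterate over all headers. Note that header names come
// back lowercased.
const headers = new Headers({
  'Content-Type': 'text/html; charset=utf-8',
  'Cache-Control': 'no-cache'
});

for (const [name, value] of headers.entries()) {
  console.log(`${name}: ${value}`);
}
```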

Best Practices and Tips

The more of these techniques you use, the more practical, functional, and user-friendly your code will be. So, as additional ways to use the Fetch API, let's look at logging HTTP requests and transferring files.

Logging HTTP Requests 

Logging HTTP requests is essential for debugging and monitoring application performance. Typically, this involves writing logs to the console or a central log file. We can create a separate function to log into the console for convenience.

function logRequest(url, method, status) {
  console.log(`[HTTP Request] ${method} ${url} - Status: ${status}`);
}

To log data, simply call a pre-defined function in the desired location and pass it the link, method, and response status code.

fetch(url)
  .then(response => {
    logRequest(url, 'GET', response.status);
  })
  .catch(error => {
    console.error('Error:', error);
  });

In the future, you can customize the logging function to your needs. For example, instead of displaying logs on the screen, you can implement logging to a file.
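A minimal sketch of that file-based variant, appending each entry to a local log file with the built-in fs module (the log file name is arbitrary):

```javascript
import fs from 'fs';

// Sketch: append each log entry to a local file instead of the console.
// The default log file name is arbitrary.
function logRequestToFile(url, method, status, logFile = 'requests.log') {
  const entry = `${new Date().toISOString()} [HTTP Request] ${method} ${url} - Status: ${status}\n`;
  fs.appendFileSync(logFile, entry);
}

// Usage: logRequestToFile(url, 'GET', response.status);
```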

Send File Using Fetch

To send a file using Node.js, you need the fs module to read the file into memory, the node-fetch module to make the HTTP request, and a FormData implementation. The append signature used below matches the form-data npm package (install it with npm install form-data); depending on your node-fetch version, you may instead need a spec-compliant implementation such as formdata-node. First, import the modules into your project:

import fetch from 'node-fetch';
import FormData from 'form-data';
import fs from 'fs';

Next, specify the path to the file you want to send and the URL of the page that will receive the file:

const url = 'https://example.com';
const filePath = 'path/file.txt';

Then, read the file into memory in binary format:

const fileData = fs.readFileSync(filePath);

const formData = new FormData();
formData.append('file', fileData, { filename: 'file.txt' });

Set the request options, including the POST method and the request body with the file:

const options = {
  method: 'POST',
  body: formData,
};

And finally, execute the request:

fetch(url, options)
  .then(response => response.ok ? response.json() : Promise.reject('HTTP error!'))
  .then(data => console.log('Response:', data))
  .catch(error => console.error('Error:', error));

If the request is successful, the file will be sent to the page specified by the URL.
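On Node.js 18 and later, fetch, FormData, and Blob are all built in, so the same upload can be sketched without any extra packages. The file path, field name, and URL in the usage comment remain placeholders:

```javascript
import fs from 'fs';

// Sketch, assuming Node.js 18+ built-ins (fetch, FormData, Blob):
// package a file buffer as multipart form data with no dependencies.
function buildFileForm(buffer, filename) {
  const formData = new FormData();
  formData.append('file', new Blob([buffer]), filename);
  return formData;
}

// Usage (placeholder path and URL; fs is used to read the file):
// const form = buildFileForm(fs.readFileSync('path/file.txt'), 'file.txt');
// fetch('https://example.com', { method: 'POST', body: form })
//   .then(response => response.json())
//   .then(data => console.log('Response:', data));
```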

Conclusion and Takeaways

The Node Fetch API provides a simple and efficient way to scrape HTML pages with various HTTP requests in Node.js. Its advantages include clear and concise syntax, automatic JSON parsing, and streaming support, which helps work with large files.

This article provides a comprehensive overview of the primary usage of Fetch in Node.js, starting with installing the required package with npm and examples of the main methods, such as GET and POST. In addition, we discussed an example of interacting with a web scraping API using POST requests. We also provided examples of other methods, such as PUT and DELETE.

An important aspect of the article is handling Fetch API responses. We cover methods for handling response statuses, different response formats (JSON and text), and working with response headers. Finally, we provide practical tips, such as logging HTTP requests for debugging and monitoring, and examples of sending files using Fetch API.

Valentina Skakun

I'm a technical writer who believes that data parsing can help in getting and analyzing data. I'll tell you what parsing is and how to use it.