Linux Curl Command
Linux curl command is used to download or upload data to a server via supported protocols such as HTTP, HTTPS, FTP, SFTP, TFTP, IMAP, POP3, SCP, etc. It is a command-line utility designed to work without user interaction.
Transferring data from one place to another is one of the most common tasks a computer system performs. Many GUI tools are available for data transfer, but on the command line the task can be less straightforward. The curl utility lets us transfer data directly from the command line.
The basic syntax for using curl is as follows:
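The general synopsis, where square brackets mark optional parts, is:

```shell
curl [options] [URL...]
```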
In the above syntax, URL is a general, protocol-dependent URL of the resource to transfer. We can specify multiple URLs in a single command as follows:
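For example, several URLs can be listed in one invocation (the URLs here are placeholders), and curl fetches them in turn:

```shell
curl http://site1.example.com/ http://site2.example.com/
```

curl also supports brace globbing for related URLs, e.g. `curl "http://{one,two}.example.com/"`.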
The curl command supports the following command-line options:
--abstract-unix-socket <path>: It is used to connect through an abstract Unix domain socket instead of a TCP/IP network connection.
--anyauth: It tells curl to figure out the authentication method by itself and use the most secure one the server supports. The related options --basic, --digest, --ntlm, and --negotiate can be used instead to force a specific authentication method.
-a, --append: It is used when uploading files. It appends to the target file rather than overwriting it. If the given file does not exist on the server, it will be created.
--basic: It specifies the use of HTTP Basic authentication. This is the default for curl, so the option is mainly useful for overriding a previously set authentication option.
--cacert <file>: It specifies a certificate file to use when verifying the peer. The file may contain several CA certificates, all of which must be in PEM format, the standard format for certificates.
--capath <dir>: It specifies a directory of CA certificates to use when verifying the peer. Multiple paths can be provided separated by a colon (:), such as "path1:path2:path3". All certificates must be in PEM format.
--cert-status: It is used to verify the status of the server certificate, using the TLS Certificate Status Request extension (OCSP stapling).
--cert-type <type>: It specifies the type of the provided client certificate: PEM, DER, or ENG. The default is PEM. If the option is given multiple times, curl uses the last value.
-E, --cert <certificate[:password]>: It specifies the client certificate file to use when getting a file via an SSL-based protocol such as HTTPS or FTPS.
--ciphers <list of ciphers>: It is used to select which ciphers to use in the connection.
--compressed-ssh: It is used to enable built-in SSH compression. This is a request to the server, which may or may not accept it.
--compressed: It is used to request a compressed response using one of the algorithms curl supports, and save the uncompressed document. If the server replies with an unsupported encoding, curl reports an error.
-K, --config <file>: It specifies a text file from which curl reads its arguments, as if they were given on the command line.
--connect-timeout <seconds>: It specifies the maximum time, in seconds, that curl's connection attempt is allowed to take.
--connect-to <HOST1:PORT1:HOST2:PORT2>: For a request to HOST1:PORT1, it makes curl connect to HOST2:PORT2 instead. This option is handy for directing requests at a specific server.
-C, --continue-at <offset>: It is used to continue or resume a previous file transfer at the given offset.
-c, --cookie-jar <filename>: It specifies the file to which curl writes all cookies after a completed operation.
-b, --cookie <data>: It is used to send the given data to the HTTP server in the Cookie header.
--create-dirs: Used in conjunction with the '-o' option, it creates the required local directory hierarchy.
--crlf (FTP SMTP): It is used to convert LF to CRLF in uploads. It is handy for MVS (OS/390).
--crlfile <file>: It specifies a file (in PEM format) containing a Certificate Revocation List.
--data-ascii <data>: It is an alias for the '-d' option.
--delegation <LEVEL>: It sets LEVEL to tell the server what it is allowed to delegate when it comes to user credentials.
--digest: It is used to enable HTTP Digest authentication.
-q, --disable: If used as the first argument, it makes curl ignore the curlrc config file.
--dns-interface <interface>: It tells curl which interface to send outgoing DNS requests through.
--dns-servers <addresses>: It specifies the DNS servers to use instead of the system default.
-f, --fail: It makes curl fail silently (with no output at all) on server errors.
-F, --form <name=content>: It is used to emulate a filled-in form submitted by a user.
-P, --ftp-port <address>: It is used to reverse the default initiator/listener roles when connecting with FTP.
--ftp-ssl-ccc-mode <active/passive>: It is used to set the CCC (Clear Command Channel) mode.
-G, --get: It makes curl send the data specified with the '-d' option in an HTTP GET request instead of a POST request.
-h, --help: It shows the help manual, with a brief description of usage and the supported options.
-0, --http1.0: It specifies the use of HTTP version 1.0.
--ignore-content-length: It is used to ignore the Content-Length header.
-i, --include: It is used to include the HTTP response headers in the output.
-4, --ipv4: It is used to resolve names to IPv4 addresses.
-6, --ipv6: It is used to resolve names to IPv6 addresses.
Installation of the curl Command
The curl command comes preinstalled with most Linux distributions. If your system does not have curl by default, you need to install it manually. To install curl, execute the following commands:
Update the system by executing the following commands:
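On a Debian- or Ubuntu-based system (an assumption; other distributions use a different package manager), the package lists are updated with:

```shell
sudo apt update
```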
Now, install the curl utility by executing the below command:
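Again assuming an apt-based distribution:

```shell
sudo apt install curl
```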
Verify the installation by executing the below command:
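For example:

```shell
curl --version
```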
The above command will display the installed version of the curl command.
Fetch the content of the specified URL
To fetch the content of any specific URL, execute the curl command, followed by the URL. Consider the below command:
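For example (the URL below is illustrative; any reachable URL works the same way):

```shell
curl https://www.tutoraspire.com/
```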
The above command will fetch the page data of the specified page. Consider the below snap of the output:
From the above output, we can see that the page data of the given URL is being fetched. To stop the execution, press CTRL+C.
Save data in a Specific File
To save the data in a specific file, pass the '-o' option followed by the directory, file name, and URL as follows:
Consider the below command:
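A command matching the output described below might look like this (the URL is a placeholder; the output path is the one used in this example):

```shell
curl -o /home/tutoraspire/Documents/linux.html https://www.tutoraspire.com/
```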
The above command will save the page data in the 'linux.html' file under the "/home/tutoraspire/Documents/" directory. Consider the below output:
From the above output, we can see the total amount of downloaded data, received data, average speed, and some other statistics about the transfer.
To verify the downloaded data, open the file by executing the cat command:
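For example, using the path from the previous step:

```shell
cat /home/tutoraspire/Documents/linux.html
```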
Consider the below snap of output:
Download a File From the Web
One of the most useful applications of curl is downloading a file from the web. To do so, copy the download link and pass it to the curl command; other arguments can be added to make the request more specific. For example, to download the latest version of Ubuntu, copy the download link from Ubuntu's official website and pass it to curl as follows:
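A sketch of such a command; the ISO URL below is illustrative and should be replaced with the current link from Ubuntu's download page:

```shell
curl -o ubuntu.iso https://releases.ubuntu.com/20.04/ubuntu-20.04.6-desktop-amd64.iso
```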
The above command will download Ubuntu 20.04 to the specified location. Provide the proper file extension; otherwise, the file will be saved under a different name and format. Consider the below output:
From the above output, we can see the ubuntu.iso file being downloaded, along with the download time, file size, download speed, and other statistics. To stop the execution at any time, press CTRL+C.
Resume the Interrupted Downloads
A download may be interrupted for some reason, and curl can resume it. To resume an interrupted transfer, pass the '-C' option to the curl command as follows:
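Continuing the ubuntu.iso example (the URL is again a placeholder), '-C -' tells curl to work out the resume offset automatically from the existing partial file:

```shell
curl -C - -o ubuntu.iso https://releases.ubuntu.com/20.04/ubuntu-20.04.6-desktop-amd64.iso
```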
The above command will resume the download of the specified URL.
Download Multiple Files
To download multiple files, specify the multiple URLs separated by a space as follows:
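For example (placeholder URLs); note that '-O' saves each file under its remote name and is given once per URL:

```shell
curl -O http://site.example.com/file1.zip -O http://site.example.com/file2.zip
```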
The above command will download the data from both URLs, one after the other.
Query HTTP Headers
The HTTP headers carry additional information about the response; querying them shows what the web server sends along with the page. To query the HTTP headers of a website, execute the command with the '-I' option as follows:
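For example, against the same illustrative site:

```shell
curl -I https://www.tutoraspire.com/
```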
The above command will produce the below output: