- `curl` is commonly used for web scraping and interacting with websites
- can both send and receive data
- supports multiple protocols such as HTTP, FTP, and SCP
- dumps the contents of a URL to stdout by default
- `curl -L https://google.com` outputs the HTML to stdout and follows redirects (`-L`)
- can scrape APIs
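A quick illustration of the points above. To keep it network-free, this sketch uses curl's `file://` protocol support against a throwaway file; the `/tmp/curl_demo.txt` path and its contents are made up for the example:

```shell
# curl speaks many protocols; file:// reads a local file,
# which makes for a self-contained demonstration
echo "hello from curl" > /tmp/curl_demo.txt

# -s suppresses the progress meter and prints only the body to stdout
curl -s file:///tmp/curl_demo.txt
# prints "hello from curl"

# for HTTP URLs, add -L to follow redirects,
# e.g.: curl -sL https://example.com
```

The same `-s` flag is handy when scraping APIs, since it keeps progress noise out of piped output.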
- `wget` ("web get")
- generally used for retrieving files from the web
- can also send data with POST requests (`--post-data`, `--post-file`)
- `wget https://xxx.com/abc.jpg -O file_name.jpg` saves the image hosted at that URL to `file_name.jpg`
- able to mirror an entire website with `--mirror` (recursive download with timestamping)
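A network-free sketch of the `-O` behavior, using a throwaway local HTTP server (the `/tmp/wget_demo` directory, the `sample` content, and port 8913 are arbitrary choices for the example):

```shell
# Serve a scratch directory locally so the example is self-contained
mkdir -p /tmp/wget_demo
echo "sample" > /tmp/wget_demo/index.html
python3 -m http.server 8913 --directory /tmp/wget_demo >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1

# Download the file, saving it under a chosen name with -O;
# -q suppresses wget's progress output
cd /tmp/wget_demo
wget -q http://127.0.0.1:8913/index.html -O saved.html
cat saved.html
# prints "sample"

kill $SERVER_PID
```

For mirroring, `wget --mirror <url>` would replace the single-file download above, pulling the site recursively into a local directory tree.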