Most of the time, when I download something it's a file archive of some kind - usually a tarball or a zip file. This could be source code for an app that isn't included in Gentoo's Portage tree, documentation for an internal corporate app, or even something as mundane as a new WordPress installation. The traditional way to download and untar it in the terminal is something like this:

wget http://wordpress.org/latest.tar.gz
tar xvzf latest.tar.gz
rm latest.tar.gz

Or perhaps the more compact form:

wget http://wordpress.org/latest.tar.gz && tar xvzf latest.tar.gz && rm latest.tar.gz

Either way is a bit clumsy. For such a simple operation, a powerful shell like bash ought to offer a slicker approach. Well, thanks to a useful little command, "curl", we can accomplish the mess above in just one piped statement:

curl http://wordpress.org/latest.tar.gz | tar xvz

No temporary files to get rid of, no messing around with ampersands. In short, a highly compact, efficient command. In fact, at least in theory, the curl method can be faster than the wget/tar/rm chain: the pipe buffers the data in RAM, whereas wget has to write the whole archive out to disk and tar (via the -f switch) has to read it back in.

Incidentally, tar's -v option (the way we're using it in all the above examples) prints each file name as it's extracted. This can clutter up curl's progress meter showing download status. We can silence tar by simply dropping the -v:

curl http://wordpress.org/latest.tar.gz | tar xz
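
Alternatively, if you'd rather keep tar's file listing and quiet curl instead, curl's -s (silent) switch suppresses the progress meter (add -S alongside it if you still want error messages shown):

curl -s http://wordpress.org/latest.tar.gz | tar xvz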

And that's all there is to it!