How-To Geek

How To Compile and Install from Source on Ubuntu

Ubuntu and other Linux distributions have extensive package repositories to save you the trouble of compiling anything yourself. Still, sometimes you’ll find an obscure application or a new version of a program that you’ll have to compile from source.

You don’t have to be a programmer to build a program from source and install it on your system; you only have to know the basics. With just a few commands, you can build from source like a pro.

Installing the Required Software

Installing the build-essential package in Ubuntu’s package repositories automatically installs the basic software you’ll need to compile from source, like the GCC compiler and other utilities. Install it by running the following command in a terminal:

sudo apt-get install build-essential

Type Y and press Enter to confirm installation when prompted.

Getting a Source Package

Now you’ll need your desired application’s source code. These packages are usually in compressed files with the .tar.gz or .tar.bz2 file extensions.

As an example, let’s try compiling Pidgin from source — maybe there’s a newer version that hasn’t been packaged yet and we want it now. Locate the program’s .tar.gz or .tar.bz2 file and save it to your computer.

A .tar.gz or .tar.bz2 file is like a .zip file. To use it, we’ll have to extract its contents.

Use this command to extract a .tar.gz file:

tar -xzvf file.tar.gz

Or use this command to extract a .tar.bz2 file:

tar -xjvf file.tar.bz2

You’ll end up with a directory with the same name as your source code package. Use the cd command to enter it.
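To make the extract-and-enter step concrete, here’s a self-contained sketch — the tarball is created on the spot, and pidgin-2.10.0 is a made-up package name and version:

```shell
# Build a stand-in source tarball so this example runs anywhere
# (pidgin-2.10.0 is a hypothetical package name and version)
mkdir -p pidgin-2.10.0
echo 'demo' > pidgin-2.10.0/README
tar -czf pidgin-2.10.0.tar.gz pidgin-2.10.0
rm -r pidgin-2.10.0              # pretend we only downloaded the tarball

tar -xzvf pidgin-2.10.0.tar.gz   # extract; tar recreates the directory
cd pidgin-2.10.0                 # enter it with cd
ls                               # README
```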

Resolving Dependencies

Once you’re in the extracted directory, run the following command:

./configure
(Note that some applications may not use ./configure. Check the “README” or “INSTALL” file in the application’s extracted folder for more specific instructions.)

(The ./ part tells the Bash shell to look inside the current directory for the “configure” file and run it. If you omitted the ./, Bash would look for a program named “configure” in system directories like /bin and /usr/bin.)

The ./configure command checks your system for the software required to build the program.
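To illustrate the kind of check configure performs, here’s a toy stand-in script — it isn’t from any real project, just a minimal sketch of the idea:

```shell
# Minimal mock of a configure script: verify a needed tool exists
cat > configure <<'EOF'
#!/bin/sh
printf 'checking for tar... '
if command -v tar >/dev/null 2>&1; then
    echo yes
else
    echo no
    echo 'configure: error: tar is required' >&2
    exit 1
fi
echo 'configure: creating Makefile'
EOF
chmod +x configure
./configure
```

Real configure scripts run hundreds of such checks, then generate a Makefile tailored to your system.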

Unless you’re lucky (or already have many of the required packages on your system), you’ll receive error messages indicating that you need to install certain packages. For example, ./configure might complain that the intltool scripts aren’t present on your system. We can install them with the following command:

sudo apt-get install intltool

After installing the required software, run the ./configure command again. If you need to install additional software, repeat this process with the sudo apt-get install command until ./configure completes successfully. Not every required package will have the exact name you see in the error message — you may need to Google the error message to determine the required packages.

If an older version of the program you’re trying to compile is already in Ubuntu’s software repositories, you can cheat with the sudo apt-get build-dep command. For example, if I run sudo apt-get build-dep pidgin, apt-get will automatically download and install all the dependencies I’ll need to compile Pidgin. You’ll notice that many of the packages you need end in -dev.

Once ./configure completes successfully, you’re ready to compile and install the package.

Compiling and Installing

Use the following command to compile the program:

make
This process may take some time, depending on your system and the size of the program. If ./configure completed successfully, make shouldn’t have any problems. You’ll see lines of text scroll by as the program compiles.

After this command finishes, the program is successfully compiled — but it’s not installed. Use the following command to install it to your system:

sudo make install

The program will probably be installed under /usr/local on your system. /usr/local/bin is part of your system’s PATH, which means you can just type “pidgin” into a terminal to launch Pidgin with no fuss.
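The PATH lookup can be demonstrated with a throwaway script — myprog and the demo-bin directory are made-up names for illustration:

```shell
# Install a script into a directory, put that directory on PATH,
# then run the script by name -- the same lookup that finds pidgin
mkdir -p "$HOME/demo-bin"
printf '#!/bin/sh\necho it works\n' > "$HOME/demo-bin/myprog"
chmod +x "$HOME/demo-bin/myprog"
PATH="$HOME/demo-bin:$PATH"
myprog      # it works
```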

Don’t delete the program’s directory if you want to uninstall it later — you can run the following command from the directory to remove the program from your system:

sudo make uninstall

Programs you install this way won’t be automatically updated by Ubuntu’s Update Manager, even if they contain security vulnerabilities. Unless you require a specific application or version that isn’t in Ubuntu’s software repositories, it’s a good idea to stick with your distribution’s official packages.

There are a lot of advanced tricks we haven’t covered here — but, hopefully, the process of compiling your own Linux software isn’t as scary anymore.

Chris Hoffman is a technology writer and all-around computer geek. He's as at home using the Linux terminal as he is digging into the Windows registry. Connect with him on Google+.

  • Published 02/13/12

Comments (17)

  1. nova1

    good one! must bookmark!

  2. TheFu

    You don’t have to use a single tar command to decompress the files.
    $ gunzip file.tgz
    $ tar xvf file.tar

    will have the same effect.

    After you de-tar the package, please read the README!!!!!!
    It will reveal secrets to using the software. Often there is an INSTALL file too. Read the INSTALL file to learn which steps are needed to compile and install the tool.

    Not all source code programs use ./configure.

    May the dependencies you are missing be short.

  3. Robert Benzing

    I’ll just bet this won’t work on my Ubuntu 11.04? I’m the Root owner. Only I’m Not?? I have a 750 HD (That’s what they call it) That had Windows Vista on it. It crashed. I can’t do anything with it.

    So I plug in my Iomega HD USB. Ubuntu tells me I cannot transfer,Copy,Drag and drop. Or go to the Bath room? These drives do not have Read write privileges? I’m not the Root owner? Well who am I? The Janitor?

    I have spent five days trying to and reading all the instructions on how to use rw permissions. But that doesn’t work either?

    If there is anyone who can help me, Get the items of off the 750 Drive to the Iomega Drive? Please Email me.

    Thank You,

  4. Kris Dekeyser

    I do install software from source on a regular basis. The stow utility has become invaluable for me to install and test different versions without breaking my system. Stow is typically not installed, but can be installed with apt-get or can be downloaded from the GNU FTP servers and built as above. It’s even possible to stow stow itself.

    The way I do things is like this:

    I download and unpack the package in a subdir of /usr/local/src. If you unpack the file it typically will unpack in a directory containing the package name and version number, which is exactly what I want. I build the tool there as normal, with the exception that I add one option to the configure command: ‘--prefix=/usr/local/depot/<package>/<version>’. make install will then create the package in the depot subdirectory.

    Once installed I run ‘stow <package>/<version>’ in the depot directory and that will create soft-links in /usr/local/bin, /usr/local/lib, etc. To uninstall I run ‘stow -D <package>/<version>’ and all the softlinks are gone. There are many web pages describing stow usage.

    Stow also detects if the package conflicts with another stowed package and refuses to install until you solve the conflict. Once you have different versions of a package installed it is easy to switch versions using ‘stow’ and ‘stow -D’.

    This also works if you do not have root access to your linux machine. You can simply install all the tools you need in your home directory. You simply replace /usr/local with $HOME/app and make sure you add $HOME/app/bin to your path, $HOME/app/lib to LD_LIBRARY_PATH and $HOME/app/man to MANPATH.

  5. Chris Hoffman

    @The Fu

    Great advice. Do read the README and INSTALL files for more information.

    This article could be an entire book if it went into all the variations — this is just to get users up and running with the basic, common stuff.

    @Robert Benzing

    That’s odd — generally Ubuntu gives users read/write access to external hard drives.

    Here’s a shortcut you can try. It’s not the proper way to do this, but it’s the fastest. Run this command in a terminal:

    gksu nautilus

    This should give you a file browser window with full root permissions. Not best practice, but it should work.

  6. Forensic Penguin2

    Heh! I saved this article to Evernote, just to have it when I get into forensics.

  7. champy

    It’s good for newbies

  8. dbam

    I really don’t like this way of ‘make install’, because I have to keep that directory if I want to purge the files installed. I have found a better way: installing and using ‘checkinstall’ – it creates (and optionally installs) Debian packages. That way, it’s up to dpkg to keep track of dependencies and purging files.

    Instead of invoking “sudo make install” type:

    sudo checkinstall


  9. Chris Hoffman

    @dbam

    I’ve used sudo checkinstall in the past. It’s great when it works, but I’ve encountered problems with some software. It’s unofficial and unsupported — ./configure, make and make install works everywhere, but checkinstall might not.

  10. Freddy

    Hey, I’m getting an error:

    checking for GLIB… no
    configure: error:

    You must have GLib 2.16.0 or newer development headers installed to build.

    If you have these installed already you may need to install pkg-config so
    I can find them.

    I tried to apt-get install glib and pkg-config, but no success… any ideas? Thanks

  11. Chris Hoffman

    @Freddy

    Have you tried installing the glib-dev package? Probably “libglib2.0-dev”, to be more precise?

  12. dbam

    @Chris Hoffman

    Hey Chris, thank You for the advice! I didn’t know about checkinstall being unofficial…

    Any alternatives for managing applications compiled/installed from source with easy remove/purge function?

    I really find keeping the source/build directory just to be able to uninstall later quite annoying…


  13. Chris Hoffman

    @dbam

    I would definitely use checkinstall if it works for you. In the past, I’ve encountered issues where checkinstall refuses to create the DEB package, but if it does, you’re probably good to go.

    Checkinstall is also something that won’t go with you from distribution to distribution, whereas make install is the same everywhere, really.

  14. M Henri Day

    I’m still running Natty on my main computer as I vastly prefer GNOME2 to the other alternatives, while testing some of these latter (xfce, Cinnamon, etc) on a couple of other boxes running Oneiric. One of the things I should very much like to do on the main box is upgrade to the latest Linux kernel, which of course, would involve having to compile it. Frankly, I’ve been too chicken to try, as I depend on this computer (a home build with a Gigabyte GA-990FXA-UD3 mainboard, an AMD Phenom II X4 955 processor, and 16GB Corsair XMS3 Vengeance DDR3 RAM), but I’m strongly tempted to head over to the Linux Kernel Archives, download the latest stable version, linux-3.2.7.tar.bz2, and install it on 11.04. I’ve already installed auto-apt on the system, which should help if I run into any dependency problems, but I’m still hesitating. Would upgrading the kernel in this way bring any advantages in terms of performance, and do I have a sporting chance of succeeding by following the instructions above ?…


  15. Chris Hoffman

    @M Henri Day

    Compiling a new kernel would involve all sorts of potential headaches. The kernel compilation process — while I haven’t gone through it in some time — at least used to be more complicated than the instructions you’ll find in this article.

  16. M Henri Day

    Thanks for your courteous reply, Chris ! «Up Ubuntu» has a brief article showing how to install a PPA which after a dist-upgrade will place the 3.2.0-17 kernel on one’s box(es). Not the very latest, but more recent than the kernel which comes with Oneiric. It seems to work very well on my older boxes….

    Have you considered writing an article on how to convince a kernel to recognise a MAC address when after an update it refuses to do so ? This is precisely what happened to me when I decided to upgrade the machine described above from Natty to Oneiric – I lost my connexion to the internet and nothing I’ve been able to figure out has enabled me to get the network setting for the 3.0.x kernel to recognise my MAC address ; the network doesn’t recognise that a wired connexion exists, and when I attempt to edit it, it simply doesn’t «take» (the machine is an Ubuntu/Win7 dual-boot and Win7 wasn’t of course, affected at all by the upgrade, so determining the MAC address was no problem, as it is platform independent, save that Windows uses dashes where Ubuntu uses colons between each pair of the six sets of two hexadecimal digits). Fortunately for me, there’s a work around ; i e, I can choose the alternative to boot earlier kernel versions from the GRUB menu, which means that I can boot the 2.6.38-13-generic x86_64 kernel and recover my internet connexion. But I didn’t upgrade to Oneiric in order to continue to use the default Natty kernel, so I’d greatly appreciate any suggestions which might help me resolve this problem. I’ve considered installing the ppa described above, but am worried about what would happen if, after upgrading to the 3.2.0 kernel, the MAC address still wasn’t recognised and, moreover, I lost access to the 2.6.38-13-generic x86_64 kernel. That would render Ubuntu useless to me on this, my best machine, and given that I vastly prefer Ubuntu to Windows, I really would hate to see that happen….


  17. Chris Hoffman

    Wow, that sounds like an obscure bug I’ve never heard of — probably a problem with the kernel on your specific hardware — so I’m not sure how to help there…
