
Compiling Software Applications

Compiling, in the context of computer programming, is the conversion of the human-written source code that constitutes a program into binary code that a microprocessor can execute; the conversion is performed by a program known as a compiler. On GNU/Linux systems this process is typically driven by the make utility. Before issuing the make command, configuration options (arguments) are usually passed by running the configure script (if it exists, which it normally does). If there is no configure script, this usually means that no configuration options are available and one may proceed directly to issuing the make command. It may also mean that the script needs to be generated first by running autogen.sh.

Why would one want to compile or build programs? Compiling within one's chosen operating system produces software that is optimized for that system. Moreover, compiling for oneself removes the need to rely on others.

Compiling the Linux kernel is dealt with elsewhere. How to compile software applications now follows.

To be able to compile, one needs to set up the operating system with the correct environment for compiling. Read this article about compiling first, and then read Appendix 1 below. Then continue with the next paragraph.

Acquire the source code (distribution) file of the desired program in the tar.gz (.tgz), tar.bz2, or tar.xz (.txz) file format (such a file being known as a tarball). The different file formats simply denote the different types of compression used in creating the file.

Using Tor as an example, work through the following steps at the command-line interface.

Enter the directory to which the source file was downloaded; in the above example it is /mnt/home/downloads:
Unpack (extract) the source code file (the source code is extracted to a directory called tor-0.2.1.30):
Enter this source code directory:
Always read the README and INSTALL text files for any specific or special instructions.
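A sketch of these steps for the Tor example (the download directory and version number are those used above; the commands assume the tarball has already been downloaded):

```shell
# Enter the directory holding the downloaded tarball
cd /mnt/home/downloads
# Unpack the source code; this creates the directory tor-0.2.1.30/
tar -xzf tor-0.2.1.30.tar.gz
# Enter the source directory and read the documentation first
cd tor-0.2.1.30
less README INSTALL
```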

If a configure script is available (look inside the source directory) features can be enabled or disabled. All the available configure options/arguments are listed by executing the following command (always do this since the options available can change with each new software version):
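For the Tor example, the listing command is (run from inside the source directory):

```shell
# Page through every available configure option
./configure --help | less
```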
Execute the relevant configure script, for 32-bit or 64-bit operating systems, with appropriate options/arguments; this will generate a valid Makefile within the source directory (only use -pipe if at least 512 MB of RAM is available):
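A typical invocation might look like this (the CFLAGS shown are illustrative assumptions, not flags required by Tor; -march/-mtune should match your CPU):

```shell
# 32-bit example; on a 64-bit system use e.g. -march=x86-64
CFLAGS="-O2 -march=i486 -mtune=i686 -pipe" ./configure --prefix=/usr
```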
If there are errors and the script does not run to completion, dependencies are missing and must be installed before running configure again. For example, Tor depends on openssl, which in turn depends on libevent; packages containing the development header files of these programs must therefore be installed before the configure script will run successfully, i.e. to completion.

Then
  1. build (compile) the binary distribution of the program:
  2. check the result:
  3. install the program:
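In commands (a sketch; not every source package provides a check target, and some call it 'make test'):

```shell
make           # 1. build the program
make check     # 2. run the package's self-tests, if provided
make install   # 3. install the program (usually as root)
```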

The process is now complete (but this is not the whole picture). The program has been installed and may be used if you know where the executable file is located (usually /usr/bin). It can be found using the which command, e.g.:
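For the Tor example (the path shown is the usual default for --prefix=/usr; it may differ):

```shell
# Print the full path of the installed executable
which tor    # typically /usr/bin/tor
```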
The above explanation is a general outline. Not all source code will make use of the same Makefile configuration arguments (options) as given above, e.g. view the configuration options used to compile ffmpeg via the command-line interface:
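One common way to view them is via ffmpeg's version banner, which repeats the configure options it was built with (assuming ffmpeg is installed):

```shell
# The 'configuration:' line lists the options used at build time
ffmpeg -version | grep configuration
```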
For an idea of how complex compiling can become examine this midori case.

Some extra notes: ideally one should proceed to create a PET software package from the compilation process. A PET is a special package, created manually, that any user can subsequently use to install the pre-compiled software automatically, together with any additional supporting files such as menu entries and, optionally, other customizations. Most users rely on these PET packages for adding software to their repository. After compiling, some manual tweaking is often required to get the software running successfully, and that is one reason PET packages are created: they contain those modifications. Without a guide such as this, it could take a hobbyist GNU+Linux user at least a year to become self-proficient at software package creation.

Compile, install, and create the corresponding PET software package

First configure and compile:
N.B. If the package is intended for the widest possible distribution, use -mtune=generic instead of -mtune=native.
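A sketch of this first stage (the flags are illustrative; note the -mtune choice per the N.B. above):

```shell
# Configure with chosen flags, then build
CFLAGS="-O2 -pipe -mtune=native" ./configure --prefix=/usr
make
```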

Then install using either method (a) or method (b), and use the dir2pet script, which converts a directory into a PET file, e.g. using tor-0.2.1.30 compiled within Wary:
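The two methods are not shown above; a common pattern in Puppy is sketched below (new2dir is Puppy's install-capture wrapper, and the staging directory name with its date and "w" suffixes is illustrative):

```shell
# (a) Puppy's new2dir wrapper captures the installed files into a directory:
new2dir make install
# (b) or use plain make with a DESTDIR staging directory:
make DESTDIR=/root/tor-0.2.1.30-20110101-w install
# then convert the resulting directory into a PET package:
dir2pet tor-0.2.1.30-20110101-w
```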
The name given to the directory is arbitrary and user-defined, but it should be appropriate and comprehensible, since the resultant PET file will have the same name. For example, the above directory has been given a date suffix denoting the date of creation of the software package, and the suffix w denoting that it was compiled within Puppy Wary. Normally, for software applications, one would create additional files to place inside the parent directory before executing dir2pet, so that menu entries, and icons for the menu and desktop, are made available: see SoftwarePackageCreation
To un-install a PET software package use the Puppy Package Manager: Menu > Setup > Puppy Package Manager

Now try compiling these simple cases: libvpx; xvid (more demanding; must read INSTALL file)

See also src2pkg and Pcompile.

Appendix 1

To convert Puppy into a complete compiler environment it is necessary to acquire and install a special SFS file that corresponds with the version of Puppy Linux that will be used for compiling. The file has "devx" in its name.
For live CD/DVD and frugal Puppy installations
For full Puppy installations
http://puppylinux.com/development/compileapps.htm

Puppy 1 versions used usr_devx.sfs.

Appendix 2

Test 1
The devx file is installed correctly when entering the cc command at the command-line interface displays:
# cc
		cc: no input files

Test 2
Save the following as test.c.
If using Geany, set the file type to C and try a test compile first.
/* Example C program */
#include <stdio.h>

int main(void)
{
    int i;
    for (i = 0; i < 50000; i++)
    {
        printf("%d", i);
        printf(" Puppy is Great\n");
    }
    return 0;
}

gcc test.c -o test && ./test

http://members.cox.net/midian/articles/ansic1.htm

Appendix 3

Compiling is the process by which a program written in a human-readable format is converted to a computer-executable format. Programs can be written in assembly language, which is almost "perfect" in the eyes of a computer, being very close to binary code.

lda 02
sta #c000
lda #c001
cmp #c000
bne d000

But this is difficult to understand. In C it might look like this:
a=2;
b=c;
if (a != b){
my_subroutine();
}

This is easier to understand (a high-level language): a simple comparison of two numbers and, depending on the result (if they are different), a sub-program is executed. As computers do not understand "if" and do not have variables (C) but only a stack (assembler), the code must be translated from C to assembler, or better still directly to binary code (assembler itself is not binary, but a very simple form of "high-level language" very close to binary code). This translation is "compiling". If you open the resulting "code" in a hex editor, you will only see binary code: values from 0 to 255, "wildly mixed".

Appendix 4

http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html
http://www.gentoo.org/doc/en/gcc-optimization.xml

-O2 option: smaller binary, faster to load from disk, less RAM usage, better cache usage in systems with moderate processor cache, higher reliability.
-O3 option: slight to moderate speed increase for a few applications; higher memory (RAM, cache, and disk) usage; longer load times from disk; very occasional compilation problems. Expect less than 1% improvement in *overall* execution speed. It can produce faster code, but the applications that benefit are very few, usually image and video decoders and the like, while the side effects, such as larger binary size, affect everything: larger binaries use more memory, load more slowly, and cause more disc I/O. Compiling a whole system with -O3 therefore makes a few applications run slightly faster at the expense of the rest of the system running slightly slower and becoming less responsive. Linux caches regularly used programs and files in RAM (the "cache" figure when you run free -m on the command line), so a program may only need loading from the hard disk once, depending on the program and computer usage; this is less of a problem on systems with large amounts of RAM. A large CPU cache also helps, as it is better suited to larger binaries, so a speed-up is more likely. In short, a high-end system suffers less from the problems associated with -O3.



Legal Disclaimer: This documentation is produced in good faith; by using it you assume full responsibility for your actions.

Copyright Policy: Any and all original material accessible from this page may be freely distributed at will under this Creative Commons Attribution License, unless otherwise indicated.

