
To properly use gcc, the standard C compiler for Linux, you need to learn its command-line options. In addition, gcc extends the C language. Even if you intend to write your source code in the ANSI standard of that language, there are some gcc extensions you need to know to understand the Linux header files.

Most of the command-line options are the same as those used by other C compilers; for some options there are no standards at all. In this chapter, we cover the most important options used in everyday programming.

Trying to conform to the ISO C standard is useful, but because C is a low-level language, there are situations where the standard features are not expressive enough. There are two areas where gcc extensions are widely used: interacting with assembly code (these topics are covered at http://www.delorie.com/djgpp/doc/brennan/) and building shared libraries (see Chapter 8). Because header files are part of shared libraries, some extensions also appear in system header files.

Of course, there are many more extensions that are useful in other kinds of programming and that can really help with coding. More information on these extensions can be found in the gcc Texinfo documentation.

5.1. gcc options

gcc accepts many command options. Fortunately, the set of options you really need to know is not that large, and in this chapter we will look at them.

Most of the options are the same as or similar to those of other compilers. gcc includes extensive documentation of its options, available via info gcc (the gcc man page also gives this information, but the man pages are not updated as often as the Texinfo documentation).

-o filename Specifies the name of the output file. This is usually unnecessary when compiling to an object file, since the default is to substitute filename.o for filename.c. However, if you create an executable, by default (for historical reasons) it is created under the name a.out. This option is also useful when you want to place the output file in a different directory.
-c Compiles without linking the source files specified on the command line; an object file is created for each source file. When make is used, the gcc compiler is usually invoked once per object file, so that when an error occurs it is easy to tell which file failed to compile. If you are typing commands by hand, however, it is common to specify multiple files in a single gcc call. If ambiguity could arise when several files are named on the command line, it is better to specify only one file per invocation. For example, instead of gcc -c -o a.o a.c b.c, use gcc -c -o a.o a.c.
-Dfoo Defines preprocessor macros on the command line. You may need to escape characters that the shell treats as special. For example, when defining a string you must protect the " characters that delimit it. The two most common ways are '-Dfoo="bar"' and -Dfoo=\"bar\" . The first way works much better if there are spaces in the string, because the shell treats whitespace in a special way.
-I directory Adds a directory to the list of directories to search for include files.
-L directory Adds a directory to the list of directories searched for libraries. gcc prefers shared libraries over static ones unless otherwise specified.
-lfoo Links against the library libfoo. Unless otherwise specified, gcc prefers linking against shared libraries (libfoo.so) over static ones (libfoo.a). The linker searches for functions in all the listed libraries in the order in which they are listed; the search ends when all the required functions have been found.
-static Links against static libraries only. See Chapter 8.
-g, -ggdb Includes debugging information. The -g option makes gcc include standard debugging information; the -ggdb option makes it include a large amount of information that only the gdb debugger can understand.
If disk space is limited or you want to trade some functionality for link speed, use -g; in that case you may need to use a debugger other than gdb. For the most complete debugging, specify -ggdb, and gcc will prepare the most detailed information possible for gdb. Note that, unlike most compilers, gcc puts some debugging information even into optimized code. Tracing optimized code in a debugger can be tricky, because execution can jump around and skip parts of the code you expect to run; however, it does give you a good idea of how optimizing compilers change the way code is executed.
-O, -On Causes gcc to optimize the code. By default, gcc does only a small amount of optimization; with a number (n), it optimizes at the given level. The most common optimization level is 2; currently, the highest level in the standard version of gcc is 3. We recommend using -O2 or -O3; -O3 can increase the size of the application, so if that matters, try both. If memory and disk space are important to your application, you can also use the -Os option, which minimizes code size at the cost of execution speed. gcc expands built-in (inline) functions only when at least minimal optimization (-O) is enabled.
-ansi Supports, in C programs, the ANSI standard (X3.159-1989) or its ISO equivalent (ISO/IEC 9899:1990), commonly referred to as C89 or, less commonly, C90. Note that this does not provide complete conformance with the ANSI/ISO standard.
The -ansi option disables the gcc extensions that conflict with the ANSI/ISO standard. (Because these extensions are supported by many other C compilers, this is rarely a problem in practice.) It also defines the __STRICT_ANSI__ macro (described later in this book), which header files use to provide an ANSI/ISO-conforming environment.
-pedantic Displays all the warnings and errors required by the ANSI/ISO C language standard. This still does not provide complete conformance with the ANSI/ISO standard.
-Wall Enables all the gcc warnings that are generally useful. It does not enable options that are useful only in specific cases. This gives a level of scrutiny similar to running the lint source-code checker on your code. gcc also allows each compiler warning to be turned on and off individually; the gcc manual describes all the warnings in detail.
5.2. Header files
5.2.1. long long

The long long type denotes an integer at least as large as a long. On Intel x86 and other 32-bit platforms, long is 32 bits, while long long is 64 bits. On 64-bit platforms, pointers and long long take 64 bits, and long can take 32 or 64 bits depending on the platform. The long long type is standardized in C99 (ISO/IEC 9899:1999) and is a long-standing C extension provided by gcc.

5.2.2. Built-in Functions

Some parts of the Linux header files (particularly those that are specific to a particular system) make heavy use of built-in (inline) functions. They are as fast as macros (there is no function-call overhead) and provide all the type checking of a normal function call. Code that calls built-in functions must be compiled with at least minimal optimization (-O) enabled.

5.2.3. Alternative extended keywords

In gcc, each extended keyword (a keyword not defined by the ANSI/ISO standard) has two versions: the keyword itself, and the keyword surrounded on each side by two underscores. When the compiler is used in standard mode (usually because the -ansi option is given), the normal extended keywords are not recognized. So, for example, the attribute keyword in a header file should be written as __attribute__.

5.2.4. Attributes

The attribute extended keyword is used to pass to gcc more information about a function, variable, or declared type than ANSI/ISO C allows. For example, the aligned attribute tells gcc exactly how to align a variable or type; the packed attribute specifies that no padding is used; noreturn specifies that a function never returns, which lets gcc optimize better and avoid bogus warnings.

Function attributes are declared by adding them to the function declaration, for example:

void die_die_die(int, char*) __attribute__ ((__noreturn__));

An attribute declaration is placed between the closing parenthesis and the semicolon of a declaration and consists of the attribute keyword followed by the attributes in double parentheses. If there are multiple attributes, a comma-separated list is used:

int printm(char*, ...)
    __attribute__ ((const,
                    format(printf, 1, 2)));

This example declares that printm does not examine any values except its arguments and has no side effects relevant to code generation (const), and that gcc should check printm's arguments in the same way as printf() arguments: the first argument is the format string, and the second argument is the first substituted parameter (format).

Some attributes will be discussed as the material progresses (for example, during the description of building shared libraries in Chapter 8). Comprehensive information on attributes can be found in the gcc documentation in Texinfo format.

From time to time you may find yourself looking through the Linux header files. You will most likely find a number of constructs that are not ANSI/ISO-compliant. Some of them are worth looking into; all of the constructs covered in this book are described in more detail in the gcc documentation.


Now that you've learned a little about the C standards, let's look at the options the gcc compiler offers to ensure that your code conforms to the standard of the C variant you're writing in. There are three ways to ensure that your C code is standards-compliant and free of flaws: options that control which version of the standard you want to conform to, definitions that control header files, and warning options that trigger more stringent code checking.

gcc has a huge set of options, and here we will cover only those we consider the most important. A complete list of options can be found in the gcc online man pages. We will also briefly discuss some of the #define constants that can be used; these should normally be specified in your source code before any #include lines, or defined on the gcc command line. You may be surprised by the abundance of options for choosing which standard to use, instead of a simple flag forcing use of the current standard. The reason is that many older programs rely on historical compiler behavior and would require significant work to update to the latest standards. Rarely, if ever, do you want a compiler upgrade to break working code. As standards change, it is important to be able to work against a particular standard, even if it is not the most recent version.

Even if you're writing a small program for personal use, where standards compliance may not be all that important, it often makes sense to include additional gcc warnings to force the compiler to look for errors in your code before the program executes. This is always more efficient than stepping through the code in the debugger and wondering where the problem might be. The compiler has many options that go beyond simple standards checking, such as the ability to detect code that conforms to a standard but possibly has questionable semantics. For example, a program may have an execution order that allows a variable to be accessed before it is initialized.

If you are writing a program for shared use, whatever degree of conformance and level of compiler warnings you consider sufficient, it is well worth putting in a little more effort and getting your code to compile without any warnings at all. If you accept some warnings and get into the habit of ignoring them, one day a more serious warning may appear that you risk missing. If your code always compiles without warning messages, a new warning is bound to attract your attention. Compiling code without warnings is a good habit to adopt.

Compiler Options for Tracking Standards

-ansi is the most important standards-related option; it makes the compiler act according to the ISO C90 language standard. It disables certain gcc extensions that are incompatible with the standard, disables C++-style comments (//) in C programs, and enables the handling of ANSI trigraphs (three-character sequences). In addition, it defines the macro __STRICT_ANSI__, which disables some extensions in header files that are not compatible with the standard. The standard accepted by this option may change in future versions of the compiler.
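The effect on C++-style comments is easy to observe (the file name cxxcmt.c is arbitrary; this assumes a reasonably recent gcc, whose default mode is a GNU dialect that accepts //):

```shell
# cxxcmt.c uses a C++-style comment
cat > cxxcmt.c <<'EOF'
#include <stdio.h>
int main(void)
{
    // a C++-style comment
    printf("ok\n");
    return 0;
}
EOF
gcc -c cxxcmt.c && echo "default mode: accepted"
gcc -ansi -c cxxcmt.c 2>/dev/null || echo "-ansi: rejected, // is not C90"
```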

-std= - This option provides finer control over which standard to use, taking a parameter that specifies exactly which standard is required. The following are the main options available:

c89 - support the C89 standard;

iso9899:1999 - support the latest version of the ISO standard, C99;

gnu89 - support the C89 standard, but allow some GNU extensions and some C99 functionality. In version 4.2 of gcc, this option is the default.

Options for tracking standard in define directives

There are constants (#defines) that can be specified as options on the command line or as definitions in the program source code. We generally think of them as being set on the compiler command line.

__STRICT_ANSI__ - forces the ISO C standard to be used. Defined when the -ansi option is given on the compiler command line.

_POSIX_C_SOURCE=2 - enables functionality defined by IEEE Std 1003.1 and 1003.2. We will return to these standards later in this chapter.

_BSD_SOURCE - enables the functionality of BSD systems. If the BSD definitions conflict with POSIX definitions, the BSD definitions take precedence.

_GNU_SOURCE - enables a wide range of features, including GNU extensions. If these definitions conflict with POSIX definitions, the POSIX definitions take precedence.

Compiler options for outputting warnings

These options are passed to the compiler on the command line. Again, we will list only the main ones; a full list can be found in the gcc online help manual.

-pedantic is the most powerful cleanup option for C code. Besides enabling checking against the C standard, it disables some traditional C constructs that are forbidden by the standard and makes all GNU extensions illegal. This option should be used to make your C code as portable as possible. The disadvantage is that the compiler becomes very fussy about the cleanliness of your code, and sometimes you will have to rack your brains to get rid of the last few remaining warnings.

-Wformat - checks that the argument types of printf-family functions match the format string.

-Wparentheses - checks for the presence of parentheses, even where they are not strictly needed. This option is very useful for checking that complex structures are initialized as intended.

-Wswitch-default - checks for the presence of a default case in switch statements, which is generally considered good programming style.

-Wunused - checks for a variety of cases, such as static functions declared but not defined, unused parameters, and discarded results.

-Wall - enables most gcc warning types, including all the preceding -W options (only -pedantic is not covered). With this option it is easy to keep the program code clean.

Note

There are many more specialized warning options; see the gcc documentation for all the details. In general, we recommend using -Wall; it is a good compromise between checking that the program code is of high quality and having the compiler emit lots of trivial warnings that are difficult to reduce to zero.

GCC ships with every Linux distribution and is usually installed by default. GCC's interface is the standard compiler interface on the UNIX platform, rooted in the late 1960s and early 1970s: a command-line interface. Don't be put off; over the years this interaction mechanism has been honed close to the perfection possible for the approach, and working with GCC (with a few additional utilities and a good text editor) is easier than with many modern visual IDEs. The authors of the toolkit have tried to automate the process of compiling and building applications as much as possible. The user invokes the driver program gcc; it interprets the command-line arguments passed to it (options and file names) and, for each input file, runs the compiler for that file's programming language; then, if necessary, gcc automatically invokes the assembler and the linker.

Curiously, compilers are one of the few UNIX applications that care about file extensions. By the extension, GCC determines what kind of file it is dealing with and what must (or can) be done with it. C source files must have the extension .c, C++ source files .cpp, C header files .h, object files .o, and so on. If you use the wrong extension, gcc will not work as expected (if it agrees to do anything at all).

Let's move on to practice: we will write, compile, and execute a simple program. Without trying to be original, we create a C source file for our example program with the following content:

/* hello.c */

#include <stdio.h>

int main(void)
{
    printf("Hello World\n");
    return 0;
}

Now, in the directory containing hello.c, we issue the command:

$ gcc hello.c

After a few fractions of a second, the a.out file will appear in the directory:

$ls
a.out hello.c

This is the finished executable file of our program. By default, gcc gives the output executable the name a.out (once upon a time this name meant assembler output).

$file a.out
a.out: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, not stripped

Let's run the resulting program:

$ ./a.out
Hello World


Why is it necessary to specify the path explicitly in order to execute a file from the current directory? If the path to the executable file is not given explicitly, the shell, when interpreting commands, searches for the file in the directories listed in the PATH environment variable.

$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games

The directories in the list are separated by colons. When searching for files, the shell checks the directories in the order in which they are listed. By default, for security reasons, the current directory (.) is not included in the list; accordingly, the shell will not look for executable files in it.

Why is it not recommended to add . to PATH? It is believed that in a real multi-user system there will always be some bad actor who places, in a public directory, a malicious program whose executable name matches the name of some command often run by a local administrator with superuser rights... The plot succeeds if . comes at the beginning of the directory list.


The file utility displays information about the type (from the system's point of view) of the file passed on the command line; for some file types it displays various additional information about the file's contents.

$file hello.c
hello.c: ASCII C program text
$file annotation.doc
annotation.doc: CDF V2 Document, Little Endian, Os: Windows, Version 5.1, Code page: 1251, Author: MIH, Template: Normal.dot, Last Saved By: MIH, Revision Number: 83, Name of Creating Application: Microsoft Office Word, Total Editing Time: 09:37:00, Last Printed: Thu Jan 22 07:31:00 2009, Create Time/Date: Mon Jan 12 07:36:00 2009, Last Saved Time/Date: Thu Jan 22 07:34:00 2009, Number of Pages: 1, Number of Words: 3094, Number of Characters: 17637, Security: 0

That's really all the user needs to know to apply gcc successfully. :)

The name of the output executable file (as, indeed, of any other file generated by gcc) can be changed with the -o option:

$ gcc -o hello hello.c
$ls
hello hello.c
$ ./hello
Hello World


In our example, the main() function returns the seemingly unnecessary value 0. In UNIX-like systems, when a program terminates it is customary to return an integer to the shell: zero on success, anything else otherwise. The shell interpreter automatically assigns the returned value to the environment variable named ?. You can view its contents with the command echo $?:

$ ./hello
Hello World
$ echo $?
0

It was said above that gcc is a driver program designed to automate the compilation process. Let's see what actually happens as a result of executing the gcc hello.c command.

The compilation process can be divided into 4 main stages: preprocessor processing, actual compilation, assembly, linking (binding).

gcc options allow you to interrupt this process at any of these stages.

The preprocessor prepares the source file for compilation: it removes comments, inserts the contents of header files (the #include preprocessor directive), and expands macros (symbolic constants, the #define preprocessor directive).

Using the -E option, you can interrupt gcc before the later stages and view the contents of the file as processed by the preprocessor.

$ gcc -E -o hello.i hello.c
$ls
hello.c hello.i
$ less hello.i
. . .
# 1 "/usr/include/stdio.h" 1 3 4
# 28 "/usr/include/stdio.h" 3 4
# 1 "/usr/include/features.h" 1 3 4
. . .
typedef unsigned char __u_char;
typedef unsigned short int __u_short;
typedef unsigned int __u_int;
. . .
extern int printf (__const char *__restrict __format, ...);
. . .
# 4 "hello.c" 2
main(void)
{
printf("Hello World\n");
return 0;
}

After preprocessing, the source text of our program has swelled and become unreadable. The code we once typed with our own hands has been reduced to a few lines at the very end of the file. The reason is the inclusion of the header file of the standard C library: stdio.h itself contains a lot of different things and, in addition, requires the inclusion of other header files.

Notice the hello.i file extension. By gcc's conventions, the .i extension denotes C source files that do not require preprocessing. Such files are compiled bypassing the preprocessor:

$ gcc -o hello hello.i
$ls
hello hello.c hello.i
$ ./hello
Hello World

After preprocessing comes compilation proper. The compiler converts the high-level source code of the program into assembly-language code.

The meaning of the word compilation is vague. Wikipedia, for example, citing international standards, holds that compilation is "the transformation, by a compiler program, of the source code of a program written in a high-level programming language into a language close to machine code, or into object code." In principle, this definition suits us: assembly language really is closer to machine language than C. But in everyday usage, compilation is most often understood simply as any operation that converts the source code of a program in some programming language into executable code; that is, a process including all four of the above stages can also be called compilation. A similar ambiguity is present in this text. On the other hand, the operation of converting program source into assembly-language code can also be called translation: "the transformation of a program presented in one programming language into a program in another language that is, in a certain sense, equivalent to the first."

The -S option allows you to stop the process of creating the executable at the end of compilation proper:

$ gcc -S hello.c
$ls
hello.c hello.s
$file hello.s
hello.s: ASCII assembler program text
$ less hello.s
.file "hello.c"
.section .rodata
.LC0:
.string "Hello World"
.text
.globl main
.type main, @function
main:
pushl %ebp
movl %esp, %ebp
andl $-16, %esp
subl $16, %esp
movl $.LC0, (%esp)
call puts
movl $0, %eax
leave
ret
.size main, .-main


A file hello.s has appeared in the directory, containing the assembly-language implementation of the program. Note that specifying the output file name with the -o option was not necessary in this case: gcc generated it automatically by replacing the .c extension with .s in the name of the source file. For most basic gcc operations, the output file name is formed by such a substitution. The .s extension is standard for assembly-language source files.

Of course, you can also get the executable code from the hello.s file:

$ gcc -o hello hello.s
$ls
hello hello.c hello.s
$ ./hello
Hello World

The next stage, assembly, is the translation of assembly-language code into machine code. The result of the operation is an object file. An object file contains blocks of machine code ready for execution, data blocks, and a list of the functions and external variables defined in the file (the symbol table), but the absolute addresses of references to functions and data are not yet fixed. An object file cannot be executed directly, but later (at the linking stage) it can be combined with other object files (whereupon, according to the symbol tables, the addresses of the cross-references between the files are computed and filled in). The gcc option -c stops the process at the end of the assembly step:

$ gcc -c hello.c
$ls
hello.c hello.o
$file hello.o
hello.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), not stripped

Object files use the standard .o extension.

If the resulting object file hello.o is passed to the linker, the latter will compute the addresses of the references, add the program's startup and termination code and the code for calling library functions, and as a result we will have a ready executable file.

$ gcc -o hello hello.o
$ls
hello hello.c hello.o
$ ./hello
Hello World

What we have just done (or rather, what gcc did for us) is exactly the content of the last stage: linking.

Well, that is about all there is to say about compilation itself. Now let's touch on some gcc options that are, in my opinion, important.

The -I path/to/directory/with/header/files option adds the specified directory to the list of header-file search paths. A directory added with -I is searched first; then the search continues in the standard system directories. If there are several -I options, the directories they specify are scanned from left to right, in the order the options appear.

The -Wall option displays warnings about potential errors in the code that do not prevent compilation but which, in the compiler's opinion, may lead to problems during execution. An important and useful option; the gcc developers recommend always using it. For example, a lot of warnings will be issued when trying to compile a file like this:

 1  /* remark.c */
 2
 3  static int k = 0;
 4  static int l(int a);
 5
 6  main()
 7  {
 8
 9      int a;
10
11      int b, c;
12
13      b + 1;
14
15      b = c;
16
17      int *p;
18
19      b = *p;
20
21  }


$ gcc -o remark remark.c
$ gcc -Wall -o remark remark.c
remark.c:7: warning: return type defaults to 'int'

remark.c:13: warning: statement with no effect
remark.c:9: warning: unused variable 'a'
remark.c:21: warning: control reaches end of non-void function
remark.c: At top level:
remark.c:3: warning: 'k' defined but not used
remark.c:4: warning: 'l' declared 'static' but never defined
remark.c: In function 'main':
remark.c:15: warning: 'c' is used uninitialized in this function
remark.c:19: warning: 'p' is used uninitialized in this function

The -Werror option turns all warnings into errors, aborting the compilation process whenever a warning occurs. It is used in conjunction with the -Wall option.

$ gcc -Werror -o remark remark.c
$ gcc -Werror -Wall -o remark remark.c
cc1: warnings being treated as errors
remark.c:7: error: return type defaults to 'int'
remark.c: In function 'main':
remark.c:13: error: statement with no effect
remark.c:9: error: unused variable 'a'

The -g option places the information needed by the gdb debugger into the object or executable file. When building a project for subsequent debugging, -g must be included both at compile time and at link time.

The -O1, -O2, and -O3 options set the optimization level of the code generated by the compiler. The higher the number, the greater the degree of optimization. The effect of these options can be seen in the following example.

Original file:

/* circle.c */

int main(void)
{
    int i;

    for (i = 0; i < 10; ++i)
        ;

    return i;
}

Compiling with the default optimization level:

$ gcc -S circle.c
$ less circle.s
.file "circle.c"
.text
.globl main
.type main, @function
main:
pushl %ebp
movl %esp, %ebp
subl $16, %esp
movl $0, -4(%ebp)
jmp .L2
.L3:
addl $1, -4(%ebp)
.L2:
cmpl $9, -4(%ebp)
jle .L3
movl -4(%ebp), %eax
leave
ret
.size main, .-main
.ident "GCC: (Ubuntu 4.4.3-4ubuntu5) 4.4.3"
.section .note.GNU-stack,"",@progbits

Compilation with maximum optimization level:

$ gcc -S -O3 circle.c
$ less circle.s
.file "circle.c"
.text
.p2align 4,,15
.globl main
.type main, @function
main:
pushl %ebp
movl $10, %eax
movl %esp, %ebp
popl %ebp
ret
.size main, .-main
.ident "GCC: (Ubuntu 4.4.3-4ubuntu5) 4.4.3"
.section .note.GNU-stack,"",@progbits

In the second case there is not even a hint of a loop in the resulting code. Indeed, the value of i can be computed at compile time, and that is exactly what was done.

Alas, for real projects, the difference in performance at different optimization levels is almost imperceptible...

The -O0 option cancels all code optimization. It is needed at the application-debugging stage. As shown above, optimization can change the structure of the program beyond recognition; the connection between the executable and the source code is no longer direct, and accordingly step-by-step debugging of the program becomes impossible. When -g is enabled, it is therefore recommended to enable -O0 as well.

The -Os option optimizes not for code speed but for the size of the resulting file. The performance of the program should be comparable to that of code obtained by compiling with the default optimization level.

The -march=architecture option specifies the target processor architecture. The list of supported architectures is extensive; for processors of the Intel/AMD family, for example, you can specify i386, pentium, prescott, opteron-sse3, and so on. Users of binary distributions should keep in mind that for programs built with this option to work correctly, it is desirable that all linked libraries be compiled with the same option.

The options passed to the linker will be discussed below.

Small addition:

It was said above that gcc determines the type (programming language) of the files passed to it by their extensions and acts on them according to the guessed language. The user must watch the extensions of the files created, choosing them as gcc's conventions require. In fact, gcc can be given files with arbitrary names: the -x option lets you state the programming language of the files to be compiled explicitly. The option applies to all subsequent files listed in the command (until the next -x option appears). Possible option arguments:

c c-header c-cpp-output

c++ c++-header c++-cpp-output

objective-c objective-c-header objective-c-cpp-output

objective-c++ objective-c++-header objective-c++-cpp-output

assembler assembler-with-cpp

ada

f77 f77-cpp-input

f95 f95-cpp-input

java

The purpose of the arguments should be clear from their spelling (here cpp has nothing to do with C++; it denotes source code already processed by the C preprocessor). Let's check:

$mv hello.c hello.txt
$ gcc -Wall -x c -o hello hello.txt
$ ./hello
Hello World

Separate compilation

A strong point of the C/C++ languages is the ability to split a program's source code across several files. One can even say more: the possibility of separate compilation is a cornerstone of the language, and without it the effective use of C is inconceivable. It is multi-file programming that makes it possible to implement large projects in C, such as Linux (meaning here both the kernel and the system as a whole). What does separate compilation give the programmer?

1. It makes the code of a program (project) more readable. A source file spanning several dozen screens becomes almost unmanageable. If, following some (pre-agreed) logic, it is broken into a number of small fragments (each in a separate file), coping with the complexity of the project becomes much easier.

2. It reduces recompilation time. If changes are made to one file, there is no point in recompiling the entire project; it is enough to recompile only the changed file.

3. It allows work on a project to be distributed among several developers. Each programmer creates and debugs his part of the project, and at any moment all the pieces can be assembled (rebuilt) into the final product.

4. Without separate compilation there would be no libraries. Libraries enable reuse and distribution of C/C++ code, and the code is binary, which on the one hand gives developers a simple mechanism for including it in their programs, and on the other hand hides specific implementation details from them. When working on a project, isn't it always worth asking whether some part of what has been done will be needed again in the future? Maybe part of the code is worth separating out and packaging as a library in advance? In my opinion, this approach greatly simplifies life and saves a lot of time.

GCC, of course, supports separate compilation and does not require any special instructions from the user. In general, everything is very simple.

Here is a practical example (though very, very conditional).

Set of source code files:

/* main.c */

#include <stdio.h>

#include "first.h"
#include "second.h"

int main( void )
{
    first();
    second();

    printf("Main function... \n" );

    return 0 ;
}

/* first.h */

void first( void );


/* first.c */

#include <stdio.h>

#include "first.h"

void first( void )
{
    printf("First function... \n" );
}

/* second.h */

void second( void );


/* second.c */

#include <stdio.h>

#include "second.h"

void second( void )
{
    printf("Second function... \n" );
}

In general, we have this:

$ ls
first.c first.h main.c second.c second.h

All this economy can be compiled into one command:

$ gcc -Wall -o main main.c first.c second.c
$ ./main
First function...
Second function...
Main function...

By itself, though, this gives us practically no bonuses, except for more structured and readable code spread across several files. All the advantages listed above appear with this approach to compilation:

$ gcc -Wall -c main.c
$ gcc -Wall -c first.c
$ gcc -Wall -c second.c
$ ls
first.c first.h first.o main.c main.o second.c second.h second.o
$ gcc -o main main.o first.o second.o
$ ./main
First function...
Second function...
Main function...

What have we done? From each source file (compiling with the -c option) we obtained an object file. The object files were then linked into the final executable. Of course there are more gcc commands now, but nobody assembles projects by hand; there are build utilities for that (the most popular being make). When build utilities are used, all the advantages of separate compilation listed above come into play.
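For the three-file example above, such a build could be described by a minimal Makefile (a sketch, assuming GNU make and the file names used above; recipe lines must begin with a tab character):

```makefile
# Minimal Makefile for the main/first/second example (sketch).
CC     = gcc
CFLAGS = -Wall

main: main.o first.o second.o
	$(CC) -o main main.o first.o second.o

main.o: main.c first.h second.h
	$(CC) $(CFLAGS) -c main.c

first.o: first.c first.h
	$(CC) $(CFLAGS) -c first.c

second.o: second.c second.h
	$(CC) $(CFLAGS) -c second.c

clean:
	rm -f main *.o
```

After editing only first.c, running make recompiles just first.o and relinks main; the remaining object files are reused.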

The question arises: how does the linker manage to put the object files together, correctly resolving the call addresses? How does it even know that the file second.o contains the code of the function second() and that the code in main.o calls it? It turns out to be simple: an object file contains a so-called symbol table, which lists the names of certain code positions (functions and external variables). The linker scans the symbol table of each object file, looks for common (matching) names, draws conclusions from them about the actual location of the code of the functions (or data blocks) used, and recalculates the call addresses in the executable accordingly.

You can view the symbol table with the nm utility.

$ nm main.o
         U first
00000000 T main
         U puts
         U second
$ nm first.o
00000000 T first
         U puts
$ nm second.o
         U puts
00000000 T second

The undefined puts comes from our use of the standard library function printf(), which the compiler turned into puts() at compile time.
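This replacement is easy to observe for yourself (a sketch; gcc and nm are assumed to be installed, and hello_puts.c is a throwaway file name invented for the test):

```shell
# gcc folds printf("constant string\n") into the cheaper puts() call,
# so the object file references puts, not printf.
cat > hello_puts.c <<'EOF'
#include <stdio.h>
int main( void )
{
    printf("hello world\n");   /* string constant ending in \n */
    return 0;
}
EOF
gcc -Wall -c hello_puts.c
nm hello_puts.o
```

Among the undefined (U) symbols, nm lists puts; printf is nowhere to be seen.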

The symbol table is written not only to the object file, but also to the executable file:

$ nm main
08049f20 d _DYNAMIC
08049ff4 d _GLOBAL_OFFSET_TABLE_
080484fc R _IO_stdin_used
         w _Jv_RegisterClasses
08049f10 d __CTOR_END__
08049f0c d __CTOR_LIST__
08049f18 D __DTOR_END__
08049f14 d __DTOR_LIST__
08048538 r __FRAME_END__
08049f1c d __JCR_END__
08049f1c d __JCR_LIST__
0804a014 A __bss_start
0804a00c D __data_start
080484b0 t __do_global_ctors_aux
08048360 t __do_global_dtors_aux
0804a010 D __dso_handle
         w __gmon_start__
080484aa T __i686.get_pc_thunk.bx
08049f0c d __init_array_end
08049f0c d __init_array_start
08048440 T __libc_csu_fini
08048450 T __libc_csu_init
         U __libc_start_main@@GLIBC_2.0
0804a014 A _edata
0804a01c A _end
080484dc T _fini
080484f8 R _fp_hw
080482b8 T _init
08048330 T _start
0804a014 b completed.7021
0804a00c W data_start
0804a018 b dtor_idx.7023
0804840c T first
080483c0 t frame_dummy
080483e4 T main
         U puts@@GLIBC_2.0
08048420 T second

Including the symbol table in the executable is needed mainly for the convenience of debugging; strictly speaking, it is not required to run the application. For real-world executables, with many function definitions and external variables and a bunch of libraries involved, the symbol table becomes quite large. To reduce the size of the output file, it can be removed with the gcc option -s.

$ gcc -s -o main main.o first.o second.o
$ ./main
First function...
Second function...
Main function...
$ nm main
nm: main: no symbols

It should be noted that during linking the linker performs no checks on the context of a function call: it verifies neither the return type nor the type and number of parameters (it has nowhere to get such information from). All call validation must be done at compile time. In multi-file programming, the C header file mechanism must therefore be used for this.

Libraries

In C, a library is a file containing object code that can be attached to a program that uses the library at the linking stage. In effect, a library is a collection of object files bundled in a special way.

The purpose of libraries is to provide the programmer with a standard mechanism for code reuse, and the mechanism is simple and reliable.

From the point of view of the operating system and application software, libraries are either static or shared (dynamic).

Static library code is included in the executable file during the linking of the latter. The library is "hardwired" into the file, the library code is "merged" with the rest of the file code. A program using static libraries becomes self-contained and can be run on virtually any computer with the right architecture and operating system.

The shared library code is loaded and linked to the program code by the operating system, at the request of the program during its execution. The dynamic library code is not included in the executable file of the program; only a link to the library is included in the executable file. As a result, a program using shared libraries is no longer standalone and can only be successfully run on a system where the libraries involved are installed.

The shared library paradigm provides three significant benefits:

1. The size of the executable file is greatly reduced. In a system that includes many binaries that use the same code, there is no need to keep a copy of that code for each executable file.

2. Shared library code used by several applications is stored in RAM in one instance (actually, it's not that simple...), resulting in a reduction in the system's need for available RAM.

3. There is no need to rebuild each executable if changes are made to the code of the shared library. Changes and corrections to the dynamic library code will automatically be reflected in each of the programs using it.

Without the shared library paradigm there would be no precompiled (binary) Linux distributions at all. Imagine the size of a distribution in which the code of the standard C library (and of every other library used) were embedded in each binary. And imagine what it would take to update the system after fixing a critical vulnerability in one widely used library...

Now for some practice.

To illustrate, let's use the set of source files from the previous example. Let's place the code (implementation) of the functions first() and second() in our homemade library.

Linux has a naming scheme for library files (though it is not always observed): the file name begins with the prefix lib, followed by the actual name of the library, then the extension .a (archive) for a static library or .so (shared object) for a shared (dynamic) one; after the extension, the version number digits follow, separated by dots (for a dynamic library only). The name of the header file corresponding to the library (again, as a rule) consists of the library name (without prefix or version) and the extension .h. For example: libogg.a, libogg.so.0.7.0, ogg.h.

First, let's create and use a static library.

The functions first() and second() will make up the contents of our libhello library; the library file name, accordingly, will be libhello.a. Let's give the library a matching header file, hello.h.

/* hello.h */

void first( void );
void second( void );

Of course, the lines:

#include "first.h"


#include "second.h"

in the files main.c , first.c and second.c must be replaced with:

#include "hello.h"

Well, now, enter the following sequence of commands:

$ gcc -Wall -c first.c
$ gcc -Wall -c second.c
$ ar crs libhello.a first.o second.o
$ file libhello.a
libhello.a: current ar archive

As already mentioned, a library is a collection of object files. With the first two commands, we created these object files.

Next, the object files need to be bundled into a single set. This is done with the archiver ar, a utility that "glues" several files into one, recording in the resulting archive the information needed to restore (extract) each individual file (including its ownership, permissions, and timestamps). No "compression" of the archive contents or other transformation of the stored data is performed.

The c option creates an archive: if an archive named arname does not exist, it is created; otherwise the files are added to the existing archive.

The r option sets the update mode: if a file with the given name already exists in the archive, it is deleted, and the new file is appended to the end of the archive.

The s option adds (updates) the archive index. Here the archive index is a table that maps each symbolic name (function or data block name) defined in the archived files to the corresponding object file name. The index speeds up work with the library: to find the desired definition, there is no need to scan the symbol tables of all files in the archive; one can go straight to the file containing the wanted name. You can view the archive index with the already familiar nm utility and its -s option (the symbol tables of all object files in the archive are shown as well):

$ nm -s libhello.a
archive index:
first in first.o
second in second.o

first.o:
00000000 T first
U puts

second.o:
U puts
00000000 T second

To create an archive index there is a dedicated utility, ranlib. The libhello.a library could just as well have been created like this:

$ ar cr libhello.a first.o second.o
$ ranlib libhello.a

However, the library will work fine without an archive index.

Now let's use our library:

$ gcc -Wall -c main.c
$ gcc -o main main.o -L. -lhello
$ ./main
First function...
Second function...
Main function...

Works...

Well, now the comments... There are two new gcc options here:

The -l name option is passed to the linker and indicates that the library libname should be linked into the executable. "Linked" here means telling the linker that such-and-such functions (external variables) are defined in such-and-such a library. In our example the library is static, so all symbolic names will refer to code located directly in the executable file. Note that with the -l option the library name is given as name, without the lib prefix.

The -L /path/to/directory/with/libraries option is passed to the linker and specifies a path to a directory containing libraries to link. In our case the path is . (dot): the linker will look for libraries first in the current directory, then in the directories defined in the system.

A small remark is needed here. The fact is that for a number of gcc options the order in which they appear on the command line matters. In particular, the linker searches for code matching the names in a file's symbol table only in the libraries listed on the command line after the name of that file. The contents of libraries listed before the file name are ignored by the linker:

$ gcc -Wall -c main.c
$ gcc -o main -L. -lhello main.o
main.o: In function `main':
main.c:(.text+0xa): undefined reference to `first'
main.c:(.text+0xf): undefined reference to `second'

$ gcc -o main main.o -L. -lhello
$ ./main
First function...
Second function...
Main function...

This behavior of gcc stems from the developers' wish to let the user combine files and libraries in various ways and use overlapping names... In my opinion, when possible it is better not to get clever with this. In general, linked libraries should be listed after the name of the file that references them.

There is an alternative way to specify the location of libraries in the system. Depending on the distribution, the LIBRARY_PATH environment variable (consulted by gcc at link time) or LD_LIBRARY_PATH (consulted by the dynamic loader at run time) can hold a colon-separated list of directories in which to look for libraries. As a rule this variable is not defined by default, but nothing prevents us from creating it:

$ echo $LD_LIBRARY_PATH

$ gcc -o main main.o -lhello
/usr/lib/gcc/i686-pc-linux-gnu/4.4.3/../../../../i686-pc-linux-gnu/bin/ld: cannot find -lhello
collect2: ld exited with return code 1
$ export LIBRARY_PATH=.
$ gcc -o main main.o -lhello
$ ./main
First function...
Second function...
Main function...

Manipulations with environment variables are useful when creating and debugging your own libraries, as well as if it becomes necessary to connect some non-standard (outdated, updated, changed - generally different from that included in the distribution kit) shared library to the application.

Now let's create and use the dynamic library.

The set of source files remains unchanged. We enter commands, see what happened, read the comments:

$ gcc -Wall -fPIC -c first.c
$ gcc -Wall -fPIC -c second.c
$ gcc -shared -o libhello.so.2.4.0.5 -Wl,-soname,libhello.so.2 first.o second.o

What did we get as a result?

$ file libhello.so.2.4.0.5
libhello.so.2.4.0.5: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped

The file libhello.so.2.4.0.5 is our shared library. Let's talk about how to use it below.

Now the comments:

The -fPIC option requires the compiler to generate position-independent code (PIC) when creating object files. Its main difference is the way addresses are represented: instead of fixed (static) positions, all addresses are computed from offsets stored in the global offset table (GOT). The position-independent code format allows executable modules to be attached to the code of the main program at load time. Accordingly, the main purpose of position-independent code is the creation of dynamic (shared) libraries.

The -shared option tells gcc that the result should be not an executable file but a shared object: a dynamic library.

The -Wl,-soname,libhello.so.2 option sets the library's soname. We will discuss soname in detail in the next paragraph; for now, let's discuss the format of the option. This construction with commas, strange at first glance, is meant for direct interaction between the user and the linker. During compilation gcc calls the linker automatically and, at its own discretion, passes it the options needed to complete the task. If the user needs to intervene in the linking process, he can use the special gcc option -Wl,-option,value1,value2..., which means: pass to the linker (-Wl) the option -option with the arguments value1, value2, and so on. In our case the linker was given the option -soname with the argument libhello.so.2.

Now about soname. When libraries are created and distributed, there is a problem of compatibility and version control. So that the system, specifically the dynamic library loader, can know which version of a library was used when an application was built and is therefore required for it to run, a special identifier was introduced: soname, stored both in the library file itself and in the application's executable. The soname identifier is a string containing the library name with the lib prefix, a dot, the so extension, another dot, and one or two (dot-separated) digits of the library version, i.e. libname.so.x.y. In other words, the soname matches the library file name up to the first or second digit of the version number. If the file name of our library is libhello.so.2.4.0.5, its soname could be libhello.so.2. When the library interface changes, its soname must be changed! Any code modification that breaks compatibility with previous releases must be accompanied by a new soname.

How does it all work? Let a library with the name hello be required for the successful execution of some application, let there be one in the system, and the library file name is libhello.so.2.4.0.5 , and the soname of the library written in it is libhello.so.2 . At the stage of compiling the application, the linker, according to the option -l hello, will search the system for a file named libhello.so . On a real system, libhello.so is a symbolic link to the file libhello.so.2.4.0.5 . Having accessed the library file, the linker reads the value of soname registered in it and, among other things, places it in the application's executable file. When the application is launched, the dynamic library loader will receive a request to include a library with the soname read from the executable file, and will try to find a library on the system whose file name matches the soname. That is, the loader will try to find the libhello.so.2 file. If the system is configured correctly, it should contain a symbolic link libhello.so.2 to the file libhello.so.2.4.0.5 , the loader will get access to the required library and then without hesitation (and without checking anything else) will connect it to the application. Now imagine that we have ported the application compiled in this way to another system where only the previous version of the library with soname libhello.so.1 is deployed. Attempting to run the program will result in an error, as there is no file named libhello.so.2 on this system.

So, at compile time the linker must be given a library file (or a symbolic link to one) named libname.so, while at run time the loader needs a file (or symbolic link) named libname.so.x.y, where libname.so.x.y matches the soname string of the library being used.

In binary distributions, as a rule, the library file libhello.so.2.4.0.5 and the link to it libhello.so.2 will be placed in the libhello package, and the link libhello.so , which is necessary only for compilation, along with the library header file hello.h will be packaged in the libhello-devel package (the devel package will also contain the file of the static version of the library libhello.a , the static library can be used, also only at the compilation stage). When unpacking the package, all the listed files and links (except hello.h ) will be in the same directory.

Let's make sure that the soname string we specified is really recorded in our library file. We'll use the objdump utility with the -p option:

$ objdump -p libhello.so.2.4.0.5 | grep SONAME
  SONAME      libhello.so.2


The objdump utility is a powerful tool that provides comprehensive information about the internal contents (and structure) of an object or executable file. The man page says that objdump is first of all useful to programmers writing debugging and compilation tools, rather than to those just writing application programs :) In particular, with the -d option it is a disassembler; we used the -p option to display various meta-information about the object file.

In the library-building example above we rigorously followed the principles of separate compilation. Of course, the library could have been compiled with a single gcc call:

$ gcc -shared -Wall -fPIC -o libhello.so.2.4.0.5 -Wl,-soname,libhello.so.2 first.c second.c

Now let's try to use the resulting library:

$ gcc -Wall -c main.c
$ gcc -o main main.o -L. -lhello
/usr/bin/ld: cannot find -lhello
collect2: ld returned 1 exit status

The linker complains. Remember what was said above about symbolic links: create libhello.so and try again:

$ ln -s libhello.so.2.4.0.5 libhello.so
$ gcc -o main main.o -L. -lhello -Wl,-rpath,.

Now everything is fine. Run the resulting binary:

$ ./main
./main: error while loading shared libraries: libhello.so.2: cannot open shared object file: No such file or directory

An error... The loader complains that it cannot find the libhello.so.2 library. Let's make sure that the reference to libhello.so.2 really is recorded in the executable file:

$ objdump -p main | grep NEEDED
libhello.so.2
libc.so.6

$ ln -s libhello.so.2.4.0.5 libhello.so.2
$ ./main
First function...
Second function...
Main function...

It worked... Now for comments on the new gcc options.

The -Wl,-rpath,. option is the already familiar construction: pass the linker the option -rpath with the argument "." (dot). Using -rpath you can record in the program's executable additional paths in which the shared library loader will search for library files. In our case the path is ".": the search for library files will start from the current directory.

$ objdump -p main | grep RPATH
RPATH .

Thanks to this option, no environment variables need to be changed when the program is started. Note, though, that the recorded path "." is relative: the loader resolves it against the current working directory, not against the location of the executable. That is why the moved binary still runs as long as we launch it from the directory containing the library:

$ mv main ..
$ ../main
First function...
Second function...
Main function...

Started from any other directory, however, the library file would not be found and the loader would report an error.

You can also find out which shared libraries an application needs using the ldd utility:

$ ldd main
linux-vdso.so.1 => (0x00007fffaddff000)
libhello.so.2 => ./libhello.so.2 (0x00007f9689001000)
libc.so.6 => /lib/libc.so.6 (0x00007f9688c62000)
/lib64/ld-linux-x86-64.so.2 (0x00007f9689205000)

For each required library, ldd prints its soname and the full path to the library file, determined according to the system settings.

Now is the time to talk about where the library files are supposed to be placed in the system, where the loader tries to find them, and how to manage this process.

According to agreements FHS (Filesystem Hierarchy Standard) the system must have two (at least) directories for storing library files:

/lib - the core libraries of the distribution, needed by the programs in /bin and /sbin;

/usr/lib - libraries needed by the applications in /usr/bin and /usr/sbin.

The header files corresponding to the libraries must reside in the /usr/include directory.

The default loader will look for library files in these directories.

In addition to those listed above, the /usr/local/lib directory must be present in the system: it holds libraries installed by the user independently, bypassing the package management system (i.e. not included in the distribution). For example, libraries compiled from source will end up in this directory by default (and programs installed from source go to /usr/local/bin and /usr/local/sbin; of course, we are talking about binary distributions). The header files of such libraries are placed in /usr/local/include.

In some distributions (e.g. Ubuntu) the loader is not configured to look in /usr/local/lib, so if the user installs a library from source, the system will not see it. This decision was made by the distribution's authors deliberately, to teach the user to install software only through the package management system. What to do in this case is described below.

In fact, to simplify and speed up the search for library files, the loader does not scan the directories above on every access but uses a database stored in the file /etc/ld.so.cache (the library cache). It records where in the system the library file corresponding to a given soname is located. Given the list of libraries required by a particular application (the list of library sonames recorded in the program's executable), the loader determines the path to each required library file via /etc/ld.so.cache and loads it into memory. Additionally, the loader can look in the directories listed in the LD_LIBRARY_PATH environment variable and in the executable's RPATH field (see above).

The ldconfig utility is used to manage the library cache and keep it up to date. If ldconfig is run without options, the program scans the directories specified on the command line, the trusted directories /lib and /usr/lib, and the directories listed in the file /etc/ld.so.conf. For each library file found in these directories, the soname is read, a symbolic link named after the soname is created, and the information in /etc/ld.so.cache is updated.

Let's see this in action:

$ ls
hello.h libhello.so libhello.so.2.4.0.5 main.c
$ gcc -o main main.c -L. -lhello
$ sudo ldconfig /full/path/to/dir/with/example
$ ls
hello.h libhello.so libhello.so.2 libhello.so.2.4.0.5 main main.c
$ ./main
First function...
Second function...
Main function...

The call to ldconfig registered our library in the cache and created the libhello.so.2 symbolic link based on its soname. Note that this time the -Wl,-rpath,. option was omitted when building main, so at startup the loader found the required library through the cache alone.

Now it should be clear what to do if the system does not see a library installed from source. First, add the full path to the directory with the library files (by default /usr/local/lib) to the /etc/ld.so.conf file; the file holds a list of directories, separated by colons, spaces, tabs, or newlines, in which to search for libraries. Then call ldconfig without options, but with superuser rights. Everything should work.

Finally, let's talk about how the static and dynamic versions of a library get along together. Where is the problem? Above, when discussing the accepted names and locations of library files, it was said that the static and dynamic versions of a library are stored in the same directory. How does gcc know which type we want to use? By default the dynamic library is preferred: if the linker finds a dynamic library file, it links it to the program's executable without hesitation:

$ ls
hello.h libhello.a libhello.so libhello.so.2 libhello.so.2.4.0.5 main.c
$ gcc -Wall -c main.c
$ gcc -o main main.o -L. -lhello -Wl,-rpath,.
$ ldd main
linux-vdso.so.1 => (0x00007fffe1bb0000)
libhello.so.2 => ./libhello.so.2 (0x00007fd50370b000)
libc.so.6 => /lib/libc.so.6 (0x00007fd50336c000)
/lib64/ld-linux-x86-64.so.2 (0x00007fd50390f000)
$ du -h main
12K main

Pay attention to the size of the program's executable file: it is the minimum possible. All libraries used are linked dynamically.

There is a gcc option -static: an instruction to the linker to use only the static versions of all the libraries the application needs:

$ gcc -static -o main main.o -L. -lhello
$ file main
main: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, for GNU/Linux 2.6.15, not stripped
$ ldd main
is not a dynamic executable
$ du -h main
728K main

The executable is 60 times larger than in the previous example: the standard C library code has been included in the file. Now our application can safely be moved from directory to directory and even to other machines; the hello library code is inside the file, and the program is completely self-contained.

What if only some of the libraries used need to be linked statically? A possible solution is to give the static version of the library a name different from that of the shared one, and to specify at build time which version we want this time:

$ mv libhello.a libhello_s.a
$ gcc -o main main.o -L. -lhello_s
$ ldd main
linux-vdso.so.1 => (0x00007fff021f5000)
libc.so.6 => /lib/libc.so.6 (0x00007fd0d0803000)
/lib64/ld-linux-x86-64.so.2 (0x00007fd0d0ba4000)
$ du -h main
12K main

Since the size of the libhello library code is negligible,

$ du -h libhello_s.a
4.0K libhello_s.a

the size of the resulting executable file is practically the same as the size of the file created using dynamic linking.

Well, perhaps that's all. Many thanks to everyone who finished reading at this point.

It is widely believed that GCC lags behind other compilers in terms of performance. In this article, we will try to figure out what basic GCC compiler optimizations should be applied to achieve acceptable performance.

What are the default options in GCC?

(1) By default GCC uses the "-O0" optimization level. It is clearly not optimal for performance and is not recommended for compiling the final product.
GCC does not detect the architecture of the machine the compilation runs on unless the "-march=native" option is passed. By default GCC uses the option set chosen when it was configured. To find out the GCC configuration, just run:

This means that GCC will add "-march=corei7" to your options (unless another architecture is specified).
Most GCC compilers for x86 (the base case for 64-bit Linux) add "-mtune=generic -march=x86-64" to the given options, since no architecture-specific options were chosen at configure time. You can always find out all the options passed when GCC starts, as well as its internal options, with the command:

As a result, commonly used:

Specifying the target architecture is important for performance. The only exception is programs in which calls to library functions take almost all of the run time: GLIBC can choose the optimal implementation of a function for the current architecture at run time. It is important to note that with static linking some GLIBC functions are not versioned per architecture. That is, dynamic linking is preferable when the speed of GLIBC functions matters.
(2) By default, most GCC compilers for x86 in 32-bit mode use the x87 floating-point model, since they were configured without "-mfpmath=sse". Only if the GCC configuration contains "--with-mfpmath=sse":

will the compiler use the SSE model by default. In all other cases it is better to add the "-mfpmath=sse" option when building in 32-bit mode.
So, commonly used:

Adding the "-mfpmath=sse" option is important in 32-bit mode! The exception is a compiler that has "--with-mfpmath=sse" in its configuration.

32 bit mode or 64 bit?

32-bit mode is usually used to reduce memory consumption and, as a result, to speed up work with memory (more data fits in the cache).
In 64-bit mode (compared to 32-bit) the number of available general-purpose registers grows from 6 to 14 and XMM registers from 8 to 16. In addition, all 64-bit architectures support the SSE2 extension, so in 64-bit mode there is no need to add the "-mfpmath=sse" option.
64-bit mode is recommended for computational tasks, and 32-bit mode for mobile applications.

How to get maximum performance?

There is no single fixed set of options that maximizes performance, but there are many GCC options worth trying. Below is a table of recommended options with predicted gains for Intel Atom and 2nd Generation Intel Core i7 processors relative to "-O2". The predictions are based on the geometric mean of the results on a specific set of tasks compiled with GCC 4.7, and assume a compiler configured for generic x86-64.
Predicted performance gain for mobile applications relative to "-O2" (32-bit mode only, since it is the main mode for the mobile segment):

Predicted performance gain on computational tasks relative to "-O2" (64-bit mode):
-m64 -Ofast -flto ~17%
-m64 -Ofast -flto -march=native ~21%
-m64 -Ofast -flto -march=native -funroll-loops ~22%

The advantage of 64-bit mode over 32-bit on computational tasks with the options "-O2 -mfpmath=sse" is about 5%.
All figures in the article are forecasts based on the results of a particular set of benchmarks.
Below is a description of the options used in the article. The full description (in English) is at http://gcc.gnu.org/onlinedocs/gcc-4.7.1/gcc/Optimize-Options.html
  • "-Ofast" like "-O3 -ffast-math" enables a higher level of optimizations and more aggressive optimizations for arithmetic calculations (like real reassociation)
  • "-flto" cross-module optimizations
  • "-m32" 32 bit mode
  • "-mfpmath=sse" enables XMM registers to be used in real arithmetic (instead of real stack in x87 mode)
  • "-funroll-loops" enables loop unrolling

Transport logistics (analysis of different modes of transport: advantages and disadvantages)

Transport is a branch of material production that moves people and goods. In the structure of social production, transport belongs to the sphere of production of material services.

It should be noted that a significant part of the logistics operations along the material flow, from the primary source of raw materials to final consumption, is carried out using various vehicles. The cost of these operations amounts to up to 50% of total logistics costs.

By purpose, two main groups of transport are distinguished. Public transport is a branch of the national economy that satisfies the needs of all sectors of the economy and of the population for the transportation of goods and passengers. Public transport serves the sphere of circulation and the population. It is often called mainline transport (a main line is the principal route in some system, in this case in the communication system). The concept of public transport covers railway transport, water transport (sea and river), road, air, and pipeline transport.

Non-public transport is intra-production transport, as well as vehicles of all types belonging to non-transport organizations.

The organization of the movement of goods by non-public transport is the subject of study of industrial logistics. The problem of choosing distribution channels is solved in the field of distribution logistics.

So, the main modes of transport are the following:

railway

sea

inland waterway (river)

road (automobile)

air

pipeline

Each mode of transport has specific features from the point of view of logistics management, as well as advantages and disadvantages that determine the possibility of its use in the logistics system. Together, the different modes of transport make up the transport complex. The transport complex of Russia is formed by legal entities and individual entrepreneurs registered on its territory that carry out transportation and forwarding activities on all modes of transport; design, construction, repair and maintenance of railways, roads and the structures on them, and pipelines; work related to the maintenance of navigable hydraulic structures and of waterways and airways; scientific research and personnel training; enterprises that are part of the transport system and manufacture vehicles; and organizations performing other work related to the transport process. The transport complex of Russia comprises more than 160 thousand km of main railway lines and access tracks, 750 thousand km of paved roads, 1.0 million km of sea shipping lines, 101 thousand km of inland waterways, and 800 thousand km of air routes. About 4.7 million tons of cargo are carried over these communications daily by public transport alone (data for 2000), more than 4 million people work in the transport complex, and the share of transport in the country's gross domestic product is about 9%. Thus, transport is an essential part of the infrastructure of the economy and of the entire social and production potential of the country.

Table 1 (4, p. 295) gives comparative logistical characteristics of the various modes of transport.

Table 1. Characteristics of modes of transport

Railway
Advantages: High carrying and throughput capacity. Independence from climatic conditions and from the time of year and day. High regularity of transportation. Relatively low rates; significant discounts for transit shipments. High speed of delivery over long distances.
Disadvantages: Limited number of carriers. Large capital investment in the production and technical base. High material and energy intensity of transportation. Low accessibility of end points of sale (consumption). Insufficient safety of cargo.

Sea
Advantages: Possibility of intercontinental transportation. Low cost of transportation over long distances. High carrying and throughput capacity. Low capital intensity of transportation.
Disadvantages: Limited scope of transportation. Low delivery speed (long transit time). Dependence on geographical, navigation and weather conditions. The need for a complex port infrastructure.

Inland waterway (river)
Advantages: High carrying capacity on deep rivers and reservoirs. Low cost of transportation. Low capital intensity.
Disadvantages: Limited scope of transportation. Low delivery speed. Dependence on the uneven depth of rivers and reservoirs and on navigation conditions. Seasonality. Insufficient reliability of transportation and safety of cargo.

Road (automobile)
Advantages: High accessibility. Possibility of door-to-door cargo delivery. High maneuverability, flexibility and dynamism. High delivery speed. Possibility of using various routes and delivery schemes. High safety of cargo. Possibility of sending cargo in small batches.
Disadvantages: Low productivity. Dependence on weather and road conditions. Relatively high cost of transportation over long distances. Insufficient environmental cleanliness.

Air
Advantages: The highest speed of cargo delivery. High reliability. The highest safety of cargo. The shortest transportation routes.
Disadvantages: High cost of transportation; the highest rates among the modes of transport. High capital, material and energy intensity of transportation. Weather dependence. Insufficient geographical accessibility.

Pipeline
Advantages: Low cost. High productivity (throughput). High safety of cargo. Low capital intensity.
Disadvantages: Limited range of cargo types (gas, oil products, emulsions of raw materials). Insufficient accessibility for small volumes of cargo.

So, first of all, the logistics manager must decide whether to create his own fleet of vehicles or to use hired transport (public or non-public). The choice between these alternatives usually proceeds from a certain system of criteria, which includes:

the cost of creating and operating one's own fleet of vehicles

the cost of paying for the services of transport and forwarding companies and other logistics intermediaries

the speed of transportation

the quality of transportation (delivery reliability, cargo safety, etc.)

In most cases, manufacturing firms resort to the services of specialized transport companies.