If one or more bashrc files exist in the following locations, they will be sourced before the ebuild is executed in the following order:
/etc/portage/bashrc
/etc/portage/env/${CATEGORY}/${PN}
/etc/portage/env/${CATEGORY}/${PN}:${SLOT}
/etc/portage/env/${CATEGORY}/${P}
/etc/portage/env/${CATEGORY}/${PF}
A phase hook function name begins with a pre_ or post_ prefix to indicate that it will be called before or after one of the ebuild phases. The prefix is followed by the name of the ebuild function that the hook will be associated with. For example, a hook named pre_src_compile will be called before src_compile, and a hook named post_src_compile will be called after src_compile.
The register_die_hook function registers one or more names of functions to call when the ebuild fails for any reason, including file collisions with other packages.
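As a rough illustration of both mechanisms, a /etc/portage/bashrc along these lines would define one phase hook and one die hook. This is only a sketch; the function name log_failure, the log path, and the messages are made up, and the variables used (CATEGORY, PF, EBUILD_PHASE) are the standard ebuild environment variables.

# Runs before the src_compile phase of every ebuild.
pre_src_compile() {
    einfo "About to compile ${CATEGORY}/${PF}"
}

# Called by Portage whenever an ebuild dies, including on file collisions.
log_failure() {
    echo "${CATEGORY}/${PF} failed during ${EBUILD_PHASE}" >> /var/tmp/ebuild-failures.log
}
register_die_hook log_failure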
There are multiple locations where Portage looks for set configuration files, which are usually named sets.conf. Not all of these locations have to contain a sets.conf; missing files are simply ignored.
First, Portage reads the default configuration from all of the files located in the /usr/share/portage/config/sets directory. The default configuration includes sets that are expected on all systems and are often critical for normal operation, like world, system or security.
After that it reads the configurations located in the repositories configured in repos.conf.
Finally, a system-specific set configuration may reside in /etc/portage to either define additional sets or alter the default and repository sets.
Unlike other Portage configuration files, sets.conf uses Python's ConfigParser module, which implements the syntax usually found in .ini files. At its core it allows various named sections that can each contain any number of key-value pairs; see the Python documentation for the full details.
In a sets.conf file, a section can define either a single package set or a complete class of sets. These cases are handled in different ways and are explained in detail in the following sections.
The configuration of a single set can be very simple, as in most cases it only requires a single option, class, to be complete [1]. That option defines which handler class should be used to create the set. Other universal options available for single sets are:
name: usually not needed, as the name of the set is generated from the section name if name is missing.
world-candidate: determines whether a given package should be added to the world set.
Some handler classes might require additional options for their configuration; these are covered later in this chapter.
Here are a few examples for single sets taken from the default configuration file:
# The classic world set
[world]
class = portage.sets.base.DummyPackageSet
packages = @selected @system

# The selected-packages set
[selected-packages]
class = portage.sets.files.WorldSelectedPackagesSet

# The classic system set
[system]
class = portage.sets.profiles.PackagesSystemSet
As configuring each single set manually could be quite annoying if you want many sets with the same options, Portage also allows whole classes of sets to be defined in a single section. As with single sets, each section still requires the class option, but to indicate that the section should generate multiple sets it is also necessary to set the multiset option to true. The world-candidate option is also supported, as with single sets; it applies to all sets generated by the section.
As it doesn't make much sense to specify a single name for multiple sets, the name option isn't available for multiset sections. Most handler classes have a reasonable default for generating names, and usually you can (but don't have to) set the name_pattern option to change the naming rules. That option generally has to include a (handler-specific) placeholder that will be replaced with a unique identifier (e.g. for category sets the category name). As with single sets, handler classes might require and/or support additional options; these are discussed later.
Some examples for multiset configurations:
# generate a set for each file in /etc/portage/sets
# this section is also in the default configuration
[user sets]
class = portage.sets.files.StaticFileSet
multiset = true
directory = /etc/portage/sets

# Generate a set for each category that includes all installed packages
# from that category. The sets will be named <category>/*
[installed category packages]
class = portage.sets.dbapi.CategorySet
multiset = true
repository = vartree
name_pattern = $category/*
The following sections describe the available handler classes that can be used for the class option in sets.conf, together with a description of the required and optional configuration options for single and multi set configurations. Note that not all classes support both configuration styles.
This class implements a simple file-based package set. All atoms from the configured file are used to form the set; currently only simple and versioned atoms are supported (no use conditionals or any-of constructs).
For descriptive purposes the file can be accompanied by a file with the same name plus a .metadata suffix, which can contain metadata sections for description, author, location and so on. Each section has the form key: value, where value can contain multiple lines. Sections therefore have to be separated by blank lines. For example:
description: This is a somewhat longer description than usual.
  So it needs more than one line.

homepage: https://www.foobar.org/

author: John Doe <john@doe.com>
In a single set configuration this class supports the following options:
filename: Required. Specifies the path to the file that should be used for the package set.
greedy: Optional, defaults to false. Determines whether atoms in the package set should include all installed slots (when set to true) or whether no slot expansion is wanted (when set to false). This option only affects packages that have multiple slots available (e.g. sys-kernel/gentoo-sources).
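For illustration, a hypothetical single-set entry using this class might look as follows. The section name and file path are made up; the class path and options are the ones documented above.

[mytools]
class = portage.sets.files.StaticFileSet
filename = /etc/portage/mytools.list
greedy = true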
In a multi set configuration this class supports the following options:
directory: Optional, defaults to /etc/portage/sets. Specifies the path to a directory containing package set files. For each file (excluding metadata files) in that location a separate package set is created.
name_pattern: Optional, defaults to $name. This describes the naming pattern to be used for creating the sets. It must contain either $name or ${name}, which will be replaced by the filename (without any directory components).
Similar to StaticFileSet, but uses Portage configuration files. Namely, it can work with package.use, package.keywords, package.mask and package.unmask. It does not support .metadata files, but ignores the extra data (like USE flags or keywords) typically found in those files.
In a single set configuration this class supports the following options:
filename: See StaticFileSet.
In a multi set configuration this class supports the following options:
directory: Optional, defaults to /etc/portage. Specifies the path to a directory containing one or more of the following Portage configuration files: package.use, package.keywords, package.mask or package.unmask. No other files in that directory will be used.
name_pattern: Optional, defaults to package_$suffix. This describes the naming pattern to be used for creating the sets. It must contain either $suffix or ${suffix}, which will be replaced by the file suffix (e.g. use or mask).
A minor variation of StaticFileSet, mainly for implementation reasons. It should never be used in user configurations as it's already configured by default, doesn't support any options and will eventually be removed in a future version.
A minor variation of StaticFileSet, mainly for implementation reasons. It should never be used in user configurations as it's already configured by default, doesn't support any options and will eventually be removed in a future version.
This class implements the classic system set, based on the packages files in the profile. There is no reason to use this in a user configuration, as it is already configured by default and doesn't support any options.
This class implements the profile set, based on the packages files in the profile. There is no reason to use this in a user configuration, as it is already configured by default and doesn't support any options.
The set created by this class contains all atoms that need to be installed to apply all GLSAs in the ebuild repository, no matter whether they have already been applied or not (it's equivalent to the all target of glsa-check). Generally it should be avoided in configurations in favor of NewAffectedSet, described below.
In single set configurations this class supports the following options:
use_emerge_resolver: Optional, defaults to false. This option determines which resolver strategy should be used for the set atoms. When set to true, it will use the default emerge algorithm and use the highest visible version that matches the GLSA. If set to false, it will use the default glsa-check algorithm and use the lowest version that matches the GLSA and is higher than the currently installed version (least change policy).
Like SecuritySet, but ignores all GLSAs that were already applied or injected previously.
In single set configurations this class supports the following options:
use_emerge_resolver: See SecuritySet.
Like SecuritySet, but ignores all GLSAs that were already applied or injected previously, and all GLSAs that don't affect the current system. In practice there should be no difference to NewGlsaSet, though.
In single set configurations this class supports the following options:
use_emerge_resolver: See SecuritySet.
Like SecuritySet, but ignores all GLSAs that don't affect the current system. In practice there should be no difference to SecuritySet, though.
In single set configurations this class supports the following options:
use_emerge_resolver: See SecuritySet.
As the name says, this class creates a package set based on the output of a given command. The command is run once when the set is accessed for the first time during the current session.
Package sets created by this class will include installed packages that have been installed before / after a given date.
In single set configurations this class supports the following options:
age: Optional, defaults to 7. Specifies the number of days since installation to use as the cut-off point.
mode: Optional, defaults to "older". Must be either "older" or "newer" to select packages installed before or after the cut-off date given by age, respectively. E.g. the defaults will select all installed packages that have been installed more than one week ago.
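For example, a hypothetical configuration collecting packages installed within the last 30 days might look as follows; the section name is made up, and the class path is assumed by analogy with the dbapi-based examples shown earlier.

[recently-installed]
class = portage.sets.dbapi.AgeSet
age = 30
mode = newer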
This class simply creates a set with all packages in a given category.
In single set configurations this class supports the following options:
category: Required. The name of an existing ebuild category which should be used to create the package set.
repository: Optional, defaults to porttree. It determines which repository class should be used to create the package set. Valid values for this option are: porttree (normal ebuild repository), vartree (installed package repository) and bintree (local binary package repository).
only_visible: Optional, defaults to true. When set to true the set will only include visible packages; when set to false it will also include masked packages. It's currently only effective in combination with the porttree repository.
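A hypothetical single-set configuration using this class, built only from the options documented above (the section name and category choice are made up):

[installed-editors]
class = portage.sets.dbapi.CategorySet
category = app-editors
repository = vartree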
In multi set configurations this class supports the following options:
categories: Optional, defaults to all categories. If set, it must be a space-separated list of existing ebuild categories for which package sets should be created.
repository: See previous section.
only_visible: See previous section.
name_pattern: Optional, defaults to $category/*. This describes the naming pattern to be used for creating the sets. It must contain either $category or ${category}, which will be replaced by the category name.
A superset of the classic world target: a set created by this class contains SLOT atoms to match all installed packages. Note that use of this set makes it impossible for emerge to solve blockers by automatic uninstallation of blocked packages.
Package set which contains all packages that own one or more files. This class supports the following options:
files: Required. A list of file paths that should be used to create the package set.
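A sketch of a possible configuration, assuming the file paths may be listed space-separated (the exact list format is not specified here, and the section name and paths are made up):

[bootloader-owners]
class = portage.sets.dbapi.OwnerSet
files = /boot/grub/grub.cfg /sbin/grub-install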
Package set which contains all packages that match specified values of a specified variable. This class supports the following options:
variable: The name of the variable whose values are checked.
includes: A list of values that must be contained within the specified variable.
excludes: A list of values that must not be contained within the specified variable.
metadata-source: Optional, defaults to "vartree". Specifies the repository to use for getting the metadata to check.
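For illustration, a hypothetical configuration selecting installed packages whose LICENSE value contains GPL-3 could look like the following; the section name is made up and the class path is assumed by analogy with the other dbapi-based sets.

[gpl3-packages]
class = portage.sets.dbapi.VariableSet
variable = LICENSE
includes = GPL-3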
Package set which contains all installed packages for which there are no visible ebuilds corresponding to the same $CATEGORY/$PN:$SLOT. This class supports the following options:
metadata-source: Optional, defaults to "porttree". Specifies the repository to use for getting the metadata to check.
Package set which contains all packages for which the subslot of the highest visible ebuild is different than the currently installed version. This class doesn't support any extra options.
Package set which contains all packages for which the highest visible ebuild version is lower than the currently installed version. This class doesn't support any extra options.
A special set used to rebuild all packages that need a preserved library that only remains due to FEATURES="preserve-libs".
By default, Portage already creates a few default sets that can be used without further configuration. See the section called “sets.conf locations” and the section called “sets.conf Syntax” for details on how to change those defaults.
The default sets are:
world: uses DummySet
profile: uses ProfilePackageSet
selected: uses WorldSelectedSet
system: uses PackagesSystemSet
security: uses NewAffectedSet with default options
installed: uses EverythingSet
preserved-rebuild: uses PreservedLibraryConsumerSet
live-rebuild: uses VariableSet
module-rebuild: uses OwnerSet
downgrade: uses DowngradeSet
unavailable: uses UnavailableSet
Additionally, the default configuration includes a multi set section based on the StaticFileSet defaults that creates a set for each file in /etc/portage/sets for convenience.
[1] Technically the class option isn't strictly required, but it should always be used as the default handler might be changed in future versions.
Dependency resolution involves satisfaction of many constraints:
Persistent configuration parameters, like those that come from make.profile, make.conf, and the /etc/portage directory.
Current command parameters, which may include options, atoms, or sets.
If one package blocks another package, the two packages conflict such that they cannot be installed simultaneously. These conflicts are often due to file collisions. In some cases, packages that block each other can be temporarily installed simultaneously. In order to resolve file collisions that occur between two blocking packages that are installed simultaneously, the overlapping files must be removed from the contents list of the package which was installed first.
Some cases may exist such that temporary simultaneous installation of blocking packages will cause some sort of problem. However, this type of solution will only be chosen for blockers that can not be satisfied in any other way, such as by simple adjustment of merge order. In addition, this type of solution will not be chosen if a blocking package will overwrite files belonging to packages from the system set, or packages that are runtime dependencies of Portage itself. These constraints serve to limit the probability that a chosen solution will cause an unforeseen problem.
If two different packages that occupy the same slot are chosen to satisfy dependencies, a slot conflict occurs. The two packages cannot be installed simultaneously and therefore the respective dependencies will not be satisfied simultaneously.
In order to significantly reduce the resources consumed by the modeling process, the dependencies of installed packages may be neglected.
If a more complete dependency calculation is desired, there is a --complete-graph option which will ensure that the dependencies of installed packages are properly considered.
In terms of boolean logic, a dependency expression can be expressed in disjunctive normal form (DNF), which is a disjunction of conjunctive clauses. Each conjunctive clause represents one possible alternative combination of dependency atoms capable of satisfying the dependency expression.
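As an illustration with hypothetical package names, the following shows a dependency expression and its DNF, where each alternative is one conjunctive clause of atoms:

# Original expression: one fixed atom plus an any-of group
dev-libs/libfoo || ( dev-libs/bar dev-libs/baz )

# The same constraint in DNF: a disjunction of two conjunctive clauses
|| ( ( dev-libs/libfoo dev-libs/bar ) ( dev-libs/libfoo dev-libs/baz ) )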
Disjunctive dependencies, of which virtuals are a special case, can be satisfied by multiple choices of dependency atoms. These choices are delayed until as late as possible in the dependency calculation, after packages have been selected to satisfy as many non-disjunctive dependencies as possible. As a consequence of this delayed evaluation, there is maximal information available which makes it possible to optimize choices such that the total number of packages required to satisfy all dependencies is minimized.
When there are multiple combinations to choose from, a look-ahead mechanism will choose an optimal combination to satisfy constraints and minimize cost. The following package states influence the cost calculation for a given combination:
installed
selected (for installation)
not selected (for installation)
In cost calculations, virtual packages by themselves are considered to cost nothing since they do not directly install anything. It is the dependencies of a virtual package that contribute to its cost.
Combinations that include packages from the "installed" or "selected" categories are less costly than those that include packages from the "not selected" category. When a package is chosen for installation, it transitions to the "selected" state. This state change propagates to the cost calculations of later decisions, influencing later decisions to be consistent with earlier decisions. This feedback mechanism serves to propagate constraints and can influence the modeling process to converge on a more optimal final state.
When evaluating virtual atoms, an expanded search space is considered which recursively traverses the dependencies of virtual packages from all slots matching a given virtual atom. All combinations in this expanded search space are considered when choosing an optimal combination to satisfy constraints with minimal cost.
All tasks are executed in an order such that a task's dependencies are satisfied when it is executed. Dependency relationships between tasks form a directed graph.
Sometimes a package installation order exists such that it is possible to avoid having two conflicting packages installed simultaneously. If a currently installed package conflicts with a new package that is planned to be installed, it may be possible to solve the conflict by replacing the installed package with a different package that occupies the same slot.
In order to avoid a conflict, a package may need to be uninstalled rather than replaced. The following constraints protect inappropriate packages from being chosen for automatic uninstallation:
Installed packages that have been pulled into the current dependency graph will not be uninstalled. Due to dependency neglection and special properties of packages in the "system" set, other checks may be necessary in order to protect inappropriate packages from being uninstalled.
An installed package that is matched by a dependency atom from the "system" set will not be uninstalled in advance since it might not be safe. Such a package will only be uninstalled through replacement.
An installed package that is matched by a dependency atom from the "world" set will not be uninstalled if the dependency graph does not contain a replacement package that is matched by the same dependency atom.
In order to ensure that package files remain installed in a usable state whenever possible, uninstallation operations are not executed until after all associated conflicting packages have been installed. When file collisions occur between conflicting packages, the contents entries for those files are removed from the packages that are scheduled for uninstallation. This prevents uninstallation operations from removing overlapping files that have been claimed by conflicting packages.
TODO: Automatically solve circular dependencies by temporarily disabling conditional dependencies and then rebuilding packages with the conditional dependencies enabled.
The algorithm used to choose packages that will execute concurrently with other packages is as conservative as possible in the sense that a given package will not be executed if the subgraph composed of its direct and indirect dependencies contains any scheduled merges. By ensuring that the subgraph of deep dependencies is fully up to date in this way, potential problems are avoided which could be triggered by other build orders that are less optimal.
Ebuild execution is divided into a series of phases. In order to implement a phase, an ebuild defines a function to serve as an entry point for execution of that phase. This design is similar to the template method pattern that is commonly used in object oriented programming languages. An ebuild can inherit or override a template method from an eclass.
The function names for the ebuild phases, listed in order of execution:
pkg_pretend
pkg_setup
src_unpack
src_prepare
src_configure
src_compile
src_test
src_install
pkg_preinst
pkg_postinst
pkg_prerm
pkg_postrm
The order for upgrade and downgrade operations changed in version 2.1.5, but the order for reinstall operations remained unchanged.
pkg_preinst
pkg_postinst
pkg_prerm
pkg_postrm
The new order for upgrades and downgrades is identical to the order used for reinstall operations:
pkg_preinst
pkg_prerm
pkg_postrm
pkg_postinst
Now that pkg_postinst is called after all other phases, it's not possible to call has_version in pkg_postinst to detect whether the current install operation is an upgrade or downgrade. If this information is needed during the pkg_postinst phase, do the has_version call in an earlier phase (such as pkg_preinst) and store the result in a global variable to be accessed by pkg_postinst when it is called.
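A minimal sketch of that approach follows; the variable name and the message are arbitrary, and the has_version atom merely checks whether a lower version is installed at pkg_preinst time.

pkg_preinst() {
    # Record now whether this install replaces an older version.
    if has_version "<${CATEGORY}/${PN}-${PV}" ; then
        _UPGRADING=yes
    else
        _UPGRADING=no
    fi
}

pkg_postinst() {
    if [[ ${_UPGRADING} == yes ]] ; then
        elog "This is an upgrade from an older version."
    fi
}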
Like einfo, ebegin outputs a helpful message, and then hints that the following operation may take some time to complete. Once the task is finished, you need to call eend.
Follow up the ebegin message with an appropriate "OK" or "!!" (for errors) marker. If status is non-zero, then the additional error message is displayed.
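For example (the message and command are illustrative only):

ebegin "Compressing the generated documentation"
gzip -9 "${T}"/userguide.txt
eend $? "Compression of the user guide failed"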
Same as elog, but should be used when the message isn't important to the user (like progress or status messages during the build process).
If you need to display a message that you wish the user to read and take notice of, then use elog. It works just like echo(1), but adds a little more to the output so as to catch the user's eye. The message will also be logged by portage for later review.
A + or - prefix added to the beginning of a flag in IUSE creates a default USE setting that respectively enables or disables the corresponding USE flag.
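For instance, a hypothetical ebuild could enable ssl and explicitly default debug to off, while leaving gtk disabled as usual:

IUSE="+ssl -debug gtk"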
Support for the ECONF_SOURCE variable, which is also supported by econf, has been added to the default src_compile implementation.
src_compile() {
    if [[ -x ${ECONF_SOURCE:-.}/configure ]] ; then
        econf
    fi
    if [ -f Makefile ] || [ -f GNUmakefile ] || [ -f makefile ] ; then
        emake || die "emake failed"
    fi
}
Blocker atoms which use the previously existing !atom syntax now have a slightly different meaning. These blocker atoms indicate that conflicting packages may be temporarily installed simultaneously. When temporary simultaneous installation of conflicting packages occurs, the installation of a newer package may overwrite any colliding files that belong to an older package which is explicitly blocked. When such file collisions occur, the colliding files cease to belong to the older package, and they remain installed after the older package is eventually uninstalled. The older package is uninstalled only after any newer blocking packages have been merged on top of it.
A new !!atom syntax is now supported, for use in special cases for which temporary simultaneous installation of conflicting packages should not be allowed. If a given package happens to be blocked by a mixture of atoms consisting of both the !atom and !!atom syntaxes, the !!atom syntax takes precedence over the !atom syntax.
A new syntax is supported which allows customization of the output file name for a given URI. In order to customize the output file name, a given URI should be followed by a "->" operator which, in turn, should be followed by the desired output file name. As usual, all tokens, including the operator and output file name, should be separated by whitespace.
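For example, with a hypothetical upstream URI, the following renames the rather generic upstream file to ${P}.tar.gz in the distfiles directory:

SRC_URI="https://example.org/releases/v${PV}.tar.gz -> ${P}.tar.gz"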
A new src_prepare function is called after the src_unpack function, with cwd initially set to $S.
The configure portion of the src_compile function has been split into a separate function which is named src_configure. The src_configure function is called in-between the src_prepare and src_compile functions.
src_configure() {
    if [[ -x ${ECONF_SOURCE:-.}/configure ]] ; then
        econf
    fi
}

src_compile() {
    if [ -f Makefile ] || [ -f GNUmakefile ] || [ -f makefile ] ; then
        emake || die "emake failed"
    fi
}
Table 6.5. Execution Order of Phase Functions
| Phase Function Name |
| --- |
| pkg_setup |
| src_unpack |
| src_prepare |
| src_configure |
| src_compile |
| src_test |
| src_install |
| pkg_preinst |
| pkg_postinst |
| pkg_prerm |
| pkg_postrm |
The default pkg_nofetch and src_* phase functions are now accessible via a function having a name that begins with default_ and ends with the respective phase function name. For example, a call to a function with the name default_src_compile is equivalent to a call to the default src_compile implementation.
Table 6.6. Default Phase Functions
| Function Name |
| --- |
| default_pkg_nofetch |
| default_src_unpack |
| default_src_prepare |
| default_src_configure |
| default_src_compile |
| default_src_test |
A function named "default" is redefined for each phase so that it will call the default_* function corresponding to the current phase. For example, a call to the function named "default" during the src_compile phase is equivalent to a call to the function named default_src_compile.
Beginning with EAPI 3, all helpers use ${ED} instead of ${D} when appropriate. For example, see econf and einstall below.
${ECONF_SOURCE:-.}/configure \
    ${CBUILD:+--build=${CBUILD}} \
    --datadir="${EPREFIX}"/usr/share \
    --host=${CHOST} \
    --infodir="${EPREFIX}"/usr/share/info \
    --localstatedir="${EPREFIX}"/var/lib \
    --prefix="${EPREFIX}"/usr \
    --mandir="${EPREFIX}"/usr/share/man \
    --sysconfdir="${EPREFIX}"/etc \
    ${CTARGET:+--target=${CTARGET}} \
    ${EXTRA_ECONF} \
    configure options || die "econf failed"
Note that, for make-based packages, 'emake install DESTDIR=${D}' (with DESTDIR=${D} rather than ${ED}) is still preferred over einstall.
make \
    prefix=${ED}/usr \
    datadir=${ED}/usr/share \
    infodir=${ED}/usr/share/info \
    localstatedir=${ED}/var/lib \
    mandir=${ED}/usr/share/man \
    sysconfdir=${ED}/etc \
    ${EXTRA_EINSTALL} \
    make options \
    install
Table 6.7. Installation Prefix Variables
| Variable Name | Description |
| --- | --- |
| ED | Contains the path "${D%/}${EPREFIX}/" for convenience purposes. For EAPI values prior to EAPI 3 which do not support ${ED}, helpers use ${D} where they would otherwise use ${ED}. Do not modify this variable. |
| EPREFIX | Contains the offset that this Portage was configured for during installation. The offset is sometimes necessary in an ebuild or eclass, and is available in such cases as ${EPREFIX}. EPREFIX does not contain a trailing slash, therefore an absent offset is represented by the empty string. Do not modify this variable. |
| EROOT | Contains "${ROOT%/}${EPREFIX}/" for convenience purposes. Do not modify this variable. |
All helpers now die automatically whenever some sort of error occurs. Helper calls may be prefixed with the 'nonfatal' helper in order to prevent errors from being fatal.
In EAPI 4, the package manager may optionally compress a subset of the files under the D directory. To control which directories may or may not be compressed, the package manager shall maintain two lists:
An inclusion list, which initially contains /usr/share/doc, /usr/share/info and /usr/share/man.
An exclusion list, which initially contains /usr/share/doc/${PF}/html.
The optional compression shall be carried out after src_install has completed, and before the execution of any subsequent phase function. For each item in the inclusion list, pretend it has the value of the D variable prepended, then:
If it is a directory, act as if every file or directory immediately under this directory were in the inclusion list.
If the item is a file, it may be compressed unless it has been excluded as described below.
If the item does not exist, it is ignored.
Whether an item is to be excluded is determined as follows: For each item in the exclusion list, pretend it has the value of the D variable prepended, then:
If it is a directory, act as if every file or directory immediately under this directory were in the exclusion list.
If the item is a file, it shall not be compressed.
If the item does not exist, it is ignored.
The package manager shall take appropriate steps to ensure that its compression mechanisms behave sensibly even if an item is listed in the inclusion list multiple times, if an item is a symlink, or if a file is already compressed.
The following commands may be used in src_install to alter these lists. It is an error to call any of these functions from any other phase.
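In EAPI 4 this is done with the docompress helper (docompress adds to the inclusion list, docompress -x to the exclusion list). A minimal sketch of its use in src_install, with hypothetical paths:

src_install() {
    default
    # Add an extra directory to the compression inclusion list.
    docompress /usr/share/${PN}/extra-docs
    # Keep the examples directory uncompressed.
    docompress -x /usr/share/doc/${PF}/examples
}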
The doins and newins helpers now preserve symlinks. In earlier EAPIs symlinks are dereferenced rather than preserved.
When the doman helper is called with the -i18n option, this takes precedence over the filename language suffix.
The econf helper now adds --disable-dependency-tracking to the configure arguments if the string disable-dependency-tracking occurs in the output of configure --help.
When the RDEPEND variable is unset within an ebuild, it will remain empty. In prior EAPIs, if RDEPEND was left unset then it was implicitly set to the value of DEPEND.
In a 3-style use dependency, the flag name may immediately be followed by a default specified by either (+) or (-). The former indicates that, when applying the use dependency to a package that does not have the flag in question in IUSE_REFERENCEABLE, the package manager shall behave as if the flag were present and enabled; the latter, present and disabled.
Unless a 3-style default is specified, it is an error for a use dependency to be applied to an ebuild which does not have the flag in question in IUSE_REFERENCEABLE.
Note: By extension of the above, a default that could reference an ebuild using an EAPI not supporting profile IUSE injections cannot rely upon any particular behaviour for flags that would not have to be part of IUSE.
It is an error for an ebuild to use a conditional use dependency when that ebuild does not have the flag in IUSE_EFFECTIVE.
This new REQUIRED_USE metadata key is used to specify USE flag combinations that are disallowed for a specific package.
It is a semi-common occurrence that an ebuild needs to state that it disallows USE flags in specific combinations: either mysql or sqlite, for example, but not both.
Existing solutions rely on checking the USE configuration in pkg_setup, which is non-optimal because pkg_setup is potentially run hours after the initial emerge -p invocation.
Current versions of EAPI 4 support a phase hook, pkg_pretend, that is intended to move pre-build checks to just after resolution. It has been proposed that pkg_pretend should continue the tradition of ad hoc shell code validating the USE state. This too is non-optimal, for the following reasons:
The only way to find out whether the USE state is disallowed is to run the code.
The common implementation of this can result in an iterative process where the user hits a USE constraint, fixes it, and reruns the emerge invocation, only to find that another constraint is still violated for the ebuild, requiring them to fix it, rerun emerge, and so on.
For a package manager to classify the error, the only option it has is to try to parse ad hoc output written by an ebuild developer. This effectively prevents the package manager from providing a more informative error message. A simple example would be if the package manager wanted to integrate the flag descriptions from use.desc/use.local.desc; this would be effectively impossible.
Fundamentally these constraints are data, yet they are being encoded as executable code. This effectively blocks the possibility of doing a wide variety of QA/tree scans. For example, it blocks the possibility of sanely scanning for USE-flag-induced hard dependency cycles, because the tools in question cannot get that information out of ad hoc shell code. More importantly, if the manager cannot know what the allowed USE states are for the ebuild in question, this eliminates the possibility of ever sanely breaking dependency cycles caused by USE flags.
Just as .sh scripts are considered a poor archival form due to their opaqueness, pkg_setup and pkg_pretend aren't a proper solution for this. pkg_pretend in particular makes the situation slightly worse, since ebuild developers are expected to convert their ebuilds to the pkg_pretend form when using EAPI 4. In doing so they have to do the work without the gains REQUIRED_USE provides, and then repeat the same conversion work when REQUIRED_USE lands in a later EAPI.
Essentially, REQUIRED_USE is proposed to be an analogue of DEPEND-style syntax: a list of assertions that must be met for a USE configuration to be valid for this ebuild. For example, to state "if build is set, python must be unset":
REQUIRED_USE="build? ( !python )"
To state "either mysql or sqlite must be set, but not both":
REQUIRED_USE="mysql? ( !sqlite ) !mysql? ( sqlite )"
Note that the mysql/sqlite relationship is that of an exclusive OR (XOR). While an XOR can be formed from existing syntax, it is suggested that a specific operator be added for this case using ^^. Reformatting the "mysql or sqlite, but not both" example with XOR results in:
REQUIRED_USE="^^ ( mysql sqlite )"
Like any block operator, this can be combined with other constraints. For example, if the user has flipped on the client flag, one GUI must be chosen:
REQUIRED_USE="client? ( ^^ ( gtk qt motif ) )"
If the package is implemented sanely and requires at least one GUI, but can support multiple, it would be:
REQUIRED_USE="client? ( || ( gtk qt motif ) )"
Because ARCH is integrated into the USE space, this also allows specifying corner cases like "at least one GUI must be specified, but on mips only one GUI can be specified":
REQUIRED_USE="client? ( !mips? ( || ( gtk qt motif ) ) mips? ( ^^ ( gtk qt motif ) ) )"
Please note that the AND operator is of course supported. If, to enable client, you must choose at least one GUI and enable the python bindings, the syntax would be:
REQUIRED_USE="client? ( python || ( gtk qt motif x11 ) )"
Finally, please note that this new metadata key can be set by eclasses, and the inherit implementation should protect the eclass-set value just as eclass-defined DEPEND is protected.
The pkg_pretend function may be used to carry out sanity checks early on in the install process. For example, if an ebuild requires a particular kernel configuration, it may perform that check in pkg_pretend and call eerror and then die with appropriate messages if the requirement is not met.
pkg_pretend is run separately from the main phase function sequence, and does not participate in any kind of environment saving. There is no guarantee that any of an ebuild's dependencies will be met at this stage, and no guarantee that the system state will not have changed substantially before the next phase is executed.
pkg_pretend must not write to the filesystem.
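A minimal sketch of such a check follows; the USE flag, kernel option, and messages are purely illustrative, and the check only reads from the filesystem.

pkg_pretend() {
    if use acl && ! grep -qs "^CONFIG_EXT4_FS_POSIX_ACL=y" "${ROOT}"/usr/src/linux/.config ; then
        eerror "This package needs CONFIG_EXT4_FS_POSIX_ACL enabled in the kernel."
        die "Missing required kernel configuration"
    fi
}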
src_install() {
    if [[ -f Makefile || -f GNUmakefile || -f makefile ]] ; then
        emake DESTDIR="${D}" install
    fi
    if ! declare -p DOCS &>/dev/null ; then
        local d
        for d in README* ChangeLog AUTHORS NEWS TODO CHANGES \
                THANKS BUGS FAQ CREDITS CHANGELOG ; do
            [[ -s "${d}" ]] && dodoc "${d}"
        done
    elif [[ $(declare -p DOCS) == "declare -a "* ]] ; then
        dodoc "${DOCS[@]}"
    else
        dodoc ${DOCS}
    fi
}
For any of the src_* phases that executes after src_unpack, it is invalid for the S variable to refer to a non-existent directory. However, these src_* phases are exempt from this requirement if none of the prior src_* phases are defined by the ebuild. When a src_* phase is exempt from this requirement, if the S variable does not refer to an existing directory, the WORKDIR directory will be used instead of S as the initial working directory.
The AA and KV variables are no longer exported to the ebuild environment.
The MERGE_TYPE variable indicates the type of package that is being merged. Possible values are: "source" if building and installing a package from source, "binary" if installing a binary package, and "buildonly" if building a binary package without installing it.
The REPLACING_VERSIONS variable shall be defined in pkg_preinst and pkg_postinst. In addition, it may be defined in pkg_pretend and pkg_setup, although ebuild authors should take care to handle binary package creation and installation correctly when using it in these phases.
REPLACING_VERSIONS is a list, not a single optional value, to handle pathological cases such as installing foo-2:2 to replace foo-2:1 and foo-3:2.
The REPLACED_BY variable shall be defined in pkg_prerm and pkg_postrm. It shall contain at most one value.
In order to represent cases in which an upgrade to a new version of a package requires reverse dependencies to be rebuilt, the SLOT variable may contain an optional "sub-slot" ABI part that is delimited by a '/' character.
For example, the package 'dev-libs/glib-2.30.2' may set SLOT="2/2.30" in order to indicate a sub-slot value of "2.30". This package will be matched by dependency atoms such as 'dev-libs/glib:2' or 'dev-libs/glib:2/2.30', where the sub-slot part of the atom is optional.
If SLOT does not contain a sub-slot part, then it is considered to have an implicit sub-slot that is equal to the SLOT value. For example, SLOT="0" is implicitly equal to SLOT="0/0".
Refer to the := operator documentation for more information about sub-slot usage.
Dependency atom syntax now supports slot/sub-slot := operators which allow the specific slot/sub-slot that a package is built against to be recorded, so that it's possible to automatically determine when a package needs to be rebuilt due to having a dependency upgraded to a different slot/sub-slot.
For example, if a package is built against the package 'dev-libs/glib-2.30.2' with SLOT="2/2.30", then dependency atoms such as 'dev-libs/glib:=' or 'dev-libs/glib:2=' will be rewritten at build time to be recorded as 'dev-libs/glib:2/2.30='.
For another example, if a package is built against the package 'sys-libs/db-4.8.30' with SLOT="4.8", then a dependency atom such as 'sys-libs/db:=' will be rewritten at build time to be recorded as 'sys-libs/db:4.8/4.8='. In this case, since SLOT="4.8" does not contain a sub-slot part, the sub-slot is considered to be implicitly equal to "4.8".
When dependencies are rewritten as described above, the slot/sub-slot recorded in the atom is always equal to that of the highest matched version that is installed at build time.
The new :* operator is used to express dependencies that can change versions at runtime without requiring reverse dependencies to be rebuilt. For example, a dependency atom such as 'dev-libs/glib:*' can be used to match any slot of the 'dev-libs/glib' package, and a dependency atom such as 'dev-libs/glib:2*' can be used to specifically match slot '2' of the same package (ignoring its sub-slot).
The new at-most-one-of operator consists of the string '??', and is satisfied if zero or one (but no more) of its child elements is matched.
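For example, a hypothetical REQUIRED_USE constraint allowing at most one of two GUI flags to be enabled:

REQUIRED_USE="?? ( gtk qt5 )"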
The SLOT variable may contain an optional sub-slot part that follows the regular slot and is delimited by a / character. The sub-slot must be a valid slot name. The sub-slot is used to represent cases in which an upgrade to a new version of a package with a different sub-slot may require dependent packages to be rebuilt. When the sub-slot part is omitted from the SLOT definition, the package is considered to have an implicit sub-slot which is equal to the regular slot.
Refer to the slot operators documentation for more information about sub-slot usage.
A slot dependency may contain an optional sub-slot part that follows the regular slot and is delimited by a / character. An operator slot dependency consists of a colon followed by one of the following operators:
* Indicates that any slot value is acceptable. In addition, for runtime dependencies, indicates that the package will not break if the matched package is uninstalled and replaced by a different matching package in a different slot.
= Indicates that any slot value is acceptable. In addition, for runtime dependencies, indicates that the package will break unless a matching package with slot and sub-slot equal to the slot and sub-slot of the best installed version at the time the package was installed is available.
slot= Indicates that only a specific slot value is acceptable, and otherwise behaves identically to the plain equals slot operator.
To implement the equals slot operator, the package manager will need to store the slot/sub-slot pair of the best installed version of the matching package. This syntax is only for package manager use and must not be used by ebuilds. The package manager may do this by inserting the appropriate slot/sub-slot pair between the colon and equals sign when saving the package's dependencies. The sub-slot part must not be omitted here (when the SLOT variable omits the sub-slot part, the package is considered to have an implicit sub-slot which is equal to the regular slot).
IUSE_EFFECTIVE is a variable calculated from IUSE and a variety of other sources described below. It is purely a conceptual variable; it is not exported to the ebuild environment. Values in IUSE_EFFECTIVE may legally be used in queries about an ebuild's state (for example, for use dependencies, for the use function, and for use in dependency specification conditional blocks).
For EAPIs that support profile defined IUSE injection, IUSE_EFFECTIVE contains the following values:
All values in the calculated IUSE value.
All values in the profile IUSE_IMPLICIT variable.
All values in the profile variable named USE_EXPAND_VALUES_${v}, where ${v} is any value in the intersection of the profile USE_EXPAND_UNPREFIXED and USE_EXPAND_IMPLICIT variables.
All values for ${lower_v}_${x}, where ${x} is all values in the profile variable named USE_EXPAND_VALUES_${v}, where ${v} is any value in the intersection of the profile USE_EXPAND and USE_EXPAND_IMPLICIT variables and ${lower_v} is the lowercase equivalent of ${v}.
Table 6.8. Example Variable Settings
| Variable | Value |
| --- | --- |
| IUSE_IMPLICIT | prefix selinux |
| USE_EXPAND | ELIBC KERNEL USERLAND |
| USE_EXPAND_UNPREFIXED | ARCH |
| USE_EXPAND_IMPLICIT | ARCH ELIBC KERNEL USERLAND |
| USE_EXPAND_VALUES_ARCH | amd64 ppc ppc64 x86 x86-fbsd x86-solaris |
| USE_EXPAND_VALUES_ELIBC | FreeBSD glibc |
| USE_EXPAND_VALUES_KERNEL | FreeBSD linux SunOS |
| USE_EXPAND_VALUES_USERLAND | BSD GNU |
In profile directories with an EAPI supporting stable masking, new USE configuration files are supported: use.stable.mask, use.stable.force, package.use.stable.mask and package.use.stable.force. These files behave similarly to previously supported USE configuration files, except that they only influence packages that are merged due to a stable keyword.
The --disable-silent-rules option will automatically be passed to configure by econf if the string disable-silent-rules occurs in the output of configure --help.
Standard input is read when the first parameter is - (a hyphen).
The --host-root option will cause the query to apply to the host root instead of ROOT.
Installs the given header files into /usr/include/, by default with file mode 0644. This can be overridden by setting INSOPTIONS with the insopts function.
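A short sketch of its use in src_install; the header names are made up, and the insopts call changes the installation mode as described above:

src_install() {
    # Install public headers read-only instead of the default 0644.
    insopts -m0444
    doheader libfoo.h libfoo-version.h
}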
Here we'll go over each QA notice and what you (as a developer) can do to fix the issue. If you're a user, you should of course go file a bug. We'll only cover the non-obvious notices here.
In pretty much all cases, you should try and get these issues resolved upstream rather than simply fixing them in our ebuilds.
QA Notice: The following files contain insecure RUNPATHs
Some of the ELFs that would be installed on the system have insecure dynamic RUNPATH tags. RUNPATH tags are a hardcoded list of filesystem paths that will be searched at runtime when the ELF is executed. If the ELF has a world accessible directory hardcoded in it, then a malicious person can inject code at runtime by adding their own libraries to the directory.
Here are some of the common problems and their solutions.
Libtool - old versions of libtool would use too many -rpath flags
Solution: Regenerate the autotool code
Perl - some versions of perl would use incorrect -rpath flags
Solution: Upgrade system perl build modules
Crappy build system - the custom build system uses -rpath incorrectly
Solution: Review the LDFLAGS in the build system and make them not suck
Crappy ebuild - the ebuild installs ELFs instead of using the package's build system
Solution: Fix the crappy ebuild to use the package's build system
QA Notice: The following files contain runtime text relocations
Please see the Gentoo Hardened PIC Fix Guide.
QA Notice: The following files contain executable stacks
Please see the Gentoo Hardened GNU Stack Guide.
QA Notice: The following shared libraries lack a SONAME
A shared library that you would link against lacks an ELF SONAME tag. With simpler libraries this can be acceptable, but with any sort of ABI-sane setup, you need the SONAME tag. This tag is how the system linker tells the loader what libraries a program needs at runtime. With a missing SONAME, the linker needs to guess, and in many cases this guess will not work for long.
To fix this issue, make sure the shared library is linked with the proper -Wl,-soname,... flag. You will need to replace the ... part with the actual ABI name. For example, if the library is named libfoo.so.1.2.3, you will probably want to specify -Wl,-soname,libfoo.so.1.
Note that this warning only applies to shared libraries that you would link against. It certainly does not apply to plugins that you would dynamically load. However, plugins should not exist in the main library directory, but rather in an application-specific subdirectory of the library directory. In other words, it should be /usr/lib/app/plugin.so rather than /usr/lib/plugin.so.
QA Notice: The following shared libraries lack NEEDED entries
This warning comes up when a library does not actually seem to need any other libraries in order to run. Rarely is this true as almost every library will need at least the system C library.
Once you've determined that the library is indeed being generated incorrectly, you will need to dig into the build system to make sure that it pulls in the libraries it needs. Often times, this is because the build system invokes the system linker (ld) directly instead of the system compiler driver (gcc).
QA Notice: Unresolved soname dependencies
This warning comes up when a library or executable has one or more soname dependencies (found in its NEEDED.ELF.2 metadata) that could not be resolved by usual means. If you run ldd on files like these then it will report a "not found" error for each unresolved soname dependency. In order to correct problems with soname dependency resolution, use one or more of the approaches described in the following sections.
Content of the NEEDED.ELF.2 metadata file may be useful for debugging purposes. Find the NEEDED.ELF.2 file in the ${D}/../build-info/ directory after the ebuild src_install phase completes, or in the /var/db/pkg/*/*/ directory for an installed package. Each line of the NEEDED.ELF.2 file contains semicolon separated values for a single ELF file. The soname dependencies are found in the DT_NEEDED column:
E_MACHINE;path;DT_SONAME;DT_RUNPATH;DT_NEEDED;multilib category
For packages that install pre-built binaries, it may be possible to resolve soname dependencies simply by adding dependencies for one or more other packages that are known to provide the needed sonames.
For packages that install pre-built binaries, it may be possible to resolve soname dependencies simply by removing unnecessary files which have unresolved soname dependencies. For example, some pre-built binary packages include binaries intended for irrelevant architectures or operating systems, and these files can simply be removed because they are unnecessary.
If the relevant dependencies are installed in a location that is not included in the dynamic linker search path, then it's necessary for files to include a DT_RUNPATH entry which refers to the appropriate directory. The special $ORIGIN value can be used to create a relative path reference in DT_RUNPATH, where $ORIGIN is a placeholder for the directory where the file having the DT_RUNPATH entry is located.
For pre-built binaries, it may be necessary to fix up DT_RUNPATH using patchelf --set-rpath. For example, use patchelf --set-rpath '$ORIGIN' if a given binary should link to libraries found in the same directory as the binary itself, or use patchelf --set-rpath '$ORIGIN/libs' if a given binary should link to libraries found in a subdirectory named libs found in the same directory as the binary itself.
For binaries built from source, a flag like -Wl,-rpath,/path/of/directory/containing/libs will create binaries with the desired DT_RUNPATH entry.
If a package installs dynamic libraries which do not set DT_SONAME, then this can lead to unresolved soname dependencies. For dynamic libraries built from source, a flag like -Wl,-soname=foo.so.1 will create a DT_SONAME setting. For pre-built dynamic libraries, it may be necessary to fix up DT_SONAME using patchelf --set-soname.
It may be necessary to adjust Portage soname resolution logic in order to account for special circumstances. For example, Portage soname resolution tolerates missing DT_SONAME for dynamic libraries that a package installs in a directory that its binaries reference via DT_RUNPATH. This behavior is useful for packages that have internal dynamic libraries stored in a private directory. An example is ebtables, as discussed in bug 646190.
QA Notice: Found an absolute symlink in a library directory
Absolute symlinks can cause problems when working with cross-compiler systems or when accessing systems in a different ROOT directory. If you want to use symlinks in library directories, please use either a relative symlink or a linker script.
If you have a library installed into /lib/ and you want to have it accessible in /usr/lib/, then you should generate a linker script so that the system toolchain can handle it properly. Please see the linker script section for more information.
QA Notice: Missing gen_usr_ldscript
If you have a shared library in /lib/ and a static library in /usr/lib/, but no linker script in /usr/lib/, then the toolchain will choose the incorrect version when linking. The system linker will find the static library first and not bother searching for a dynamic version. To overcome this, you need to use the gen_usr_ldscript function found in the toolchain-funcs.eclass. Refer to the man page for information on how to use it. See this bug report for some history on this issue.
QA Notice: Excessive files found in the / partition
You should not store files that are not critical to boot and recovery in the root filesystem. This means that static libraries and libtool scripts do not belong in the /lib/ directory. Fix your ebuild so it does not install there.
QA Notice: ... appears to contain PORTAGE_TMPDIR paths
Older versions of libtool would incorrectly record the build and/or install directory in the libtool script (*.la). This would lead to problems when building other things against your package as libtool would be confused by the old paths.
You may be able to cheat and use the elibtoolize function in the libtool.eclass. However, if that does not help, you will probably need to regenerate all of the autotool files.
QA Notice: Package has poor programming practices which may compile fine but exhibit random runtime failures. ...: warning: dereferencing type-punned pointer will break strict-aliasing rules
This warning crops up when code starts casting distinct pointer types and then dereferencing them. Generally, this is a violation of aliasing rules which are part of the C standard. Historically, these warnings did not show up as the optimization was not turned on by default. With gcc-4.1.x and newer though, the -O2 optimization level enables strict aliasing support. For information, please review these links: NetBSD Explanation, Gentoo Dev Thread, GCC Docs, Practical examples.
To fix this issue, use the methods proposed in the links mentioned earlier. If you're unable to do so, then a work around would be to append the gcc -fno-strict-aliasing flag to CFLAGS in the ebuild.
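One way to apply that workaround in an ebuild is via the flag-o-matic eclass; this is only a sketch, and the choice of phase and the rest of the ebuild are assumed:

inherit flag-o-matic

src_configure() {
    # Work around strict-aliasing violations in the upstream code.
    append-cflags -fno-strict-aliasing
    default
}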
QA Notice: Package has poor programming practices which may compile fine but exhibit random runtime failures. ...: warning: implicit declaration of function ... ...: warning: incompatible implicit declaration of built-in function ...
Your code is calling functions which lack prototypes. In C++, this would have been a build failure, but C is lazy so you just get a warning. This can be a problem as gcc has to guess at what sort of arguments a function takes based upon how it was called and often times, this is not the same as what the function actually takes. The function return type is also unknown so it's just assumed to be an integer (which is often times wrong). This can get to be a problem when the size of the types guessed do not actually match the size of the types the function expects. Generally, this corresponds directly to proper coding practices (and the lack thereof). Also, by including proper prototypes, the compiler often helps by checking types used, proper number of arguments passed, etc...
To fix this, just include the proper header files for the functions in question. If the function is a package-specific one, then you may have to create a header/function prototype for it.
QA Notice: Package has poor programming practices which may compile fine but exhibit random runtime failures. ...: warning: is used uninitialized in this function
This means code uses a variable without actually setting it first. In other words, the code is basically using random garbage.
The fix here is simple: make sure variables are initialized properly before using them.
QA Notice: Package has poor programming practices which may compile fine but exhibit random runtime failures. ...: warning: comparisons like X<=Y<=Z do not have their mathematical meaning
This warning crops up either when the programmer expected the expression to work or they just forgot to use sufficient parentheses. For example, the following code snippets are wrong (we won't get into the technical argument of this being valid C code; just change the code to not be ambiguous).
if (x <= y <= z) ...; if (a < b <= c) ...;
To fix this, read the code to figure out what exactly the programmer meant.
QA Notice: Package has poor programming practices which may compile fine but exhibit random runtime failures. ...: warning: null argument where non-null required
Many functions take pointers as arguments and require that the pointer never be NULL. To this end, you can declare function prototypes that instruct the compiler to do simple checks to make sure people do not incorrectly call the function with NULL values. This warning pops up when someone calls a function and they use NULL when they should not. Depending on the library, the function may actually crash (they told you not to use NULL after-all, so it's your fault :P).
You will need to read the code and fix it so that it does not incorrectly call the relevant functions with NULL values.
QA Notice: Package has poor programming practices which may compile but will almost certainly crash on 64bit architectures.
A large portion of code in the open source world is developed on the 32bit x86 architecture. Unfortunately, this has led to many pieces of code not handling pointer types properly. When compiled and run on a 64bit architecture, the code in question will probably crash horribly. Some common examples are assuming that an integer type is large enough to hold pointers. This is true on 32bit architectures (an integer can hold 32bits and a pointer is 32bits big), but not true on 64bit architectures (an integer still holds just 32bits, but a pointer is 64bits big).
Since this issue can manifest itself in many ways (as there are many ways to improperly truncate a pointer), you will need to read the source code starting with the displayed warning. Make sure types are declared, used, and passed properly. Make sure that all function prototypes are found (see the Implicit Declarations section for more information). So on and so forth.