Let's begin with a Linux x86-64 example involving global variables exhibiting various properties such as read-only versus writable, zero-initialized versus non-zero, and more.
```c
#include <stdio.h>
```
```sh
% clang -c -fpie a.c
```
(We will discuss `-Wl,-z,separate-loadable-segments` later.)
We can see that these functions and global variables are placed in different sections.

- `.rodata`: read-only data without dynamic relocations, constant in the link unit
- `.text`: functions
- `.data.rel.ro`: read-only data associated with dynamic relocations, constant after relocation resolving, part of the `PT_GNU_RELRO` segment
- `.data`: writable data
- `.bss`: writable data known to be zeros
## Section and segment layout
TODO: I may write more about how linkers lay out sections and segments.
Anyhow, the linker will place `.data` and `.bss` in the same `PT_LOAD` program header (segment) and the rest into different `PT_LOAD` segments. (There are some nuances. If you use GNU ld's `-z noseparate-code` or lld's `--no-rosegment`, `.rodata` and `.text` will be placed in the same `PT_LOAD` segment.)
The `PT_LOAD` segments have different flags (`p_flags`): `PF_R`, `PF_R|PF_X`, `PF_R|PF_W`.
Subsequently, the dynamic loader (also known as the dynamic linker) will invoke `mmap` to map the file into memory using the permissions specified by `p_flags`. For a `PT_LOAD` segment, its associated memory area starts at `alignDown(p_vaddr, pagesize)` and ends at `alignUp(p_vaddr+p_memsz, pagesize)`.
```
Start Addr           End Addr       Size     Offset  Perms  objfile
```
Let's assume the page size is 4096 bytes. We'll calculate the `alignDown(p_vaddr, pagesize)` values and display them alongside the "Start Addr" values:
```
Start Addr           alignDown(p_vaddr, pagesize)
```
We observe that the start address equals the base address plus `alignDown(p_vaddr, pagesize)`.
## `--no-rosegment`
```
Start Addr           End Addr       Size     Offset  Perms  objfile
```
## MAXPAGESIZE
A page is the granularity at which memory permissions apply; within a single page, permissions cannot vary. Using the previous example where `p_align` is 4096, if the actual page size is larger, for example 65536 bytes, the program might crash.
Typically, the dynamic loader allocates memory for the first `PT_LOAD` segment (`PF_R`) at a specific address allocated by the kernel. Subsequent `PT_LOAD` segments then overwrite the previous memory regions. Consequently, certain code pages or significant global variables might be replaced by garbage, leading to a crash.
So, how can we create a link unit that works across different page sizes? We simply determine the maximum page size, let's say 2097152, and pass `-z max-page-size=2097152` to the linker. The linker will then set the `p_align` values of `PT_LOAD` segments to MAXPAGESIZE.
```
Program Headers:
```
In a linker script, the max-page-size value can be obtained using `CONSTANT(MAXPAGESIZE)`.
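For instance, a GNU ld linker script fragment (a sketch, not a complete script) can align the location counter to the next MAXPAGESIZE boundary before placing writable data:

```
SECTIONS {
  /* ... read-only and executable sections ... */
  /* Advance to the next MAXPAGESIZE boundary so writable data starts a new
     maximally-aligned page. */
  . = ALIGN(CONSTANT(MAXPAGESIZE));
  .data : { *(.data .data.*) }
}
```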
## `-z separate-loadable-segments`
In previous examples using `-z separate-loadable-segments`, the `p_vaddr` values of `PT_LOAD` segments are multiples of MAXPAGESIZE. The generic ABI says "loadable process segments must have congruent values for p_vaddr and p_offset, modulo the page size."

- `p_offset`: This member gives the offset from the beginning of the file at which the first byte of the segment resides.
- `p_vaddr`: This member gives the virtual address at which the first byte of the segment resides in memory.

This alignment requirement matches the `mmap` documentation. For example, the Linux man-pages specify, "offset must be a multiple of the page size as returned by sysconf(_SC_PAGE_SIZE)."
The `p_offset` values are also multiples of MAXPAGESIZE. After laying out a `PT_LOAD` segment, the linker must pad the end by inserting zeros so that the next `PT_LOAD` segment starts at a multiple of MAXPAGESIZE.
However, the alignment padding is wasteful. Fortunately, we can link `a.o` using different MAXPAGESIZE and different alignment settings: `-z noseparate-code`, `-z separate-code`, `-z separate-loadable-segments`.
```sh
clang -pie -fuse-ld=lld -Wl,-z,noseparate-code a.o -o a0.4096
```
```sh
% stat -c %s a0.4096 a0.65536 a0.2097152
```
We can derive two properties:

- Under one MAXPAGESIZE, we have `size(noseparate-code) < size(separate-code) < size(separate-loadable-segments)`.
- For `-z noseparate-code`, increasing MAXPAGESIZE does not change the output size.
## `-z noseparate-code`
How does `-z noseparate-code` work? Let's illustrate this with an example. At the end of the read-only `PT_LOAD` segment, the address is 0x628. Instead of starting the next segment at `alignUp(0x628, MAXPAGESIZE) = 0x1000`, we start at `alignUp(0x628, MAXPAGESIZE) + 0x628 % MAXPAGESIZE = 0x1628`.
Since the `.text` section has an alignment (`sh_addralign`) of 16, we start at 0x1630. Although the address is advanced beyond necessity, the file offset (congruent to the address, modulo MAXPAGESIZE) can be decreased to 0x630, merely 8 bytes (due to alignment padding) after the previous section's end.
Moving forward, the end of the executable `PT_LOAD` segment has an address of 0x17b0. Instead of starting the next segment at `alignUp(0x17b0, MAXPAGESIZE) = 0x2000`, we start at `alignUp(0x17b0, MAXPAGESIZE) + 0x17b0 % MAXPAGESIZE = 0x27b0`. While we advance the address more than needed, the file offset can be decreased to 0x7b0, precisely at the previous section's end.
```sh
% readelf -WSl a0.4096
```
`-z separate-code` performs the trick when transitioning from the first RW `PT_LOAD` segment to the second, whereas `-z separate-loadable-segments` doesn't.
## When MAXPAGESIZE is larger than the actual page size
Let's consider two adjacent `PT_LOAD` segments. The memory area associated with the first segment ends at `alignUp(load[i].p_vaddr+load[i].p_memsz, pagesize)` while the memory area associated with the second one starts at `alignDown(load[i+1].p_vaddr, pagesize)`. When the actual page size equals MAXPAGESIZE, the two addresses are identical. However, if the actual page size is smaller, a gap emerges between these addresses.
A typical link unit generally presents three gaps. These gaps might either be unmapped or mapped. When mapped, they necessitate `struct vm_area_struct` objects within the Linux kernel. As of Linux 6.3.13, the size of `struct vm_area_struct` is 152 bytes. For instance, 10000 mapped object files would require `10000 * 3 * sizeof(struct vm_area_struct) = 4,560,000 bytes`, signifying a considerable memory footprint. You can refer to Extra struct vm_area_struct with ---p created when PAGE_SIZE < max-page-size.
Dynamic loaders typically invoke `mmap` with `PROT_READ`, encompassing the whole file, followed by multiple `mmap` calls using `MAP_FIXED` and the corresponding flags. When dynamic loaders, like musl, don't process gaps, the gaps retain `r--p` permissions. However, in glibc's `elf/dl-map-segments.h`, the `has_holes` code employs `mprotect` to transition permissions from `r--p` to `---p`.
While `---p` might be perceived as a security enhancement, personally, I don't believe it significantly impacts exploitability. There might be numerous gadgets in `r-xp` areas, but reducing gadgets in `r--p` areas doesn't seem notably impactful. (https://isopenbsdsecu.re/mitigations/rop_removal/)
## Unmap the gap
When the Linux kernel loads the executable and its interpreter (if present) (`fs/binfmt_elf.c`), the gap gets unmapped, thereby freeing a `struct vm_area_struct` object. Implementing a similar approach in dynamic loaders could yield comparable savings. However, unmapping the gap carries the risk of an unrelated future `mmap` occupying the gap:
```
564d8e90f000-564d8e910000 r--p 00000000 08:05 2519504 /sample/build/main
```
It is not clear whether the potential occurrence of an unrelated mmap is considered a security regression. Personally, I don't think this poses a significant issue, as the program does not access the gaps. This property can be guaranteed for direct access when input relocations to the linker use symbols with in-bounds addends (e.g. when x is defined relative to an input section, we know `R_X86_64_PC32(x)` must be in-bounds).
However, some programs may expect contiguous mapped areas of a file (such as when glibc's `link_map::l_contiguous` is set to 1). Does this choice render the program exploitable if an attacker can place a map within the gap instead of outside the file? It seems to me that they could achieve everything with a map outside of the file. Having said that, the presence of an unrelated map between maps associated with a single file descriptor remains odd, so it's preferable to avoid it if possible.
## Extend the memory area to cover the gap
This appears to be the best solution. When creating a memory area, instead of setting the end to `alignUp(load[i].p_vaddr+load[i].p_memsz, pagesize)`, we can extend the end to `min(alignDown(load[i+1].p_vaddr, pagesize), alignUp(file_end_addr, pagesize))`.
```
564d8e90f000-564d8e91f000 r--p 00000000 08:05 2519504 /sample/build/main (the end is extended)
```
For the last `PT_LOAD` segment, we could also just use `alignDown(load[i+1].p_vaddr, pagesize)` and ignore `alignUp(file_end_addr, pagesize)`. Accessing a byte beyond the backed file will result in a `SIGBUS` signal.
## A new linker option?
Personally, I favor the end-extending approach. I've also pondered whether this falls under the purview of linkers. Such a change seems intrusive and unsightly. If the linker extends `p_memsz` to cover the gap, should it also extend `p_filesz`?
- If it doesn't, we create a `PT_LOAD` with `p_filesz < p_memsz` that is not for BSS, which is weird.
- If it does, we have an output file featuring overlapping file offset ranges, which is weird as well.
Moreover, a `PT_LOAD` whose end isn't backed by a section is unusual. I'm concerned that many binary manipulation tools may not handle this case correctly. A linker script can intentionally create discontiguous address ranges, and I doubt the linker can discern such cases with intelligent logic for `p_filesz`/`p_memsz`.
This feature request seems to be within the realm of loaders, and specific information, such as the page size, is only accessible to loaders. I believe loaders are better equipped to handle this task.
## Transparent huge pages for mapped files
Some programs optimize their usage of the limited Translation Lookaside Buffer (TLB) by employing transparent huge pages. When the Linux kernel loads an executable, it takes the `p_align` field into account to create a memory area. If `p_align` is 4096, the memory area will commence at a multiple of 4096, but not necessarily at a multiple of a huge page.
Transparent huge pages for mapped files require both the start address and the start file offset to be aligned to a huge page. To ensure compatibility with `MADV_HUGEPAGE`, linking the executable using `-z max-page-size=` with the huge page size is recommended. However, in `-z noseparate-code` layouts, the file content might start somewhere within the first page, potentially wasting half a huge page on unrelated content.
Switching to `-z separate-code` allows reclaiming the benefits of the half huge page but increases the file size. Balancing these aspects poses a challenge. One potential solution is using `fallocate(FALLOC_FL_PUNCH_HOLE)`, which introduces complexity into the linker. However, this approach feels like a workaround for a kernel limitation. It would be preferable if a file-backed huge page didn't necessitate a file offset aligned to a huge page boundary.
## Cost of RELRO
To accommodate `PT_GNU_RELRO`, the RW `PT_LOAD` segment will have two permission sets after the runtime linker maps the program. While lld employs two explicit RW `PT_LOAD` segments, GNU ld provides one RW segment that is split by the runtime linker. Ultimately, the effects of lld and GNU ld are similar.
Due to RELRO, covering the two RW `PT_LOAD` segments necessitates a minimum of 2 (huge) pages. In contrast, without RELRO, only one (huge) page is required at minimum. This means potentially wasting up to MAXPAGESIZE-1 bytes, which could otherwise be utilized to cover more data.
Nowadays, RELRO is considered a security baseline and removing it might unsettle security-minded individuals.