time to bleed by Joe Damato

technical ramblings from a wanna-be unix dinosaur

Hot patching inlined functions with x86_64 asm metaprogramming


If you enjoy this article, subscribe (via RSS or e-mail) and follow me on twitter.


The tricks, techniques, and ugly hacks in this article are PLATFORM SPECIFIC, DANGEROUS, and NOT PORTABLE.

This article will make reference to information in my previous article, Rewrite your Ruby VM at runtime to hot patch useful features, so be sure to check it out if you find yourself lost during this article.

Also, this might not qualify as metaprogramming in the traditional definition[1], but this article will show how to generate assembly at runtime that works well with the particular instructions generated for a binary. In other words, the assembly is constructed based on data collected from the binary at runtime. When I explained this to Aman, he called it assembly metaprogramming.


This article expands on a previous article by showing how to hook functions which are inlined by the compiler. This technique can be applied to other binaries, but the binary in question is Ruby Enterprise Edition 1.8.7. The use case is to build a memory profiler without requiring patches to the VM, but just a Ruby gem.

It’s on GitHub

The memory profiler is NOT DONE, yet. It will be soon. Stay tuned.

The code described here is incorporated into a Ruby Gem which can be found on github: http://github.com/ice799/memprof specifically at: http://github.com/ice799/memprof/blob/master/ext/memprof.c#L202-318

Overview of the plan of attack

The plan of attack is relatively straightforward:

  1. Find the inlined code.
  2. Overwrite part of it to redirect to a stub.
  3. Call out to a handler from the stub.
  4. Make sure the return path is sane.

As simple as this seems, implementing these steps is actually a bit tricky.

Finding pieces of inlined code

Before finding pieces of inlined code, let’s first examine the C code we want to hook. I’m going to be showing how to hook the inline function add_freelist.

The code for add_freelist is short:

static inline void
add_freelist(p)
    RVALUE *p;
{
    p->as.free.flags = 0;
    p->as.free.next = freelist;
    freelist = p;
}

There is one really important feature of this code which stands out almost immediately. freelist has (at least) compilation unit scope. This is awesome because freelist serves as a marker when searching for assembly instructions to overwrite. Since the freelist has compilation unit scope, it’ll live at some static memory location.

If we find writes to this static memory location, we find our inline function code.

Let’s take a look at the instructions generated from this C code (unrelated instructions snipped out):

  437f21:       48 c7 00 00 00 00 00    movq   $0x0,(%rax)
   . . . . .
  437f2c:       48 8b 05 65 de 2d 00    mov    0x2dde65(%rip),%rax  # 715d98 [freelist]
   . . . . .
  437f48:       48 89 05 49 de 2d 00    mov    %rax,0x2dde49(%rip)  # 715d98 [freelist]

The last instruction above updates freelist; it is the instruction generated for the C statement freelist = p;.

As you can see from the instruction, the destination is freelist. This makes it insanely easy to locate instances of this inline function. I just need to write a piece of C code that scans the binary image in memory, searching for mov instructions whose destination is freelist, and I've found the inlined instances of add_freelist.

Why not insert a trampoline by overwriting that last mov instruction?

Overwriting with a jmp

The mov instruction above is 7 bytes wide. As long as the instruction we're going to implant is 7 bytes or thinner, everything is good to go. Using a callq is out of the question because we can't ensure the stack is 16-byte aligned as per the x86_64 ABI[2]. As it turns out, a jmp instruction that uses a 32-bit displacement from the instruction pointer only requires 5 bytes. We'll be able to implant the instruction that's needed, and even have room to spare.

I created a struct to encapsulate this short 7-byte trampoline: 5 bytes for the jmp, 2 bytes for NOPs. Let's take a look:

  struct tramp_inline tramp = {
    .jmp           = {'\xe9'},
    .displacement  = 0,
    .pad           = {'\x90', '\x90'},
  };

Let’s fill in the displacement later, after actually finding the instruction that’s going to get overwritten.

So, to find the instruction that’ll be overwritten, just look for a mov opcode and check that the destination is freelist:

    /* make sure it is a mov instruction */
    if (byte[1] == '\x89') {

      /* Read the REX byte to make sure it is a mov that we care about */
      if ( (byte[0] == '\x48') ||
           (byte[0] == '\x4c') ) {

        /* Grab the target of the mov. REMEMBER: in this case the target is
         * a 32-bit displacement that gets added to RIP (where RIP is the
         * address of the next instruction).
         */
        mov_target = *(uint32_t *)(byte + 3);

        /* Sanity check. Ensure that the displacement from freelist to the
         * next instruction matches the mov_target. If so, we know this mov
         * is updating freelist.
         */
        if ( (freelist - (void *)(byte+7) ) == mov_target) {

At this point we’ve definitely found a mov instruction with freelist as the destination. Let’s calculate the displacement to the stage 2 trampoline for our jmp instruction and write the instruction into memory.

/* Setup the stage 1 trampoline. Calculate the displacement to
 * the stage 2 trampoline from the next instruction.
 * REMEMBER!!!! The next instruction will be a NOP after our stage 1
 * trampoline is written. This is 5 bytes into the structure, even
 * though the original instruction we overwrote was 7 bytes.
 */
tramp.displacement = (uint32_t)(destination - (void *)(byte+5));

/* Figure out what page the stage 1 tramp is gonna be written to, mark
 * it WRITE, write the trampoline in, and then remove WRITE permission.
 */
aligned_addr = page_align(byte);
mprotect(aligned_addr, (void *)byte - aligned_addr + 10,
         PROT_READ|PROT_WRITE|PROT_EXEC);
memcpy(byte, &tramp, sizeof(struct tramp_inline));
mprotect(aligned_addr, (void *)byte - aligned_addr + 10,
         PROT_READ|PROT_EXEC);

Cool, all that’s left is to build the stage 2 trampoline which will set everything up for the C level handler.

An assembly stub to set the stage for our C handler

So, what does the assembly need to do to call the C handler? Quite a bit, actually, so let's map it out step by step:

  1. Replicate the instruction which was overwritten so that the object is actually added to the freelist.
  2. Save the value of the rdi register. This register holds the first argument to a function; it will carry the object that was added to the freelist so the C handler can analyze it.
  3. Load the object being added to the freelist into rdi.
  4. Save the value of rbx so that we can use the register as an operand for an absolute indirect callq instruction.
  5. Save rbp and rsp to allow a way to undo the stack alignment later.
  6. Align the stack to a 16-byte boundary to comply with the x86_64 ABI.
  7. Move the address of the handler into rbx.
  8. Call the handler through rbx.
  9. Restore rbp, rsp, rdi, rbx.
  10. Jump back to the instruction after the instruction which was overwritten.

To accomplish this let’s build out a structure with as much set up as possible and fill in the displacement fields later. This “base” struct looks like this:

  struct inline_tramp_tbl_entry inline_ent = {
    .rex     = {'\x48'},
    .mov     = {'\x89'},
    .src_reg = {'\x05'},
    .mov_displacement = 0,

    .frame = {
      .push_rdi = {'\x57'},
      .mov_rdi = {'\x48', '\x8b', '\x3d'},
      .rdi_source_displacement = 0,
      .push_rbx = {'\x53'},
      .push_rbp = {'\x55'},
      .save_rsp = {'\x48', '\x89', '\xe5'},
      .align_rsp = {'\x48', '\x83', '\xe4', '\xf0'},
      .mov = {'\x48', '\xbb'},
      .addr = error_tramp,
      .callq = {'\xff', '\xd3'},
      .leave = {'\xc9'},
      .rbx_restore = {'\x5b'},
      .rdi_restore = {'\x5f'},
    },

    .jmp  = {'\xe9'},
    .jmp_displacement = 0,
  };

So, what’s left to do:

  1. Copy the REX and source register bytes of the instruction which was overwritten to replicate it.
  2. Calculate the displacement to freelist to fully generate the overwritten mov.
  3. Calculate the displacement to freelist so that it can be stored in rdi as an argument to the C handler.
  4. Fill in the absolute address for the handler.
  5. Calculate the displacement to the instruction after the stage 1 trampoline in order to jmp back to resume execution as normal.

Doing that is relatively straightforward. Let's take a look at the C snippets that make this happen:

/* Before the stage 1 trampoline gets written, we need to generate
 * the code for the stage 2 trampoline. Let's copy over the REX byte
 * and the byte which mentions the source register into the stage 2
 * trampoline.
 */
inl_tramp_st2 = inline_tramp_table + entry;
inl_tramp_st2->rex[0] = byte[0];
inl_tramp_st2->src_reg[0] = byte[2];

. . . . . 

/* Finish setting up the stage 2 trampoline. */

/* Calculate the displacement to freelist from the next instruction.
 * This is used to replicate the original instruction we overwrote.
 */
inl_tramp_st2->mov_displacement = freelist - (void *)&(inl_tramp_st2->frame);

/* Fill in the displacement to freelist from the next instruction.
 * This is to arrange for the new value in freelist to be in %rdi, and as
 * such be the first argument to the C handler, as per the x86_64 ABI.
 */
inl_tramp_st2->frame.rdi_source_displacement = freelist - 
                                          (void *)&(inl_tramp_st2->frame.push_rbx);

/* jmp back to the instruction after the stage 1 trampoline was inserted.
 * This can be 5 or 7 bytes in; it doesn't matter. If it's 5, we'll hit our
 * 2 NOPs. If it's 7, we'll land directly on the next instruction.
 */
inl_tramp_st2->jmp_displacement = (uint32_t)((void *)(byte + 7) -
                                         (void *)(inline_tramp_table + entry + 1));

/* Write the address of our C level trampoline into the structure. */
inl_tramp_st2->frame.addr = freelist_tramp;


We’ve successfully patched the binary in memory, inserted an assembly stub which was generated at runtime, called a hook function, and ensured that execution can resume normally.

So, what’s the status on that memory profiler?

Almost done, stay tuned for more updates coming SOON.


  • Hackery like this is unmaintainable, unstable, stupid, but also fun to work on and think about.
  • Being able to hook add_freelist like this provides the last tool needed to implement a version of bleak_house (a Ruby memory profiler) without patching the Ruby VM.
  • The x86_64 instruction set is painful to work with.
  • Use the GNU assembler (gas) instead of trying to generate opcodes by reading the Intel instruction set PDFs if you value your sanity.

Thanks for reading and don’t forget to subscribe (via RSS or e-mail) and follow me on twitter.


  1. http://en.wikipedia.org/wiki/Metaprogramming
  2. x86_64 ABI

Written by Joe Damato

December 10th, 2009 at 5:59 am

  • Cjacktx

    thanks man!

  • Matthieu

    Hi!
    Thank you for this article, it helps me with my work!
    I'm not really sure, but I think that you need to subtract 128 from %rsp before doing any push, according to the "red zone" described in the ABI, page 16.
    As some functions may use references like -8(%rsp), doing a push will erase that value.
    Best regards

  • I've long kept a shell script that launches an editor on a new temporary file, assembles it to a bin file with nasm, then disassembles the bin file to a listing and prints it on the screen, for exactly this sort of thing. Very handy for finding out opcodes for instructions, figuring out if an instruction supports a particular addressing mode without looking it up, writing bits of shellcode or templates for trampolines, compiler intrinsics, whatever. I'm sure you could rig the same thing up with gas, but I prefer Intel syntax, plus nasm just makes it absolutely effortless. Playing with machine code is great fun.

  • You know you're going to hell for this, right? :)

    More seriously, I really like your use of structs to define the trampolines.
