time to bleed by Joe Damato

technical ramblings from a wanna-be unix dinosaur


String together global offset tables to build a Ruby memory profiler


If you enjoy this article, subscribe (via RSS or e-mail) and follow me on twitter.


The tricks, techniques, and ugly hacks in this article are PLATFORM SPECIFIC, DANGEROUS, and NOT PORTABLE.

This is the third article in a series of articles describing a set of low level hacks that I used to create memprof a Ruby level memory profiler. You should be able to survive without reading the other articles in this series, but you can check them out here and here.

How is this different from the other hooking articles/techniques?

The previous articles explained how to insert trampolines in the .text segment of a binary. This article explains a cool technique for hooking functions in the .text segment of shared libraries, allowing your handler to run, and then resuming execution. Hooking shared libraries turns out to be less work than hooking the binary (in the case of Ruby, that is), but making it all happen was a bit tricky. Read on to learn more.

The “problem” with shared libraries

The problem is that if a trampoline is inserted into the code of the shared library, the trampoline will need to invoke the dynamic linker to resolve the function that is being hooked, call the function, do whatever additional logic is desired, and then resume execution.

In other words, you need to (somehow) insert a trampoline for a function that will itself call the function being trampolined, without ending up in an infinite loop.

The additional complexity occurs because when shared libraries are loaded, the kernel decides at runtime where exactly in memory the library should be loaded. Since the exact location of symbols is not known at link time, a procedure linkage table (.plt) is created so that the program and the dynamic linker can work together to resolve symbol addresses.

I explained how .plts work in a previous article, but looking at this again is worthwhile. I've simplified the explanation a bit[1], but at a high level:

  1. The program calls a function in a shared object; the link editor has arranged for that call to land on a stub function in the .plt.
  2. The stub sets some data up for the dynamic linker and then hands control over to it.
  3. The dynamic linker looks at the info set up by the stub and fills in the absolute address of the called function in the global offset table (.got).
  4. The dynamic linker then calls the function.
  5. Subsequent calls to the same function hit the same stub in the .plt, but on every call after the first the absolute address is already in the .got (the dynamic linker filled it in the first time around), so the stub jumps straight to the function.

Disassembling a short Ruby VM function that calls rb_newobj (a memory allocation routine that we’d like to hook), shows the calls to the .plt:

000000000001af10 :
   . . . . 
   1af14:       e8 e7 c6 ff ff          callq  17600 [rb_newobj@plt]
   . . . . 

Let’s take a look at the corresponding .plt stub:

0000000000017600 :
   17600:       ff 25 6a 9c 2c 00       jmpq   *0x2c9c6a(%rip) # 2e1270 [_GLOBAL_OFFSET_TABLE_+0x288]
   17606:       68 4e 00 00 00          pushq  $0x4e
   1760b:       e9 00 fb ff ff          jmpq   17110 <_init+0x18>

Important fact: The program and each shared library has its own .plt and .got sections (amongst other sections). Keep this in mind as it’ll be handy very shortly.

That is a lot of stub code to reproduce in the trampoline. Reproducing it shouldn't be hard, but it invites a large number of bugs over to play. Is there a better way?

What is a global offset table (.got)?

The global offset table (.got) is a table of absolute addresses that can be filled in at runtime. In the assembly dump above, the .got entry for rb_newobj is referenced in the .plt stub code.

Intercepting a function call

It would be awesome if it were possible to overwrite the .got entry for rb_newobj and insert the address of a trampoline. But how would the intercepting function call rb_newobj itself without ending up in an infinite loop?

The important fact above comes in to save the day.

Since each shared object has its own .plt and .got sections, it is possible to overwrite the .got entry for rb_newobj in every shared object except the one where the trampoline lives. Then, when rb_newobj is called, the .plt entry redirects execution to the trampoline. The trampoline in turn calls out through its own .plt entry for rb_newobj, which was left untouched, so rb_newobj is resolved and called successfully.

Not as easy as it sounds, though

This solution is less work than the other hooking methods, but it has its own particular details as well:

  1. You’ll need to walk the link map at runtime to determine the base address for the shared library you are hooking (it could be anywhere).
  2. Next, you’ll need to parse the .rela.plt section which contains information on the location of each .plt stub, relative to the base address of the shared object.
  3. Once you have the address of the .plt stub, you’ll need to determine the absolute address of the .got entry by parsing the first instruction of the .plt stub (a jmp) as seen in the disassembly above.
  4. Finally, you can write to the .got entry the address of your trampoline, as long as the trampoline lives in a different shared library.

You’ve now successfully managed to poison the .got entry of a symbol in one shared library to direct execution to your own function which can then call the intercepted function itself without getting stuck in an infinite loop.


  • There are lots of sections in each ELF object. Each section is special and important.
  • ELF documentation can be difficult to obtain and understand.
  • Got pretty lucky this time around. I was getting a little worried that it would get complicated. Made it out alive, though.

Thanks for reading and don’t forget to subscribe (via RSS or e-mail) and follow me on twitter.


  1. System V Application Binary Interface AMD64 Architecture Processor Supplement, p 78 []

Written by Joe Damato

January 25th, 2010 at 5:59 am

What is a ruby object? (introducing Memprof.dump)


After Joe released memprof a few days ago, I started thinking about ways to add more functionality.

The initial Memprof release only offered a simple stats api, inspired by the one in bleak_house:

require 'memprof'
Memprof.start
o = Object.new
Memprof.stats

      1 test.rb:3:Object

With the help of lloyd's excellent yajl json library, I've slowly been building a full-featured heap dumper: Memprof.dump.

require 'memprof'
Memprof.dump {
  []
}

  {
    "address": "0xea52f0",
    "source": "test.rb:3",
    "type": "array",
    "length": 0
  }

Where can I find it?

This new heap dumper will be in the next release of Memprof. If you want to play with it, check out the heap_dump branch on github.

What else is planned?

Over the next few days, I’m going to add a Memprof.dump_all method to dump out the entire ruby heap. This full dump will contain complete knowledge of the ruby object graph (what objects point to other objects), and its json format will allow for easy analysis. I’m envisioning a set of post-processing tools that can find leaks, calculate object memory usage, and generate various visualizations of memory consumption and object hierarchies.

Why should I care?

In building and testing Memprof.dump, I’ve learned a lot about different types of ruby objects. The rest of this post covers interesting details about common ruby objects, with examples of how they’re created and what they look like inside the MRI VM.


Written by Aman Gupta

December 14th, 2009 at 5:59 am

memprof: A Ruby level memory profiler



What is memprof and why do I care?

memprof is a Ruby gem which supplies memory profiler functionality similar to bleak_house without patching the Ruby VM. You just install the gem, call a function or two, and off you go.

Where do I get it?

memprof is available on gemcutter, so you can just:

gem install memprof

Feel free to browse the source code at: http://github.com/ice799/memprof.

How do I use it?

Using memprof is simple. Before we look at some examples, let me explain more precisely what memprof is measuring.

memprof is measuring the number of objects created and not destroyed during a segment of Ruby code. The ideal use case for memprof is to show you where objects that do not get destroyed are being created:

  • Objects are created and not destroyed when you create new classes. This is a good thing.
  • Sometimes garbage objects sit around until garbage_collect has had a chance to run. These objects will go away.
  • In other cases, you might be holding a reference to a large chain of objects without knowing it. Until you remove this reference, the entire chain of objects will remain in memory, taking up space.

memprof will show objects created in all cases listed above.

OK, now let's take a look at two examples and their output.

A simple program with an obvious memory “leak”:

require 'memprof'
Memprof.start

@blah = Hash.new([])

100.times {
  @blah[1] << "aaaaa"
}

1000.times {
  @blah[2] << "bbbbb"
}

Memprof.stats
This program creates 1100 objects which are not destroyed between the start and stats calls, because a reference is held to each object created.

Let's look at the output from memprof:

   1000 test.rb:11:String
    100 test.rb:7:String

In this example memprof shows the 1100 objects created, broken down by file, line number, and type.

Let's take a look at another example:

require 'memprof'
Memprof.start
require "stringio"
StringIO.new
Memprof.stats
This simple program is measuring the number of objects created when requiring stringio.

Let's take a look at the output:

    108 /custom/ree/lib/ruby/1.8/x86_64-linux/stringio.so:0:__node__
     14 test2.rb:3:String
      2 /custom/ree/lib/ruby/1.8/x86_64-linux/stringio.so:0:Class
      1 test2.rb:4:StringIO
      1 test2.rb:4:String
      1 test2.rb:3:Array
      1 /custom/ree/lib/ruby/1.8/x86_64-linux/stringio.so:0:Enumerable

This output shows an internal Ruby interpreter type __node__ was created (these represent code), as well as a few Strings and other objects. Some of these objects are just garbage objects which haven't had a chance to be recycled yet.

What if we nudge the garbage collector along a little bit, just for our example? Let's add the following two lines of code to the end of our previous example:

GC.start
Memprof.stats
We're now nudging the garbage collector and outputting memprof stats information again. This should show fewer objects, as the garbage collector will recycle some of the garbage objects:

    108 /custom/ree/lib/ruby/1.8/x86_64-linux/stringio.so:0:__node__
      2 test2.rb:3:String
      2 /custom/ree/lib/ruby/1.8/x86_64-linux/stringio.so:0:Class
      1 /custom/ree/lib/ruby/1.8/x86_64-linux/stringio.so:0:Enumerable

As you can see above, a few Strings and other objects went away after the garbage collector ran.

Which Rubies and systems are supported?

  • Only unstripped binaries are supported. To determine if your Ruby binary is stripped, simply run: file `which ruby`. If it is, consult your package manager's documentation. Most Linux distributions offer a package with an unstripped Ruby binary.
  • Only x86_64 is supported at this time. Hopefully, I'll have time to add support for i386/i686 in the immediate future.
  • Linux Ruby Enterprise Edition (1.8.6 and 1.8.7) is supported.
  • Linux MRI Ruby 1.8.6 and 1.8.7 built with --disable-shared are supported. Support for --enable-shared binaries is coming soon.
  • Snow Leopard support is experimental at this time.
  • Ruby 1.9 support coming soon.

How does it work?

If you've been reading my blog over the last week or so, you'll have noticed two previous blog posts (here and here) that describe some tricks I came up with for modifying a running binary image in memory.

memprof is a combination of all those tricks and other hacks to allow memory profiling in Ruby without the need for custom patches to the Ruby VM. You simply require the gem and off you go.

memprof works by inserting trampolines on object allocation and deallocation routines. It gathers metadata about the objects and outputs this information when the stats method is called.

What else is planned?

Jake Douglas, Aman Gupta, and I have lots of interesting ideas for new features. We don't want to ruin the surprise, but stay tuned. More cool stuff coming really soon :)


Written by Joe Damato

December 11th, 2009 at 5:59 am

Hot patching inlined functions with x86_64 asm metaprogramming




The tricks, techniques, and ugly hacks in this article are PLATFORM SPECIFIC, DANGEROUS, and NOT PORTABLE.

This article will make reference to information in my previous article Rewrite your Ruby VM at runtime to hot patch useful features so be sure to check it out if you find yourself lost during this article.

Also, this might not qualify as metaprogramming in the traditional definition[1], but this article will show how to generate assembly at runtime that works well with the particular instructions generated for a binary. In other words, the assembly is constructed based on data collected from the binary at runtime. When I explained this to Aman, he called it assembly metaprogramming.


This article expands on a previous article by showing how to hook functions which are inlined by the compiler. This technique can be applied to other binaries, but the binary in question is Ruby Enterprise Edition 1.8.7. The use case is to build a memory profiler without requiring patches to the VM, but just a Ruby gem.

It’s on GitHub

The memory profiler is NOT DONE, yet. It will be soon. Stay tuned.

The code described here is incorporated into a Ruby Gem which can be found on github: http://github.com/ice799/memprof specifically at: http://github.com/ice799/memprof/blob/master/ext/memprof.c#L202-318

Overview of the plan of attack

The plan of attack is relatively straightforward:

  1. Find the inlined code.
  2. Overwrite part of it to redirect to a stub.
  3. Call out to a handler from the stub.
  4. Make sure the return path is sane.

As simple as this seems, implementing these steps is actually a bit tricky.

Finding pieces of inlined code

Before finding pieces of inlined code, let’s first examine the C code we want to hook. I’m going to be showing how to hook the inline function add_freelist.

The code for add_freelist is short:

static inline void
add_freelist(p)
    RVALUE *p;
{
    p->as.free.flags = 0;
    p->as.free.next = freelist;
    freelist = p;
}

There is one really important feature of this code which stands out almost immediately. freelist has (at least) compilation unit scope. This is awesome because freelist serves as a marker when searching for assembly instructions to overwrite. Since the freelist has compilation unit scope, it’ll live at some static memory location.

If we find writes to this static memory location, we find our inline function code.

Let’s take a look at the instructions generated from this C code (unrelated instructions snipped out):

  437f21:       48 c7 00 00 00 00 00    movq   $0x0,(%rax)
   . . . . .
  437f2c:       48 8b 05 65 de 2d 00    mov    0x2dde65(%rip),%rax  # 715d98 [freelist]
   . . . . .
  437f48:       48 89 05 49 de 2d 00    mov    %rax,0x2dde49(%rip)  # 715d98 [freelist]

The last instruction above updates freelist; it is the instruction generated for the C statement freelist = p;.

As you can see from the instruction, the destination is freelist. This makes it insanely easy to locate instances of this inline function: I just need to write a piece of C code that scans the binary image in memory, searching for mov instructions whose destination is freelist, and I've found the inlined instances of add_freelist.

Why not insert a trampoline by overwriting that last mov instruction?

Overwriting with a jmp

The mov instruction above is 7 bytes wide. As long as the instruction we're going to implant is 7 bytes or smaller, everything is good to go. Using a callq is out of the question because we can't ensure the stack is 16-byte aligned as per the x86_64 ABI[2]. As it turns out, a jmp instruction that uses a 32-bit displacement from the instruction pointer only requires 5 bytes. We'll be able to implant the instruction that's needed, and even have room to spare.

I created a struct to encapsulate this short 7-byte trampoline: 5 bytes for the jmp, 2 bytes for NOPs. Let's take a look:

  struct tramp_inline tramp = {
    .jmp           = {'\xe9'},
    .displacement  = 0,
    .pad           = {'\x90', '\x90'},
  };

Let’s fill in the displacement later, after actually finding the instruction that’s going to get overwritten.

So, to find the instruction that’ll be overwritten, just look for a mov opcode and check that the destination is freelist:

    /* make sure it is a mov instruction */
    if (byte[1] == '\x89') {

      /* Read the REX byte to make sure it is a mov that we care about */
      if ((byte[0] == '\x48') ||
          (byte[0] == '\x4c')) {

        /* Grab the target of the mov. REMEMBER: in this case the target is
         * a 32-bit displacement that gets added to RIP (where RIP is the address
         * of the next instruction).
         */
        mov_target = *(uint32_t *)(byte + 3);

        /* Sanity check. Ensure that the displacement from freelist to the next
         * instruction matches the mov_target. If so, we know this mov is
         * updating freelist.
         */
        if ((freelist - (void *)(byte + 7)) == mov_target) {

At this point we’ve definitely found a mov instruction with freelist as the destination. Let’s calculate the displacement to the stage 2 trampoline for our jmp instruction and write the instruction into memory.

/* Setup the stage 1 trampoline. Calculate the displacement to
 * the stage 2 trampoline from the next instruction.
 *
 * REMEMBER!!!! The next instruction will be a NOP after our stage 1
 * trampoline is written. This is 5 bytes into the structure, even
 * though the original instruction we overwrote was 7 bytes.
 */
tramp.displacement = (uint32_t)(destination - (void *)(byte + 5));

/* Figure out what page the stage 1 tramp is gonna be written to, mark
 * it writable, write the trampoline in, and then remove WRITE permission.
 */
aligned_addr = page_align(byte);
mprotect(aligned_addr, (void *)byte - aligned_addr + 10,
         PROT_READ | PROT_WRITE | PROT_EXEC);
memcpy(byte, &tramp, sizeof(struct tramp_inline));
mprotect(aligned_addr, (void *)byte - aligned_addr + 10,
         PROT_READ | PROT_EXEC);

Cool, all that’s left is to build the stage 2 trampoline which will set everything up for the C level handler.

An assembly stub to set the stage for our C handler

So, what does the assembly need to do to call the C handler? Quite a bit actually so let’s map it out, step by step:

  1. Replicate the instruction which was overwritten so that the object is actually added to the freelist.
  2. Save the value of the rdi register. This register holds a function's first argument; it will be used to pass the object that was added to the freelist to the C handler for analysis.
  3. Load the object being added to the freelist into rdi
  4. Save the value of rbx so that we can use the register as an operand for an absolute indirect callq instruction.
  5. Save rbp and rsp to allow a way to undo the stack alignment later.
  6. Align the stack to a 16-byte boundary to comply with the x86_64 ABI.
  7. Move the address of the handler into rbx
  8. Call the handler through rbx.
  9. Restore rbp, rsp, rdi, rbx.
  10. Jump back to the instruction after the instruction which was overwritten.

To accomplish this let’s build out a structure with as much set up as possible and fill in the displacement fields later. This “base” struct looks like this:

  struct inline_tramp_tbl_entry inline_ent = {
    .rex     = {'\x48'},
    .mov     = {'\x89'},
    .src_reg = {'\x05'},
    .mov_displacement = 0,

    .frame = {
      .push_rdi = {'\x57'},
      .mov_rdi = {'\x48', '\x8b', '\x3d'},
      .rdi_source_displacement = 0,
      .push_rbx = {'\x53'},
      .push_rbp = {'\x55'},
      .save_rsp = {'\x48', '\x89', '\xe5'},
      .align_rsp = {'\x48', '\x83', '\xe4', '\xf0'},
      .mov = {'\x48', '\xbb'},
      .addr = error_tramp,
      .callq = {'\xff', '\xd3'},
      .leave = {'\xc9'},
      .rbx_restore = {'\x5b'},
      .rdi_restore = {'\x5f'},
    },

    .jmp  = {'\xe9'},
    .jmp_displacement = 0,
  };

So, what’s left to do:

  1. Copy the REX and source register bytes of the instruction which was overwritten to replicate it.
  2. Calculate the displacement to freelist to fully generate the overwritten mov.
  3. Calculate the displacement to freelist so that it can be stored in rdi as an argument to the C handler.
  4. Fill in the absolute address for the handler.
  5. Calculate the displacement to the instruction after the stage 1 trampoline in order to jmp back to resume execution as normal.

Doing that is relatively straightforward. Let's take a look at the C snippets that make this happen:

/* Before the stage 1 trampoline gets written, we need to generate
 * the code for the stage 2 trampoline. Let's copy over the REX byte
 * and the byte which mentions the source register into the stage 2
 * trampoline.
 */
inl_tramp_st2 = inline_tramp_table + entry;
inl_tramp_st2->rex[0] = byte[0];
inl_tramp_st2->src_reg[0] = byte[2];

. . . . .

/* Finish setting up the stage 2 trampoline. */

/* Calculate the displacement to freelist from the next instruction.
 * This is used to replicate the original instruction we overwrote.
 */
inl_tramp_st2->mov_displacement = freelist - (void *)&(inl_tramp_st2->frame);

/* Fill in the displacement to freelist from the next instruction.
 * This arranges for the new value of freelist to be in %rdi, and as such
 * to be the first argument to the C handler, as per the amd64 ABI.
 */
inl_tramp_st2->frame.rdi_source_displacement = freelist -
                                          (void *)&(inl_tramp_st2->frame.push_rbx);

/* Jump back to the instruction after the stage 1 trampoline was inserted.
 * This can be 5 or 7 bytes in; it doesn't matter. If it's 5, we'll hit our 2
 * NOPs. If it's 7, we'll land directly on the next instruction.
 */
inl_tramp_st2->jmp_displacement = (uint32_t)((void *)(byte + 7) -
                                         (void *)(inline_tramp_table + entry + 1));

/* Write the address of our C level trampoline into the structure. */
inl_tramp_st2->frame.addr = freelist_tramp;


We’ve successfully patched the binary in memory, inserted an assembly stub which was generated at runtime, called a hook function, and ensured that execution can resume normally.

So, what’s the status on that memory profiler?

Almost done, stay tuned for more updates coming SOON.


  • Hackery like this is unmaintainable, unstable, stupid, but also fun to work on and think about.
  • Being able to hook add_freelist like this provides the last tool needed to implement a version of bleak_house (a Ruby memory profiler) without patching the Ruby VM.
  • The x86_64 instruction set is painful.
  • Use the GNU assembler (gas) instead of trying to generate opcodes by reading the Intel instruction set PDFs if you value your sanity.



  1. http://en.wikipedia.org/wiki/Metaprogramming []
  2. x86_64 ABI []

Written by Joe Damato

December 10th, 2009 at 5:59 am

Debugging Ruby: Understanding and Troubleshooting the VM and your Application


Download the PDF here.

Debugging Ruby

Written by Aman Gupta

December 2nd, 2009 at 8:30 pm