time to bleed by Joe Damato

technical ramblings from a wanna-be unix dinosaur

Archive for the ‘monitoring’ Category

How do debuggers keep track of the threads in your program?


If you enjoy this article, subscribe (via RSS or e-mail) and follow me on twitter.


This post describes the relatively undocumented API for debuggers (or other low level programs) that can be used to enumerate the existing threads in a process and receive asynchronous notifications when threads are created or destroyed. This API also provides asynchronous notifications of other interesting thread-related events and feels very similar to the interface exposed by libdl for notifying debuggers when libraries are loaded dynamically at run time.

amd64 and gnu syntax

As usual, everything below refers to amd64 unless otherwise noted. Also, all assembly is in AT&T syntax.

software breakpoints

It’s important to begin by examining how software breakpoints work. We’ll see shortly why this is important, but for now just trust me.

A debugger sets a software breakpoint by using the ptrace system call to write a special instruction into a target process’ address space. That instruction raises software interrupt #3, which is defined as the Breakpoint Exception in the Intel 64 Architecture Developer’s Manual [1]. When this interrupt is raised, the processor undergoes a privilege level change and calls a function specified by the kernel to handle the exception.

The exception handler in the kernel executes to deliver the SIGTRAP signal to the process. However, if a debugger is attached to a process with ptrace, all signals are first delivered to the debugger. In the case of SIGTRAP, the debugger can examine the list of breakpoints set by the user and take the appropriate action (draw a UI, update the console, or whatever).

The debugger finishes up by masking this signal from the process it is attached to, preventing that process from being killed (most processes will not have a signal handler for SIGTRAP).

In practice most binaries generated by compilers will not have this instruction; it is up to the debugger to write this instruction into the process’ address space during runtime. If you are so inclined, you can raise interrupt #3 via inline assembly or by calling an assembly stub yourself. Many debuggers will catch this signal and trigger an update of some form in the UI.

All that said, this is what the instruction looks like when disassembled:

int $0x03

You may find it useful to check out an earlier and more in-depth article I wrote a while ago about signal handling.

Enumerating threads when first attaching

When a debugger first attaches to a program, the program has an unknown number of threads that must be enumerated. glibc exposes a straightforward API for this called td_ta_thr_iter [2], found at nptl_db/td_ta_thr_iter.c. This function takes a callback as one of its arguments. The callback is called once per thread and is passed a handle to an object describing each thread in the process.

We can see the code in GDB [3] which uses this API to hand over a callback that will be invoked once per thread to enumerate the existing threads in a process:

static int
find_new_threads_once (struct thread_db_info *info, int iteration,
                       td_err_e *errp)
{
  volatile struct gdb_exception except;
  struct callback_data data;
  td_err_e err = TD_ERR;

  data.info = info;
  data.new_threads = 0;

      /* Iterate over all user-space threads to discover new threads.  */
      err = info->td_ta_thr_iter_p (info->thread_agent,
  /* ... */

That’s pretty straightforward, but there are some hairy race conditions, as we can see in this code snippet from thread_db_find_new_threads_2 which calls find_new_threads_once:

if (until_no_new)
    /* Require 4 successive iterations which do not find any new threads.
 	The 4 is a heuristic: there is an inherent race here, and I have
 	seen that 2 iterations in a row are not always sufficient to
 	"capture" all threads.  */
    for (i = 0, loop = 0; loop < 4; ++i, ++loop)
 	if (find_new_threads_once (info, i, NULL) != 0)
 	  /* Found some new threads.  Restart the loop from beginning.  */
 	  loop = -1;

It's fiiiiiiiiiinnnneeee.

Now, on to the more interesting interface that is, IMHO, much less straightforward.

Notification of thread create and destroy

A debugger can also gather thread create and destroy events through an interesting asynchronous interface. Let's go step by step and see how a debugger can listen for create and destroy events.

Enable event notification

First, process-wide event notification has to be enabled. This API looks very much like some pieces of the signal API. First we have to create a set of events we care about (from GDB [4]):

static void
enable_thread_event_reporting (void)
{
  td_thr_events_t events;
  td_err_e err;

  /* ... */

  /* Set the process wide mask saying which events we're interested in.  */
  td_event_emptyset (&events);
  td_event_addset (&events, TD_CREATE);

  /* ... */

  td_event_addset (&events, TD_DEATH);
  /* NB: the following is just a pointer to the function td_ta_set_event on linux */
  err = info->td_ta_set_event_p (info->thread_agent, &events);

The above code adds TD_CREATE and TD_DEATH to the (empty) set of events that GDB wants to get notifications about. Then the event mask is handed over to glibc with a call to the function td_ta_set_event, which just happens to be stored in a function pointer named td_ta_set_event_p in GDB.

Set asynchronous notification breakpoints

The next step is interesting.

The debugger must use an API to get the addresses of functions that will be called whenever a thread is created or destroyed. The debugger will then set a software breakpoint at those addresses. When the program creates a thread or a thread is killed, the breakpoint will be triggered and the debugger can walk the thread list and update its internal state that describes the threads in the process.

This API is td_ta_event_addr. Let's check out how GDB uses this API. This code is from the same function as above, but happens after the code shown above:

static void
enable_thread_event_reporting (void)
{
	/* ... code above here ... */

	/* Delete previous thread event breakpoints, if any.  */
	remove_thread_event_breakpoints ();
	info->td_create_bp_addr = 0;
	info->td_death_bp_addr = 0;
	/* Set up the thread creation event.  */
	err = enable_thread_event (TD_CREATE, &info->td_create_bp_addr);
	/* ... */

	/* Set up the thread death event.  */
	err = enable_thread_event (TD_DEATH, &info->td_death_bp_addr);

GDB's helper function enable_thread_event is pretty straightforward:

static td_err_e
enable_thread_event (int event, CORE_ADDR *bp)
{
  td_notify_t notify;
  td_err_e err;
  struct thread_db_info *info;

  info = get_thread_db_info (GET_PID (inferior_ptid));

  /* Access an lwp we know is stopped.  */
  info->proc_handle.ptid = inferior_ptid;

  /* Get the breakpoint address for thread EVENT.  */
  err = info->td_ta_event_addr_p (info->thread_agent, event, &notify);
  /* ... */

  /* Set up the breakpoint.  */
  gdb_assert (exec_bfd);
  (*bp) = (gdbarch_convert_from_func_ptr_addr
           (target_gdbarch,
            /* Do proper sign extension for the target.  */
            (bfd_get_sign_extend_vma (exec_bfd) > 0
             ? (CORE_ADDR) (intptr_t) notify.u.bptaddr
             : (CORE_ADDR) (uintptr_t) notify.u.bptaddr),
            /* ... */));

  create_thread_event_breakpoint (target_gdbarch, *bp);

  return TD_OK;
}

So, GDB stores the addresses of the functions that get called on TD_CREATE and TD_DEATH in td_create_bp_addr and td_death_bp_addr, respectively, and sets breakpoints on those addresses in enable_thread_event.

Check if the event has been triggered and drain the event queue

The next time a thread is stopped because a breakpoint has been hit, the debugger needs to check whether the breakpoint occurred on an address associated with the registered events. If so, the thread event queue needs to be drained with a call to td_ta_event_getmsg, and the thread's information can be retrieved with a call to td_thr_get_info.

GDB does all this in a function called check_event:

/* Check if PID is currently stopped at the location of a thread event
   breakpoint location.  If it is, read the event message and act upon
   the event.  */

static void
check_event (ptid_t ptid)
{
  /* ... */
  td_event_msg_t msg;
  td_thrinfo_t ti;
  td_err_e err;
  CORE_ADDR stop_pc;
  int loop = 0;
  struct thread_db_info *info;

  info = get_thread_db_info (GET_PID (ptid));

  /* Bail out early if we're not at a thread event breakpoint.  */
  stop_pc =  /* ... */
  if (stop_pc != info->td_create_bp_addr
      && stop_pc != info->td_death_bp_addr)
    return;

  /* Access an lwp we know is stopped.  */
  info->proc_handle.ptid = ptid;

  /* ... */

  /* If we are at a create breakpoint, we do not know what new lwp
     was created and cannot specifically locate the event message for it.
     We have to call td_ta_event_getmsg() to get
     the latest message.  Since we have no way of correlating whether
     the event message we get back corresponds to our breakpoint, we must
     loop and read all event messages, processing them appropriately.
     This guarantees we will process the correct message before continuing
     from the breakpoint.

     Currently, death events are not enabled.  If they are enabled,
     the death event can use the td_thr_event_getmsg() interface to
     get the message specifically for that lwp and avoid looping
     below.  */

  loop = 1;

  do
    {
      err = info->td_ta_event_getmsg_p (info->thread_agent, &msg);
      /* ... */
      err = info->td_thr_get_info_p (msg.th_p, &ti);
      /* ... */

      ptid = ptid_build (GET_PID (ptid), ti.ti_lid, 0);

      switch (msg.event)
        {
        case TD_CREATE:
          /* Call attach_thread whether or not we already know about a
             thread with this thread ID.  */
          attach_thread (ptid, msg.th_p, &ti);
          break;
        case TD_DEATH:
          if (!in_thread_list (ptid))
            error (_("Spurious thread death event."));
          detach_thread (ptid);
          break;
        default:
          error (_("Spurious thread event."));
        }
    }
  while (loop);

And that is how GDB finds out about existing threads and gets notified about new threads being created or existing threads dying. This asynchronous breakpoint interface is very similar to the interface exposed by libdl that I described briefly toward the end of a blog post I wrote a while ago.

Notifications for other interesting events

Other interesting events are defined by the API but are not currently implemented in glibc; a motivated programmer could build a shim that implements them. Doing so would let you build some very interesting visualization tools for lock contention and scheduling:

/* Events reportable by the thread implementation.  */
typedef enum
{
  TD_ALL_EVENTS,			/* Pseudo-event number.  */
  TD_EVENT_NONE = TD_ALL_EVENTS, 	/* Depends on context.  */
  TD_READY,				/* Is executable now. */
  TD_SLEEP,				/* Blocked in a synchronization obj.  */
  TD_SWITCHTO,				/* Now assigned to a process.  */
  TD_SWITCHFROM,			/* Not anymore assigned to a process.  */
  TD_LOCK_TRY,				/* Trying to get an unavailable lock.  */
  TD_CATCHSIG,				/* Signal posted to the thread.  */
  TD_IDLE,				/* Process getting idle.  */
  TD_CREATE,				/* New thread created.  */
  TD_DEATH,				/* Thread terminated.  */
  TD_PREEMPT,				/* Preempted.  */
  TD_PRI_INHERIT,			/* Inherited elevated priority.  */
  TD_REAP,				/* Reaped.  */
  TD_CONCURRENCY,			/* Number of processes changing.  */
  TD_TIMEOUT,				/* Conditional variable wait timed out.  */
  TD_EVENTS_ENABLE = 31		/* Event reporting enabled.  */
} td_event_e;

Take my shovel and flashlight and go look around

Check the reference section below, which has links to some of the source files mentioned above. Also, be sure to check out the header file:


That header lists the exported functions from glibc as well as the various flags and types necessary for interacting with this interface.


  • Debuggers have really interesting ways of interacting with lower level system libraries.
  • Comments found tucked away in these pits of despair are pretty amazing.
  • Don't be scared. Grab a shovel and see what other interesting things you can dig up in glibc or elsewhere.



  1. Intel 64 Architecture Developer’s Manual, Volume 3A, 6-31
  2. glibc/nptl_db/td_ta_thr_iter.c
  3. gdb/linux-thread-db.c
  4. gdb/linux-thread-db.c

Written by Joe Damato

July 2nd, 2012 at 7:30 am

an obscure kernel feature to get more info about dying processes




This post will describe how I stumbled upon a code path in the Linux kernel which allows external programs to be launched when a core dump is about to happen. I provide a link to a short and ugly Ruby script which captures a faulting process, runs gdb to get a backtrace (and other information), captures the core dump, and then generates a notification email.

I don’t care about faults because I use monit, god, etc


Your processes may get automatically restarted when a fault occurs and you may even get an email letting you know your process died. Both of those things are useful, but it turns out that with just a tiny bit of extra work you can actually get very detailed emails showing a stack trace, register information, and a snapshot of the process’ files in /proc.

random walking the linux kernel

One day I was sitting around wondering how exactly the coredump code paths are wired. I cracked open the kernel source and started reading.

It wasn’t long until I saw this piece of code from exec.c [1]:

void do_coredump(long signr, int exit_code, struct pt_regs *regs)
{
	/* .... */
	ispipe = format_corename(corename, signr);

	if (ispipe) {
	/* ... */

Hrm. ispipe? That seems interesting. I wonder what format_corename does and what ispipe means. Following through and reading format_corename [2]:

static int format_corename(char *corename, long signr)
{
	/* ... */

	const char *pat_ptr = core_pattern;
	int ispipe = (*pat_ptr == '|');

	/* ... */

	return ispipe;
}

In the above code, core_pattern (which can be set with a sysctl or via /proc/sys/kernel/core_pattern) is examined to determine whether its first character is a |. If so, format_corename returns 1. So | seems relatively important, but at this point I’m still unclear on what it actually means.

Scanning the rest of the code for do_coredump reveals something very interesting [3] (this is more code from the function in the first code snippet above):

     /* ... */

     helper_argv = argv_split(GFP_KERNEL, corename+1, NULL);

     /* ... */

     retval = call_usermodehelper_fns(helper_argv[0], helper_argv,
                             NULL, UMH_WAIT_EXEC, umh_pipe_setup,
                             NULL, &cprm);

    /* ... */

WTF? call_usermodehelper_fns? umh_pipe_setup? This is looking pretty interesting. If you follow the code down a few layers, you end up at call_usermodehelper_exec which has the following very enlightening comment:

 * call_usermodehelper_exec - start a usermode application
 *  . . .
 * Runs a user-space application.  The application is started
 * asynchronously if wait is not set, and runs as a child of keventd.
 * (ie. it runs with full root capabilities).

what it all means

All together this is actually pretty fucking sick:

  • You can set /proc/sys/kernel/core_pattern to run a script when a process is going to dump core.
  • Your script is run before the process is killed.
  • A pipe is opened and attached to your script. The kernel writes the coredump to the pipe. Your script can read it and write it to storage.
  • Your script can attach GDB, get a backtrace, and gather other information to send a detailed email.

But the coolest part of all:

  • All of the files in /proc/[pid] for that process remain intact and can be inspected. You can check the open file descriptors, the process’s memory map, and much much more.

ruby codez to harness this amazing code path

I whipped up a pretty simple, ugly, ruby script. You can get it here. I set up my system to use it by:

% echo "|/path/to/core_helper.rb %p %s %u %g" > /proc/sys/kernel/core_pattern 


  • %p – PID of the dying process
  • %s – signal number causing the core dump
  • %u – real user ID of the dying process
  • %g – real group ID of the dying process

Why didn’t you read the documentation instead?

This (as far as I can tell) little-known feature is documented at linux-kernel-source/Documentation/sysctl/kernel.txt under the “core_pattern” section. I didn’t read the documentation because (little known fact) I actually don’t know how to read. I found the code path randomly and it was much more fun and interesting to discover this little feature by diving into the code.


  • This could/should probably be a feature/plugin/whatever for god/monit/etc instead of a stand-alone script.
  • Reading code to discover features doesn’t scale very well, but it is a lot more fun than reading documentation all the time. Also, you learn stuff and reading code makes you a better programmer.


  1. http://lxr.linux.no/linux+v2.6.35.4/fs/exec.c#L1836
  2. http://lxr.linux.no/linux+v2.6.35.4/fs/exec.c#L1446
  3. http://lxr.linux.no/linux+v2.6.35.4/fs/exec.c#L1836

Written by Joe Damato

September 20th, 2010 at 4:59 am

Slides from Defcon 18: Function hooking for OSX and Linux


Written by Aman Gupta

August 1st, 2010 at 11:24 am

memprof: A Ruby level memory profiler



What is memprof and why do I care?

memprof is a Ruby gem which supplies memory profiler functionality similar to bleak_house without patching the Ruby VM. You just install the gem, call a function or two, and off you go.

Where do I get it?

memprof is available on gemcutter, so you can just:

gem install memprof

Feel free to browse the source code at: http://github.com/ice799/memprof.

How do I use it?

Using memprof is simple. Before we look at some examples, let me explain more precisely what memprof is measuring.

memprof is measuring the number of objects created and not destroyed during a segment of Ruby code. The ideal use case for memprof is to show you where objects that do not get destroyed are being created:

  • Objects are created and not destroyed when you create new classes. This is a good thing.
  • Sometimes garbage objects sit around until garbage_collect has had a chance to run. These objects will go away.
  • Yet in other cases you might be holding a reference to a large chain of objects without knowing it. Until you remove this reference, the entire chain of objects will remain in memory taking up space.

memprof will show objects created in all cases listed above.

OK, now let’s take a look at two examples and their output.

A simple program with an obvious memory “leak”:

require 'memprof'

@blah = Hash.new([])

100.times {
  @blah[1] << "aaaaa"
}

1000.times {
  @blah[2] << "bbbbb"
}

This program creates 1100 objects which are not destroyed during the start and stop sections of the file because references are held for each object created.

Let's look at the output from memprof:

   1000 test.rb:11:String
    100 test.rb:7:String

In this example memprof shows the 1100 objects created, broken down by file, line number, and type.

Let's take a look at another example:

require 'memprof'
require "stringio"

This simple program is measuring the number of objects created when requiring stringio.

Let's take a look at the output:

    108 /custom/ree/lib/ruby/1.8/x86_64-linux/stringio.so:0:__node__
     14 test2.rb:3:String
      2 /custom/ree/lib/ruby/1.8/x86_64-linux/stringio.so:0:Class
      1 test2.rb:4:StringIO
      1 test2.rb:4:String
      1 test2.rb:3:Array
      1 /custom/ree/lib/ruby/1.8/x86_64-linux/stringio.so:0:Enumerable

This output shows an internal Ruby interpreter type __node__ was created (these represent code), as well as a few Strings and other objects. Some of these objects are just garbage objects which haven't had a chance to be recycled yet.

What if we nudge the garbage collector along a little bit, just for our example? Let's add the following two lines of code to our previous example:


We're now nudging the garbage collector and outputting memprof stats information again. This should show fewer objects, as the garbage collector will recycle some of the garbage objects:

    108 /custom/ree/lib/ruby/1.8/x86_64-linux/stringio.so:0:__node__
      2 test2.rb:3:String
      2 /custom/ree/lib/ruby/1.8/x86_64-linux/stringio.so:0:Class
      1 /custom/ree/lib/ruby/1.8/x86_64-linux/stringio.so:0:Enumerable

As you can see above, a few Strings and other objects went away after the garbage collector ran.

Which Rubies and systems are supported?

  • Only unstripped binaries are supported. To determine if your Ruby binary is stripped, simply run: file `which ruby`. If it is, consult your package manager's documentation. Most Linux distributions offer a package with an unstripped Ruby binary.
  • Only x86_64 is supported at this time. Hopefully, I'll have time to add support for i386/i686 in the immediate future.
  • Linux Ruby Enterprise Edition (1.8.6 and 1.8.7) is supported.
  • Linux MRI Ruby 1.8.6 and 1.8.7 built with --disable-shared are supported. Support for --enable-shared binaries is coming soon.
  • Snow Leopard support is experimental at this time.
  • Ruby 1.9 support coming soon.

How does it work?

If you've been reading my blog over the last week or so, you'd have noticed two previous blog posts (here and here) that describe some tricks I came up with for modifying a running binary image in memory.

memprof is a combination of all those tricks and other hacks to allow memory profiling in Ruby without the need for custom patches to the Ruby VM. You simply require the gem and off you go.

memprof works by inserting trampolines on object allocation and deallocation routines. It gathers metadata about the objects and outputs this information when the stats method is called.

What else is planned?

Myself, Jake Douglas, and Aman Gupta have lots of interesting ideas for new features. We don't want to ruin the surprise, but stay tuned. More cool stuff coming really soon :)

Thanks for reading and don't forget to subscribe (via RSS or e-mail) and follow me on twitter.

Written by Joe Damato

December 11th, 2009 at 5:59 am

Rewrite your Ruby VM at runtime to hot patch useful features



Some notes before the blood starts flowin’

  • CAUTION: What you are about to read is dangerous, non-portable, and (in most cases) stupid.
  • The code and article below refer only to the x86_64 architecture.
  • Grab some gauze. This is going to get ugly.


This article shows off a Ruby gem which has the power to overwrite a Ruby binary in memory while it is running to allow your code to execute in place of internal VM functions. This is useful if you’d like to hook all object allocation functions to build a memory profiler.

This gem is on GitHub

Yes, it’s on GitHub: http://github.com/ice799/memprof.

I want a memory profiler for Ruby

This whole science experiment started during RubyConf when Aman and I began brainstorming ways to build a memory profiling tool for Ruby.

The big problem in our minds was that for most tools we’d have to include patches to the Ruby VM. That process is long and somewhat difficult, so I started thinking about ways to do this without modifying the Ruby source code itself.

The memory profiler is NOT DONE just yet. I thought that the hack I wrote to let us build something without modifying Ruby source code was interesting enough that it warranted a blog post. So let’s get rolling.

What is a trampoline?

Let’s pretend you have 2 functions: functionA() and functionB(). Let’s assume that functionA() calls functionB().

Now also imagine that you’d like to insert a piece of code that executes between functionA() and its call to functionB(). You can imagine inserting a piece of code that diverts execution elsewhere, creating the flow: functionA() –> functionC() –> functionB()

You can accomplish this by inserting a trampoline.

A trampoline is a piece of code that program execution jumps into and then bounces out of and on to somewhere else [1].

This hack relies on the use of multiple trampolines. We’ll see why shortly.

Two different kinds of trampolines

There are two different kinds of trampolines that I considered while writing this hack, let’s take a closer look at both.

Caller-side trampoline

A caller-side trampoline works by overwriting the opcodes in the .text segment of the program in the calling function causing it to call a different function at runtime.

The big pros of this method are:

  • You aren’t overwriting any code, only the address operand of a callq instruction.
  • Since you are only changing an operand, you can hook any function. You don’t need to build custom trampolines for each function.

This method also has some big cons too:

  • You’ll need to scan the entire binary in memory and find and overwrite all address operands of callq. This is problematic because if you overwrite any false-positives you might break your application.
  • You have to deal with the implications of callq, which can be painful as we’ll see soon.

Callee-side trampoline

A callee-side trampoline works by overwriting the opcodes in the .text segment of the program in the called function, causing it to call another function immediately.

The big pro of this method is:

  • You only need to overwrite code in one place and don’t need to worry about accidentally scribbling on bytes that you didn’t mean to.

This method has some big cons too:

  • You’ll need to carefully construct your trampoline code to overwrite as little of the function as possible (or somehow restore the original opcodes), especially if you expect the original function to work as expected later.
  • You’ll need to special case each trampoline you build for different optimization levels of the binary you are hooking into.

I went with a caller-side trampoline because I wanted to ensure that I can hook any function and not have to worry about different Ruby binaries causing problems when they are compiled with different optimization levels.

The stage 1 trampoline

To insert my trampolines I needed to insert some binary into the process and then overwrite callq instructions like this:

  41150b:       e8 cc 4e 02 00          callq  4363dc [rb_newobj]
  411510:       48 89 45 f8             mov    %rax,-0x8(%rbp)

In the above code snippet, the byte e8 is the callq opcode and the bytes cc 4e 02 00 are the distance to rb_newobj from the address of the next instruction, 0x411510.

All I need to do is change the 4 bytes following e8 to equal the displacement between the next instruction (0x411510 in this case) and my trampoline.


My first cut at this code led me to an important realization: the callq instructions used expect a 32-bit displacement from the function I am calling, not an absolute address. But the 64-bit address space is very large. The Ruby binary’s code in the .text segment is so far away from my Ruby gem’s code that the displacement cannot be represented in only 32 bits.

So what now?

Well, luckily mmap has a flag MAP_32BIT which maps a page in the first 2GB of the address space. If I map some code there, it should be well within the range of values whose displacement I can represent in 32bits.

So, why not map a second trampoline into that page which contains code that calls an absolute address?

My stage 1 trampoline code looks something like this:

  /* The struct below is just a sequence of bytes which represent the
   * following bit of assembly code, including 3 nops for padding:
   *
   *   mov $address, %rbx
   *   callq *%rbx
   *   ret
   *   nop
   *   nop
   *   nop
   */
  struct tramp_tbl_entry ent = {
    .mov = {'\x48','\xbb'},
    .addr = (long long)&error_tramp,
    .callq = {'\xff','\xd3'},
    .ret = '\xc3',
    .pad =  {'\x90','\x90','\x90'},
  };
  int i = 0;

  tramp_table = mmap(NULL, 4096, PROT_WRITE|PROT_READ|PROT_EXEC,
                     MAP_32BIT|MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
  if (tramp_table != MAP_FAILED) {
    for (; i < 4096/sizeof(struct tramp_tbl_entry); i++) {
      memcpy(tramp_table + i, &ent, sizeof(struct tramp_tbl_entry));
    }
  }

It mmaps a single page and writes a table of default trampolines (like a jump table) that all call an error trampoline by default. When a new trampoline is inserted, I just go to that entry in the table and insert the address that should be called.

To get around the displacement challenge described above, the addresses I insert into the stage 1 trampoline table are addresses for stage 2 trampolines.

The stage 2 trampoline

Setting up the stage 2 trampolines is pretty simple once the stage 1 trampoline table has been written to memory. All that needs to be done is to update the address field of a free stage 1 trampoline to the address of my stage 2 trampoline. These trampolines are written in C and live in my Ruby gem.

static void
insert_tramp(char *trampee, void *tramp) {
  void *trampee_addr = find_symbol(trampee);
  int entry = tramp_size;
  tramp_table[tramp_size].addr = (long long)tramp;
  tramp_size++;
  update_image(entry, trampee_addr);
}

An example of a stage 2 trampoline for rb_newobj might be:

static VALUE
newobj_tramp() {
  /* print the ruby source file and line number where the allocation is occurring */
  printf("source = %s, line = %d\n", ruby_sourcefile, ruby_sourceline);

  /* call rb_newobj like normal so the ruby app can continue */
  return rb_newobj();
}

Programatically rewriting the Ruby binary in memory

Overwriting the Ruby binary to cause my stage 1 trampolines to get hit is pretty simple, too. I can just scan the .text segment of the binary looking for bytes which look like callq instructions. Then, I can sanity check by reading the next 4 bytes which should be the displacement to the original function. Doing that sanity check should prevent false positives.

static void
update_image(int entry, void *trampee_addr) {
  char *byte = text_segment;
  size_t count = 0;
  int fn_addr = 0;
  void *aligned_addr = NULL;

  /* check each byte in the .text segment */
  for (; count < text_segment_len; byte++, count++) {

    /* if it looks like a callq instruction... */
    if (*byte == '\xe8') {

      /* the next 4 bytes SHOULD BE the original displacement */
      fn_addr = *(int *)(byte+1);

      /* do a sanity check to make sure the next few bytes are an accurate
       * displacement. this helps to eliminate false positives. */
      if (trampee_addr - (void *)(byte+5) == fn_addr) {
        aligned_addr = (void *)(((long)byte+1) & ~(0xffff));

        /* mark the page in the .text segment as writable so it can be modified */
        mprotect(aligned_addr, ((void *)byte+1) - aligned_addr + 10,
                 PROT_READ|PROT_WRITE|PROT_EXEC);

        /* calculate the new displacement and write it */
        *(int *)(byte+1) = (uint32_t)((void *)(tramp_table + entry)
                                      - (void *)(byte + 5));

        /* disallow writing to this page of the .text segment again */
        mprotect(aligned_addr, (((void *)byte+1) - aligned_addr) + 10,
                 PROT_READ|PROT_EXEC);
      }
    }
  }
}

Sample output

After requiring my ruby gem and running a test script which creates lots of objects, I see this output:

source = test.rb, line = 8
source = test.rb, line = 8
source = test.rb, line = 8
source = test.rb, line = 8
source = test.rb, line = 8
source = test.rb, line = 8
source = test.rb, line = 8

This shows the file name and line number for each object being allocated. That should be a strong enough primitive to build a Ruby memory profiler without requiring end users to build a custom version of Ruby. It should also be possible to re-implement bleak_house by using this gem (and maybe another trick or two).



  • One step closer to building a memory profiler without requiring end users to find and use patches floating around the internet.
  • It is unclear whether cheap tricks like this are useful or harmful, but they are fun to write.
  • If you understand how your system works at an intimate level, nearly anything is possible. The work required to make it happen might be difficult though.



  1. http://en.wikipedia.org/wiki/Trampoline_(computers)

Written by Joe Damato

November 23rd, 2009 at 5:59 am