time to bleed by Joe Damato

technical ramblings from a wanna-be unix dinosaur


Rewrite your Ruby VM at runtime to hot patch useful features

If you enjoy this article, subscribe (via RSS or e-mail) and follow me on twitter.

Some notes before the blood starts flowin’

  • CAUTION: What you are about to read is dangerous, non-portable, and (in most cases) stupid.
  • The code and article below refer only to the x86_64 architecture.
  • Grab some gauze. This is going to get ugly.

TLDR

This article shows off a Ruby gem which has the power to overwrite a Ruby binary in memory while it is running to allow your code to execute in place of internal VM functions. This is useful if you’d like to hook all object allocation functions to build a memory profiler.

This gem is on GitHub

Yes, it’s on GitHub: http://github.com/ice799/memprof.

I want a memory profiler for Ruby

This whole science experiment started during RubyConf when Aman and I began brainstorming ways to build a memory profiling tool for Ruby.

The big problem in our minds was that for most tools we’d have to include patches to the Ruby VM. That process is long and somewhat difficult, so I started thinking about ways to do this without modifying the Ruby source code itself.

The memory profiler is NOT DONE just yet. I thought that the hack I wrote to let us build something without modifying Ruby source code was interesting enough that it warranted a blog post. So let’s get rolling.

What is a trampoline?

Let’s pretend you have 2 functions: functionA() and functionB(). Let’s assume that functionA() calls functionB().

Now also imagine that you’d like to insert a piece of code that executes just before the call to functionB(). You can do this by inserting code that diverts execution elsewhere, creating the flow: functionA() –> functionC() –> functionB()

You can accomplish this by inserting a trampoline.

A trampoline is a piece of code that program execution jumps into and then bounces out of and on to somewhere else [1].
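In C terms, the flow described above looks something like this (a contrived sketch; the function names are just the placeholders from the example):

void functionB(void) { /* the original target */ }

/* the trampoline: run the extra code, then bounce on to functionB() */
void functionC(void) {
  /* ... the code we want to sneak in ... */
  functionB();
}

void functionA(void) {
  /* originally a direct call to functionB(); after patching, it lands here instead */
  functionC();
}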

This hack relies on the use of multiple trampolines. We’ll see why shortly.

Two different kinds of trampolines

There are two different kinds of trampolines that I considered while writing this hack. Let’s take a closer look at both.

Caller-side trampoline

A caller-side trampoline works by overwriting opcodes in the calling function, in the .text segment of the program, causing it to call a different function at runtime.

The big pros of this method are:

  • You aren’t overwriting any code, only the address operand of a callq instruction.
  • Since you are only changing an operand, you can hook any function. You don’t need to build custom trampolines for each function.

This method has some big cons, too:

  • You’ll need to scan the entire binary in memory to find and overwrite the address operands of the relevant callq instructions. This is problematic because overwriting any false positives might break your application.
  • You have to deal with the implications of callq, which can be painful as we’ll see soon.

Callee-side trampoline

A callee-side trampoline works by overwriting the opcodes in the .text segment of the program in the called function, causing it to call another function immediately.

The big pro of this method is:

  • You only need to overwrite code in one place and don’t need to worry about accidentally scribbling on bytes that you didn’t mean to.

This method has some big cons, too:

  • You’ll need to carefully construct your trampoline code to overwrite as little of the function as possible (or somehow restore the original opcodes), especially if you expect the original function to keep working as expected later.
  • You’ll need to special case each trampoline you build for different optimization levels of the binary you are hooking into.

I went with a caller-side trampoline because I wanted to ensure that I can hook any function and not have to worry about different Ruby binaries causing problems when they are compiled with different optimization levels.

The stage 1 trampoline

To insert my trampolines, I needed to get some code into the process and then overwrite callq instructions like this one:

  41150b:       e8 cc 4e 02 00         callq  4363dc [rb_newobj]
  411510:       48 89 45 f8             ....

In the above code snippet, the byte e8 is the callq opcode and the little-endian bytes cc 4e 02 00 are the displacement to rb_newobj from the address of the next instruction, 0x411510.

All I need to do is change the 4 bytes following e8 so that they equal the displacement between the next instruction (0x411510 in this case) and my trampoline.
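To make the arithmetic concrete, here is a tiny standalone check of the displacement in the snippet above (addresses taken from that disassembly):

#include <stdint.h>
#include <stdio.h>

int main(void) {
  int32_t  disp      = 0x00024ecc;  /* the bytes cc 4e 02 00, little-endian */
  uint64_t next_insn = 0x411510;    /* address of the instruction after the callq */

  /* target = address of the next instruction + signed 32-bit displacement */
  printf("callq target = %#llx\n", (unsigned long long)(next_insn + disp));

  /* prints 0x4363dc, i.e. rb_newobj; retargeting the call just means writing
   * a new displacement: trampoline_address - next_insn */
  return 0;
}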

Problem.

My first cut at this code led me to an important realization: these callq instructions take a 32-bit displacement to the function being called (relative to the next instruction), not an absolute address. But the 64-bit address space is very large. The Ruby binary’s .text segment is so far away from the code in my Ruby gem that the displacement cannot be represented in only 32 bits.

So what now?

Well, luckily mmap has a flag, MAP_32BIT, which maps a page in the first 2GB of the address space. If I map some code there, it should be well within the range of addresses whose displacement I can represent in 32 bits.

So, why not put a second trampoline on that page, containing code that can call an absolute address?

My stage 1 trampoline code looks something like this:

  /* the struct below is just a sequence of bytes which represent the
   * following bit of assembly code, including 3 nops for padding:
   *
   *   mov $address, %rbx
   *   callq *%rbx
   *   ret
   *   nop
   *   nop
   *   nop
   */
  struct tramp_tbl_entry ent = {
    .mov = {'\x48','\xbb'},
    .addr = (long long)&error_tramp,
    .callq = {'\xff','\xd3'},
    .ret = '\xc3',
    .pad = {'\x90','\x90','\x90'},
  };

  /* index into the trampoline table */
  size_t i = 0;

  tramp_table = mmap(NULL, 4096, PROT_WRITE|PROT_READ|PROT_EXEC,
                     MAP_32BIT|MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
  if (tramp_table != MAP_FAILED) {
    for (; i < 4096/sizeof(struct tramp_tbl_entry); i++) {
      memcpy(tramp_table + i, &ent, sizeof(struct tramp_tbl_entry));
    }
  }

It mmaps a single page and writes a table of default trampolines (like a jump table) that all call an error trampoline by default. When a new trampoline is inserted, I just go to that entry in the table and insert the address that should be called.
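The snippet above assumes a packed struct whose fields line up with those instruction bytes. A minimal sketch of what such a struct might look like (the gem’s actual definition may differ slightly):

struct tramp_tbl_entry {
  unsigned char mov[2];    /* 48 bb       movabs $addr, %rbx */
  long long addr;          /* the 8-byte absolute address to call */
  unsigned char callq[2];  /* ff d3       callq  *%rbx */
  unsigned char ret;       /* c3          ret */
  unsigned char pad[3];    /* 90 90 90    nop padding */
} __attribute__((__packed__));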

To get around the displacement challenge described above, the addresses I insert into the stage 1 trampoline table are addresses for stage 2 trampolines.

The stage 2 trampoline

Setting up the stage 2 trampolines is pretty simple once the stage 1 trampoline table has been written to memory. All that needs to be done is to update the address field of a free stage 1 trampoline so that it points at my stage 2 trampoline. These trampolines are written in C and live in my Ruby gem.

static void
insert_tramp(char *trampee, void *tramp) {
  /* look up the address of the function being hooked (e.g. rb_newobj) */
  void *trampee_addr = find_symbol(trampee);
  int entry = tramp_size;

  /* point a free stage 1 trampoline at the stage 2 trampoline */
  tramp_table[tramp_size].addr = (long long)tramp;
  tramp_size++;

  /* rewrite the matching callq operands in the .text segment so they hit
   * the stage 1 trampoline */
  update_image(entry, trampee_addr);
}

An example of a stage 2 trampoline for rb_newobj might be:

static VALUE
newobj_tramp() {
  /* print the ruby source file and line number where the allocation is occurring */
  printf("source = %s, line = %d\n", ruby_sourcefile, ruby_sourceline);

  /* call newobj like normal so the ruby app can continue */
  return rb_newobj();
}
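Wiring the two together then comes down to a single call, using the insert_tramp() helper above:

insert_tramp("rb_newobj", newobj_tramp);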

Programmatically rewriting the Ruby binary in memory

Overwriting the Ruby binary to cause my stage 1 trampolines to get hit is pretty simple, too. I can just scan the .text segment of the binary looking for bytes which look like callq instructions. Then, I can sanity check by reading the next 4 bytes which should be the displacement to the original function. Doing that sanity check should prevent false positives.

static void
update_image(int entry, void *trampee_addr) {
  char *byte = text_segment;
  size_t count = 0;
  int fn_addr = 0;
  void *aligned_addr = NULL;

  /* check each byte in the .text segment */
  for(; count < text_segment_len; count++) {

    /* if it looks like a callq instruction... */
    if (*byte == '\xe8') {

      /* the next 4 bytes SHOULD BE the original displacement */
      fn_addr = *(int *)(byte+1);

      /* do a sanity check to make sure the next few bytes are an accurate
       * displacement. this helps to eliminate false positives.
       */
      if (trampee_addr - (void *)(byte+5) == fn_addr) {
        aligned_addr = (void *)(((long)byte+1) & ~(0xffff));

        /* mark the page in the .text segment as writable so it can be modified */
        mprotect(aligned_addr, ((void *)byte+1) - aligned_addr + 10,
                 PROT_READ|PROT_WRITE|PROT_EXEC);

        /* calculate the new displacement and write it */
        *(int *)(byte+1) = (uint32_t)((void *)(tramp_table + entry)
                                      - (void *)(byte + 5));

        /* disallow writing to this page of the .text segment again */
        mprotect(aligned_addr, ((void *)byte+1) - aligned_addr + 10,
                 PROT_READ|PROT_EXEC);
      }
    }
    byte++;
  }
}

Sample output

After requiring my ruby gem and running a test script which creates lots of objects, I see this output:

...
source = test.rb, line = 8
source = test.rb, line = 8
source = test.rb, line = 8
source = test.rb, line = 8
source = test.rb, line = 8
source = test.rb, line = 8
source = test.rb, line = 8
...

This shows the file name and line number for each object being allocated. That should be a strong enough primitive to build a Ruby memory profiler without requiring end users to build a custom version of Ruby. It should also be possible to re-implement bleak_house by using this gem (and maybe another trick or two).

Awesome.

Conclusion

  • One step closer to building a memory profiler without requiring end users to find and use patches floating around the internet.
  • It is unclear whether cheap tricks like this are useful or harmful, but they are fun to write.
  • If you understand how your system works at an intimate level, nearly anything is possible. The work required to make it happen might be difficult though.

Thanks for reading and don't forget to subscribe (via RSS or e-mail) and follow me on twitter.

References

  1. http://en.wikipedia.org/wiki/Trampoline_(computers)

Written by Joe Damato

November 23rd, 2009 at 5:59 am

Extending ltrace to make your Ruby/Python/Perl/PHP apps faster

If you enjoy this article, subscribe (via RSS or e-mail) and follow me on twitter.

A few days ago, Aman (@tmm1) was complaining to me about a slow running process:

I want to see what is happening in userland and trace calls to extensions. Why doesn’t ltrace work for Ruby processes? I want to figure out which MySQL queries are causing my app to be slow.

It turns out that ltrace did not have support for libraries loaded with libdl. This is a problem for languages like Ruby, Python, PHP, Perl, and others because in many cases extensions, libraries, and plugins for these languages are loaded by the VM using libdl. This means that ltrace is somewhat useless for tracking down performance issues in dynamic languages.

A couple of late nights of hacking later, I managed to finagle libdl support into ltrace. Since most people probably don’t care about the technical details of how it was implemented, I’ll start by showing how to use the patch I wrote and what sort of output you can expect. This patch has made tracking down slow queries (among other things) really easy and I hope others will find this useful.

How to use ltrace:

After you’ve applied my patch (below) and rebuilt ltrace, let’s say you’d like to trace MySQL queries and have ltrace tell you when the query was executed and how long it took. There are two steps:

  1. Give ltrace info so it can pretty print – echo "int mysql_real_query(addr,string,ulong);" > custom.conf
  2. Tell ltrace you want to hear about mysql_real_query: ltrace -F custom.conf -ttTgx mysql_real_query -p <pid>

Here’s what those arguments mean:

  • -F use a custom config file when pretty-printing (default: /etc/ltrace.conf, add your stuff there to avoid -F if you wish).
  • -tt print the time (including microseconds) when the call was executed
  • -T time the call and print how long it took
  • -x tells ltrace the name of the function you care about
  • -g don’t place breakpoints on library calls except the ones you specify with -x. This is optional, but it makes ltrace produce much less output and is a lot easier to read if you only care about your one function.

PHP

Test script

<?php
mysql_connect("localhost", "root");
while(true){
    mysql_query("SELECT sleep(1)");
}

ltrace output

22:31:50.507523 zend_hash_find(0x025dc3a0, "mysql_query", 12) = 0 <0.000029>
22:31:50.507781 mysql_real_query(0x027bc540, "SELECT sleep(1)", 15) = 0 <1.000600>
22:31:51.508531 zend_hash_find(0x025dc3a0, "mysql_query", 12) = 0 <0.000025>
22:31:51.508675 mysql_real_query(0x027bc540, "SELECT sleep(1)", 15) = 0 <1.000926>

ltrace command

ltrace -ttTg -x zend_hash_find -x mysql_real_query -p [pid of script above]

Python

Test script

import MySQLdb
db = MySQLdb.connect("localhost", "root", "", "test")
cursor = db.cursor()
sql = """SELECT sleep(1)"""
while True:
	cursor.execute(sql)
	data = cursor.fetchone()
db.close()

ltrace output

22:24:39.104786 PyEval_SaveThread() = 0x21222e0 <0.000029>
22:24:39.105020 PyEval_SaveThread() = 0x21222e0 <0.000024>
22:24:39.105210 PyEval_SaveThread() = 0x21222e0 <0.000024>
22:24:39.105303 mysql_real_query(0x021d01d0, "SELECT sleep(1)", 15) = 0 <1.002083>
22:24:40.107553 PyEval_SaveThread() = 0x21222e0 <0.000026>
22:24:40.107713 PyEval_SaveThread() = 0x21222e0 <0.000024>
22:24:40.107909 PyEval_SaveThread() = 0x21222e0 <0.000025>
22:24:40.108013 mysql_real_query(0x021d01d0, "SELECT sleep(1)", 15) = 0 <1.001821>

ltrace command

ltrace -ttTg -x PyEval_SaveThread -x mysql_real_query -p [pid of script above]

Perl

Test script

#!/usr/bin/perl
use DBI;

$dsn = "DBI:mysql:database=test;host=localhost";
$dbh = DBI->connect($dsn, "root", "");
$drh = DBI->install_driver("mysql");
@databases = DBI->data_sources("mysql");
$sth = $dbh->prepare("SELECT SLEEP(1)");

while (1) {
  $sth->execute;
}

ltrace output

22:42:11.194073 Perl_push_scope(0x01bd3010) =  <0.000028>
22:42:11.194299 mysql_real_query(0x01bfbf40, "SELECT SLEEP(1)", 15) = 0 <1.000876>
22:42:12.195302 Perl_push_scope(0x01bd3010) =  <0.000024>
22:42:12.195408 mysql_real_query(0x01bfbf40, "SELECT SLEEP(1)", 15) = 0 <1.000967>

ltrace command

ltrace -ttTg -x mysql_real_query -x Perl_push_scope -p [pid of script above]

Ruby

Test script

require 'rubygems'
require 'sequel'

DB = Sequel.connect('mysql://root@localhost/test')

while true
  p DB['select sleep(1)'].select.first
  GC.start
end

snip of ltrace output

22:10:00.195814 garbage_collect()  = 0 <0.022194>
22:10:00.218438 mysql_real_query(0x02740000, "select sleep(1)", 15) = 0 <1.001100>
22:10:01.219884 garbage_collect() = 0 <0.021401>
22:10:01.241679 mysql_real_query(0x02740000, "select sleep(1)", 15) = 0 <1.000812>

ltrace command used:

ltrace -ttTg -x garbage_collect -x mysql_real_query -p [pid of script above]

Where to get it

How ltrace works normally

ltrace works by setting software breakpoints on entries in a process’ Procedure Linkage Table (PLT).

What is a software breakpoint

A software breakpoint is just a special instruction (the single byte 0xcc on x86 and x86_64) that raises a debug interrupt (interrupt 3 on x86 and x86_64). When interrupt 3 is raised, the CPU executes a handler installed by the kernel. The kernel then sends a signal to the process that generated the interrupt. (Want to know more about how signals and interrupts work? Check out an earlier blog post: here)
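For illustration, planting one of these breakpoints with ptrace looks roughly like the sketch below (error handling omitted; a real tracer like ltrace also restores the original byte and single-steps over it when the breakpoint is hit):

#include <stddef.h>
#include <sys/ptrace.h>
#include <sys/types.h>

/* overwrite the first byte at addr in the traced process with 0xcc (int3)
 * and return the original word so the byte can be restored later */
static long set_breakpoint(pid_t pid, void *addr) {
  long orig = ptrace(PTRACE_PEEKTEXT, pid, addr, NULL);
  long patched = (orig & ~0xffL) | 0xcc;
  ptrace(PTRACE_POKETEXT, pid, addr, (void *)patched);
  return orig;
}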

What is a PLT and how does it work?

A PLT is a table of absolute addresses to functions. It is used because the link editor doesn’t know where functions in shared objects will be located. Instead, a table is created so that the program and the dynamic linker can work together to find and execute functions in shared objects. I’ve simplified the explanation a bit [1], but at a high level:

  1. The program calls a function in a shared object; the link editor has arranged for that call to jump to a slot in the PLT.
  2. The program sets some data up for the dynamic linker and then hands control over to it.
  3. The dynamic linker looks at the info set up by the program and fills in the absolute address of the function that was called in the PLT.
  4. Then the dynamic linker calls the function.
  5. Subsequent calls to the same function jump to the same slot in the PLT, but after the first call the absolute address is already filled in (the dynamic linker wrote it there the first time around), so the linker is skipped.

Since all calls to library functions occur via the PLT, ltrace sets breakpoints on each PLT entry in a program.

Why ltrace didn’t work with libdl loaded libraries

Libraries loaded with libdl are loaded at run time and functions (and other symbols) are accessed by querying the dynamic linker (by calling dlsym()). The compiler and link editor don’t know anything about libraries loaded this way (they may not even exist!) and as such no PLT entries are created for them.

Since no PLT entries exist, ltrace can’t trace these functions.
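This is easy to see with a trivial example. A call made like the one below goes through a function pointer handed back by dlsym(), so no PLT entry for mysql_real_query is ever created in the program doing the loading (the library and symbol names here are just for illustration):

#include <dlfcn.h>
#include <stdio.h>

int main(void) {
  /* roughly how a language VM loads an extension at runtime */
  void *handle = dlopen("libmysqlclient.so", RTLD_LAZY);
  if (!handle) { fprintf(stderr, "%s\n", dlerror()); return 1; }

  /* the address comes straight from the dynamic linker, not from a PLT slot */
  void *fn = dlsym(handle, "mysql_real_query");
  printf("mysql_real_query lives at %p\n", fn);

  dlclose(handle);
  return 0;
}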

What needed to be done to make ltrace libdl-aware

OK, so we understand the problem. ltrace only sets breakpoints on PLT entries and libdl loaded libraries don’t have PLT entries. How can this be fixed?

Luckily, the dynamic linker and ELF all work together to save your ass.

Executable and Linking Format (ELF) is a file format for executables, shared libraries, and more [2]. The file format can get a bit complicated, but all you really need to know is: ELF consists of different sections which hold different types of entries. There is a section called .dynamic which has an entry named DT_DEBUG. This entry stores the address of a debugging structure in the address space of the process. In Linux, this struct has type struct r_debug.

How to use struct r_debug to win the game

The debug structure is updated by the dynamic linker at runtime to reflect the current state of shared object loading. It contains three things that will help us in our quest (the glibc definitions are sketched just after this list):

  1. state – the current state of the mapping change taking place (begin add, begin delete, consistent)
  2. brk – the address of a function internal to the dynamic linker that will be called when the linker maps, unmaps, or has completed mapping a shared object.
  3. link map – Pointer to the start of a list of currently loaded objects. This list is called the link map and is represented as a struct link_map in Linux.
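For reference, here is roughly what those structures look like, trimmed from glibc’s <link.h> (consult your system headers for the authoritative versions):

struct r_debug {
  int r_version;
  struct link_map *r_map;  /* 3. head of the list of loaded objects (the link map) */
  ElfW(Addr) r_brk;        /* 2. function the linker calls as mappings change */
  enum { RT_CONSISTENT, RT_ADD, RT_DELETE } r_state;  /* 1. state of the current change */
  ElfW(Addr) r_ldbase;     /* base address the dynamic linker is loaded at */
};

struct link_map {
  ElfW(Addr) l_addr;       /* load offset of the object */
  char *l_name;            /* absolute path the object was found at */
  ElfW(Dyn) *l_ld;         /* the object's .dynamic section */
  struct link_map *l_next, *l_prev;
};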

Tie it all together and bring it home

To add support for libdl loaded libraries to ltrace, the steps are:

  1. Find the address of the debug structure in the .dynamic section of the program.
  2. Set a software breakpoint on brk.
  3. When the dynamic linker updates the link map, it will trigger the software breakpoint.
  4. When the breakpoint is triggered, check state in the debug structure.
  5. If a new library has been added, walk the link map and figure out what was added.
  6. Search the added library’s symbol table for the symbols we care about.
  7. Set software breakpoints on whatever is found.
  8. Steps 3-7 repeat.

That isn’t too hard, thanks to the dynamic linker providing a way for us to hook into its internal events.
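ltrace does all of this on the traced process via ptrace, but step 1 and the link map walk are easy to illustrate in-process. A minimal sketch:

#include <elf.h>
#include <link.h>
#include <stdio.h>

extern ElfW(Dyn) _DYNAMIC[];   /* this program's own .dynamic section */

int main(void) {
  struct r_debug *dbg = NULL;

  /* step 1: find the DT_DEBUG entry, which points at struct r_debug */
  for (ElfW(Dyn) *d = _DYNAMIC; d->d_tag != DT_NULL; d++)
    if (d->d_tag == DT_DEBUG)
      dbg = (struct r_debug *)d->d_un.d_ptr;

  if (!dbg)
    return 1;

  /* a tracer would set a breakpoint on dbg->r_brk here; we just walk the
   * link map to see what is currently loaded */
  for (struct link_map *m = dbg->r_map; m; m = m->l_next)
    printf("loaded: %s (base %#lx)\n",
           m->l_name && m->l_name[0] ? m->l_name : "[main program]",
           (unsigned long)m->l_addr);
  return 0;
}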

Conclusion

  • Read the System V ABI for your CPU. It is filled with insanely useful information that can help you be a better programmer.
  • Use the source. A few times while hacking on this patch I looked through the source for GDB and glibc to help me figure out what was going on.
  • Understanding how things work at a low-level can help you build tools to solve your high-level problems.

Thanks for reading and don’t forget to subscribe (via RSS or e-mail) and follow me on twitter.

References

  1. System V Application Binary Interface, AMD64 Architecture Processor Supplement, p. 78
  2. Executable and Linking Format (ELF) Specification

Written by Joe Damato

October 8th, 2009 at 4:59 am