time to bleed by Joe Damato

technical ramblings from a wanna-be unix dinosaur

Archive for the ‘bugfix’ Category

memprof: A Ruby level memory profiler



If you enjoy this article, subscribe (via RSS or e-mail) and follow me on twitter.

What is memprof and why do I care?

memprof is a Ruby gem which provides memory profiling functionality similar to bleak_house, without patching the Ruby VM. You just install the gem, call a function or two, and off you go.

Where do I get it?

memprof is available on gemcutter, so you can just:

gem install memprof

Feel free to browse the source code at: http://github.com/ice799/memprof.

How do I use it?

Using memprof is simple. Before we look at some examples, let me explain more precisely what memprof is measuring.

memprof measures the number of objects created and not destroyed during a segment of Ruby code. The ideal use case for memprof is to show you where objects that do not get destroyed are being created:

  • Objects are created and not destroyed when you create new classes. This is a good thing.
  • Sometimes garbage objects sit around until garbage_collect has had a chance to run. These objects will go away.
  • In yet other cases you might be holding a reference to a large chain of objects without knowing it. Until you remove this reference, the entire chain of objects will remain in memory, taking up space.

memprof will show objects created in all cases listed above.

OK, now let's take a look at two examples and their output.

A simple program with an obvious memory “leak”:

require 'memprof'

@blah = Hash.new([])

Memprof.start
100.times {
  @blah[1] << "aaaaa"
}

1000.times {
   @blah[2] << "bbbbb"
}
Memprof.stats
Memprof.stop

This program creates 1100 objects that are not destroyed between the calls to Memprof.start and Memprof.stop, because a reference is held to each object created.

Let's look at the output from memprof:

   1000 test.rb:11:String
    100 test.rb:7:String

In this example, memprof shows the 1100 objects created, broken down by file, line number, and type.

Let's take a look at another example:

require 'memprof'
Memprof.start
require "stringio"
StringIO.new
Memprof.stats

This simple program is measuring the number of objects created when requiring stringio.

Let's take a look at the output:

    108 /custom/ree/lib/ruby/1.8/x86_64-linux/stringio.so:0:__node__
     14 test2.rb:3:String
      2 /custom/ree/lib/ruby/1.8/x86_64-linux/stringio.so:0:Class
      1 test2.rb:4:StringIO
      1 test2.rb:4:String
      1 test2.rb:3:Array
      1 /custom/ree/lib/ruby/1.8/x86_64-linux/stringio.so:0:Enumerable

This output shows that instances of the internal Ruby interpreter type __node__ were created (these represent code), as well as a few Strings and other objects. Some of these objects are just garbage objects which haven't had a chance to be recycled yet.

What if we nudge the garbage collector along a little bit, just for our example? Let's add the following two lines of code to the end of our previous example:

GC.start
Memprof.stats

We're now nudging the garbage collector and outputting memprof stats information again. This should show fewer objects, as the garbage collector will recycle some of the garbage objects:

    108 /custom/ree/lib/ruby/1.8/x86_64-linux/stringio.so:0:__node__
      2 test2.rb:3:String
      2 /custom/ree/lib/ruby/1.8/x86_64-linux/stringio.so:0:Class
      1 /custom/ree/lib/ruby/1.8/x86_64-linux/stringio.so:0:Enumerable

As you can see above, a few Strings and other objects went away after the garbage collector ran.

Which Rubies and systems are supported?

  • Only unstripped binaries are supported. To determine whether your Ruby binary is stripped, simply run: file `which ruby`. If it is stripped, consult your package manager's documentation; most Linux distributions offer a package with an unstripped Ruby binary.
  • Only x86_64 is supported at this time. Hopefully, I'll have time to add support for i386/i686 in the immediate future.
  • Linux Ruby Enterprise Edition (1.8.6 and 1.8.7) is supported.
  • Linux MRI Ruby 1.8.6 and 1.8.7 built with --disable-shared are supported. Support for --enable-shared binaries is coming soon.
  • Snow Leopard support is experimental at this time.
  • Ruby 1.9 support coming soon.

How does it work?

If you've been reading my blog over the last week or so, you'll have noticed two previous blog posts (here and here) that describe some tricks I came up with for modifying a running binary image in memory.

memprof is a combination of all those tricks and other hacks to allow memory profiling in Ruby without the need for custom patches to the Ruby VM. You simply require the gem and off you go.

memprof works by inserting trampolines on object allocation and deallocation routines. It gathers metadata about the objects and outputs this information when the stats method is called.

What else is planned?

Jake Douglas, Aman Gupta, and I have lots of interesting ideas for new features. We don't want to ruin the surprise, but stay tuned. More cool stuff coming really soon :)

Thanks for reading and don't forget to subscribe (via RSS or e-mail) and follow me on twitter.

Written by Joe Damato

December 11th, 2009 at 5:59 am

Debugging Ruby: Understanding and Troubleshooting the VM and your Application


Download the PDF here.

Debugging Ruby

Written by Aman Gupta

December 2nd, 2009 at 8:30 pm

Rewrite your Ruby VM at runtime to hot patch useful features



If you enjoy this article, subscribe (via RSS or e-mail) and follow me on twitter.

Some notes before the blood starts flowin’

  • CAUTION: What you are about to read is dangerous, non-portable, and (in most cases) stupid.
  • The code and article below refer only to the x86_64 architecture.
  • Grab some gauze. This is going to get ugly.

TLDR

This article shows off a Ruby gem which has the power to overwrite a Ruby binary in memory while it is running to allow your code to execute in place of internal VM functions. This is useful if you’d like to hook all object allocation functions to build a memory profiler.

This gem is on GitHub

Yes, it’s on GitHub: http://github.com/ice799/memprof.

I want a memory profiler for Ruby

This whole science experiment started during RubyConf when Aman and I began brainstorming ways to build a memory profiling tool for Ruby.

The big problem in our minds was that for most tools we’d have to include patches to the Ruby VM. That process is long and somewhat difficult, so I started thinking about ways to do this without modifying the Ruby source code itself.

The memory profiler is NOT DONE just yet. I thought that the hack I wrote to let us build something without modifying Ruby source code was interesting enough that it warranted a blog post. So let’s get rolling.

What is a trampoline?

Let’s pretend you have 2 functions: functionA() and functionB(). Let’s assume that functionA() calls functionB().

Now also imagine that you'd like to insert a piece of code that executes in between functionA() and its call to functionB(). You can imagine inserting a piece of code that diverts execution elsewhere, creating the flow: functionA() -> functionC() -> functionB().

You can accomplish this by inserting a trampoline.

A trampoline is a piece of code that program execution jumps into and then bounces out of and on to somewhere else [1].

This hack relies on the use of multiple trampolines. We’ll see why shortly.
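
Before we get into the binary-patching details, here's that control flow sketched in plain C. The function bodies are hypothetical; only the functionA/functionB/functionC names and the call flow come from the description above:

#include <stdio.h>

static void functionB(void) { printf("doing the real work\n"); }

/* the trampoline: execution bounces in, does something extra,
 * and bounces on to the original target */
static void functionC(void) {
  printf("gathering metadata here\n");
  functionB();
}

static void functionA(void) {
  /* originally compiled as a call to functionB(); after patching,
   * the call lands on functionC() instead */
  functionC();
}

int main(void) {
  functionA();
  return 0;
}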

Two different kinds of trampolines

There are two different kinds of trampolines that I considered while writing this hack; let's take a closer look at both.

Caller-side trampoline

A caller-side trampoline works by overwriting the opcodes in the .text segment of the program in the calling function, causing it to call a different function at runtime.

The big pros of this method are:

  • You aren’t overwriting any code, only the address operand of a callq instruction.
  • Since you are only changing an operand, you can hook any function. You don’t need to build custom trampolines for each function.

This method also has some big cons too:

  • You'll need to scan the entire binary in memory to find and overwrite all address operands of callq instructions. This is problematic because if you overwrite any false positives you might break your application.
  • You have to deal with the implications of callq, which can be painful as we’ll see soon.

Callee-side trampoline

A callee-side trampoline works by overwriting the opcodes in the .text segment of the program in the called function, causing it to call another function immediately.

The big pro of this method is:

  • You only need to overwrite code in one place and don’t need to worry about accidentally scribbling on bytes that you didn’t mean to.

This method has some big cons too:

  • You'll need to carefully construct your trampoline code to overwrite as little of the function as possible (or somehow restore the opcodes), especially if you expect the original function to work as expected later.
  • You’ll need to special case each trampoline you build for different optimization levels of the binary you are hooking into.

I went with a caller-side trampoline because I wanted to ensure that I can hook any function and not have to worry about different Ruby binaries causing problems when they are compiled with different optimization levels.

The stage 1 trampoline

To insert my trampolines I needed to get some binary code into the process and then overwrite callq instructions like this:

  41150b:       e8 cc 4e 02 00         callq  4363dc [rb_newobj]
  411510:       48 89 45 f8             ....

In the above code snippet, the byte e8 is the callq opcode and the bytes cc 4e 02 00 are the displacement to rb_newobj from the address of the next instruction, 0x411510.

All I need to do is change the 4 bytes following e8 to equal the displacement between the next instruction, 0x411510 in this case, and my trampoline.
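
The displacement arithmetic is easy to sketch as a tiny helper. insn and target are hypothetical names; the key detail is that the displacement is measured from the next instruction, which begins 5 bytes past the opcode (1 opcode byte + 4 operand bytes):

#include <stdint.h>

/* compute the rel32 operand for a callq whose e8 opcode byte is at insn,
 * so that the call lands on target */
static int32_t
call_rel32(uint8_t *insn, void *target) {
  return (int32_t)((intptr_t)target - (intptr_t)(insn + 5));
}

/* patching is then: *(int32_t *)(insn + 1) = call_rel32(insn, my_tramp); */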

Problem.

My first cut at this code led me to an important realization: callq expects a 32-bit displacement to the function being called, not an absolute address. But the 64-bit address space is very large. The Ruby binary's code in the .text segment is so far away from my Ruby gem's code that the displacement cannot be represented with only 32 bits.

So what now?

Well, luckily mmap has a flag MAP_32BIT which maps a page in the first 2GB of the address space. If I map some code there, it should be well within the range of addresses whose displacement I can represent in 32 bits.

So, why not map a second trampoline to that page, containing code that can call an absolute address?

My stage 1 trampoline code looks something like this:

  /* the struct below is just a sequence of bytes which represent the
   * following bit of assembly code, including 3 nops for padding:
   *
   *   mov $address, %rbx
   *   callq *%rbx
   *   ret
   *   nop
   *   nop
   *   nop
   */
  struct tramp_tbl_entry ent = {
    .mov = {'\x48','\xbb'},
    .addr = (long long)&error_tramp,
    .callq = {'\xff','\xd3'},
    .ret = '\xc3',
    .pad = {'\x90','\x90','\x90'},
  };

  size_t i = 0;
  tramp_table = mmap(NULL, 4096, PROT_WRITE|PROT_READ|PROT_EXEC,
                     MAP_32BIT|MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
  if (tramp_table != MAP_FAILED) {
    for (; i < 4096/sizeof(struct tramp_tbl_entry); i++) {
      memcpy(tramp_table + i, &ent, sizeof(struct tramp_tbl_entry));
    }
  }

It mmaps a single page and writes out a table of trampolines (like a jump table) that all call an error trampoline by default. When a new trampoline is inserted, I just go to that entry in the table and fill in the address that should be called.

To get around the displacement challenge described above, the addresses I insert into the stage 1 trampoline table are addresses for stage 2 trampolines.

The stage 2 trampoline

Setting up a stage 2 trampoline is pretty simple once the stage 1 trampoline table has been written to memory. All that needs to be done is update the address field of a free stage 1 trampoline to be the address of my stage 2 trampoline. These trampolines are written in C and live in my Ruby gem.

static void
insert_tramp(char *trampee, void *tramp) {
  void *trampee_addr = find_symbol(trampee);
  int entry = tramp_size;
  tramp_table[tramp_size].addr = (long long)tramp;
  tramp_size++;
  update_image(entry, trampee_addr);
}

An example of a stage 2 trampoline for rb_newobj might be:

static VALUE
newobj_tramp() {
  /* print the ruby source file and line number where the allocation is occurring */
  printf("source = %s, line = %d\n", ruby_sourcefile, ruby_sourceline);

  /* call newobj like normal so the ruby app can continue */
  return rb_newobj();
}

Programmatically rewriting the Ruby binary in memory

Overwriting the Ruby binary to cause my stage 1 trampolines to get hit is pretty simple, too. I can just scan the .text segment of the binary looking for bytes which look like callq instructions. Then I can sanity-check by reading the next 4 bytes, which should be the displacement to the original function. That sanity check should prevent false positives.

static void
update_image(int entry, void *trampee_addr) {
  char *byte = text_segment;
  size_t count = 0;
  int fn_addr = 0;
  void *aligned_addr = NULL;

 /* check each byte in the .text segment */
  for(; count < text_segment_len; count++) {

    /* if it looks like a callq instruction... */
    if (*byte == '\xe8') {

      /* the next 4 bytes SHOULD BE the original displacement */
      fn_addr = *(int *)(byte+1);

      /* do a sanity check to make sure the next few bytes are an accurate
       * displacement. this helps to eliminate false positives. */
      if (trampee_addr - (void *)(byte+5) == fn_addr) {
        aligned_addr = (void*)(((long)byte+1)&~(0xffff));

        /* mark the page in the .text segment as writable so it can be modified */
        mprotect(aligned_addr, (void *)byte+1 - aligned_addr + 10, 
                       PROT_READ|PROT_WRITE|PROT_EXEC);

        /* calculate the new displacement and write it */
        *(int  *)(byte+1) = (uint32_t)((void *)(tramp_table + entry) 
                                     - (void *)(byte + 5));
 
        /* disallow writing to this page of the .text segment again  */
        mprotect(aligned_addr, (((void *)byte+1) - aligned_addr) + 10,
                      PROT_READ|PROT_EXEC);
      }
    }
    byte++;
  }
}

Sample output

After requiring my Ruby gem and running a test script which creates lots of objects, I see this output:

...
source = test.rb, line = 8
source = test.rb, line = 8
source = test.rb, line = 8
source = test.rb, line = 8
source = test.rb, line = 8
source = test.rb, line = 8
source = test.rb, line = 8
...

This shows the file name and line number for each object being allocated. That should be a strong enough primitive to build a Ruby memory profiler without requiring end users to build a custom version of Ruby. It should also be possible to re-implement bleak_house using this gem (and maybe another trick or two).

Awesome.

Conclusion

  • One step closer to building a memory profiler without requiring end users to find and use patches floating around the internet.
  • It is unclear whether cheap tricks like this are useful or harmful, but they are fun to write.
  • If you understand how your system works at an intimate level, nearly anything is possible. The work required to make it happen might be difficult though.

Thanks for reading and don't forget to subscribe (via RSS or e-mail) and follow me on twitter.

References

  1. http://en.wikipedia.org/wiki/Trampoline_(computers)

Written by Joe Damato

November 23rd, 2009 at 5:59 am

Defeating the Matasano C++ Challenge with ASLR enabled



If you enjoy this article, subscribe (via RSS or e-mail) and follow me on twitter.

Important note

I am NOT a security researcher (I kinda want to be though). As such, there are probably way better ways to do everything in this article. This article is just illustrating my thought process when cracking this challenge.

The Challenge

The Matasano Security blog recently posted an article titled A C++ Challenge [1], which included a particularly ugly piece of C++ code containing a security vulnerability. The challenge is for the reader to find the vulnerability, use it to execute arbitrary code, and submit the data to Matasano.

Sounds easy enough, let’s do this! cue hacking music

Making it harder

Recent Linux kernels have a feature called Address Space Layout Randomization (ASLR), which can be configured via /proc/sys/kernel/randomize_va_space. ASLR is a security feature which randomizes the start addresses of various parts of a process image. This makes exploiting a security bug more difficult because the exploit cannot use any hard-coded addresses.

The options you can set are:

  • 0 – ASLR off
  • 1 – Randomize the addresses of the stack, mmap area, and VDSO page. This is the default.
  • 2 – Everything in option 1, but also randomize the brk area so the heap is randomized.

Just for fun I decided to set it to 2 to make exploiting the challenge more difficult.
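
As an aside, a few lines of C are enough to check the current setting (just a sketch; changing the level means writing to the same procfs file, which requires root):

#include <stdio.h>

int main(void) {
  /* read the current ASLR level out of procfs */
  FILE *f = fopen("/proc/sys/kernel/randomize_va_space", "r");
  int level = -1;
  if (f) {
    fscanf(f, "%d", &level);
    fclose(f);
  }
  printf("randomize_va_space = %d\n", level);
  return 0;
}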

Got the code, but now what?

I decided to start attacking this problem by looking for a few common errors, in this order:

  1. strcpy()/strncpy() bugs: no calls
  2. memcpy() bugs: a few calls
  3. Off-by-one bugs: none obvious

It turned out from a quick look that all calls to memcpy() included sane, hard-coded values. So, it had to be something more complex.

Digging deeper – finding input streams the user can control

Next, I decided to actually read the code to see what it was doing at a high level and which inputs could be controlled. It turns out that the program reads data from a file and uses that data to determine how many objects to allocate.

Obviously, this portion of the code caught my interest so let’s take a quick look:

/* ... */

fd.read(file_in_mem, MAX_FILE_SIZE-1);

/* ... */

struct _stream_hdr *s = (struct _stream_hdr *) file_in_mem;

if(s->num_of_streams >= INT_MAX / (int)sizeof(int)) {
    safe_count = MAX_STREAMS;
} else {
    safe_count = s->num_of_streams;
}

Obj *o = new Obj[safe_count];

OK, so clearly that if statement is suspect. At the very least it doesn't check for negative values, so you could end up with safe_count = -1, which might do something interesting when passed to the new operator. Moreover, this if statement will allow values as large as 536870910 (INT_MAX / sizeof(int) - 1).

Maybe the exploit has something to do with values this if statement is allowing through?

A closer look at the integer overflow in new

Let's use GDB to take a closer look at the code the compiler generates before calling new. I've added a few comments inline to explain the assembly code:

mov    %edx,%eax   ;  %edx and %eax store s->num_of_streams
add    %eax,%eax   ;  add %eax to itself (s->num_of_streams * 2)
add    %edx,%eax   ;  add  s->num_of_streams + %eax (s->num_of_streams*3)
shl    $0x2,%eax   ;  multiply (s->num_of_streams * 3) by 4  (s->num_of_streams * 12)
mov    %eax,(%esp) ;  move it into position to pass to new
call   0x8048a7c <_Znaj@plt> ; call new

The compiler has generated code to calculate s->num_of_streams * sizeof(Obj), where sizeof(Obj) is 12 bytes. For large values of s->num_of_streams, multiplying by 12 causes an integer overflow, and the value passed to new will actually be less than what was intended.

For my exploit, I ended up using the value 357913943. This value causes an overflow: 357913943 * 12 is 4294967316, which is 20 more than 2^32, so the 32-bit result wraps around to 20. The value passed to new is therefore 20, which is, of course, significantly less than what we actually wanted to allocate. Other people have written about integer overflow in new in other compilers [2] before.
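
You can verify the wraparound with a few lines of standalone C (not part of the challenge code; unsigned arithmetic is used here so the wrap is well-defined):

#include <stdio.h>

int main(void) {
  unsigned int n = 357913943;  /* s->num_of_streams from the exploit */
  unsigned int size = n * 12;  /* sizeof(Obj) == 12; wraps mod 2^32 */
  printf("%u\n", size);        /* prints 20 */
  return 0;
}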

Let’s see how this can be used to cause arbitrary code to execute. Remember, for arbitrary code execution to occur there must be a way to cause the target program to write some data to a memory address that can be controlled.

Find the (possible) hand-off(s) to arbitrary code

To find any hand-off locations, I looked for places where memory writes were occurring in the program. I found a few memory writes:

  • 2 calls to memset()
  • 2 calls to memcpy()
  • parse_stream() of class Obj

Unfortunately (from the attacker’s perspective) the calls to memcpy() and memset() looked pretty sane. The parse_stream() function caught my interest, though.

Take a look:

class Obj {
    public:
    int parse_stream(int t, char *stream)
    {
      type = t;
      // ... do something with stream here ...
      return 0;
    }

    int length;
    int type;
/* ... */

REMEMBER: In C++, member functions of classes have a sekrit parameter which is a pointer to the object the function is being called on. In the function itself, this parameter is accessed using this. So the line writing to the type variable is actually doing this->type = t;, where this is supplied to the function sekritly by the compiler.

This is important because this piece of code could be our hand-off! We need to find a way to control the value of this so we can cause a memory write to a location of our choice.
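
To make that concrete, here's roughly how the compiler treats the member function, sketched as a C-style free function. Obj_parse_stream is a made-up name for illustration:

struct Obj {
  int length;  /* offset 0 */
  int type;    /* offset 4 */
};

/* roughly what the compiler generates for Obj::parse_stream: the object
 * pointer ("this") becomes a hidden first argument */
int Obj_parse_stream(struct Obj *this_ptr, int t, char *stream) {
  this_ptr->type = t;  /* a 4-byte write at this_ptr + offset 4 */
  return 0;
}

That 4-byte write at offset 4 is the whole game: point this at something else, and the write lands inside that something else.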

Controlling this to cause arbitrary code to execute

Take a look at an important piece of code in the challenge:

struct imetad {
  int msg_length;
  int (*callback)(int, struct imetad *);
/* ... */

Nice! The callback field of struct imetad is offset by 4 bytes into the structure. The type field of class Obj is also offset by 4 bytes. See where I’m going?

If we can control the this pointer to point at the struct imetad on the heap when parse_stream is called, it will overwrite the callback pointer. We’ll then be able to set the pointer to any address we want and hand-off execution to arbitrary code!

But how can we manipulate this?

Take a look at this piece of code that calls callback:

o[i].parse_stream(dword, stream_temp);
imd->callback(o[i].type, imd);

Since it is possible to overflow new and allocate fewer objects than safe_count says we have, for some values of i, o[i] will be pointing not at an actual Obj object but at other data on the heap. In fact, when i = 2, o[i] points at the struct imetad object on the heap. The call to parse_stream will pass in a corrupted this pointer that points at struct imetad. The write to type will actually overwrite callback, since they are both offset equal amounts into their respective structures.

And with that, we’ve successfully exploited the challenge causing arbitrary code to execute.

Let’s now figure out how to beat ASLR!

How to defeat address space layout randomization

I did NOT invent this technique, but I read about it and thought it was cool. You can read a more verbose explanation of this technique here. The idea behind the technique is pretty simple:

  • When you call exec, the PID remains the same, but the image of the process in memory is changed.
  • The kernel uses the PID and the number of jiffies (a fine-grained time measurement in the kernel) to pull data from the entropy pool.
  • If you can run a program which records stack, heap, and other addresses and then quickly call exec to start the vulnerable program, you can end up with the same memory layout.

My exploit program is actually a wrapper which records an approximate location of the heap (by just calling malloc()), generates the exploit file, and then executes the challenge binary.

Take a look at the relevant pieces of my exploit to get an idea of how it works:

/* ... */

/* do a malloc to get an idea of where the heap lives */
void *dummy = malloc(10);

/* ... */

unsigned int shell_addr = reinterpret_void_ptr_as_uint(dummy);

/*
 * XXX TODO FIXME - on my platform, execl'ing from here to the challenge binary
 * incurs a constant offset of 0x3160, probably for changes in the environment
 * (libs linked for c++ and whatnot).
 */
shell_addr += 0x3160;

/*
 * a guess as to how far off the heap the shellcode lives.
 * 
 * luckily we have a large NOP sled, so we should only fail when we miss
 * the current entropy cycle (see below).
 */
shell_addr += 700;

/* ... build exploit file in memory ... */

/* copy in our best guess as to the address of the shellcode, pray NOPs
 * take care of the rest! */
memcpy(entire_file+88, &shell_addr, sizeof(shell_addr));

/* ... write exploit out to disk ... */

/* launch program with the generated exploit file!
 *
 * calling execl here inherits the PID of this process, and IF we get lucky
 * ~85%+ of the time, we'll execute before the next entropy cycle and hit
 * the shellcode, even with ASLR=2.
 */
execl("./cpp_challenge", "cpp_challenge", "exploit", (char *)0);

My exploit for the C++ challenge

My exploit comes with the following caveats:

  • i386 system
  • The challenge binary is called “cpp_challenge” and lives in the same directory as the exploit binary.
  • The exploit binary can write to the directory, creating a file called “exploit” which will be handed off to “cpp_challenge”.

Get the full code of my exploit here.

Results

Results on my i386 Ubuntu 8.04 VM running in VMWare fusion, for each level of randomize_va_space:

  • 0 – 100% exploit hit rate
  • 1 – 100% exploit hit rate
  • 2 – ~85% exploit hit rate. Sometimes, my exploit code falls out of the time window and the address map changes before the challenge binary is run

I could probably boost the hit rate for level 2 a bit, but to do so I'd probably have to rewrite the entire exploit in assembly to make it run as fast as possible. I didn't think there was much point in going to such an extreme, though. So, an 85% hit rate is good enough.

Conclusion

  1. Security challenges are fun.
  2. More emphasis and more freely available information on secure coding would be very useful.
  3. Like it or not, developers need to be security conscious when writing code in C and C++.
  4. As C and C++ change, developers need to carefully consider security implications of new features.

Thanks for reading and don’t forget to subscribe (via RSS or e-mail) and follow me on twitter.

References

  1. Matasano Security LLC – Chargen – A C++ Challenge
  2. Integer overflow in the new[] operator

Written by Joe Damato

October 16th, 2009 at 4:59 am

Fixing Threads in Ruby 1.8: A 2-10x performance boost


Quick notes before things get crazy

OK, things might get a little crazy in this blog post so let’s clear a few things up before we get moving.

  • I like the gritty details, and this article in particular has a lot of gritty info. To reduce the length of the article for the casual reader, I’ve put a portion of the really gritty stuff in the Epilogue below. Definitely check it out if that is your thing.
  • This article, the code, and the patches below are for Linux and OSX on the x86 and x86_64 platforms only.
  • Even though there are code paths for both x86 and x86_64, I'm going to use the 64-bit register names and (briefly) mention the 64-bit binary interface.
  • Let's assume the binary is built with -fno-omit-frame-pointer; the patches don't care, but it'll make the explanation a bit simpler later.
  • If you don’t know what the above two things mean, don’t worry; I got your back chief.

How threads work in Ruby

Ruby 1.8 implements pre-emptible userland threads, also known as “green threads.” (Want to know more about threading models? See this post.) The major performance killer in Ruby's implementation of green threads is that the entire thread stack is copied to and from the heap on every context switch. Let's take a look, at a high level, at what happens when you run:

a = []
Thread.new{
	10000.times {
		a << "a"
		a.pop
	}
}

  1. A thread control block (tcb) is allocated in Ruby.
  2. The infamous thread timer is initialized, either as a pthread or as an itimer.
  3. Ruby scope information is copied to the heap.
  4. The new thread is added to the list of threads.
  5. The current thread is set as the new thread.
  6. rb_thread_yield is called to yield to the block you passed in.
  7. Your block starts executing.
  8. The timer interrupts the executing thread.
  9. The current thread’s state is stored:
    • memcpy() #1 (sometimes): If the stack has grown since the last save, realloc is called. If the allocator cannot extend the size of the current block in place, it may decide to move the data to a new block that is large enough. If that happens memcpy() is called to move the data over.
    • memcpy() #2 (always): A copy of this thread’s entire stack (starting from the top of the interpreter’s stack) is put on the heap.
  10. The next thread’s state is restored.
    • memcpy() #3 (always): This thread's entire stack is copied from the heap back onto the program stack.

Steps 9 and 10 crush performance when even small amounts of Ruby code are executed.

Many of the functions the interpreter uses to evaluate code are massive. They allocate a large number of local variables, creating stack frames of up to 4 kilobytes per function call. Those functions also call themselves recursively many times in a single expression. This leads to huge stacks, huge memcpy()s, and an incredible performance penalty.

If we can eliminate the memcpy()s we can get a lot of performance back. So, let’s do it.

Increase performance by putting thread stacks on the heap

[Remember: we are only talking about x86_64]

How stacks work – a refresher

Stacks grow downward from high addresses to low addresses. As data is pushed onto the stack, it grows downward; as data is popped, it shrinks upward. The register %rsp points to the top of the stack, which is the lowest used address. When %rsp is decremented or incremented, the stack grows or shrinks, respectively. The special property of the program stack is that it will grow until you run out of memory (or are killed by the OS for being bad). The operating system handles the automatic growth. See the Epilogue for some more information about this.
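
Here's a quick standalone illustration of the downward growth; the exact addresses will vary from run to run:

#include <stdio.h>

static void callee(int *caller_local) {
  int local;
  /* the callee's frame sits at a lower address than the caller's */
  printf("caller local at %p, callee local at %p\n",
         (void *)caller_local, (void *)&local);
}

int main(void) {
  int local;
  callee(&local);
  return 0;
}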

How to actually switch stacks

The %rsp register can be (and is) changed and adjusted directly by user code. So all we have to do is put the address of our stack in %rsp, and we’ve switched stacks. Then we can just call our thread start function. Pretty easy. A small blob of inline assembly should do the trick:

__asm__ __volatile__ ("movq %0, %%rsp\n\t"
                      "callq *%1\n"
                      :: "r" (th->stk_base),
                         "r" (rb_thread_start_2));

Two instructions, not too bad.

  1. movq %0, %%rsp moves a quad-word (th->stk_base) into %rsp. Quad-word is Intel speak for 4 words, where 1 Intel word is 2 bytes, so a quad-word is 8 bytes.
  2. callq *%1 calls the function whose address is held in the register, rb_thread_start_2 here. This has a side-effect or two, which I'll mention in the Epilogue below, for those interested in a few more details.

The above code is called once per thread. Calling rb_thread_start_2 spins up your thread; it never returns.

Where do we get stack space from?

When the tcb is created, we’ll allocate some space with mmap and set a pointer to it.

/* error checking omitted for brevity, but exists in the patch =] */
stack_area = mmap(NULL, total_size, PROT_READ | PROT_WRITE | PROT_EXEC,
			MAP_PRIVATE | MAP_ANON, -1, 0);

th->stk_ptr = th->stk_pos = stack_area;
th->stk_base = th->stk_ptr + (total_size - sizeof(int))/sizeof(VALUE *);

Remember, stacks grow downward, so that last line (th->stk_base = ...) is necessary because the base of the stack is actually at the top of the memory region returned by mmap(). The ugly math in there is for alignment, to comply with the x86_64 binary interface. Those curious about more gritty details should see the Epilogue below.

BUT WAIT, I thought stacks were supposed to grow automatically?

Yeah, the OS does that for the normal program stack. Not gonna happen for our mmap‘d regions. The best we can do is pick a good default size and export a tuning lever so that advanced users can adjust the stack size as they see fit.

BUT WAIT, isn’t that dangerous? If you fall off your stack, wouldn’t you just overwrite memory below?

Yep, but there is a fix for that too. It’s called a guard page. We’ll create a guard page below each stack that has its permission bits set to PROT_NONE. This means, if a thread falls off the bottom of its stack and tries to read, write, or execute the memory below the thread stack, a signal (usually SIGSEGV or SIGBUS) will be sent to the process.

The code for the guard page is pretty simple, too:

/* omit error checking for brevity */
mprotect(th->stk_ptr, getpagesize(), PROT_NONE);

Cool, let’s modify the SIGSEGV and SIGBUS signal handlers to check for stack overflow:

/* if the address which generated the fault is within the current thread's guard page... */
if (fault_addr <= (caddr_t)rb_curr_thread->guard &&
    fault_addr >= (caddr_t)rb_curr_thread->stk_ptr) {
  /* we hit the guard page, print out a warning to help app developers */
  rb_bug("Thread stack overflow! Try increasing it!");
}

See the epilogue for more details about this signal handler trick.

Patches

As always, this is super-alpha software.

Ruby 1.8.6 github raw .patch
Ruby 1.8.7 github raw .patch

Benchmarks

The computer language shootout has a thread test called thread-ring; let’s start with that.

require 'thread'
THREAD_NUM = 403
number = ARGV.first.to_i

threads = []
for i in 1..THREAD_NUM
   threads << Thread.new(i) do |thr_num|
      while true
         Thread.stop
         if number > 0
            number -= 1
         else
            puts thr_num
            exit 0
         end
      end
   end
end

prev_thread = threads.last
while true
   for thread in threads
      Thread.pass until prev_thread.stop?
      thread.run
      prev_thread = thread
   end
end

Results (ARGV[0] = 50000000):

Ruby 1.8.6: 1389.52s
Ruby 1.8.6 w/ heap stacks: 793.06s
Ruby 1.9.1: 752.44s

A speedup of about 1.75x compared to Ruby 1.8.6, and a bit slower than Ruby 1.9.1.

That is a pretty strong showing, for sure. Let’s modify the test slightly to illustrate the true power of this implementation.

Since our implementation does no memcpy()s, we expect the cost of context switching to stay constant regardless of thread stack size. Moreover, unmodified Ruby 1.8.6 should perform worse as thread stack size increases (since more and more CPU time goes to memcpy()).

Let’s test this hypothesis by modifying thread-ring slightly so that it increases the size of the stack after spawning threads.

def grow_stack n=0, &blk
  unless n > 100
    grow_stack n+1, &blk
  else
    yield
  end
end

require 'thread'
THREAD_NUM = 403
number = ARGV.first.to_i

threads = []
for i in 1..THREAD_NUM
  threads << Thread.new(i) do |thr_num|
    grow_stack do
      while true
        Thread.stop
        if number > 0
          number -= 1
        else
          puts thr_num
          exit 0
        end
      end
    end
  end
end

prev_thread = threads.last
while true
   for thread in threads
      Thread.pass until prev_thread.stop?
      thread.run
      prev_thread = thread
   end
end

Results (ARGV[0] = 50000000):

Ruby 1.8.6: 7493.50s
Ruby 1.8.6 w/ heap stacks: 799.52s
Ruby 1.9.1: 680.92s

A speedup of about 9.4x compared to Ruby 1.8.6, and a bit slower than Ruby 1.9.1.

Now, let's benchmark mongrel+sinatra.

require 'rubygems'
require 'sinatra'

disable :reload

set :server, 'mongrel' 

get '/' do 
  'hi' 
end

Results:

Ruby 1.8.6: 1395.43 requests/sec
Ruby 1.8.6 w/ heap stacks: 1770.26 requests/sec

An increase of about 1.26x in the most naive case possible.

Of course, if the handler did anything more than simply write “hi” (like use memcache or make SQL queries) there would be more function calls, more context switches, and much greater savings.

Conclusion

A couple lessons learned this time:

  • Hacking a VM like Ruby is kind of like hacking a kernel. Some subset of the tricks used in kernel hacking are useful in userland.
  • The x86_64 ABI is a must-read if you plan on doing any low-level hacking.
  • Keep your CPU manuals close by; they come in handy even in userland.
  • Installing your own signal handlers is really useful for debugging, even if they are dumping architecture-specific information.

Hope everyone enjoyed this blog post. I’m always looking for things to blog about. If there is something you want explained or talked about, send me an email or a tweet!

Don’t forget to subscribe and follow me and Aman on twitter.

Epilogue

Automatic stack growth

This can be achieved pretty easily with a little help from virtual memory and the CPU's page-fault exception. The idea is pretty simple. When you (or your shell on your behalf) call exec() to execute a binary, the OS will map a bunch of pages of memory for the stack and set the stack pointer of the process to the top of that memory. Once the stack space is exhausted, and the stack pointer is pushed onto un-mapped memory, a page fault will be generated.

The OS's page fault handler (installed via the interrupt descriptor table) will fire. The OS can then check the address that generated the exception and see that you fell off the bottom of your stack. This works very similarly to the guard page idea we added to protect Ruby thread stacks. The OS can then just map more memory into that area and tell your process to continue executing; your process never knows anything bad happened.

I hope to chat a little bit about interrupt and exception handlers in an upcoming blog post. Stay tuned!

callq side-effects

When a callq instruction is executed, the CPU pushes the return address on to the stack and then begins executing the function that was called. This is important because when the function you are calling executes a ret instruction, a quad-word is popped from the stack and put into the instruction pointer (%rip).

x86_64 Application Binary Interface

The x86_64 ABI is an extension of the x86 ABI. It specifies architecture programming information such as the fundamental types, caller- and callee-saved registers, alignment considerations, and more. It is a really important document for any programmer messing with x86_64-specific code. The particular piece of information relevant to this blog post is found buried in section 3.2.2:

The end of the input argument area shall be aligned on a 16 … byte boundary.

This is important to keep in mind when constructing thread stacks. We decided to avoid messing with alignment issues, so we did not pass any arguments to rb_thread_start_2. We wanted to avoid the mathematical errors that could happen if we tried to align the memory ourselves after pushing some data. We also wanted to avoid writing more assembly than we had to, so we avoided passing the arguments in registers, too.
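
For illustration, here is one way to satisfy that requirement when carving a stack base out of an mmap'd region. This is a sketch with a hypothetical helper, not the exact math used in the patch:

#include <stdint.h>
#include <stddef.h>

/* round the top of the mmap'd region down to a 16-byte boundary,
 * per the ABI alignment rule quoted above */
static void *
aligned_stack_base(void *stack_area, size_t total_size) {
  uintptr_t base = (uintptr_t)stack_area + total_size;
  return (void *)(base & ~(uintptr_t)15);
}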

Signal handler trick

The signal handler “trick” to check if you have hit the guard page is made possible by the sigaltstack() system call and the POSIX sa_sigaction interface.

sigaltstack() lets us specify a memory region to be used as the stack when a signal is delivered. This is extremely important for the signal handler trick, because once we fall off our thread stack, we certainly cannot expect to handle a signal using that stack space.

POSIX provides two ways for signals to be handled:

  • sa_handler interface: calls your handler and passes in the signal number.
  • sa_sigaction interface: calls your handler and passes in the signal number, a siginfo_t struct, and a ucontext_t. The siginfo_t struct contains (among other things) the address which generated the fault. Simply check this address to see if it's in the guard page, and if so, let the user know they just overflowed their stack (see the sketch below). Another useful, but extremely non-portable, modification added to Ruby's signal handlers was a dump of the contents of the ucontext_t to provide useful debugging information. This structure contains the register state at the time of the signal; dumping it can help debugging by showing which values are in which registers.
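
Here's a minimal sketch wiring those two pieces together: an alternate signal stack plus an sa_sigaction-style SIGSEGV handler. The guard page bounds check is elided; this only shows the setup:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void
segv_handler(int sig, siginfo_t *info, void *ctx) {
  /* info->si_addr is the faulting address; compare it against the
   * guard page bounds as shown earlier to detect a stack overflow */
  fprintf(stderr, "fault at address %p\n", info->si_addr);
  abort();
}

int main(void) {
  /* give signal handlers their own stack to run on */
  stack_t ss;
  ss.ss_sp = malloc(SIGSTKSZ);
  ss.ss_size = SIGSTKSZ;
  ss.ss_flags = 0;
  sigaltstack(&ss, NULL);

  /* install the handler via the sa_sigaction interface */
  struct sigaction sa;
  memset(&sa, 0, sizeof(sa));
  sa.sa_sigaction = segv_handler;
  sa.sa_flags = SA_SIGINFO | SA_ONSTACK;  /* deliver on the alternate stack */
  sigemptyset(&sa.sa_mask);
  sigaction(SIGSEGV, &sa, NULL);

  return 0;
}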

Written by Joe Damato

May 18th, 2009 at 5:00 am