time to bleed by Joe Damato

technical ramblings from a wanna-be unix dinosaur

Enabling BIOS options on a live server with no rebooting

This blog post describes a C program that toggles some CPU and chipset registers directly to enable Direct Cache Access (DCA) without needing a reboot or a switch in the BIOS. A very fun hack to write and investigate.

Special thanks…

Special thanks going out to Roman Nurik for helping me make the code CSS much, much prettier and easier to read.

Special thanks going out to Jake Douglas for convincing me that I shouldn’t use a stupid sensationalist title for this blog article :)

Intel I/OAT and Direct Cache Access (DCA)

From the Linux Foundation I/OAT project page [1]:

I/OAT (I/O Acceleration Technology) is the name for a collection of techniques by Intel to improve network throughput. The most significant of these is the DMA engine. The DMA engine is meant to offload from the CPU the copying of [socket buffer] data to the user buffer. This is not a zero-copy receive, but does allow the CPU to do other work while the copy operations are performed by the DMA engine.

Cool! So by using I/OAT the network stack in the Linux kernel can offload copy operations to increase throughput. I/OAT also includes a feature called Direct Cache Access (DCA), which can deliver data directly into processor caches. This is particularly cool because when a network interrupt arrives and packet data is copied to system memory, the CPU that later touches that data won't take a cache miss: DCA has already placed the data in its cache. Sick.

Measurements from the Linux Foundation project [2] indicate a 10% reduction in CPU usage, while the Myri-10G NIC website claims they've measured a 40% reduction in CPU usage [3]. For more information describing the performance benefits of DCA, see this incredibly detailed paper: Direct Cache Access for High Bandwidth Network I/O.

How to get I/OAT and DCA

To get I/OAT and DCA you need a few things:

  • Intel Xeon CPU(s)
  • A NIC with DCA support
  • A chipset which supports DCA
  • The ioatdma and dca Linux kernel modules
  • And last but not least, a switch in your BIOS to turn DCA on

That last item can actually be a bit more tricky than it sounds for several reasons:

  • Some BIOSes don't expose a way to turn DCA on, even though the CPU, chipset, and NIC all support it!
  • Your hosting provider may not allow BIOS access
  • Your system might already be up and running, and you don't want to reboot into the BIOS just to enable DCA

Let’s see what you can do to coerce DCA into working on your system if one of the above applies to you.

Build ioatdma kernel module

This is pretty easy: run make menuconfig and toggle I/OAT on as a module. You must build it as a module if you cannot (or do not want to) enable DCA in your BIOS.

The option can be found in Device Drivers -> DMA Engine Support -> Intel I/OAT DMA Support.

Toggling that option will build the ioatdma and dca modules. Build and install the new modules.

Enabling DCA without a reboot or BIOS access: Hack overview

In order to enable DCA a few special registers need to be touched.

  • The DCA capability bit in the PCI Express Control Register 4 in the configuration space for the PCI bridge your NIC(s) are attached to.
  • The DCA Model Specific Register on your CPU(s)

Let’s take a closer look at each stage of the hack.

Enable DCA in PCI Configuration Space

PCI configuration space is a memory region where control registers for PCI devices live. By changing register values, you can enable or disable specific features of a PCI device. A register in configuration space is addressable if you know the PCI bus, device, and function numbers of the device, plus the offset of the register you care about.
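
To make that addressing concrete, here's a minimal standalone sketch (not part of the hack itself) that opens one device by its bus/device/function coordinates with libpci and reads a single config-space register. The 0/0/8/0 coordinates and the 0x64 offset are just illustrative; find your actual bridge with lspci, and link with -lpci.

#include <pci/pci.h>
#include <stdio.h>

int main(void)
{
  struct pci_access *pacc = pci_alloc();
  pci_init(pacc);

  /* domain 0, bus 0, device 8, function 0 -- example coordinates only */
  struct pci_dev *dev = pci_get_dev(pacc, 0, 0, 8, 0);

  /* read the 32-bit register at offset 0x64 */
  u32 val = pci_read_long(dev, 0x64);
  printf("register at 0x64 = 0x%08x\n", val);

  pci_free_dev(dev);
  pci_cleanup(pacc);
  return 0;
}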

To find the DCA register for the Intel 5000, 5100, and 7300 chipsets, we need to consult the documentation [4].

Cool, so the register we need lives at offset 0x64. To enable DCA, bit 6 needs to be set to 1.

Toggling these registers can be a bit cumbersome, but luckily libpci provides simple APIs for scanning the PCI bus and accessing configuration space registers.

#include <pci/pci.h>
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

#define INTEL_BRIDGE_DCAEN_OFFSET   0x64
#define INTEL_BRIDGE_DCAEN_BIT      6
#define PCI_HEADER_TYPE_BRIDGE      1
#define PCI_VENDOR_ID_INTEL         0x8086 /* lol @ intel */
#define PCI_HEADER_TYPE             0x0e
#define MSR_P6_DCA_CAP              0x000001f8

void msr_dca_enable(void); /* defined below */

void check_dca(struct pci_dev *dev)
{
  /* read DCA status */
  u32 dca = pci_read_long(dev, INTEL_BRIDGE_DCAEN_OFFSET);

  /* if it's not enabled */
  if (!(dca & (1 << INTEL_BRIDGE_DCAEN_BIT))) {
    printf("DCA disabled, enabling now.\n");
   
    /* enable it */
    dca |= 1 << INTEL_BRIDGE_DCAEN_BIT;

    /* write it back */
    pci_write_long(dev, INTEL_BRIDGE_DCAEN_OFFSET, dca);
  } else {
    printf("DCA already enabled!\n");
  }
}

int main(void)
{
  struct pci_access *pacc;
  struct pci_dev *dev;
  u8 type;

  pacc = pci_alloc();
  pci_init(pacc);

  /* scan the PCI bus */
  pci_scan_bus(pacc);

  /* for each device */
  for (dev = pacc->devices; dev; dev=dev->next) {
    pci_fill_info(dev, PCI_FILL_IDENT | PCI_FILL_BASES);

    /* if it's an intel device */
    if (dev->vendor_id == PCI_VENDOR_ID_INTEL) {

        /* read the header byte, masking off the multifunction bit */
        type = pci_read_byte(dev, PCI_HEADER_TYPE) & 0x7f;

        /* if it's a PCI bridge, check and enable DCA */
        if (type == PCI_HEADER_TYPE_BRIDGE) {
          check_dca(dev);
        }
    }
  }

  msr_dca_enable();

  pci_cleanup(pacc);
  return 0;
}

Enable DCA in the CPU MSR

A model-specific register (MSR) is a control register the CPU provides to enable and configure CPU-specific features. In this case, we care about the DCA MSR. To find its address, let's consult the Intel Developer's Manual 3B [5].

This register lives at MSR address 0x1f8. We just need to set bit 0 to 1 and we should be good to go.

Thankfully, the msr kernel module exposes device files under /dev for each CPU's MSRs (run modprobe msr if /dev/cpu/0/msr doesn't exist):

#define MSR_P6_DCA_CAP      0x000001f8
void msr_dca_enable(void)
{
  char msr_file_name[64];
  int fd = 0, i = 0;
  u64 data;

  /* for each CPU */
  for (;i < NUM_CPUS; i++) {
    sprintf(msr_file_name, "/dev/cpu/%d/msr", i);
    
    /* open the MSR device file */
    fd = open(msr_file_name, O_RDWR);
    if (fd < 0) {
      perror("open failed!");
      exit(1);
    }

    /* read the current DCA status */
    if (pread(fd, &data, sizeof(data), MSR_P6_DCA_CAP) != sizeof(data)) {
      perror("reading msr failed!");
      exit(1);
    }

    printf("got msr value: %*llx\n", 1, (unsigned long long)data);

    /* if DCA is not enabled */
    if (!(data & 1)) {

      /* enable it */
      data |= 1;

      /* write it back */
      if (pwrite(fd, &data, sizeof(data), MSR_P6_DCA_CAP) != sizeof(data)) {
        perror("writing msr failed!");
        exit(1);
      }
    } else {
      printf("msr already enabled for CPU %d\n", i);
    }

    close(fd); /* don't leak a descriptor per CPU */
  }
}
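
If you'd rather compile the two listings above by hand instead of using the Makefile in the repo below, something like this should work, assuming you've saved them into a single dca_force.c and have the pciutils development headers installed (NUM_CPUS is the compile-time define used by msr_dca_enable):

gcc -DNUM_CPUS=8 -o dca_force dca_force.c -lpci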

Code for the hack is on github

Get it here: http://github.com/ice799/dca_force/tree/master

Putting it all together to get your speed boost

  1. Check out the hack from github: git clone git://github.com/ice799/dca_force.git
  2. Build the hack: make NUM_CPUS=whatever
  3. Run it: sudo ./dca_force
  4. Load the kernel module: sudo modprobe ioatdma
  5. Check your dmesg: dmesg | tail

You should see:

[   72.782249] dca service started, version 1.8
[   72.838853] ioatdma 0000:00:08.0: setting latency timer to 64
[   72.838865] ioatdma 0000:00:08.0: Intel(R) I/OAT DMA Engine found, 4 channels, device version 0x12, driver version 3.64
[   72.904027]   alloc irq_desc for 56 on cpu 0 node 0
[   72.904030]   alloc kstat_irqs on cpu 0 node 0
[   72.904039] ioatdma 0000:00:08.0: irq 56 for MSI/MSI-X

in your dmesg.

You should NOT see:

[    8.367333] ioatdma 0000:00:08.0: DCA is disabled in BIOS

You can now enjoy the DCA performance boost your BIOS or hosting provider didn't want you to have!

Conclusion

  • Intel I/OAT and DCA are pretty cool, and enabling them can give substantial performance wins
  • Cool features are sometimes stuffed away in the BIOS
  • If you don't have access to your BIOS, ask your provider nicely to flip the switch for you
  • If your BIOS doesn't have a toggle for the feature you need, try a BIOS update
  • If all else fails and you know what you are doing, you can sometimes pull off nasty hacks like this in userland to get what you want

Thanks for reading and don't forget to subscribe (via RSS or e-mail) and follow me on twitter.

P.S.

I know, I know. I skipped Part 2 of the signals post (here's Part 1 if you missed it). Part 2 is coming soon!

References

  1. http://www.linuxfoundation.org/en/Net:I/OAT
  2. http://www.linuxfoundation.org/en/Net:I/OAT
  3. http://www.myri.com/serve/cache/626.html
  4. Intel® 7300 Chipset Memory Controller Hub (MCH) Datasheet, Section 4.8.12.6
  5. Intel® 64 and IA-32 Architectures Software Developer’s Manual Volume 3B: System Programming Guide, Part 2, Appendix B-19

Written by Joe Damato

July 6th, 2009 at 8:00 am

Fixing Threads in Ruby 1.8: A 2-10x performance boost

Quick notes before things get crazy

OK, things might get a little crazy in this blog post so let’s clear a few things up before we get moving.

  • I like the gritty details, and this article in particular has a lot of gritty info. To reduce the length of the article for the casual reader, I’ve put a portion of the really gritty stuff in the Epilogue below. Definitely check it out if that is your thing.
  • This article, the code, and the patches below are for Linux and OS X on the x86 and x86_64 platforms only.
  • Even though there are code paths for both x86 and x86_64, I’m going to use the 64-bit register names and (briefly) mention the 64-bit binary interface.
  • Let’s assume the binary is built with -fno-omit-frame-pointer; the patches don’t care, but it’ll make the explanation a bit simpler later.
  • If you don’t know what the above two things mean, don’t worry; I got your back, chief.

How threads work in Ruby

Ruby 1.8 implements pre-emptible userland threads, also known as “green threads.” (Want to know more about threading models? See this post.) The major performance killer in Ruby’s implementation of green threads is that the entire thread stack is copied to and from the heap on every context switch. Let’s take a high-level look at what happens when you run:

a = [] # the block below needs an array to push to and pop from
Thread.new{
	10000.times {
		a << "a"
		a.pop
	}
}

  1. A thread control block (tcb) is allocated in Ruby.
  2. The infamous thread timer is initialized, either as a pthread or as an itimer.
  3. Ruby scope information is copied to the heap.
  4. The new thread is added to the list of threads.
  5. The current thread is set as the new thread.
  6. rb_thread_yield is called to yield to the block you passed in.
  7. Your block starts executing.
  8. The timer interrupts the executing thread.
  9. The current thread’s state is stored:
    • memcpy() #1 (sometimes): If the stack has grown since the last save, realloc is called. If the allocator cannot extend the size of the current block in place, it may decide to move the data to a new block that is large enough. If that happens memcpy() is called to move the data over.
    • memcpy() #2 (always): A copy of this thread’s entire stack (starting from the top of the interpreter’s stack) is put on the heap.
  10. The next thread’s state is restored.
    • memcpy() #3 (always): A copy of this thread’s entire stack is placed on the stack.

Steps 9 and 10 crush performance when even small amounts of Ruby code are executed.

Many of the functions the interpreter uses to evaluate code are massive. They allocate a large number of local variables, creating stack frames of up to 4 kilobytes per function call. Those functions also call themselves recursively many times in a single expression. This leads to huge stacks, huge memcpy()s, and an incredible performance penalty.

If we can eliminate the memcpy()s we can get a lot of performance back. So, let’s do it.

Increase performance by putting thread stacks on the heap

[Remember: we are only talking about x86_64]

How stacks work – a refresher

Stacks grow downward from high addresses to low addresses. As data is pushed onto the stack, it grows downward. As stuff is popped, it shrinks upward. The register %rsp serves as a pointer to the bottom of the stack; when it is decremented or incremented, the stack grows or shrinks, respectively. The special property of the program stack is that it will grow until you run out of memory (or are killed by the OS for being bad). The operating system handles the automatic growth. See the Epilogue for some more information about this.
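
If you want to see the downward growth for yourself, here's a quick sketch (not from the patch) that prints the address of a local variable at increasing recursion depths; compiled without optimization, each frame lands at a lower address:

#include <stdio.h>

static void probe(int depth)
{
  int local;
  printf("depth %d: frame at %p\n", depth, (void *)&local);
  if (depth < 3)
    probe(depth + 1);
}

int main(void)
{
  probe(0);
  return 0;
}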

How to actually switch stacks

The %rsp register can be (and is) changed and adjusted directly by user code. So all we have to do is put the address of our stack in %rsp, and we’ve switched stacks. Then we can just call our thread start function. Pretty easy. A small blob of inline assembly should do the trick:

__asm__ __volatile__ ("movq %0, %%rsp\n\t"
                      "callq *%1\n"
                      :: "r" (th->stk_base),
                         "r" (rb_thread_start_2));

Two instructions, not too bad.

  1. movq %0, %%rsp moves a quad-word (th->stk_base) into %rsp. A quad-word is Intel speak for 4 words (8 bytes), where 1 Intel word is 2 bytes.
  2. callq *%1 calls a function at the address “rb_thread_start_2.” This has a side-effect or two, which I’ll mention in the Epilogue below, for those interested in a few more details.

The above code is called once per thread. Calling rb_thread_start_2 spins up your thread and it never returns.
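
If raw assembly makes you queasy, the same "run a function on a stack you allocated yourself" idea can be expressed with the portable ucontext API. This sketch is an analogue for illustration only, not what the patch does:

#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

static ucontext_t main_ctx, thr_ctx;

static void thread_body(void)
{
  printf("running on a heap-allocated stack\n");
}

int main(void)
{
  getcontext(&thr_ctx);
  thr_ctx.uc_stack.ss_sp = malloc(64 * 1024);
  thr_ctx.uc_stack.ss_size = 64 * 1024;
  thr_ctx.uc_link = &main_ctx;      /* return here when thread_body ends */
  makecontext(&thr_ctx, thread_body, 0);

  swapcontext(&main_ctx, &thr_ctx); /* switch onto the new stack */
  printf("back on the original stack\n");
  return 0;
}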

Where do we get stack space from?

When the tcb is created, we’ll allocate some space with mmap and set a pointer to it.

/* error checking omitted for brevity, but exists in the patch =] */
stack_area = mmap(NULL, total_size, PROT_READ | PROT_WRITE | PROT_EXEC,
			MAP_PRIVATE | MAP_ANON, -1, 0);

th->stk_ptr = th->stk_pos = stack_area;
th->stk_base = th->stk_ptr + (total_size - sizeof(int))/sizeof(VALUE *);

Remember, stacks grow downward, so that last line (th->stk_base = ...) is necessary because the base of the stack is actually at the top of the memory region returned by mmap(). The ugly math in there is for alignment, to comply with the x86_64 binary interface. Those curious about more gritty details should see the Epilogue below.

BUT WAIT, I thought stacks were supposed to grow automatically?

Yeah, the OS does that for the normal program stack. Not gonna happen for our mmap‘d regions. The best we can do is pick a good default size and export a tuning lever so that advanced users can adjust the stack size as they see fit.

BUT WAIT, isn’t that dangerous? If you fall off your stack, wouldn’t you just overwrite memory below?

Yep, but there is a fix for that too. It’s called a guard page. We’ll create a guard page below each stack with its permission bits set to PROT_NONE. That way, if a thread falls off the bottom of its stack and tries to read, write, or execute the memory below its stack, a signal (usually SIGSEGV or SIGBUS) will be sent to the process.

The code for the guard page is pretty simple, too:

/* omit error checking for brevity */
mprotect(th->stk_ptr, getpagesize(), PROT_NONE);
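
Putting the two pieces together, here's a self-contained sketch of the allocation scheme (the struct is a simplified stand-in for the fields the patch adds to Ruby's thread control block):

#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

typedef struct {
  void *stk_ptr;   /* low end of the mapping; the guard page lives here */
  void *stk_base;  /* high end; the thread's initial stack pointer */
} thread_stack;

int alloc_thread_stack(thread_stack *ts, size_t total_size)
{
  void *area = mmap(NULL, total_size, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANON, -1, 0);
  if (area == MAP_FAILED)
    return -1;

  /* guard page at the low end; falling off the stack faults here */
  if (mprotect(area, getpagesize(), PROT_NONE) < 0) {
    munmap(area, total_size);
    return -1;
  }

  ts->stk_ptr = area;

  /* stacks grow down: the usable base is the top of the region,
     rounded down to a 16-byte boundary per the x86_64 ABI */
  ts->stk_base = (void *)(((uintptr_t)area + total_size) & ~(uintptr_t)15);
  return 0;
}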

Cool, let’s modify the SIGSEGV and SIGBUS signal handlers to check for stack overflow:

/* if the address which generated the fault is within the current thread's guard page... */
if (fault_addr <= (caddr_t)rb_curr_thread->guard &&
    fault_addr >= (caddr_t)rb_curr_thread->stk_ptr) {
  /* we hit the guard page, print out a warning to help app developers */
  rb_bug("Thread stack overflow! Try increasing it!");
}

See the epilogue for more details about this signal handler trick.

Patches

As always, this is super-alpha software.

Ruby 1.8.6 github raw .patch
Ruby 1.8.7 github raw .patch

Benchmarks

The computer language shootout has a thread test called thread-ring; let’s start with that.

require 'thread'
THREAD_NUM = 403
number = ARGV.first.to_i

threads = []
for i in 1..THREAD_NUM
   threads << Thread.new(i) do |thr_num|
      while true
         Thread.stop
         if number > 0
            number -= 1
         else
            puts thr_num
            exit 0
         end
      end
   end
end

prev_thread = threads.last
while true
   for thread in threads
      Thread.pass until prev_thread.stop?
      thread.run
      prev_thread = thread
   end
end

Results (ARGV[0] = 50000000):

Ruby 1.8.6 1389.52s
Ruby 1.8.6 w/ heap stacks 793.06s
Ruby 1.9.1 752.44s

A speed up of about 1.75x compared to Ruby 1.8.6. A bit slower than Ruby 1.9.1.

That is a pretty strong showing, for sure. Let’s modify the test slightly to illustrate the true power of this implementation.

Since our implementation does no memcpy()s, we expect the cost of context switching to stay constant regardless of thread stack size. Moreover, unmodified Ruby 1.8.6 should perform worse as thread stack size increases (because each context switch memcpy()s more data).

Let’s test this hypothesis by modifying thread-ring slightly so that it increases the size of the stack after spawning threads.

def grow_stack n=0, &blk
  unless n > 100
    grow_stack n+1, &blk
  else
    yield
  end
end

require 'thread'
THREAD_NUM = 403
number = ARGV.first.to_i

threads = []
for i in 1..THREAD_NUM
  threads << Thread.new(i) do |thr_num|
    grow_stack do
      while true
        Thread.stop
        if number > 0
          number -= 1
        else
          puts thr_num
          exit 0
        end
      end
    end
  end
end

prev_thread = threads.last
while true
   for thread in threads
      Thread.pass until prev_thread.stop?
      thread.run
      prev_thread = thread
   end
end

Results (ARGV[0] = 50000000):

Ruby 1.8.6 7493.50s
Ruby 1.8.6 w/ heap stacks 799.52s
Ruby 1.9.1 680.92s

A speed up of about 9.4x compared to Ruby 1.8.6. A bit slower than Ruby 1.9.1.

Now, let’s benchmark mongrel+sinatra.

require 'rubygems'
require 'sinatra'

disable :reload

set :server, 'mongrel' 

get '/' do 
  'hi' 
end

Results:

Ruby 1.8.6 1395.43 request/sec
Ruby 1.8.6 w/ heap stacks 1770.26 request/sec

An increase of about 1.26x in the most naive case possible.

Of course, if the handler did anything more than simply write “hi” (like use memcache or make sql queries) there would be more function calls, more context switches, and a much greater savings.

Conclusion

A couple lessons learned this time:

  • Hacking a VM like Ruby is kind of like hacking a kernel. Some subset of the tricks used in kernel hacking are useful in userland.
  • The x86_64 ABI is a must read if you plan on doing any low-level hacking.
  • Keep your CPU manuals close by; they come in handy even in userland.
  • Installing your own signal handlers is really useful for debugging, even if all they do is dump architecture-specific information.

Hope everyone enjoyed this blog post. I’m always looking for things to blog about. If there is something you want explained or talked about, send me an email or a tweet!

Don’t forget to subscribe and follow me and Aman on twitter.

Epilogue

Automatic stack growth

This can be achieved pretty easily with a little help from virtual memory and the CPU’s page fault mechanism. The idea is pretty simple. When you (or your shell on your behalf) call exec() to execute a binary, the OS maps a bunch of pages of memory for the stack and sets the process’s stack pointer to the top of that memory. Once the stack space is exhausted and the stack pointer moves into unmapped memory, a page fault is generated.

The OS’s page fault handler (installed via the interrupt descriptor table) will fire. The OS can then check the address that generated the exception and see that you fell off the bottom of your stack. This works very similarly to the guard page idea we added to protect Ruby thread stacks. It can then just map more memory into that area and tell your process to continue executing. Your process never knows anything bad happened.

I hope to chat a little bit about interrupt and exception handlers in an upcoming blog post. Stay tuned!

callq side-effects

When a callq instruction is executed, the CPU pushes the return address onto the stack and then begins executing the called function. This is important because when the called function executes a ret instruction, a quad-word is popped off the stack and loaded into the instruction pointer (%rip).

x86_64 Application Binary Interface

The x86_64 ABI is an extension of the x86 ABI. It specifies architecture programming information such as the fundamental types, caller and callee saved registers, alignment considerations and more. It is a really important document for any programmer messing with x86_64 architecture specific code.
The particular piece of information relevant for this blog post is found buried in section 3.2.2:

The end of the input argument area shall be aligned on a 16 … byte boundary.

This is important to keep in mind when constructing thread stacks. We decided to avoid messing with alignment issues, so we did not pass any arguments to rb_thread_start_2; we wanted to avoid the arithmetic slip-ups that could happen if we tried to align the memory ourselves after pushing data onto the stack. We also wanted to avoid writing more assembly than we had to, so we didn’t pass the arguments in registers, either.

Signal handler trick

The signal handler “trick” to check if you have hit the guard page is made possible by the sigaltstack() system call and the POSIX sa_sigaction interface.

sigaltstack() lets us specify a memory region to be used as the stack when a signal is delivered. This is extremely important for the signal handler trick: once we fall off our thread stack, we certainly cannot expect to handle a signal using that stack space.

POSIX provides two ways for signals to be handled:

  • sa_handler interface: calls your handler and passes in the signal number.
  • sa_sigaction interface: calls your handler and passes in the signal number, a siginfo_t struct, and a ucontext_t. The siginfo_t struct contains (among other things) the address that generated the fault. We simply check that address to see if it’s inside the guard page and, if so, let the user know they just overflowed their thread stack. Another useful, but extremely non-portable, addition to Ruby’s signal handlers was a dump of the ucontext_t contents to provide debugging information; this structure contains the register state at the time of the signal, so dumping it shows which values were in which registers. A minimal sketch of the whole setup follows.
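
Here is that sketch. It is not the patch's actual code: guard_lo and guard_hi are stand-ins for the current thread's guard page bounds, and a real implementation would look those up per thread.

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

static char *guard_lo, *guard_hi;  /* filled in when the stack is allocated */
static char altstack[64 * 1024];   /* must be at least SIGSTKSZ bytes */

static void overflow_handler(int sig, siginfo_t *info, void *ucontext)
{
  /* si_addr is the address that generated the fault */
  char *fault_addr = (char *)info->si_addr;
  if (fault_addr >= guard_lo && fault_addr < guard_hi)
    fprintf(stderr, "Thread stack overflow! Try increasing it!\n");
  abort();
}

static void install_overflow_handler(void)
{
  /* we can't handle the signal on the stack we just fell off of */
  stack_t ss;
  ss.ss_sp = altstack;
  ss.ss_size = sizeof(altstack);
  ss.ss_flags = 0;
  sigaltstack(&ss, NULL);

  struct sigaction sa;
  sa.sa_sigaction = overflow_handler;
  sa.sa_flags = SA_SIGINFO | SA_ONSTACK; /* deliver on the alternate stack */
  sigemptyset(&sa.sa_mask);
  sigaction(SIGSEGV, &sa, NULL);
  sigaction(SIGBUS, &sa, NULL);
}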

Written by Joe Damato

May 18th, 2009 at 5:00 am