time to bleed by Joe Damato

technical ramblings from a wanna-be unix dinosaur


Extending ltrace to make your Ruby/Python/Perl/PHP apps faster



If you enjoy this article, subscribe (via RSS or e-mail) and follow me on twitter.

A few days ago, Aman (@tmm1) was complaining to me about a slow running process:

I want to see what is happening in userland and trace calls to extensions. Why doesn’t ltrace work for Ruby processes? I want to figure out which MySQL queries are causing my app to be slow.

It turns out that ltrace did not have support for libraries loaded with libdl. This is a problem for languages like Ruby, Python, PHP, Perl, and others because in many cases extensions, libraries, and plugins for these languages are loaded by the VM using libdl. This means that ltrace is somewhat useless for tracking down performance issues in dynamic languages.

A couple of late nights of hacking later, I managed to finagle libdl support into ltrace. Since most people probably don’t care about the technical details of how it was implemented, I’ll start by showing how to use the patch I wrote and what sort of output you can expect. This patch has made tracking down slow queries (among other things) really easy and I hope others will find this useful.

How to use ltrace:

After you’ve applied my patch (below) and rebuilt ltrace, let’s say you’d like to trace MySQL queries and have ltrace tell you when the query was executed and how long it took. There are two steps:

  1. Give ltrace info so it can pretty-print: echo "int mysql_real_query(addr,string,ulong);" > custom.conf
  2. Tell ltrace you want to hear about mysql_real_query: ltrace -F custom.conf -ttTgx mysql_real_query -p <pid>

Here’s what those arguments mean:

  • -F use a custom config file for pretty-printing (default: /etc/ltrace.conf; add your functions there to avoid -F if you wish)
  • -tt print the time (including microseconds) when the call was executed
  • -T time the call and print how long it took
  • -x the name of a function you care about (can be given multiple times)
  • -g don’t place breakpoints on any library calls except the ones you name with -x. This is optional, but it makes ltrace produce much less output and is a lot easier to read if you only care about a few functions.

PHP

Test script

<?php
mysql_connect("localhost", "root");
while(true){
    mysql_query("SELECT sleep(1)");
}

ltrace output

22:31:50.507523 zend_hash_find(0x025dc3a0, "mysql_query", 12) = 0 <0.000029>
22:31:50.507781 mysql_real_query(0x027bc540, "SELECT sleep(1)", 15) = 0 <1.000600>
22:31:51.508531 zend_hash_find(0x025dc3a0, "mysql_query", 12) = 0 <0.000025>
22:31:51.508675 mysql_real_query(0x027bc540, "SELECT sleep(1)", 15) = 0 <1.000926>

ltrace command

ltrace -ttTg -x zend_hash_find -x mysql_real_query -p [pid of script above]

Python

Test script

import MySQLdb
db = MySQLdb.connect("localhost", "root", "", "test")
cursor = db.cursor()
sql = """SELECT sleep(1)"""
while True:
	cursor.execute(sql)
	data = cursor.fetchone()
db.close()

ltrace output

22:24:39.104786 PyEval_SaveThread() = 0x21222e0 <0.000029>
22:24:39.105020 PyEval_SaveThread() = 0x21222e0 <0.000024>
22:24:39.105210 PyEval_SaveThread() = 0x21222e0 <0.000024>
22:24:39.105303 mysql_real_query(0x021d01d0, "SELECT sleep(1)", 15) = 0 <1.002083>
22:24:40.107553 PyEval_SaveThread() = 0x21222e0 <0.000026>
22:24:40.107713 PyEval_SaveThread() = 0x21222e0 <0.000024>
22:24:40.107909 PyEval_SaveThread() = 0x21222e0 <0.000025>
22:24:40.108013 mysql_real_query(0x021d01d0, "SELECT sleep(1)", 15) = 0 <1.001821>

ltrace command

ltrace -ttTg -x PyEval_SaveThread -x mysql_real_query -p [pid of script above]

Perl

Test script

#!/usr/bin/perl
use DBI;

$dsn = "DBI:mysql:database=test;host=localhost";
$dbh = DBI->connect($dsn, "root", "");
$drh = DBI->install_driver("mysql");
@databases = DBI->data_sources("mysql");
$sth = $dbh->prepare("SELECT SLEEP(1)");

while (1) {
  $sth->execute;
}

ltrace output

22:42:11.194073 Perl_push_scope(0x01bd3010) =  <0.000028>
22:42:11.194299 mysql_real_query(0x01bfbf40, "SELECT SLEEP(1)", 15) = 0 <1.000876>
22:42:12.195302 Perl_push_scope(0x01bd3010) =  <0.000024>
22:42:12.195408 mysql_real_query(0x01bfbf40, "SELECT SLEEP(1)", 15) = 0 <1.000967>

ltrace command

ltrace -ttTg -x mysql_real_query -x Perl_push_scope -p [pid of script above]

Ruby

Test script

require 'rubygems'
require 'sequel'

DB = Sequel.connect('mysql://root@localhost/test')

while true
  p DB['select sleep(1)'].select.first
  GC.start
end

snip of ltrace output

22:10:00.195814 garbage_collect()  = 0 <0.022194>
22:10:00.218438 mysql_real_query(0x02740000, "select sleep(1)", 15) = 0 <1.001100>
22:10:01.219884 garbage_collect() = 0 <0.021401>
22:10:01.241679 mysql_real_query(0x02740000, "select sleep(1)", 15) = 0 <1.000812>

ltrace command used:

ltrace -ttTg -x garbage_collect -x mysql_real_query -p [pid of script above]

Where to get it

How ltrace works normally

ltrace works by setting software breakpoints on entries in a process’ Procedure Linkage Table (PLT).

What is a software breakpoint

A software breakpoint is just a special instruction (the single byte 0xcc on the x86 and x86_64) that raises a debug interrupt (interrupt 3 on the x86 and x86_64). When interrupt 3 is raised, the CPU executes a handler installed by the kernel. The kernel then sends a signal to the process that generated the interrupt. (Want to know more about how signals and interrupts work? Check out an earlier blog post: here)
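If you’ve never seen it done, here’s a rough sketch of how a tracer plants a breakpoint with ptrace. This is just the idea (error handling omitted), not ltrace’s actual code:

#include <sys/ptrace.h>
#include <sys/types.h>

/* Plant an int3 (0xcc) at addr in the traced process. Returns the
 * original word so the tracer can restore it after the breakpoint
 * hits (the tracer sees a SIGTRAP via waitpid, puts the old byte
 * back, rewinds the instruction pointer, and continues). */
long set_breakpoint(pid_t pid, unsigned long addr)
{
    long orig = ptrace(PTRACE_PEEKTEXT, pid, (void *)addr, NULL);
    long patched = (orig & ~0xffL) | 0xcc;  /* replace low byte with int3 */
    ptrace(PTRACE_POKETEXT, pid, (void *)addr, (void *)patched);
    return orig;
}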

What is a PLT and how does it work?

A PLT is a table of absolute addresses to functions. It is used because the link editor doesn’t know where functions in shared objects will be located. Instead, a table is created so that the program and the dynamic linker can work together to find and execute functions in shared objects. I’ve simplified the explanation a bit1, but at a high level:

  1. The program calls a function in a shared object; the link editor has arranged for that call to jump to a slot in the PLT.
  2. The PLT slot sets some data up for the dynamic linker and then hands control over to it.
  3. The dynamic linker looks at the info set up by the program and fills the absolute address of the called function into the PLT.
  4. The dynamic linker then calls the function.
  5. Subsequent calls to the same function jump to the same PLT slot, but skip the dynamic linker entirely, because the absolute address is already in the PLT after the first call.

Since all calls to library functions occur via the PLT, ltrace sets breakpoints on each PLT entry in a program.

Why ltrace didn’t work with libdl loaded libraries

Libraries loaded with libdl are loaded at run time and functions (and other symbols) are accessed by querying the dynamic linker (by calling dlsym()). The compiler and link editor don’t know anything about libraries loaded this way (they may not even exist!) and as such no PLT entries are created for them.

Since no PLT entries exist, ltrace can’t trace these functions.
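To make that concrete, here’s roughly what every VM’s extension loader boils down to. This is a sketch (the library and function names are made up), built with gcc demo.c -ldl:

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* "extension.so" stands in for a language extension. The link
     * editor never saw it, so no PLT entry exists for query(). */
    void *handle = dlopen("./extension.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }

    /* The call happens through a function pointer from dlsym(),
     * never through the PLT, so a PLT breakpoint can't catch it. */
    int (*query)(const char *) = (int (*)(const char *))dlsym(handle, "query");
    if (query)
        query("SELECT sleep(1)");

    dlclose(handle);
    return 0;
}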

What needed to be done to make ltrace libdl-aware

OK, so we understand the problem. ltrace only sets breakpoints on PLT entries and libdl loaded libraries don’t have PLT entries. How can this be fixed?

Luckily, the dynamic linker and ELF all work together to save your ass.

Executable and Linking Format (ELF) is a file format for executables, shared libraries, and more2. The file format can get a bit complicated, but all you really need to know is: ELF consists of different sections which hold different types of entries. There is a section called .dynamic which has an entry named DT_DEBUG. This entry stores the address of a debugging structure in the address space of the process. In Linux, this struct has type struct r_debug.

How to use struct r_debug to win the game

The debug structure is updated by the dynamic linker at runtime to reflect the current state of shared object loading. The structure contains 3 things that will help us in our quest:

  1. state – the current state of the mapping change taking place (begin add, begin delete, consistent)
  2. brk – the address of a function internal to the dynamic linker that will be called when the linker maps, unmaps, or has completed mapping a shared object.
  3. link map – Pointer to the start of a list of currently loaded objects. This list is called the link map and is represented as a struct link_map in Linux.
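To see the link map in action, here’s a small program that walks its own link map through the _r_debug structure glibc exports (declared in link.h). This is a sketch of the idea, not ltrace’s code; ltrace has to do the same walk by reading the traced process’ memory with ptrace:

#include <link.h>
#include <stdio.h>

int main(void)
{
    struct link_map *lm;

    /* r_map is the head of the list of loaded objects. */
    for (lm = _r_debug.r_map; lm; lm = lm->l_next)
        printf("base 0x%lx  %s\n", (unsigned long)lm->l_addr,
               lm->l_name && lm->l_name[0] ? lm->l_name : "[main program]");

    /* r_brk is the linker-internal function a tracer breakpoints. */
    printf("r_brk (linker hook): 0x%lx\n", (unsigned long)_r_debug.r_brk);
    return 0;
}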

Tie it all together and bring it home

To add support for libdl loaded libraries to ltrace, the steps are:

  1. Find the address of the debug structure in the .dynamic section of the program.
  2. Set a software breakpoint on brk.
  3. When the dynamic linker updates the link map, it will trigger the software breakpoint.
  4. When the breakpoint is triggered, check state in the debug structure.
  5. If a new library has been added, walk the link map and figure out what was added.
  6. Search the added library’s symbol table for the symbols we care about.
  7. Set software breakpoints on whatever is found.
  8. Steps 3-7 repeat.
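For step 1, here’s what the scan looks like when a process runs it on itself; a sketch only, since ltrace performs the same scan on the traced program’s .dynamic section through ptrace:

#include <elf.h>
#include <link.h>
#include <stddef.h>

extern ElfW(Dyn) _DYNAMIC[];   /* this program's .dynamic section */

/* Scan .dynamic for DT_DEBUG; its d_ptr holds the address of the
 * struct r_debug that the dynamic linker keeps up to date. */
struct r_debug *find_r_debug(void)
{
    ElfW(Dyn) *d;

    for (d = _DYNAMIC; d->d_tag != DT_NULL; d++)
        if (d->d_tag == DT_DEBUG)
            return (struct r_debug *)d->d_un.d_ptr;
    return NULL;
}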

That isn’t too hard all thanks to the dynamic linker providing a way for us to hook into its internal events.

Conclusion

  • Read the System V ABI for your CPU. It is filled with insanely useful information that can help you be a better programmer.
  • Use the source. A few times while hacking on this patch I looked through the source for GDB and glibc to help me figure out what was going on.
  • Understanding how things work at a low-level can help you build tools to solve your high-level problems.

Thanks for reading and don’t forget to subscribe (via RSS or e-mail) and follow me on twitter.

References

  1. System V Application Binary Interface AMD64 Architecture Processor Supplement, p 78 []
  2. Executable and Linking Format (ELF) Specification []

Written by Joe Damato

October 8th, 2009 at 4:59 am

Useful kernel and driver performance tweaks for your Linux server



This article is going to address some kernel and driver tweaks that are interesting and useful. We use several of these in production with excellent performance, but you should proceed with caution and do research prior to trying anything listed below.

Tickless System

The tickless kernel feature allows for on-demand timer interrupts. This means that during idle periods, fewer timer interrupts will fire, which should lead to power savings, cooler running systems, and fewer useless context switches.

Kernel option: CONFIG_NO_HZ=y

Timer Frequency

You can select the rate at which timer interrupts in the kernel will fire. When a timer interrupt fires on a CPU, the process running on that CPU is interrupted while the timer interrupt is handled. Reducing the rate at which the timer fires allows for fewer interruptions of your running processes. This option is particularly useful for servers with multiple CPUs where processes are not running interactively.

Kernel options: CONFIG_HZ_100=y and CONFIG_HZ=100

Connector

The connector module is a kernel module which reports process events such as fork, exec, and exit to userland. This is extremely useful for process monitoring. You can build a simple system (or use an existing one like god) to watch mission-critical processes. If a process dies due to a signal (like SIGSEGV or SIGBUS) or exits unexpectedly, you’ll get an asynchronous notification from the kernel. The process can then be restarted by your monitor, keeping downtime to a minimum when unexpected events occur.

Kernel options: CONFIG_CONNECTOR=y and CONFIG_PROC_EVENTS=y
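If you want to play with it, here’s a minimal sketch of a listener: it subscribes to process events over a netlink connector socket and prints exit events. It needs root and the kernel options above, trims all error handling, and ignores everything but exits:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/connector.h>
#include <linux/cn_proc.h>

int main(void)
{
    int sock = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_CONNECTOR);

    struct sockaddr_nl addr = { 0 };
    addr.nl_family = AF_NETLINK;
    addr.nl_groups = CN_IDX_PROC;
    addr.nl_pid    = getpid();
    bind(sock, (struct sockaddr *)&addr, sizeof(addr));

    /* Subscribe: a netlink header wrapping a cn_msg wrapping
     * PROC_CN_MCAST_LISTEN. */
    char buf[NLMSG_SPACE(sizeof(struct cn_msg) + sizeof(enum proc_cn_mcast_op))];
    memset(buf, 0, sizeof(buf));
    struct nlmsghdr *nlh = (struct nlmsghdr *)buf;
    struct cn_msg *cn = (struct cn_msg *)NLMSG_DATA(nlh);

    nlh->nlmsg_len  = NLMSG_LENGTH(sizeof(*cn) + sizeof(enum proc_cn_mcast_op));
    nlh->nlmsg_type = NLMSG_DONE;
    nlh->nlmsg_pid  = getpid();
    cn->id.idx = CN_IDX_PROC;
    cn->id.val = CN_VAL_PROC;
    cn->len    = sizeof(enum proc_cn_mcast_op);
    *(enum proc_cn_mcast_op *)cn->data = PROC_CN_MCAST_LISTEN;
    send(sock, nlh, nlh->nlmsg_len, 0);

    for (;;) {
        char rbuf[4096];
        ssize_t n = recv(sock, rbuf, sizeof(rbuf), 0);
        if (n <= 0)
            break;
        struct cn_msg *msg = (struct cn_msg *)NLMSG_DATA((struct nlmsghdr *)rbuf);
        struct proc_event *ev = (struct proc_event *)msg->data;
        if (ev->what == PROC_EVENT_EXIT)
            printf("pid %d exited, code %u\n",
                   ev->event_data.exit.process_pid,
                   ev->event_data.exit.exit_code);
    }
    return 0;
}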

TCP segmentation offload (TSO)

A popular feature among newer NICs is TCP segmentation offload (TSO). This feature allows the kernel to offload the work of dividing large packets into smaller packets to the NIC. This frees up the CPU to do more useful work and reduces the amount of overhead that the CPU passes along the bus. If your NIC supports this feature, you can enable it with ethtool:

[joe@timetobleed]% sudo ethtool -K eth1 tso on

Let’s quickly verify that this worked:

[joe@timetobleed]% sudo ethtool -k eth1
Offload parameters for eth1:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp segmentation offload: on
udp fragmentation offload: off
generic segmentation offload: on
large receive offload: off

[joe@timetobleed]% dmesg | tail -1
[892528.450378] 0000:04:00.1: eth1: TSO is Enabled

Intel I/OAT DMA Engine

This kernel option enables the Intel I/OAT DMA engine that is present in recent Xeon CPUs. This option increases network throughput as the DMA engine allows the kernel to offload network data copying from the CPU to the DMA engine. This frees up the CPU to do more useful work.

Check to see if it’s enabled:

[joe@timetobleed]% dmesg | grep ioat
ioatdma 0000:00:08.0: setting latency timer to 64
ioatdma 0000:00:08.0: Intel(R) I/OAT DMA Engine found, 4 channels, device version 0x12, driver version 3.64
ioatdma 0000:00:08.0: irq 56 for MSI/MSI-X

There’s also a sysfs interface where you can get some statistics about the DMA engine. Check the directories under /sys/class/dma/.

Kernel options: CONFIG_DMADEVICES=y and CONFIG_INTEL_IOATDMA=y and CONFIG_DMA_ENGINE=y and CONFIG_NET_DMA=y and CONFIG_ASYNC_TX_DMA=y

Direct Cache Access (DCA)

Intel’s I/OAT also includes a feature called Direct Cache Access (DCA). DCA allows a driver to warm a CPU cache. A few NICs support DCA, the most popular (to my knowledge) is the Intel 10GbE driver (ixgbe). Refer to your NIC driver documentation to see if your NIC supports DCA. To enable DCA, a switch in the BIOS must be flipped. Some vendors supply machines that support DCA, but don’t expose a switch for DCA. If that is the case, see my last blog post for how to enable DCA manually.

You can check if DCA is enabled:

[joe@timetobleed]% dmesg | grep dca
dca service started, version 1.8

If DCA is possible on your system but disabled you’ll see:

ioatdma 0000:00:08.0: DCA is disabled in BIOS

Which means you’ll need to enable it in the BIOS or manually.

Kernel option: CONFIG_DCA=y

NAPI

The “New API” (NAPI) is a rework of the packet processing code in the kernel to improve performance for high speed networking. NAPI provides two major features1:

Interrupt mitigation: High-speed networking can create thousands of interrupts per second, all of which tell the system something it already knew: it has lots of packets to process. NAPI allows drivers to run with (some) interrupts disabled during times of high traffic, with a corresponding decrease in system load.

Packet throttling: When the system is overwhelmed and must drop packets, it’s better if those packets are disposed of before much effort goes into processing them. NAPI-compliant drivers can often cause packets to be dropped in the network adaptor itself, before the kernel sees them at all.

Many recent NIC drivers support NAPI automatically, so you don’t need to do anything. Some drivers need you to explicitly enable NAPI in the kernel config or on the command line when compiling the driver. If you are unsure, check your driver documentation. A good place to look for docs is your kernel source under Documentation, available on the web here: http://lxr.linux.no/linux+v2.6.30/Documentation/networking/ (be sure to select the correct kernel version first!)

Older e1000 drivers (newer drivers need nothing): make CFLAGS_EXTRA=-DE1000_NAPI install

Throttle NIC Interrupts

Some drivers allow the user to specify the rate at which the NIC will generate interrupts. The e1000e driver allows you to pass a command line option, InterruptThrottleRate, when loading the module with insmod. For the e1000e there are two dynamic interrupt throttle mechanisms, specified on the command line as 1 (dynamic) and 3 (dynamic conservative). The adaptive algorithm classifies traffic into different classes and adjusts the interrupt rate appropriately. The difference between dynamic and dynamic conservative is the rate for the “Lowest Latency” traffic class; dynamic (1) has a much more aggressive interrupt rate for this traffic class.

As always, check your driver documentation for more information.

With insmod: insmod e1000e.o InterruptThrottleRate=1

Process and IRQ affinity

Linux allows the user to specify which CPUs processes and interrupt handlers are bound to.

  • Processes You can use taskset to specify which CPUs a process can run on
  • Interrupt Handlers The interrupt map can be found in /proc/interrupts, and the affinity for each interrupt can be set in the file smp_affinity in the directory for each interrupt under /proc/irq/

This is useful because you can pin the interrupt handlers for your NICs to specific CPUs: when a shared resource (say, a lock in the network stack) is touched and loaded into a CPU cache, the next time the handler runs it will be put on the same CPU, avoiding the costly cache invalidations that can occur if the handler lands on a different CPU.

Better still, reports2 indicate up to a 24% improvement when processes and the IRQs for the NICs those processes get data from are pinned to the same CPUs. Doing this ensures that the data loaded into the CPU cache by the interrupt handler can be used (without invalidation) by the process; extremely high cache locality is achieved.
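To make the pinning concrete, here’s a small sketch: it pins the calling process to CPU 2 and then points a NIC’s IRQ at the same CPU. The IRQ number (24) is made up; check /proc/interrupts for yours. You can get the same effect from the shell with taskset and by echoing a mask into smp_affinity:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(2, &set);                        /* CPU 2 => hex mask 4 */
    if (sched_setaffinity(0, sizeof(set), &set) != 0)
        perror("sched_setaffinity");

    /* Pin the (hypothetical) NIC IRQ 24 to the same CPU. */
    FILE *f = fopen("/proc/irq/24/smp_affinity", "w");
    if (f) {
        fprintf(f, "4\n");
        fclose(f);
    } else {
        perror("fopen (root required)");
    }
    return 0;
}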

oprofile

oprofile is a system-wide profiler that can profile both kernel and application level code. There is a kernel driver for oprofile which collects data from the x86’s Model Specific Registers (MSRs) to give very detailed information about the performance of running code. oprofile can also annotate source code with performance information to make fixing bottlenecks easy. See oprofile’s homepage for more information.

Kernel options: CONFIG_OPROFILE=y and CONFIG_HAVE_OPROFILE=y

epoll

epoll(7) is useful for applications which must watch for events on large numbers of file descriptors. The epoll interface is designed to easily scale to large numbers of file descriptors. epoll is already enabled in most recent kernels, but some strange distributions (which will remain nameless) have this feature disabled.

Kernel option: CONFIG_EPOLL=y
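If you haven’t used the API, its core is tiny. Here’s a toy sketch that watches stdin; sockets work exactly the same way, and that’s where epoll’s scalability pays off:

#include <stdio.h>
#include <unistd.h>
#include <sys/epoll.h>

int main(void)
{
    int epfd = epoll_create1(0);        /* epoll_create(n) on older kernels */

    struct epoll_event ev = { 0 };
    ev.events = EPOLLIN;
    ev.data.fd = 0;
    epoll_ctl(epfd, EPOLL_CTL_ADD, 0, &ev);    /* watch fd 0 (stdin) */

    struct epoll_event ready[16];
    int n = epoll_wait(epfd, ready, 16, 5000); /* block up to 5 seconds */
    for (int i = 0; i < n; i++)
        printf("fd %d is readable\n", ready[i].data.fd);

    close(epfd);
    return 0;
}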

Conclusion

  • There are a lot of useful levers that can be pulled when trying to squeeze every last bit of performance out of your system
  • It is extremely important to read and understand your hardware documentation if you hope to achieve the maximum throughput your system can achieve
  • You can find documentation for your kernel online at the Linux LXR. Make sure to select the correct kernel version because docs change as the source changes!

Thanks for reading and don’t forget to subscribe (via RSS or e-mail) and follow me on twitter.

References

  1. http://www.linuxfoundation.org/en/Net:NAPI []
  2. http://software.intel.com/en-us/articles/improved-linux-smp-scaling-user-directed-processor-affinity/ []

Written by Joe Damato

July 28th, 2009 at 3:20 am

Fix a bug in Ruby’s configure.in and get a ~30% performance boost.



Special thanks…

Going out to Jake Douglas for pushing the initial investigation and getting the ball rolling.

The whole --enable-pthread thing

Ask any Ruby hacker how to easily increase performance in a threaded Ruby application and they’ll probably tell you:

Yo dude… Everyone knows you need to configure Ruby with --disable-pthread.

And it’s true; configure Ruby with --disable-pthread and you get a ~30% performance boost. But… why?

For this, we’ll have to turn to our handy tool strace. We’ll also need a simple Ruby program to test with. How about something like this:

def make_thread
  Thread.new {
    a = []
    10_000_000.times {
      a << "a"
      a.pop
    }
  }
end

t = make_thread 
t1 = make_thread 

t.join
t1.join

Now, let's run strace on a version of Ruby configure'd with --enable-pthread and point it at our test script. The output from strace looks like this:

22:46:16.706136 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 <0.000004>
22:46:16.706177 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 <0.000004>
22:46:16.706218 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 <0.000004>
22:46:16.706259 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 <0.000005>
22:46:16.706301 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 <0.000004>
22:46:16.706342 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 <0.000004>
22:46:16.706383 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 <0.000004>
22:46:16.706425 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 <0.000004>
22:46:16.706466 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 <0.000004>

Pages and pages and pages of sigprocmask system calls (Actually, running with strace -c, I get about 20,054,180 calls to sigprocmask, WOW). Running the same test script against a Ruby built with --disable-pthread and the output does not have pages and pages of sigprocmask calls (only 3 times, a HUGE reduction).

OK, so let's just set a breakpoint in GDB... right?

OK, so we should just be able to set a breakpoint on sigprocmask and figure out who is calling it.

Well, not exactly. You can try it, but the breakpoint won't trigger (we'll see why a little bit later).

Hrm, that kinda sucks and is confusing. This will make it harder to track down who is calling sigprocmask in the threaded case.

Well, we know that when you run configure the script creates a config.h with a bunch of defines that Ruby uses to decide which functions to use for what. So let's compare ./configure --enable-pthread with ./configure --disable-pthread:

[joe@mawu:/home/joe/ruby]% diff config.h config.h.pthread 
> #define _REENTRANT 1
> #define _THREAD_SAFE 1
> #define HAVE_LIBPTHREAD 1
> #define HAVE_NANOSLEEP 1
> #define HAVE_GETCONTEXT 1
> #define HAVE_SETCONTEXT 1


OK, now if we grep the Ruby source code, we see that whenever HAVE_[SG]ETCONTEXT are set, Ruby uses the system calls setcontext() and getcontext() to save and restore state for context switching and for exception handling (via the EXEC_TAG).

What about when HAVE_[SG]ETCONTEXT are not define'd? Well in that case, Ruby uses _setjmp/_longjmp.

Bingo!

That's what's going on! From the _setjmp/_longjmp man page:

... The _longjmp() and _setjmp() functions shall be equivalent to longjmp() and setjmp(), respectively, with the additional restriction that _longjmp() and _setjmp() shall not manipulate the signal mask...

And from the [sg]etcontext man page:

... uc_sigmask is the set of signals blocked in this context (see sigprocmask(2)) ...


The issue is that getcontext calls sigprocmask on every invocation but _setjmp does not.
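You can verify this yourself with a tiny C program and strace; this demo is mine, not Ruby’s code. Run it under strace -c -e trace=rt_sigprocmask once with the getcontext argument and once without, and compare the call counts:

#include <setjmp.h>
#include <stdio.h>
#include <string.h>
#include <ucontext.h>

int main(int argc, char **argv)
{
    ucontext_t uc;
    jmp_buf env;
    int i;

    for (i = 0; i < 100000; i++) {
        if (argc > 1 && strcmp(argv[1], "getcontext") == 0)
            getcontext(&uc);   /* calls sigprocmask internally every time */
        else
            _setjmp(env);      /* skips the signal mask entirely */
    }
    return 0;
}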

BUT WAIT: if that’s true, why didn’t GDB hit a sigprocmask breakpoint before?

x86_64 assembly FTW, again

Let's fire up gdb and figure out this breakpoint-not-breaking thing. First, let's start by disassembling getcontext (snipped for brevity):

(gdb) p getcontext
$1 = {<text variable, no debug info>} 0x7ffff7825100 <getcontext>
(gdb) disas getcontext
...
0x00007ffff782517f <getcontext+127>: mov    $0xe,%rax
0x00007ffff7825186 <getcontext+134>: syscall
...

Yeah, that's pretty weird. I'll explain why in a minute, but let's look at the disassembly of sigprocmask first:

(gdb) p sigprocmask
$2 = {<text variable, no debug info>} 0x7ffff7817340 <__sigprocmask>
(gdb) disas sigprocmask
...
0x00007ffff7817383 <__sigprocmask+67>: mov $0xe,%rax
0x00007ffff7817388 <__sigprocmask+72>: syscall
...

Yeah, this is a bit confusing, but here's the deal.

Recent Linux kernels implement a shiny new method for making system calls. This new way was created because the old way (int $0x80) turned out to be pretty slow, so Intel created new instructions (syscall/sysret on x86_64, sysenter/sysexit on the x86) to execute system calls without such huge overhead.

All you need to know right now (I'll try to blog more about this in the future) is that the %rax register holds the system call number. The syscall instruction transfers control to the kernel and the kernel figures out which syscall you wanted by checking the value in %rax. Let's just make sure that sigprocmask is actually 0xe:

[joe@pluto:/usr/include]% grep -Hrn "sigprocmask" asm-x86_64/unistd.h 
asm-x86_64/unistd.h:44:#define __NR_rt_sigprocmask                     14


Bingo. It's calling sigprocmask (albeit a bit obscurely).

OK, so getcontext isn’t calling sigprocmask directly; instead it replicates the wrapper code that sigprocmask has in its function body. That’s why we didn’t hit the sigprocmask breakpoint: GDB would have stopped us had we landed on the address 0x7ffff7817340, but we never did.

Instead, getcontext reimplements the wrapper code for sigprocmask itself and GDB is none the wiser.

Mystery solved.

The patch

Get it HERE

The patch works by adding a new configure flag called --disable-ucontext to allow you to specifically disable [sg]etcontext from being called. You use this in conjunction with --enable-pthread, like this:

./configure --disable-ucontext --enable-pthread


After you build Ruby configured like that, its performance is on par with (and sometimes slightly faster than) Ruby built with --disable-pthread: about a 30% performance boost compared to a plain --enable-pthread build.

I added the switch because I wanted to preserve the original Ruby behavior: if you just pass --enable-pthread without --disable-ucontext, Ruby will do the old thing and generate piles of sigprocmasks.

Conclusion

  1. Things aren't always what they seem - GDB may lie to you. Be careful.
  2. Use the source, Luke. Libraries can do unexpected things, debug builds of libc can help!
  3. I know I keep saying this, assembly is useful. Start learning it today!

If you enjoyed this blog post, consider subscribing (via RSS) or following (via twitter).

You'll want to stay tuned; tmm1 and I have been on a roll the past week. Lots of cool stuff coming out!

Written by Joe Damato

May 5th, 2009 at 3:20 am

5 Things You Don’t Know About User IDs That Will Destroy You



*nix user and group IDs are complicated, confusing, and often misused. Look at this code snippet from the popular Ruby project, Starling:

def drop_privileges
  Process.egid = options[:group] if options[:group]
  Process.euid = options[:user] if options[:user]
end

At quick first glance, you might think this code looks OK. But you’d be wrong.

Let’s take a look at 5 things you probably don’t know about user and group IDs that can lead you to your downfall.

  1. The difference between real, effective, and saved IDs

     This is always a bit confusing, but without a solid understanding of this concept you are doomed later.

    • Real ID – The real ID is the ID of the process that created the current process. So, let’s say you log in to your box as joe, your shell is then launched with its real ID set to joe. All processes you start from your shell will inherit the real ID joe as their real ID.
    • Effective ID – The effective ID is the ID that the system uses to determine whether a process can take a particular action. There are two popular ways to change your effective ID:
      • su – the su program changes your effective, real, and saved IDs to the ID of the user you are switching to.
      • set ID upon execute (abbreviated setuid) – You can mark a program’s set uid upon execute bit so that the program runs with its effective and saved ID set to the owner of the program (which may not necessarily be you). The real ID will remain untouched. For example, if you have a program:


        uid_t ruid, euid, suid;
        int rv = getresuid(&ruid, &euid, &suid);

        printf("ruid %d, euid %d, suid %d\n", ruid, euid, suid);

        If you then chown the program to root and chmod +s it (which turns on the setuid bit), the program will print:

        ruid 1000, euid 0, suid 0

        when it is run (assuming your user ID is 1000).

    • Saved ID – The saved ID is set to the effective ID when the program starts. This exists so that a program can regain its original effective ID after it drops its effective ID to an unprivileged ID. This use-case can cause problems (as we’ll see soon) if it is not correctly managed.
    • If you start a program as yourself, and it does not have its set ID upon execute bit set, then the program will start running with its real, effective, and saved IDs set to your user ID.
    • If you run a setuid program, your real ID remains unchanged, but your effective and saved IDs are set to the owner of the file.
    • su does the same as running a setuid program, but it also changes your real ID.

  2. Don’t use Process.euid= in Ruby; stay as far away as possible
    • Process.euid= is EXTREMELY platform specific. It might do any of the following:

      • Set just your effective ID
      • Set your effective, real, and saved ID.

      On most recent Linux kernels, Process.euid= changes ONLY the Effective ID. In most cases, this is NOT what you want. Check out this sample Ruby script. What would happen if you ran this script as root?

    • require 'etc'

      def write_file
        begin
          File.open("/test", "w+") do |f|
            f.write("hello!\n")
            f.close
          end
          puts "wrote test file"
        rescue Errno::EACCES
          puts "could not write test file"
        end
      end
       
      puts "ok, set uid to nobody"
      Process.euid = Etc.getpwnam("nobody").uid
       
      puts "going to try to write to / now…"
       
      write_file
       
      puts "restoring back to root"
       
      Process.euid = 0
       
      puts "now writing file"
       
      write_file

      This might surprise you, but the script regains root’s ID after it has dropped itself down to nobody.

    • Why does this work?
    • Well as we just said, Process.euid= doesn’t touch the Saved ID, only the Effective ID. As a result, the effective ID can be set back to the saved ID at any time. The only way to avoid this is to call a different Ruby function as we’ll see in #4 below.

  3. Buggy native code running as nobody can execute arbitrary code as root in 8 bytes
    • Imagine a Ruby script much like the one above. The script is run as root to do something special (maybe bind to port 80).
    • The process then drops privileges to nobody.
    • Afterward, your application interacts with buggy native code in the Ruby interpreter, a Ruby extension, or a Ruby gem.
    • If that buggy native code can be “tricked” into executing arbitrary code, a malicious user can elevate the process up from nobody to root in just 8 bytes. Those 8 bytes are: \x31\xdb\x8d\x43\x17\x99\xcd\x80 – which is a binary representation of setuid(0).
    • At this point, a malicious user can execute arbitrary code as the root user.

    Let’s take a look at an (abbreviated) code snippet (full here):

    ## we're using a buggy gem
    require 'badgem'

    # do some special operations here as the privileged user

    # ok, now let’s (incorrectly) drop to nobody
    Process.euid = Etc.getpwnam("nobody").uid

    # let’s take some user input
    s = MyModule::GetUserInput

    # let’s assume the user is malicious and supplies something like:
    # "\x6a\x17\x58\x31\xdb\xcd\x80\x6a\x0b\x58\x99\x52" +
    # "\x68//sh\x68/bin\x89\xe3\x52\x53\x89\xe1\xcd\x80"
    # as the string.
    # That string is x86_32 linux shellcode for running
    # setuid(0); and execve("/bin/sh", 0, 0) !

    # pass that to a buggy Ruby Gem
    BadGem::bad(s)

    # the user is now sitting in a root shell!!

    This is obviously NOT GOOD.

  4. How to change the real, effective, and saved IDs

     In the list below, I’m going to list the functions as syscall - RubyFunction

    • setuid(uid_t uid) - Process::Sys.setuid(integer)
    • This pair of functions sets the real, effective, and saved user IDs to the value passed in (assuming a privileged effective ID; an unprivileged caller can only set these to its real or saved ID). This is a useful function for permanently dropping privileges, as we’ll see soon. This is a POSIX function. Use this when possible.

    • setresuid(uid_t ruid, uid_t euid, uid_t suid) - Process::Sys.setresuid(rid, eid, sid)
    • This pair of functions allows you to set the real, effective, saved User IDs to arbitrary values, assuming you have a privileged effective ID. Unfortunately, this function is NOT POSIX and is not portable. It does exist on Linux and some BSDs, though.

    • setreuid(uid_t ruid, uid_t eid) - Process::Sys.setreuid(rid, eid)
    • This pair of functions allows you to set the real and effective user IDs to the values passed in. On Linux:

      • A process running with an unprivileged effective ID will only have the ability to set the real ID to the real ID or to the effective ID.
      • A process running with a privileged effective ID will have its saved ID set to the new effective ID if the real or effective IDs are set to a value which was not the previous real ID.

      This is a POSIX function, but has lots of cases with undefined behavior. Be careful.

    • seteuid(uid_t eid) - Process::Sys.seteuid(eid)
    • This pair of functions sets the effective ID of the process but leaves the real and saved IDs unchanged. IMPORTANT: Any process (including those with unprivileged effective IDs) may change their effective ID to their real or saved ID. This is exactly the behavior we saw with the Ruby script in #2 above. This is a POSIX function.

  5. How to correctly and permanently drop privileges

     You should use either the:

    • setuid(uid_t uid) - Process::Sys.setuid(integer)
    • or

    • setresuid(uid_t ruid, uid_t euid, uid_t suid) - Process::Sys.setresuid(rid, eid, sid)

    pair of functions to set the real, effective, and saved IDs to the lowest privileged ID possible. On many systems, this is the ID of the user nobody.

    For the truly paranoid, it is recommended to check that dropping privileges was actually successful before continuing. For example:

    require 'etc'

    def test_drop
      begin
        Process::Sys.setuid(0)
      rescue Errno::EPERM
        true
      else
        false
      end
    end

    uid = Etc.getpwnam("nobody").uid
    Process::Sys.setuid(uid)

    if !test_drop
      puts "Failed!"
      #handle error
    end
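    For completeness, here is the same permanent drop sketched in C, for non-Ruby programs. This is a minimal sketch assuming the process starts as root; note the setgroups() call, because supplementary groups won’t be dropped for you and are easy to forget:

    #include <grp.h>
    #include <pwd.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void drop_privileges(const char *name)
    {
        struct passwd *pw = getpwnam(name);
        if (!pw) {
            fprintf(stderr, "no such user: %s\n", name);
            exit(1);
        }

        if (setgroups(0, NULL) != 0 ||   /* clear supplementary groups first */
            setgid(pw->pw_gid) != 0 ||   /* real, effective, and saved gid */
            setuid(pw->pw_uid) != 0) {   /* real, effective, and saved uid */
            perror("dropping privileges failed");
            exit(1);
        }

        if (setuid(0) != -1) {           /* paranoia: regaining root must fail */
            fprintf(stderr, "privilege drop did not stick!\n");
            exit(1);
        }
    }

    int main(void)
    {
        drop_privileges("nobody");
        return 0;
    }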

Conclusion

*nix user and group ID management is confusing, difficult, and extremely error prone, full of nuances, gotchas, and caveats. It is no wonder so many people make mistakes when trying to write secure code. The major things to keep in mind from this article are:

  • Avoid Process.euid= at all costs.
  • Drop privileges as soon as possible in your application.
  • Drop those privileges permanently.
  • Ensure that privileges were correctly dropped.
  • Carefully read and re-read man pages when using the functions listed above.

Written by Joe Damato

April 13th, 2009 at 12:06 pm

a/b test mallocs against your memory footprint



The other day at Kickball Labs we were discussing whether linking Ruby against tcmalloc (or ptmalloc3, nedmalloc, or any other malloc) would have any noticeable effect on application latency. After taking a side in the argument, I started wondering how we could test this scenario.

We had a couple different ideas about testing:

  • Look at other people’s benchmarks
    BUT do the memory workloads tested in the benchmarks actually match our own workload at all?
  • Run different allocators on different Ruby backends
    BUT different backends will get different users who will use the system differently and cause different allocation patterns
  • Try to recreate our application’s memory footprint and test that against different mallocs
    BUT how?

I decided to explore the last option and came up with an interesting solution. Let’s dive into how to do this.

Get the code:

http://github.com/ice799/malloc_wrap/tree/master

Step 1: We need to get a memory footprint of our process

So we have some random binary  (in this case it happens to be a Ruby interpreter, but it could be anything) and we’d like to track when it calls malloc/realloc/calloc and free (from now on I’ll refer to all of these as malloc-family for brevity). There are two ways to do this, the right way and the wrong/hacky/unsafe way.

  • The “right” way to do this, with libc malloc hooks:

    Edit your application code to use the malloc debugging hooks provided by libc. When a malloc-family function is called, your hook executes and outputs to a file which function was called and what arguments were passed to it.

  • The “wrong/hacky/unsafe” way to do this, with LD_PRELOAD:

    Create a shim library and point LD_PRELOAD at it. The shim exports the malloc-family symbols, and when your application calls one of those functions, the shim code gets executed. The shim logs which function was called and with what arguments. The shim then calls the libc version of the function (so that memory is actually allocated/freed) and returns control to the application.

I chose to do it the second way, because I like living on the edge. The second way is unsafe because you can’t call any functions which use a malloc-family function before your hooks are set up. If you do, you can end up in an infinite loop and crash the application.

You can check out my implementation for the shim library here: malloc_wrap.c
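If you just want the flavor of the trick without reading the whole file, a stripped-down, malloc-only shim looks something like this. The log format here is invented for the sketch (not what my shim writes), and the comments point out the bootstrapping hazard:

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stddef.h>
#include <stdio.h>
#include <unistd.h>

static void *(*real_malloc)(size_t);

void *malloc(size_t size)
{
    char line[64];
    int len;

    /* Bootstrap hazard: dlsym() itself may allocate, which would
     * re-enter this function before real_malloc is set. A real shim
     * has to guard against that recursion. */
    if (!real_malloc)
        real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");

    void *p = real_malloc(size);

    /* Log with snprintf + write(2); stdio functions like fprintf can
     * call malloc and recurse. */
    len = snprintf(line, sizeof(line), "m %p %zu\n", p, size);
    write(2, line, len);
    return p;
}

Build it as a shared object and point LD_PRELOAD at the resulting .so, exactly as shown below for the real shim.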

Why does your shim output such weirdly formatted data?

The answer is sort of complicated, but let’s keep it simple: I originally had a different idea about how I was going to use the output. When that first try failed, I tried something else and translated the data to the format I needed, instead of re-writing the shim. What can I say, I’m a lazy programmer.

OK, so once you’ve built the shim (gcc -O2 -Wall -ldl -fPIC -o malloc_wrap.so -shared malloc_wrap.c), you can launch your binary like this:

% LD_PRELOAD=/path/to/shim/malloc_wrap.so /path/to/your/binary -your -args

You should now see output in /tmp/malloc-footprint.pid

Step 2: Translate the data into a more usable format

Yeah, I should have gone back and re-written the shim, but nothing happens exactly as planned. So I wrote a quick Ruby script to convert the output into a more usable format. The script sorts through the output and renames memory addresses to unique integer IDs starting at 1 (0 is hardcoded to NULL).

The format is pretty simple. The first line of the file has the number of calls to malloc-family functions, followed by a blank line, and then the memory footprint. Each line of the memory footprint has 1 character which represents the function called followed by a few arguments. For the free() function, there is only one argument, the ID of the memory block to free. malloc/calloc/realloc have different arguments, but the first argument following the one character is always the ID of the return value. The next arguments are the arguments usually passed to malloc/calloc/realloc in the same order.

Have a look at my ruby script here: build_trace_file.rb

It might take a while to convert your data to this format; I suggest running this in a screen session, especially if your memory footprint data is large. Just as a warning: we collected 15 *gigabytes* of data over a 10 hour period, and this script took *10 hours* to convert it. We ended up with a 7.8 gigabyte file.

% ruby /path/to/script/build_trace_file.rb /path/to/raw/malloc-footprint.PID /path/to/converted/my-memory-footprint

Step 3: Replay the allocation data with different allocators and measure time, memory usage.

OK, so we now have a file which represents the memory footprint of our application. It’s time to build the replayer, link against your malloc implementation of choice, fire it up and start measuring time spent in allocator functions and memory usage.

Have a look at the replayer here: alloc_tester.c
Build the replayer: gcc -ggdb -Wall -ldl -fPIC -o tester alloc_tester.c

Use ltrace

ltrace is similar to strace, but for library calls. You can use ltrace -c to sum the amount of time spent in different library calls and output a cool table at the end. It will look something like this:

% time     seconds  usecs/call     calls      function
------ ----------- ----------- --------- --------------------
 86.70   37.305797          62    600003 fscanf
 10.64    4.578968          33    138532 malloc
  2.36    1.014294          18     55263 free
  0.25    0.109550          18      5948 realloc
  0.03    0.011407          45       253 printf
  0.02    0.010665          42       252 puts
  0.00    0.000167          20         8 calloc
  0.00    0.000048          48         1 fopen
------ ----------- ----------- --------- --------------------
100.00   43.030896                800260 total

Conclusion

Using a different malloc implementation can provide speed or memory improvements depending on your allocation patterns. Hopefully the code provided will help you test different allocators to determine whether or not swapping out the default libc allocator is the right choice for you. Our results are still pending; we had a lot of allocator data (15 GB!) and it takes several hours to replay the data with just one malloc implementation. Once we’ve gathered some data about the different implementations and their effects, I’ll post the results and some analysis. As always, stay tuned and thanks for reading!

Written by Joe Damato

March 16th, 2009 at 8:39 pm