Discussion about this post

Jordan Newman

Back before nginx, when we needed a webserver that could perform but Apache 2.x was a broken mess and Apache 1.3 was all one process per connection and didn't scale like you mentioned, I wrote an event-driven webserver based on kqueue. I found kqueue superior in syntax and in what you could build with it, so I kept using FreeBSD instead of switching to Linux. Your description of everything above is on the money. The other benefit of the low memory footprint was that all that unused RAM would be used by FreeBSD as filesystem cache. So if I served, say, a 1 GB video file, on the first request the OS will read it from disk, see that there is free memory, and store it in the FS cache. Then let's say I got 1000 new requests for that file in the next minute: it won't have to do a slow read from disk. Instead the OS will serve it straight from memory without touching the disk.

This brings me to another optimization: sendfile() / zero copy. I use the sendfile() system call to copy X bytes from the file descriptor to the socket descriptor. With sendfile() the kernel writes the data we tell it to the socket descriptor entirely within kernel space, and without blocking (make sure to use the SF_NODISKIO option). Without sendfile() you would need to read the data into kernel space, copy it to user space, then copy it from user space back into kernel space and onto the socket descriptor of the client connection. That's a lot of context switching for no reason, not to mention it wastes twice the RAM while handling the request.

TJ Easter

Interesting read!
