Fundamentally, a lot of API development is about gathering, manipulating, and transferring data across processes. Copying data from one memory area to another is integral to the efficiency of such a system. My interest is to better understand how this “copy” operation works, and more specifically, to learn how to make it efficient from a software engineer’s point of view.

Usually, when a user space process (our application binary) has to perform system operations through its high-level software interfaces (the language runtime), such as reading or writing data from/to a device (disk, network, etc.) or moving data from one device to another, it makes one or more system calls that are then executed in kernel space by the operating system.1 Most often these system calls belong to the read2 and write3 families. These context switches from user space to kernel space are time consuming, and therefore expensive.
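
To make the cost concrete, here is a minimal sketch in Go of the traditional copy path: each loop iteration issues a read and a write system call, and every byte crosses the user/kernel boundary twice through the user-space buffer `buf`. The file names are placeholders I chose for illustration.

```go
package main

import (
	"io"
	"log"
	"os"
)

func main() {
	// Placeholder file names for illustration.
	src, err := os.Open("input.bin")
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()

	dst, err := os.Create("output.bin")
	if err != nil {
		log.Fatal(err)
	}
	defer dst.Close()

	// The user-space buffer every byte has to pass through.
	buf := make([]byte, 32*1024)
	for {
		n, rerr := src.Read(buf) // read(2): kernel -> user-space buffer
		if n > 0 {
			// write(2): user-space buffer -> kernel
			if _, werr := dst.Write(buf[:n]); werr != nil {
				log.Fatal(werr)
			}
		}
		if rerr == io.EOF {
			break
		}
		if rerr != nil {
			log.Fatal(rerr)
		}
	}
}
```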

The idea is to make such data transfers cheaper by using system calls that avoid the extra context switches and the intermediate user-space copy: they move bytes from one file descriptor to another entirely within kernel space. Examples of such calls include sendfile4 and splice5, among others.
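
Go exposes this path without any direct syscall plumbing: when io.Copy sees an *os.File source and a *net.TCPConn destination, the runtime hands the transfer to sendfile(2) on Linux, so the bytes never pass through a Go buffer. A minimal sketch, assuming a local TCP peer is listening; the address and file name are placeholders:

```go
package main

import (
	"io"
	"log"
	"net"
	"os"
)

func main() {
	f, err := os.Open("payload.bin") // placeholder file name
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	conn, err := net.Dial("tcp", "127.0.0.1:9000") // placeholder peer
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// io.Copy -> (*net.TCPConn).ReadFrom -> sendfile(2) on Linux:
	// the file's bytes move fd-to-fd inside the kernel.
	n, err := io.Copy(conn, f)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("sent %d bytes without a user-space copy", n)
}
```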

I would refer you to an article from IBM which beautifully explains the intricacies.

Applications

  • Kafka uses zero-copy data transfer to achieve high throughput when moving data between disk and the network.67

  • Go’s HTTP FileServer from the standard library leverages zero-copy techniques 8 (see the sketch after this list).
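
Turning the Go point above into a usage example: serving a directory with http.FileServer means the response body is a plain *os.File, and on Linux (when nothing like TLS or response buffering gets in the way) the copy to the socket lands on the sendfile(2) path under the hood. The directory and port below are placeholders.

```go
package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./static over plain HTTP; file bytes travel disk -> socket
	// via sendfile(2) on platforms that support it.
	http.Handle("/", http.FileServer(http.Dir("./static")))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```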

It’s pleasantly surprising to discover that the tools I use already implement this concept. The idea of zero copy sharpens my mental model of distributed system design.