Creating your own linux containers
Last edited on Jul 10, 2016

I'm not trying to say that this solution is better than LXC or docker. I'm just doing this because it is very simple to get a basic container created with chroot and cgroups. Of course, docker provides many more features than this, but this really is the basis. It's easy to make containers in linux, depending on the number of features you need.


cgroups are a way to run a process while limiting its resources such as cpu time and memory. The way it works is by creating a group (a cgroup), defining the limits for various resources (you can control more than just memory and CPU time) and then running a process under that cgroup. It is important to know that every child process of that process will also run under the same cgroup. So if you start a bash shell under a cgroup, then any program that you manually launch from the shell will be constrained to the same cgroup as the shell.

If you create a cgroup with a memory limit of 10mb, this 10mb limit will apply to all processes running in the cgroup: it is a constraint on the sum of the memory used by every process under that cgroup.


On slackware 14.2 RC2, I didn't have to install or set up anything; cgroups were available to use with the tools already installed. I did have to manually enable cgroups in the kernel though, since I compiled my own kernel. Without going into the details (because this is covered in a million other websites) you need to make sure that:

  • cgroup kernel features are built in or loaded as modules
  • cgroup tools are installed
  • cgroup filesystem is mounted (normally accessible through /sys/fs/cgroup/)

Here's how to run a process in a cgroup:

cgcreate -g memory:testgroup
# now the "/sys/fs/cgroup/memory/testgroup" exists and contains files that control the limits of the group

# assign a limit of 4mb to that cgroup
echo "4194304" > /sys/fs/cgroup/memory/testgroup/memory.limit_in_bytes

# run bash in that cgroup
cgexec -g memory:testgroup /bin/bash

Note that instead of using cgexec, you could also just write the current shell's PID into /sys/fs/cgroup/<subsystem>/<group>/tasks. Then your shell, and whatever processes you start from it, would execute in the cgroup.

Making your own containers

A container is nothing more than a chroot'd environment with processes confined in a cgroup. It's not difficult to write your own software to automate the environment setup. There is a chroot() system call that already exists. For cgroups, I was wondering if there were any system calls available to create them. Using strace while running cgcreate, I found out that cgcreate only manipulates files in the cgroup file system. Then I got the rest of the information I needed from the documentation file located in the Documentation folder of the linux kernel source: Documentation/cgroups/cgroups.txt.

Creating a cgroup

To create a new cgroup, it is simply a matter of creating a new directory under the folder of each subsystem that the cgroup needs to control. For example, to create a cgroup that controls memory and cpu usage, you just need to create a directory "AwesomeControlGroup" under /sys/fs/cgroup/memory and /sys/fs/cgroup/cpu. These directories will automatically be populated with the files needed to control the cgroup (cgroup fs is a virtual filesystem, so the files do not exist on a physical medium).

Configuring a cgroup

To configure a cgroup, it is just a matter of writing parameters in the relevant file. For example: /sys/fs/cgroup/memory/testgroup/memory.limit_in_bytes

Running a process in a cgroup

To run a process in a cgroup, you need to launch the process, take its PID and write it to /sys/fs/cgroup/<subsystem>/<group>/tasks. My "container creator" application (let's call it the launcher) does it like this:

  • The launcher creates a cgroup and sets the relevant parameters.
  • The launcher clones (instead of forking; now we have a parent and a child).
  • The parent waits for the child to die, after which it will destroy the cgroup.
  • The child writes its PID in the /sys/fs/cgroup/<subsystem>/<group>/tasks file for all subsystems (memory, cpu, etc).
  • At this point, the child runs as per the cgroup's constraints.
  • The child invokes execv() with the application that the user wanted to run in the container.

The reason I use clone() instead of fork() is that clone() can take the CLONE_NEWPID flag. This creates a new PID namespace that isolates the new process from the others that exist on the system. Indeed, when the cloned process queries its PID, it will find that it is assigned PID 1. Doing a "ps" will not list the other processes that run on the system since this new process is isolated.
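Here is a minimal sketch of that clone() call (the names and error handling are illustrative, not my actual launcher code; creating a PID namespace requires root):

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];

static int child_main(void *arg)
{
    // Inside the new PID namespace, this process sees itself as PID 1
    printf("child pid as seen by itself: %d\n", (int)getpid());
    execv("/bin/bash", (char *[]){ "/bin/bash", NULL });
    return 1; // only reached if execv() failed
}

int main(void)
{
    // CLONE_NEWPID puts the child in a new PID namespace; SIGCHLD lets
    // the parent wait for it like a regular child process
    pid_t pid = clone(child_main, child_stack + sizeof(child_stack),
                      CLONE_NEWPID | SIGCHLD, NULL);
    if (pid < 0) { perror("clone"); return 1; }
    waitpid(pid, NULL, 0); // this is where the launcher would destroy the cgroup
    return 0;
}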

Destroying a cgroup

To destroy a cgroup, just delete the /sys/fs/cgroup/<subsystem>/<group> directory.

So interfacing with cgroups from userland is just a matter of manipulating files in the cgroup file system. It is really easy to do programmatically, and also from the shell without any special tools or libraries.
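As a sketch, here is the whole lifecycle done programmatically (cgroup v1 layout as above; error handling omitted to keep it short):

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    // Same effect as cgcreate: the kernel populates the directory with control files
    mkdir("/sys/fs/cgroup/memory/testgroup", 0755);

    // Same effect as the echo above: set a 4mb limit
    FILE *f = fopen("/sys/fs/cgroup/memory/testgroup/memory.limit_in_bytes", "w");
    fprintf(f, "4194304");
    fclose(f);

    // Put ourselves (and every future child process) in the cgroup
    f = fopen("/sys/fs/cgroup/memory/testgroup/tasks", "w");
    fprintf(f, "%d", (int)getpid());
    fclose(f);
    return 0;
}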

My container application

My "container launcher" is a c++ application that chroots in a directory and run a process under a cgroup that it creates. To use it, I only need to type "./a.out container_path". The container_path is the path to a container directory that contains a "settings" files and a "chroot" directory. The "chroot" directory contains the environment of the container (a linux distribution maybe?) and the "settings" file contains settings about the cgroup configuration and the name of the process to be launched.

You can download my code: cgroups.tar


I've extracted the slackware initrd image found in the "isolinux" folder of the dvd.

cd /tmp/slackware/chroot
gunzip < /usr/src/slackware64-current/isolinux/initrd.img  | cpio -i --make-directories

Extracting this in /tmp/slackware/chroot gives me a small linux environment in that directory, and I've created a settings file in /tmp/slackware. Call this folder a "container": it contains a whole linux environment under the chroot folder and a settings file to indicate under what user the container should run, what process it should start, how much ram it can get, etc. For this example, my settings file looks like this:

user: 99
group: 98
memlimit: 4194304
cpupercent: 5

And running the container gives me:

[12:49:34 root@pat]# ./a.out /tmp/slackware
Mem limit: 4194304 bytes
CPU shares: 51 (5%)
Added PID 9337 in cgroup
Dropping privileges to 99:98
Starting /bin/bash -i

[17:26:10 nobody@pat:/]$ ps aux
nobody       1  0.0  0.0  12004  3180 ?        S    17:26   0:00 /bin/bash -i
nobody       2  0.0  0.0  11340  1840 ?        R+   17:26   0:00 ps aux
[17:26:14 nobody@pat:/]$ ls
a  a.out  bin  boot  cdrom  dev  etc  floppy  init  lib  lib64  lost+found  mnt  nfs  proc  root  run  sbin  scripts  sys  tag  tmp  usr  var
[17:26:15 nobody@pat:/]$ exit
Exiting container


When cloning with CLONE_NEWNET, the child process gets a separate network namespace. It doesn't see the host's network interfaces anymore. So in order to get networking enabled in the container, we need to create a veth pair. I am doing all network interface manipulations with libnl (which was already installed on a stock slackware installation). The veth pair will act as a kind of tunnel between the host and the container. The host-side interface will be added to a bridge so that it can be part of another lan. The container-side interface will be assigned an IP and then the container will be able to communicate with all peers that are on the same bridge. The bridge could be used to connect the container to the LAN or to a local network that consists only of other containers from a select group.

The launcher creates a "eth0" that appears in the container. The other end of the veth pair is added in a OVS bridge. An ip address is set on eth0 and the default route is set. I then have full networking functionality in my container.

Settings for networking

bridge: br0    


Mem limit: 4194304 bytes
CPU shares: 51 (5%)
Added PID 14230 in cgroup
Dropping privileges to 0:0
Starting /bin/bash -i

[21:59:33 root@container:/]# ping google.com
PING google.com ( 56 data bytes
64 bytes from seq=0 ttl=52 time=22.238 ms
--- google.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 22.238/22.238/22.238 ms
[21:59:36 root@container:/]# exit
Exiting container

virtio driver implementation
Last edited on Jun 6, 2016


Virtio is a standard for implementing device drivers on a virtual machine. Normally, a host would emulate a device like a rtl8139 network card, the VM would detect such a device and would load a driver for it. But this adds a lot of overhead because the host is emulating that device, so it must translate the data it receives into a format that is understood by the real hardware. The more complicated the emulated device is, the more challenging it will be for the host to keep latency low.

Virtio solves this problem by letting the host expose a Virtio device. A virtio device is a fake device that will be used by the VM. Virtio devices are very simple to use compared to real hardware devices. For example, a host may implement a Virtio network card. The VM would detect such a device and start using it as its network card. Of course, the end-user wouldn't really notice this; the simplicity of the device is only apparent to the device driver implementers.

So to use Virtio, the host must support it. Currently, KVM does. Then the guest must install the appropriate device drivers. Virtio device drivers are already included in the linux kernel, so there is no need to download separate drivers. On windows, drivers must be downloaded separately.

Virtio can be seen as a two-layer device architecture. The first layer is the communication layer between the host and the guest. This is how both exchange information to say "Here's a packet I want you to send on the real hardware" or "Here's a packet I just received from the real hardware". Note that the driver knows it is running in a virtual environment and can implement optimizations to that effect. But the rest of the OS, using the Virtio driver, doesn't know that; it only knows that it is using a network card with a driver like any other. Every Virtio device driver communicates with the host using the same model, which means that the code for this layer can be shared between all Virtio drivers.

The second layer is the protocol used over the first layer. Every virtio device type uses a different protocol. For example, a virtio-net driver will speak differently to the host than a virtio-block driver, but both convey their messages the same way.


The reason I was interested in virtio is that my hobby operating system required some device drivers to work. I had already written an ATA driver and a netcard driver (rtl8139) but those are old devices and I wanted to learn something new anyway. By implementing virtio drivers in my OS, I should, technically, be able to run my OS on any host that supports virtio. I don't need to worry about developing several device drivers just because different hosts support different hardware. If virtio becomes a widely accepted standard (maybe it is already), then my OS should be fine on all the different hosts. Note that I will still need to implement several different drivers if I want to support my OS on real hardware. But running it in a VM for now is just fine.

My code

These are the drivers. Note that, without the full code of my OS, these drivers won't make much sense, but I'm putting them here in case someone could use them as an example when trying to write such a driver.


Implementing the virtio drivers was very simple. I was able to do so by using only two sources of information:


I'm going to describe the implementation using pseudo-code and will skip some of the basic details. Things like PCI enumeration will be left out since they are out of the scope of this document.

Device setup

PCI enumeration

The first thing to do is to discover the device on the PCI bus. You will be searching for a device with vendor ID 0x1AF4 and a device ID between 0x1000 and 0x103F. The subsystem ID indicates which type of device it is. For example, subsystem ID 1 is a netcard. So after finding the device on the PCI bus, you will obtain the base IO address and IRQ number. You can then proceed to attach your device driver's interrupt handler to that IRQ and set up the device using the IO base address.

    foreach pci_dev
        if pci_dev.vendor == 0x1AF4 && pci_dev.device >= 0x1000 && pci_dev.device <= 0x103F && pci_dev.subsystem == 1
            return [pci_dev.iobase, pci_dev.irq];

Init sequence

The device initialization is very well described in the spec so there is no need to go into much detail here. Here is the sequence:

    //Virtual I/O Device (VIRTIO) Version 1.0, Spec 4, section 3.1.1:  Device Initialization
    // (the register offsets below follow the legacy PCI layout; 0x12 is the status register)

    // Tell the device that we have noticed it
    OUTPORTB(VIRTIO_ACKNOWLEDGE, dev->iobase+0x12);

    // Tell the device that we will support it.
    OUTPORTB(VIRTIO_ACKNOWLEDGE|VIRTIO_DRIVER, dev->iobase+0x12);

    // Get the features that this device supports. Different hosts may implement different features
    // for each device. The list of device-specific features can be found in the spec
    u32 supportedFeatures = INPORTL(dev->iobase+0x00);

    // This is called the "negotiation". You will negotiate, with the device, what features you will support.
    // You can disable features in the supportedFeatures bitfield. You would disable
    // features that your driver doesn't implement. But you cannot enable more features
    // than what is currently specified in the supportedFeatures.
    supportedFeatures = negotiate_features(supportedFeatures);
    OUTPORTL(supportedFeatures, dev->iobase+0x04);

    // Tell the device that we are OK with those features
    OUTPORTB(VIRTIO_ACKNOWLEDGE|VIRTIO_DRIVER|VIRTIO_DRIVER_OK, dev->iobase+0x12);

    // Initialize queues
    init_queues();

The init_queues() function will discover all available queues for this device and initialize them. These queues are the core communication mechanism of virtio; this is what I was referring to as the first layer. I will go into more detail about queues a bit later. For now, to discover the queues, you just need to check the queue size of each queue. If the size is not zero, then the queue exists. Queues are addressed with a 16bit number.

    q_addr = 0
    size = -1
    while (size != 0)
        // Write the queue address that we want to access
        OUTPORTW(q_addr, dev->iobase+0x0E)

        // Now read the size. The size is not the byte size but rather the element count.
        size = INPORTW(dev->iobase+0x0C)

        if (size > 0) init_queue(q_addr, size)
        q_addr++

For each queue, you must prepare a rather large structure containing information about the queue and slots for buffers to send in the queue. The structure is created in memory (anywhere you want, as long as it sits on a 4k boundary) and its address is given to the device. I find the structure that is detailed in the spec a bit confusing because it can't really be defined as a struct: many of its elements must be dynamically allocated since their size depends on the queue size.

Field                      Format                                          Size
Buffer Descriptors         u64 address; u32 length; u16 flags; u16 next;   queue_size
Available buffers header   u16 flags; u16 index;                           1
Available buffers          u16 rings                                       queue_size
Padding to next page       byte                                            variable
Used buffers header        u16 flags; u16 index;                           1
Used buffers               u32 index; u32 length;                          queue_size

This is how I create the structure in memory:

typedef struct
{
    u64 address;
    u32 length;
    u16 flags;
    u16 next;
} queue_buffer;

typedef struct
{
    u16 flags;
    u16 index;
    u16 rings[];
} virtio_available;

typedef struct
{
    u32 index;
    u32 length;
} virtio_used_item;

typedef struct
{
    u16 flags;
    u16 index;
    virtio_used_item rings[];
} virtio_used;

typedef struct
{
    queue_buffer* buffers;
    virtio_available* available;
    virtio_used* used;
    u16 queue_size;       // element count, as read from the device
    u16 next_buffer;      // driver bookkeeping: next descriptor candidate
    u16 last_used_index;  // driver bookkeeping: how far we got in the used ring
} virt_queue;

init_queue(index, queueSize)
    u32 sizeofBuffers = (sizeof(queue_buffer) * queueSize);
    u32 sizeofQueueAvailable = (2*sizeof(u16)) + (queueSize*sizeof(u16));
    u32 sizeofQueueUsed = (2*sizeof(u16))+(queueSize*sizeof(virtio_used_item));
    u32 queuePageCount = PAGE_COUNT(sizeofBuffers + sizeofQueueAvailable) + PAGE_COUNT(sizeofQueueUsed);
    char* buf = kernelAllocPages(queuePageCount);
    memset(buf, 0, queuePageCount << 12); // the queue memory must start out zeroed
    u32 bufPage = ((u64)buf) >> 12;
    vq->buffers = (queue_buffer*)buf;
    vq->available = (virtio_available*)&buf[sizeofBuffers];
    vq->used = (virtio_used*)&buf[((sizeofBuffers + sizeofQueueAvailable+0xFFF)&~0xFFF)];
    vq->queue_size = queueSize;
    vq->next_buffer = 0;
    vq->last_used_index = 0;

    // Tell the device what queue we are working on
    OUTPORTW(index, dev->iobase+0x0E);

    // Now we have to tell the device what is the page number (of the physical address, not logical) of the structure
    // for that queue
    OUTPORTL(bufPage, dev->iobase+0x08);
    vq->available->flags = 0;

The communication layer

The way the driver talks to the device is by placing data in a queue and notifying the device that some data is ready. Data is stored in a dynamically allocated buffer. The buffer's physical address is then written to the first free buffer descriptor in the queue. Buffers can be chained, but forget about that for now (it will be useful when you want to optimize). Then, you need to tell the device that a buffer was placed in the queue. This is done by writing the buffer index into the next free slot in the "available" array.

BTW: it's important to know that queue sizes will always be powers of 2, which makes it easy to wrap around naturally, so you never need to check bounds.

    // Find the next free buffer slot (in this driver, a descriptor with length == 0 is free)
    buf_index = 0;
    foreach desc in vq->buffers
        if desc.length == 0
            buf_index = index of this descriptor in the vq->buffers array
            break

    // Fill the descriptor with the buffer's physical address and size
    vq->buffers[buf_index].address = physical address of the data;
    vq->buffers[buf_index].length = size of the data;
    vq->buffers[buf_index].flags = 0;

    // Add it in the available ring and publish it by incrementing the index
    u16 index = vq->available->index % vq->queue_size;
    vq->available->rings[index] = buf_index;
    vq->available->index++;

    // Notify the device that there's been a change
    OUTPORTW(queue_index, dev->iobase+0x10);

Once the device has read your data, you should get an interrupt. You would then check the "used" ring and clear any used descriptors in vq->buffers that are referenced by it (ie: set their length back to 0).
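A sketch of that interrupt handler logic (in the legacy layout, reading the ISR register at iobase+0x13 acknowledges the interrupt; last_used_index is the driver-side counter from the virt_queue structure above):

void handle_virtio_interrupt(virt_queue* vq, u16 iobase)
{
    u8 isr = INPORTB(iobase+0x13); // reading it acknowledges the interrupt
    if ((isr & 1) == 0) return;    // not a queue interrupt

    // Walk the used ring from where we left off to where the device is now
    while (vq->last_used_index != vq->used->index)
    {
        virtio_used_item* item = &vq->used->rings[vq->last_used_index % vq->queue_size];
        vq->buffers[item->index].length = 0; // length == 0 marks the descriptor as free again
        vq->last_used_index++;
    }
}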

To receive data, you would do almost the same thing. You would still place a buffer in the queue, but you would set its length to the maximum size of the data you are expecting (512 bytes for a block device for example, or the MTU for a net device). Then you would monitor the "used" ring to see when the buffer has been used by the device and filled up.

The transport interface

With this information, you should be able to write a generic virtio transport layer that provides 3 functions:

  • init()
  • send_buffer()
  • receive_buffer()

The virtio-net implementation

MAC address

The MAC address can be found in the 6 bytes at iobase+0x14..0x19. You must access those bytes one by one.
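Something like this (using the same port IO macros as the rest of the pseudo-code):

u8 mac[6];
for (int i = 0; i < 6; i++)
    mac[i] = INPORTB(iobase + 0x14 + i); // device-specific config starts at 0x14; read one byte at a time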

To send a packet out, you need to create a buffer that contains a "net_header" and the payload. For simplicity, we'll assume that no buffer chaining is done. Sending a packet would then be done like this:

typedef struct
{
    u8 flags;
    u8 gso_type;
    u16 header_length;
    u16 gso_size;
    u16 checksum_start;
    u16 checksum_offset;
} net_header;

    char buffer[size+sizeof(net_header)];
    net_header* h = (net_header*)buffer;
    h->flags = 0;
    h->gso_type = 0;
    h->checksum_start = 0;
    h->checksum_offset = size;
    memcpy(buffer+sizeof(net_header), payload, size);
    // then place "buffer" in the tx queue as described in the communication layer section

To receive packets, just fill up the rx queue with empty buffers (with length=MTU) and mark them all available. It's important to make them available again after you have received data in them (ie: after they've been added to the used ring) since you want to keep the queue full of ready buffers at all times.
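A sketch of that rx refill (flag value 2 is VRING_DESC_F_WRITE in the spec, meaning the device may write to the buffer; physical_address_of() stands in for whatever translation your kernel provides):

// Fill the rx queue with device-writable buffers and notify the device
void fill_rx_queue(virt_queue* vq, u16 iobase, u16 queue_index)
{
    for (u16 i = 0; i < vq->queue_size; i++)
    {
        vq->buffers[i].address = physical_address_of(kernelAllocPages(1));
        vq->buffers[i].length = MTU + sizeof(net_header);
        vq->buffers[i].flags = 2; // device-writable (VRING_DESC_F_WRITE)
        vq->available->rings[vq->available->index % vq->queue_size] = i;
        vq->available->index++;
    }
    OUTPORTW(queue_index, iobase+0x10);
}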

I didn't talk about buffer chaining (it's very simple, and well described in the spec) but you should obviously use it. You could use one buffer for the header and another one for the data. You could then put the address of the data buffer supplied by the calling function directly in the descriptor (as long as you convert it to a physical address) instead of copying the entire frame. This allows you to implement a zero-copy mechanism.

The virtio-block implementation

Block devices are similar to net devices but they use only one queue, and instead of a net_header, they use a block_header:

typedef struct
{
    u32 type;
    u32 reserved;
    u64 sector;
} block_header;

To write, fill the header with type = 1 and sector = the sector number to write, followed by the 512 bytes of data, and send the buffer. To read, fill the header with type = 0 and sector = the sector number to read, followed by a 512 byte empty buffer. The device will fill the buffer and will put the buffer descriptor in the used ring.

I think you need to separate the header and the data buffer into 2 descriptors that are chained. That's the way I did it anyway, but I think I read that it won't work if you don't do that.
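For reference, a read request with chained descriptors could look like this (a sketch: flag 1 chains a descriptor to the one named in "next", flag 2 marks it device-writable; the spec also defines a trailing status byte written by the device, shown here as a third chained descriptor):

// Read one 512 byte sector using three chained descriptors: header, data, status
void read_sector(virt_queue* vq, u16 iobase, u64 sector, block_header* h, char* data, u8* status)
{
    h->type = 0; // 0 = read, 1 = write
    h->sector = sector;

    vq->buffers[0].address = physical_address_of(h);
    vq->buffers[0].length  = sizeof(block_header);
    vq->buffers[0].flags   = 1;     // chained to the next descriptor
    vq->buffers[0].next    = 1;

    vq->buffers[1].address = physical_address_of(data);
    vq->buffers[1].length  = 512;
    vq->buffers[1].flags   = 1 | 2; // chained, and the device writes the data here
    vq->buffers[1].next    = 2;

    vq->buffers[2].address = physical_address_of(status);
    vq->buffers[2].length  = 1;
    vq->buffers[2].flags   = 2;     // the device writes the request status here

    // Only the head of the chain goes in the available ring
    vq->available->rings[vq->available->index % vq->queue_size] = 0;
    vq->available->index++;
    OUTPORTW(0, iobase+0x10); // block devices use queue 0
}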


This was a very rough explanation of virtio but it should be enough to get you started and have something working. Once this is done, I suggest going through the specs again since they have a lot of information that will be needed for handling failure scenarios, optimization and multi-platform support. The driver I wrote works for my OS only and has only been tested with KVM. I am not doing any real feature negotiation nor am I handling any failure cases. Things could surely be optimized too, since virtio allows very easy zero-copy buffer passing.

Networking in my OS
Last edited on Apr 14, 2016

Designing my OS

When I design and code my operating system, I usually document what I do in a couple of markdown files. I thought maybe some people would find it useful to see them, so I am posting one here. This one is about the networking architecture of my OS. Note that this is really the product of my imagination, so it might not be (or should I say it definitely isn't) the best way of doing things. It's just the way I imagined it, and it actually works.

Of course, I can't pretend this is only the product of my imagination. I did get a lot of inspiration from many sources online. But I tend to stay away (as much as I can) from the linux source code. Since Linux is such a great implementation, it would be too easy to assimilate a lot of its concepts while looking at it. Because once you look at it, you realize that what you're looking at is probably the best way to do things. Take sockets for example. Once you are used to using sockets with open/send/recv/close, how else would you do it? So I ended up doing that, because it's what I know. But I have no idea how linux works behind the socket implementation, so I tried to design it the way I think it works. I'm not aiming to conceive the best thing; I'm picking my brain to create something that works the way I imagined it. Maybe like art instead of science.

There are flaws and still open questions but it does work. I don't have any code to post here because I am thinking on either posting the whole code on my website or on github. I have to sort that out.

Network card drivers

The netcard abstraction layer is contained in netcard.c.

When the OS boots, net_init() is called. This function iterates through the PCI bus's devices to find all devices that match one of the OS's netcard drivers. For each netcard found, a struct NetworkCard is instantiated. The structure contains several function pointers that are initialized with the matching driver's functions. For example, NetworkCard::receive would be set to &rtl8139_receive if the netcard was a rtl8139. Each of these NetworkCard instances will be referred to as the "netcard implementation".

Only the rtl8139 driver is implemented right now. You can view the source code I posted a while ago in Realtek 8139 network card driver.

When a netcard IRQ occurs, a softIRQ is scheduled. (I know, softIRQ is a linux concept. But I actually thought about it before knowing it existed in linux. I didn't have a name for it, found that Linux had a great name, so I stole it. But it doesn't really work like linux's softIRQ; Linux has a way better way of doing it than my OS.) When the softIRQ handler is invoked, it calls net_process(). net_process() iterates through all the netcards that were discovered during boot. It then checks if data is available and forwards the data up to the TCP/IP stack if it is an IP packet, or to the ICMP handler if the packet is of the ICMP type.
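The dispatch logic looks roughly like this (a sketch; the helper names and constants are illustrative, not my actual code):

void net_process()
{
    u8 frame[MTU];
    for (int i = 0; i < netcard_count; i++)
    {
        NetworkCard* card = &netcards[i];
        u16 size;
        while ((size = card->receive(card, frame)) > 0)
        {
            if (!frame_is_ipv4(frame)) continue;
            if (ip_protocol(frame) == 1)   // ICMP: answered right away, from this thread
                icmp_process(card, frame, size);
            else                           // everything else goes up the TCP/IP stack
                ip_process(card, frame, size);
        }
    }
}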

TCP/IP stack

Receiving data

The ICMP handler responds immediately to ping requests; the ICMP response is therefore sent from the softIRQ thread. This allows consistent RTT. The IP handler forwards the message to the TCP or UDP handlers. The UDP handler is not implemented yet. The TCP handler forwards segments to active sockets by finding the socket listening to the port and ip of the message. This is done by finding the socket instance within a hash list. The message is added to the socket's queue. The user thread is then responsible for dequeuing those messages.

Sending data

The netcard abstraction's net_send() function locks the netcard implementation's send spinlock. This way, only one thread can send to one given netcard at the same time.

net_send() takes an interface index as its destination parameter. ip_send() takes an ip address as its destination parameter. ip_send() invokes ip_routing_route() to determine on what netcard to send the message, based on the destination address.

net_send() will send 1 frame of maximum MTU size. It returns 0 if sending failed, or the size of the frame if sending succeeded. Frames are guaranteed to be sent in full or not at all, but never partly.

ip_send() will send a packet of maximum 65536 bytes. It will do ip fragmentation in order to send frames of appropriate size.

tcp_send() will send 1 segment of maximum 64k. This means that the underlying ip_send() can end up calling net_send() several times.

There is actually a design flaw with tcp_send() right now: sending data on a socket ends up calling netcard_send() from the user thread, so the call is blocking.

Also, if net_send() returns 0 because of a HW buffer overflow, ip_send() will return the number of bytes sent, but tcp_send() should not attempt to send the rest of the data, because the IP protocol expects the rest of the packet.

IP routing

IP routing is a whole subject in itself. There are many algorithms and many ways to do this; it's actually the center of a very big industry. Basically, when a packet needs to be sent out, we know the destination IP to which we want to send it. The OS needs to know out of which netcard (if there is more than one) to send the packet, and it also needs to know the source IP address to put in the packet. Normally, there would be one IP per netcard. Linux allows you to set up multiple IPs for one physical netcard; this is done by creating virtual netcards. So the rule would still hold: 1 IP per netcard. My OS does not support virtual netcards, so only one IP per physical netcard.

This is all done with a routing table. By default, the routing table will consist of an entry for each IP configured on the host. Those routes would say something like "anything within my own LAN must be forwarded out on my netcard". So if netcard0 and netcard1 each have their own IP, the routes would be:

  • netcard0's subnet -> netcard with netcard0's IP (netcard0)
  • netcard1's subnet -> netcard with netcard1's IP (netcard1)

A route can either say:

  • if destination IP matches my subnet, then route out of interface X
  • if destination IP matches my subnet, then route out to GW with IP address X

To match a rule, the rule's netmask is applied to both the destination IP and the rule's IP. These masked IPs are subnets. If both subnets match, then the rule matches. A rule with a netmask of would be a default route and would be guaranteed to always match. It would be the last-resort route.
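In code, the matching is just a masked comparison (a sketch; the table is assumed to be ordered from most specific to least specific, so the default route sits last):

typedef struct
{
    u32 address;  // the rule's IP
    u32 netmask;
    int netcard;  // outgoing interface for this route
} route;

// Returns the index of the first matching route, or -1 if none match
int find_route(route* table, int count, u32 dest_ip)
{
    for (int i = 0; i < count; i++)
        if ((dest_ip & table[i].netmask) == (table[i].address & table[i].netmask))
            return i; // a netmask of 0 (the default route) always matches
    return -1;
}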

Usually, routing out of an interface is done when we expect a device connected on that interface (through a switch probably) to be able to recognize that destination IP. When routing to a GW IP, it is because we expect a specific device to know where to find that destination IP. This would be a router. A default route would be configured like that. If an outgoing packet's destination IP address matches the default route, it means it couldn't match any of the other routes. So we want to send that packet to a gateway. We expect the gateway to be able to route the packet because our own routing table can't.

So the destination subnet is verified in order to find a route for that subnet. The subnet is the IP address logically AND'ed with the netmask.

Then a route for the default gateway must be added. This route says: "anything not matched by the LAN routes must be sent to this gateway IP". If a packet's destination matches no other route, the default route will be matched. The OS then compares the subnet (using the netmask) of that gateway with the IPs configured on the host. It would find that the gateway's IP is on the same LAN as netcard1's, so netcard1 will be chosen as the outgoing netcard and the destination MAC address of the gateway will be used. So it is clear that using a gateway that resides on a subnet that doesn't match any of the host's IPs would be an invalid configuration.

Frame forwarding

At layer 2, devices don't know about IP addresses. They only know about MAC addresses. Only devices connected together on the same layer 2 network (physically connected with switches) can talk to each other. When a packet is ready to be sent out, its destination MAC address might not be known yet. The OS will discover it by sending an ARP request: "who has this IP?". The device with that IP will answer "Me, and my MAC is xx:xx:xx:xx:xx:xx", and that destination MAC address will be used. When sending a packet to an external network that is behind a router, such as in the example earlier, the destination MAC of the gateway will be used. So if we send a packet to a device (by selecting its MAC as the destination) but the destination IP does not match that device's IP, it means we are expecting that device to be able to do layer 3 routing. Such a device would generally be a router, but it could also be any workstation on the LAN. Those workstations may be able to route; Linux does it.

Blocking operations

All socket operations such as connect, accept, send, receive and close are non-blocking. This implies that the lower-level layer operations are non-blocking as well. There are some exceptions to this, but only because there are design flaws that need to be addressed.

Problem: upon connecting a socket, ARP discovery and DNS resolution might need to be performed. DNS resolution will be left to the user to do; connect() requires an IP address. Upon connect, the socket will be placed in "arp discovery" state. The MAC address will be fetched from the cache, or a discovery request will be sent. Upon receiving an ARP reply, or any other ARP cache entry addition, the stack will search through the socket list for sockets that are waiting for that entry. The connection will then be resumed. Once an ARP entry is found for a socket, it will be saved in the socket and reused for all remaining transmissions.


At any time, only two threads can access the tcp/ip stack: the softIRQ thread, for processing incoming frames, and the owner of the socket. If multiple threads want to share socket usage, they will have to implement their own locking mechanism. The tcp/ip stack only guarantees thread safety between the two threads mentioned above. Sockets should be used by one thread only in a user application. Two different sockets can be used by two different threads at the same time though.

net_send() locking

net_send() is used by the softIRQ context and by user threads. Since the softIRQ context has high priority, if a thread is preempted while holding the netcard's send spinlock and the softIRQ then attempts to take the lock, a deadlock might occur. On uni-CPU systems, a deadlock would occur because the softIRQ will never give CPU time to other threads until it has completed its work. This could also happen on systems where the number of netcards is greater than or equal to the number of CPUs.

To solve this problem, the spinlock will disable interrupts (see the sketch after this list).

  • The spinlock will prevent another CPU from accessing the send function.
  • The thread will not be preempted on the local CPU, so there is no chance that a softIRQ will deadlock (softIRQs are prioritized over that thread, so they could otherwise run forever without giving the thread time to release the lock).
  • On a single-CPU system, disabling interrupts makes the spinlock redundant, but it does not cause a problem.
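A sketch of the idea (x86; saving and restoring the previous interrupt flag is omitted for brevity):

// Disable interrupts on the local CPU, then spin: a softIRQ can no longer
// preempt the lock holder on this CPU, and other CPUs just spin until release
void spinlock_acquire_irq(volatile u32* lock)
{
    asm volatile("cli");
    while (__sync_lock_test_and_set(lock, 1))
        while (*lock) asm volatile("pause");
}

void spinlock_release_irq(volatile u32* lock)
{
    __sync_lock_release(lock);
    asm volatile("sti");
}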

Socket locking

No locking is currently done at the socket level. The following is a list of problems that would arise, and the associated solutions.

Problem 1
  • Make sure a thread does not delete its socket while a softIRQ is using it.
  • A thread might want to get the sequence number while the softIRQ is modifying it.
  • A thread might read from the receive queue while the softIRQ is writing to it.

Problem 2
Make sure that only one consumer at a time can append/remove from the socket list.

The hash list is thread safe so it should already be handled correctly. But more tests need to be done because there is a little more to it than just accessing the hash list.

Problem 3
The socket backlog list might get accessed by both the softIRQ and the owning thread.


Accepting incoming connections

A socket is created using create_socket(). Then the socket is placed in listening mode with listen(). listen() will set the source ip/port of the socket as per the function's parameters, and the destination ip/port to 0. A backlog buffer is created with size=(sizeof(socket*)*backlogCount).

When a segment is processed by tcp_process() (in softIRQ context), the stack tries to match it to a socket. If no socket is found, then tcp_process() will try to find a listening socket with the same source ip/port (matching the received segment's destination fields) and with destination ip/port set to 0. If such a socket is found, then we know that this segment is for an incoming connection.

The listening socket will only process SYN segments and will ignore any other segment. When processing a SYN segment, it will create a new socket with the same source ip/port and with destination ip/port matching the source of the incoming segment. The state will be set to CONNECTING. The new pending socket will be saved in the listening socket's backlog. It will stay in the backlog until it gets picked up by accept(). accept() will then move the socket to the main list. The socket created in the backlog is only temporary: accept() will create a new socket instance based on it, so that the new instance resides in the accepting process's heap.

When the accept function is called, it will go through the backlog of the listening socket and will finish creating the socket. It will clone the socket, create the receive buffer and send the syn/ack. The socket will stay in the CONNECTING state until it receives the ack of the syn/ack. accept() will move the socket from the backlog to the main list.
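A sketch of the SYN handling described above (illustrative names, not my actual code):

// Called from tcp_process() when a SYN matches a listening socket
void tcp_handle_syn(socket* listener, segment* s)
{
    socket* pending = create_socket();
    pending->source_ip   = listener->source_ip;
    pending->source_port = listener->source_port;
    pending->dest_ip     = s->source_ip;   // the peer becomes the destination
    pending->dest_port   = s->source_port;
    pending->state       = CONNECTING;

    // Park it in the backlog until accept() picks it up
    for (int i = 0; i < listener->backlog_count; i++)
    {
        if (listener->backlog[i] == 0)
        {
            listener->backlog[i] = pending;
            return;
        }
    }
    destroy_socket(pending); // backlog full: drop the SYN
}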

Other articles about my OS

Process Context ID and the TLB
Thread management in my hobby OS
Enabling Multi-Processors in my hobby OS
Block caching and writeback
Memory Paging
Stack frame and the red zone (x86_64)
AVX/SSE and context switching
Realtek 8139 network card driver

ESP8266 based Irrigation Controller
Last edited on Apr 8, 2016


In the past couple of months, I've taken an interest in the popular ESP8266 wifi-enabled uC. I've also wanted for a long time to start designing PCBs, so I decided to mix both interests and build an irrigation controller. I am no expert in PCB design, nor any electronics for that matter; I'm a software guy. So this whole PCB design was done in a very amateur way, and some of the experts out there might find many flaws in it. But the important thing is that it works as expected, and most of all: I had a lot of fun and learned a lot of things while doing this. The project may look like a failure to some, but it is definitely a victory for me since I ended up learning things. And next time, I'll learn even more, until I get good at it. That's the process I went through to learn about software design. I have to fail once before taking on the theory.


An irrigation controller is a pretty simple device. It's just a couple of relays, a microcontroller and a power supply. For my project, I decided to use 4 relays (to control 4 valves). The uC I chose is an esp8266, more specifically, the ESP-03 board. This brings wifi capabilities to my project. So the esp8266 connects to my network and advertises its presence with a multicast message. My New Home Automation system then picks it up and connects to it. From that point, it can control all 4 valves.

Part list

  • My custom built PCB
  • esp-03
  • 4 G5LE-1A4 relays
  • 1 LM2596
  • capacitors (see diagram for values)
  • resistors (see diagram for values)
  • diodes (see diagram for values)
  • uf.l SMD connector
  • 2.4ghz adhesive antenna
  • 4 MPS2907A transistors (PNP)

Power supply

The ESP8266 needs a 3.3 VDC input and my valves need 24VAC. So I decided to supply my board with 24VAC, feed the valves (through the relays) directly, and convert to 3.3vdc before the ESP8266 input. I had very limited experience in designing a power supply before this project. I still do. But I figured that using a diode and a capacitor, I could easily convert VAC to VDC. I then used a switched regulator to bring the high VDC down to 3.3v. I used the LM2596T regulator because I already had some pre-built power supplies that used it and found that they were working pretty well. The choice of capacitor was confusing: looking online, I found many people using different values than what the datasheet suggests. So I experimented with a breadboard until I found a combination that worked (I know....). So basically: the 24VAC gets converted to something like 33VDC after going through 1 diode and a capacitor to smooth it out, then through a LM2596 to step down to 3.3V.


I wasn't expecting to use an external antenna at first. But when trying my board for the first time, I saw that it couldn't connect to wifi. When I designed the PCB, I added a trace to a SMD uf.l connector, but at that time, I had no connector nor any antenna on hand. So I was wondering if that trace could be the problem, and I cut it. But it still wasn't working. So I finally bought a connector and an antenna, and soldered a wire on the board because the trace had been cut. And then everything worked out great. So yeah, you need an external antenna; the small ceramic antenna on the esp-03 won't do the job. Possibly because I put it too close to the power supply on my board? Or because I have no ground plane? I have no idea, that's something I'm going to have to investigate.


The choice of PNP transistors was made because the ESP8266 GPIOs are set high on boot, and I wanted my relays to stay open during that time. I keep all GPIOs high when valves must be shut off, and pull a GPIO down when a valve needs to be opened. I did a lot of searching about where to put the load (my relay) in the circuit. Do I put it after the collector or before the emitter? Apparently it's better to put it after the collector. I still don't understand why though, so I still have some reading to do. But it works.


I've already talked about the basics of the ESP8266 in my Led strip controller using ESP8266 article. It uses my custom made protocol, the "DWN protocol", to speak with my home automation software. So after my home automation software is connected to the irrigation controller, it sends 2-byte commands to activate a valve or to turn it off. The firmware takes care of a couple of things like:

  • Hard limit of 30 minutes "on" time for every valve, in case you forget to shut it off. If a longer period is needed, then the controlling software will need to take care of it.
  • Only 1 valve at a time can be opened. If the controlling software opens a valve, the irrigation controller will shut off any other open valve.
  • No valve sequence support (for example: V1 on for 10min, then V3 for 5min and then V2 for 1min). This needs to be done by the controlling software.

The sources:


So far the biggest flaw I found is that I mixed up the RX/TX pins of the FTDI header. In eagleCAD, the part I was using for the header had to have its RX connected to the uC's RX pin. I didn't know that; I thought I had to reverse them. If I had looked at the FTDI pinout, I would have noticed it. Lesson learned. But it's not a big deal, I made an adapter that reverses both pins.

Also, like I said earlier, the wifi doesn't work without an external antenna. No big deal, I can use an external antenna. But I added that SMD footprint like 5 min before sending to the fab, and I'm pretty glad I did. The lesson learned here: anticipate errors and put extra stuff on the board in case you need it. That's obviously not good thinking for production, but for a hobbyist like me, it's a good thing to do. Noticed the "spare" capacitor footprint on the board? hehehe.


I've seen a lot of amateur projects like mine on sites like Hackaday and a lot of people always make the same comments so I figured I'd answer those questions immediately.

  • Why did you do it this way instead of that way? Because I felt like doing it that way.
  • Why not use this and that instead? Because that's not what I felt like playing with.
  • Isn't this a fire hazard? I don't know! Is it?!?!?! Please tell me at youjustbuiltadeathtrapyoulunatic@dumaisnet.ca
  • Why not use an Insteon EZFlora? I did. But I wanted to build an irrigation controller. So now my EZFlora is useless

Led strip controller using ESP8266
Last edited on Mar 2, 2016

I recently installed an RGB led strip under my kitchen cabinet. I wanted a way to control it using any web browser on a cell phone, tablet or desktop computer. There are many ways I could have achieved this, but this time I wanted to do it all by myself and design my first PCB. This is how I did it.


The ESP8266 is a small, cheap wifi microcontroller. It is a 32bit microcontroller with an external 1024kb of flash memory. There are tons of information out there about this chip so I won't dig too much into the basics.

When I discovered this board, I realized the potential it could bring to my home automation projects. I can now have multiple devices that talk to my central software (New Home Automation system) wirelessly.


The ESP8266 usually comes with a pre-loaded "AT" firmware. If it doesn't, you can download it for free and upload it to your chip using an FTDI cable. The AT firmware is a simple firmware that lets you send "AT" style commands, like in the old days with modems, to control the chip. There's an AT command to connect to an access point, another to start a TCP server, another to send a packet, etc.

A lot of people use the AT firmware. It allows you to connect another microcontroller to the ESP8266 using the UART pins and send commands to the device. But you can also write your own firmware and take advantage of the 1024kb of flash on the chip. You have a few GPIOs and 2 UARTs; you don't need an external microcontroller.


To upload a firmware, whether it is the AT firmware or your own, you will need the flash tool. To build your firmware, you will need a complete toolchain. The esptool can be found at https://github.com/themadinventor/esptool/. Once you have downloaded it, just copy it somewhere like /usr/bin so that it is in your exec path. To upload a firmware, you must obviously obtain one or build one. The firmware is separated into a couple of files. If you are uploading the AT firmware, this should work (change /dev/ttyUSB8 to the serial port where your FTDI cable is attached):

esptool.py --port /dev/ttyUSB8 --baud 9600 write_flash 0x00000 boot_v1.1.bin 0x01000 v0.20/user1.bin 0x7c000 esp_init_data_default.bin 0x7e000 blank.bin


Keep in mind that the chip must be reset while GPIO0 is pulled low in order to enter the bootloader and accept a firmware update. To create your own firmware, you will need to download the toolchain. The following should get you going:

git clone --recursive https://github.com/pfalcon/esp-open-sdk
cd esp-open-sdk
make
mv xtensa-lx106-elf /opt/

Now add /opt/xtensa-lx106-elf/bin to your PATH. Next, you will need a Makefile. I've created the following:

all: firmware

SRC = main.c
OBJECTS = $(SRC:.c=.o)
CC = xtensa-lx106-elf-gcc
CFLAGS = -I. -mlongcalls
LDLIBS = -nostdlib -Wl,--start-group -lmain -lnet80211 -lwpa -llwip -lpp -lphy -Wl,--end-group -lgcc
LDFLAGS = -Teagle.app.v6.ld

%.o: %.c
    $(CC) -c $(CFLAGS) $< -o $@

firmware: $(OBJECTS)
    $(CC) $(LDFLAGS) $(LDLIBS) $(OBJECTS) -o firmware.elf
    esptool.py elf2image firmware.elf

install: firmware
    esptool.py --port $(TTY) --baud 115200 write_flash 0 firmware.elf-0x00000.bin 0x40000 firmware.elf-0x40000.bin

clean:
    rm *.bin
    rm *.elf
    rm *.o

Again, remember to reset the chip with GPIO0 pulled low before invoking 'make install'. Here is a basic source file to build a firmware:

#include "ets_sys.h"
#include "gpio.h"

void ICACHE_FLASH_ATTR user_init()

    gpio_output_set(BIT5, 0, BIT5, 0);

Notice how the entry function is called user_init() instead of main(int argc, char** argv). This is because you are not running your firmware bare-metal on the board; there is an OS running in there and it will invoke user_init(). This is an event-based system. You cannot make an infinite loop like you would on another microcontroller. Instead, you let the OS run and hook your app to some events with callback functions. There is a way to create a timer and give the OS a pointer to your timer callback function. You can also give the OS a function pointer to a message handler and post messages to your app in order to receive them in your message handler. If your code takes too much time inside a callback, the OS watchdog will reset your chip. So you cannot make infinite loops nor complex functions that do massive computations. You have to build your architecture around a single-threaded, event-based system. Kind of like a javascript application.
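For example, hooking a timer looks like this (a sketch using the SDK's os_timer API):

#include "ets_sys.h"
#include "osapi.h"

static os_timer_t my_timer;

// Called by the OS every second; do a small amount of work and return
// quickly to keep the watchdog happy
static void ICACHE_FLASH_ATTR my_timer_cb(void *arg)
{
}

void ICACHE_FLASH_ATTR user_init()
{
    os_timer_disarm(&my_timer);
    os_timer_setfn(&my_timer, (os_timer_func_t *)my_timer_cb, NULL);
    os_timer_arm(&my_timer, 1000, 1); // every 1000ms, repeating
}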

The SDK, included in the toolchain, is pretty easy to use. For example, to connect to a wireless access point, all you need to do is:

void ICACHE_FLASH_ATTR init_wifi(char ssid[32], char password[64])
{
    struct station_config station_config;
    wifi_set_opmode(1); // station mode. will connect to AP
    os_memcpy(&station_config.ssid, ssid, 32);
    os_memcpy(&station_config.password, password, 64);
    wifi_station_set_config(&station_config);
}

void ICACHE_FLASH_ATTR user_init()
{
    init_wifi("myssid", "mypassword"); // illustrative credentials
}

If you want to create an infinite loop, as I said previously, you cannot do it within a callback since that would block the OS thread and the watchdog would reset your board. Instead, you need to post yourself a message so that the OS will call your function every time. For example:

#define TASK_QUEUE_SIZE 64
#define MAIN_LOOP_TASK_PRIORITY 0
os_event_t    main_loop_task_queue[TASK_QUEUE_SIZE];

// handle a message
static void ICACHE_FLASH_ATTR main_loop(os_event_t *events)
{
    // do something with the message

    // now post a message again to our self, creating an infinite loop
    system_os_post(MAIN_LOOP_TASK_PRIORITY, 0, 0);
}

void ICACHE_FLASH_ATTR user_init()
{
    system_os_task(main_loop, MAIN_LOOP_TASK_PRIORITY, main_loop_task_queue, TASK_QUEUE_SIZE);

    // post a message to ourself to kick things off
    system_os_post(MAIN_LOOP_TASK_PRIORITY, 0, 0);
}

The Board

I'm not going to show my board diagram because, frankly, I'm a little ashamed of the work I did. This is my first PCB design and I'm still learning basic electronics. Basically, I made a PCB with 3 mosfets, a step-down voltage regulator (12V to 3.3V) and an ESP-03 chip.


The chip supports software PWM for all GPIOs but I was never able to make it work properly. So I ended up writing my own software PWM. It's pretty straightforward: I just set up a timer and toggle GPIOs according to a duty cycle set by the user application. You can get my source code here:
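The idea boils down to something like this (a sketch; the channel-to-GPIO mapping is illustrative and the timer arming is not shown, in practice the callback must fire often enough for flicker-free dimming):

#define PWM_STEPS 100

static int duty[3] = {0, 0, 0}; // 0..100 per channel, set by the user application
static int tick = 0;

// Timer callback: each channel's GPIO is high while the counter is below
// its duty cycle, giving an average output proportional to duty[ch]
static void ICACHE_FLASH_ATTR pwm_tick(void *arg)
{
    tick = (tick + 1) % PWM_STEPS;
    for (int ch = 0; ch < 3; ch++)
    {
        if (tick < duty[ch])
            gpio_output_set(1 << ch, 0, 1 << ch, 0); // drive high
        else
            gpio_output_set(0, 1 << ch, 1 << ch, 0); // drive low
    }
}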

DWN protocol

For each of my new wifi devices to talk to my home automation server, I had to invent a new protocol. I could have used MQTT, but I felt like my idea was even simpler. I created what I call the DWN protocol (DHAS Wifi Nodes).

The protocol is very simple. The wifi node broadcasts a JSON message containing its name and unique ID every 5 seconds on a multicast address at port 242. Whenever my home automation server sees such an advertisement, it detects the source IP address of the broadcasting device and connects to the device's TCP server listening on port 242. Once the wifi device has a TCP client connected, it continues to send advertisements every 5 seconds and also sends heartbeats to the connected client. The DWN protocol does not define the format of the payloads being exchanged between the home automation server and the wifi device; it is just a protocol that allows wifi node discovery.
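On the server side, the discovery listener is plain multicast UDP. A sketch (POSIX sockets; the multicast group address here is an assumption, only the port 242 is fixed by the protocol):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(242);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    // Join the multicast group the nodes advertise on (address assumed)
    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("239.0.0.242");
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    char buf[512];
    struct sockaddr_in src;
    socklen_t slen = sizeof(src);
    ssize_t n = recvfrom(fd, buf, sizeof(buf) - 1, 0, (struct sockaddr *)&src, &slen);
    if (n > 0)
    {
        buf[n] = 0;
        // buf holds the JSON advertisement; src.sin_addr is the node to
        // connect back to, on TCP port 242
        printf("advertisement from %s: %s\n", inet_ntoa(src.sin_addr), buf);
    }
    return 0;
}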

Using such a system, my home automation server can detect any wifi device that gets turned on in my house, and it will connect to it. Since the devices broadcast their name (ie: "IRRIGATION_CONTROLLER"), the server loads the appropriate "driver" or "module" for that device and starts communicating with it. This gives me a plug-and-play architecture. If I want to add a second led controller device, I just build one and turn it on; the home automation server automatically detects it and starts using it.

Future projects

This was my led controller board. I am planning on making a sprinkler controller with an esp-03 board and 4 relays, using the DWN protocol with my home automation server. I will also make a small device with an ESP-01 board with 2 input GPIOs to monitor my alarm system's relays. I currently monitor my alarm system's relay with an arduino set up as an ACM device plugged into my computer's USB; I'd rather go wifi now. I would also like to build a device that would stay connected in my car. Once I park my car in the garage, because of the DWN protocol, my home automation server will know when my car is at home and when it isn't. So as you can see, these esp8266 chips can be used for many things in the home automation world.