Another Virtual Network Environment

Rob van der Hoeven
Wed Oct 14 2015

This article is about avne, a Linux program that runs other programs inside a virtual network environment. All network traffic from this environment is intercepted and forwarded to the Tor network. The interception is transparent and guaranteed to be 100% effective.

In order to run a program inside the virtual network environment the user can simply type avne followed by the program name.


    avne bash
    avne ip link
    avne iceweasel -P avne -no-remote
    avne chromium --disable-cache
    avne wget https://blog.torproject.org/

Note: Iceweasel = Firefox, renamed by Debian for trademark reasons.

Interception of the network traffic takes place at the IP level. Because of this, the program started by avne does not have to be Tor-aware: it does not need a special Tor configuration and it does not have to be SOCKS compatible. With avne, more communications programs can use the Tor network.

AVNE is currently in the alpha phase of development. This means that it is not intended for end users: alpha versions are for developers and people interested in the technologies behind the program. Although avne works quite well, it is not fully developed and tested.

So, what do I hope to achieve with this release?


Because avne is still in the alpha phase of development there is no install package for the program. Fortunately installation is easy. It involves the following steps:

  1. Download the source code: avne-0.5.tar.gz

  2. Extract the avne-0.5.tar.gz file.

    tar xfvz avne-0.5.tar.gz
  3. Compile the source code.

    gcc -Wall -o avne avne.c
    Note: if you get an error about the setns function, simply remove this function from the source code.
  4. Change ownership of the executable to root and set the suid bit.

    chown root avne && chmod u+s avne
  5. Make a symbolic link to the executable in /usr/local/bin

    ln -s /home/rob/projects/avne/avne /usr/local/bin/avne

AVNE expects Tor to be located at localhost:9050. You can specify another address in the avne.conf file which should be in the same directory as the executable.
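The format of avne.conf is not documented in this article, so here is only a sketch of what such an entry might look like. The key name tor_address is purely my assumption, not avne's actual syntax:

    # avne.conf -- hypothetical sketch; the real key name may differ
    tor_address =

If the file is absent, the default localhost:9050 mentioned above applies.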


There are two ways avne can run a program: it can create a brand new virtual network environment, or use an existing environment. Creating a new network environment is the normal way to start a program. It is as simple as typing:

avne chromium --disable-cache
avne iceweasel -P avne -no-remote

Important: When Iceweasel (Firefox) is already running, starting it again forks the running process. This forked process runs inside the network environment of its parent, not inside a network environment created by avne. You can prevent this behavior by first closing all running Iceweasel instances, or by starting Iceweasel with another profile (see the example above). To create a new profile, close all running Iceweasel instances and start Iceweasel with the -ProfileManager option.

Always check that avne is active by requesting its status page.

Using an existing virtual network environment is needed for debugging. It allows you to run a second program (for example Wireshark) inside the network environment of another program. If you want to attach to an existing network environment you need the PID of a program that is running inside the network environment. This PID is reported by avne shortly after the program starts.


First start Iceweasel:

    avne iceweasel -P avne -no-remote

avne reports the PID of the program running inside the virtual environment:

    avne: child pid is [1234]

Use this PID to inject Wireshark inside the same network environment:

    avne --use-namespace 1234 wireshark

Note: you can also choose to start Wireshark first and then Iceweasel.
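Attaching to an existing namespace relies on a standard kernel mechanism: every process exposes its namespaces under /proc/PID/ns, and joining one is a matter of opening such a file and calling setns. A minimal sketch of the inspection side, assuming only standard Linux behavior (the helper name netns_id is mine, not an avne function):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Every process exposes its network namespace as /proc/PID/ns/net.
   The symlink target looks like "net:[4026531992]"; two processes
   that are in the same network namespace see the same inode number.
   Joining an existing namespace is then (privileges permitting):
       fd = open("/proc/PID/ns/net", O_RDONLY);
       setns(fd, CLONE_NEWNET);                                  */
ssize_t netns_id(pid_t pid, char *buf, size_t len) {
    char path[64];
    snprintf(path, sizeof path, "/proc/%d/ns/net", (int)pid);
    ssize_t n = readlink(path, buf, len - 1);
    if (n >= 0) buf[n] = '\0';   /* readlink does not terminate */
    return n;
}
```

Comparing the strings reported for two PIDs is a quick way to verify that Wireshark really ended up in the same network environment as Iceweasel.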

After a program is started you can check if avne is active by requesting its status page. At the moment the status page displays quite a lot of debugging information. This information is not guaranteed to be correct; development is faster than reporting! The debugging information will be removed in future versions.

During development of a program like avne, logging is essential. You can find the avne logfile at /var/log/avne. Be aware that this logfile is very detailed and can grow quite large.


The avne program uses a combination of two technologies: User Space Networking and lightweight virtualization.

User Space Networking

With User Space Networking, network protocols like IP, UDP and TCP are implemented by code running in User Space. A normal (non-kernel) program can use this to get almost total control over the network traffic. How? Let's look at an example of how avne uses User Space Networking to intercept and forward TCP/IP traffic. If avne receives a TCP SYN packet (connect request) for some IP address, it sends back a TCP SYN-ACK packet (acknowledgment) to the communications program. The communications program then thinks it is connected to a server at that address, while in reality it is connected to the avne program. Once the connection with the communications program is fully established, avne connects to the Tor network and starts forwarding network traffic.
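As a rough sketch of this interception step, the fragment below answers a TCP SYN carried in a raw IPv4 datagram with a SYN-ACK. This is not avne's actual code: all names are mine, and address/port swapping and checksum calculation are deliberately omitted.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Byte offsets into a raw IPv4 datagram (20-byte header, no options)
   that carries a TCP segment. */
#define IP_HDR_LEN  20
#define TCP_SEQ     (IP_HDR_LEN + 4)   /* sequence number       */
#define TCP_ACKNUM  (IP_HDR_LEN + 8)   /* acknowledgment number */
#define TCP_FLAGS   (IP_HDR_LEN + 13)  /* flags byte            */
#define FLAG_SYN    0x02
#define FLAG_ACK    0x10

static uint32_t read_be32(const uint8_t *p) {
    return (uint32_t)p[0] << 24 | (uint32_t)p[1] << 16 |
           (uint32_t)p[2] << 8  | (uint32_t)p[3];
}

static void write_be32(uint8_t *p, uint32_t v) {
    p[0] = (uint8_t)(v >> 24); p[1] = (uint8_t)(v >> 16);
    p[2] = (uint8_t)(v >> 8);  p[3] = (uint8_t)v;
}

/* If syn is a SYN segment, build the matching SYN-ACK in reply:
   set the flags, pick our own sequence number and acknowledge the
   client's sequence number + 1. A real implementation would also
   swap addresses and ports and recompute the checksums.
   Returns 1 on success, 0 if the segment is not a SYN. */
int make_syn_ack(const uint8_t *syn, uint8_t *reply, uint32_t our_seq) {
    if (!(syn[TCP_FLAGS] & FLAG_SYN)) return 0;
    memcpy(reply, syn, IP_HDR_LEN + 20);
    reply[TCP_FLAGS] = FLAG_SYN | FLAG_ACK;
    write_be32(reply + TCP_SEQ, our_seq);
    write_be32(reply + TCP_ACKNUM, read_be32(syn + TCP_SEQ) + 1);
    return 1;
}
```

A real user-space stack must of course also track per-connection state, retransmissions and window sizes; this only shows why the client believes it is talking to the remote server.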

Running network protocols in User Space requires a network interface that is not connected to the network protocol stack of the kernel. The tuntap device of the Linux kernel can create such an interface. Network interfaces created by a tuntap device are connected to a file descriptor in User Space. Depending on the mode of the tuntap device (tun or tap), I/O on this file descriptor consists of IP datagrams (tun mode) or Ethernet frames (tap mode). avne uses the tun mode to process the network traffic of the client program.

At the moment avne implements (=intercepts) two network protocols: TCP/IP and DNS.

Lightweight virtualization

One problem that has plagued Tor since its start is that even if a program is configured to use Tor's SOCKS interface, there is no guarantee that all network traffic of the program actually uses it. The normal non-SOCKS network path is still accessible, and client programs can still use it. Bad client implementations and add-ons have bypassed the SOCKS interface in the past.

AVNE uses lightweight virtualization to create a network environment with only one network interface: a tun interface that is connected to the avne program. In this network environment the normal system network interface (eth0) is not available. All network traffic is guaranteed to pass through avne, which in turn passes it to the Tor network.

The virtualization technology that avne uses is called Linux kernel namespaces. Inside the Linux kernel, all the important resources are capable of having multiple independent instances. Programs running in User Space use one of the available instances of a resource and are said to be “running inside a namespace” of that resource.

Named after the resource they isolate, there are several different kernel namespaces: mount, UTS, IPC, network, PID and user.

Kernel namespaces are a great idea: lightweight and easy to use! At the moment avne only uses the network and UTS namespaces. Future versions can significantly improve the protection of the user by using additional namespaces. For example, a mount namespace can present a different root filesystem to the program started by avne: one with no access to the home directory (or with a fake home directory), and with different hosts, passwd and hostname files, etc.


The avne program is designed to be extensible. It is easy to add new network protocols or filters, or to add support for overlay networks other than Tor. This section gives you a high-level overview of the avne implementation.

Data flow

AVNE has a simple object-oriented design which basically consists of two objects: a connection object takes care of the connection with the client (for example Firefox), and an upstream object connects to a server (Tor). For TCP traffic this looks like:

client <-- fd_tun --> tcp_connection
                          upstream_connection <-- fd_upstream --> server
                              (input_buffer == ref output_buffer of tcp_connection)
                              (output_buffer == ref input_buffer of tcp_connection)
                              upstream_functions (connect, disconnect, read, write, prepare_events, handle_events)

In the diagram above, the tcp and upstream connections share input and output buffers. Data exchange between the tcp and upstream connections takes place only via these shared buffers. The tcp connection can further use a set of functions to control the upstream connection.
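A minimal C sketch of this arrangement, with invented names (the actual avne types will differ): the upstream object carries a table of function pointers, and its buffer pointers simply reference the buffers owned by the tcp connection.

```c
#include <stddef.h>
#include <sys/select.h>

/* A very small byte buffer shared between the two objects. */
typedef struct { unsigned char data[4096]; size_t len; } buffer;

struct upstream;

/* The function table every upstream type implements. */
typedef struct {
    int  (*connect)(struct upstream *u);
    void (*disconnect)(struct upstream *u);
    void (*prepare_events)(struct upstream *u, fd_set *r, fd_set *w);
    void (*handle_events)(struct upstream *u, fd_set *r, fd_set *w);
} upstream_functions;

typedef struct upstream {
    const upstream_functions *functions;
    int     fd_upstream;  /* private fd of the upstream side          */
    buffer *input;        /* == output buffer of the tcp_connection   */
    buffer *output;       /* == input buffer of the tcp_connection    */
} upstream;

typedef struct {
    buffer   input, output;  /* shared with the upstream object */
    upstream up;
} tcp_connection;

/* Wire the shared buffers together, as in the data-flow diagram. */
void tcp_connection_init(tcp_connection *c, const upstream_functions *f) {
    c->input.len = c->output.len = 0;
    c->up.functions   = f;
    c->up.fd_upstream = -1;
    c->up.input  = &c->output;  /* upstream reads what tcp wrote  */
    c->up.output = &c->input;   /* upstream writes what tcp reads */
}
```

Swapping in another upstream type is then just a matter of passing a different function table; the tcp connection code never changes.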

Like all good object-oriented designs, the tcp and upstream objects know nothing of each other's implementation. This makes the design flexible. For a tcp connection object it does not matter what the upstream object does, as long as the upstream object accepts its input and output buffers and implements a known set of functions. To illustrate this, avne has the following types of upstream connections:

Note that this design also makes it possible to chain multiple upstream connections. You can for example put a filter_adds object between the tcp connection and a SOCKS connection.

Event handling

All I/O is event-driven and uses a simple select loop. Pseudo code for TCP looks like:

    clear fdset_read and fdset_write
    add fd_tun to fdset_read

    while client_running

        for each tcp_connection do
            call tcp_connection->upstream.functions->prepare_events(tcp_connection->upstream)

        select(fdset_read, fdset_write)

        for each tcp_connection do
            call tcp_connection->upstream.functions->handle_events(tcp_connection->upstream)
            ack bytes that have been removed from buffer by upstream connection
            check if upstream connection changed to closed state

In the io_loop, file descriptors are added to the read and write sets by calling the prepare_events function on the active tcp connections. This function in turn calls the prepare_events of the upstream connection it owns. The interesting part is that a tcp connection does not have a private file descriptor, but shares the file descriptor of the tun interface with all other I/O on the tun interface. So, the prepare_events function of a tcp connection can only add the file descriptor of the owned upstream connection to the read or write sets.

After the select function returns the function handle_events is called for each tcp connection. This function calls the handle_events function of the owned upstream connection and on return checks for changes in the state of the upstream connection. If the upstream connection has written bytes from its output buffer, those bytes are ack-ed to the client. If the upstream connection changed its state to disconnected, the tcp connection starts the TCP disconnect sequence.

TCP protocol implementation

When I started my avne project I had some worries about implementing the TCP protocol. It's a complex protocol, mostly because TCP must work over unreliable connections: connections that can damage packets, drop packets, duplicate packets, change the order of packets, etc.

The tun interface that avne uses communicates over a very reliable connection (user space - kernel). With this in mind I decided to do a partial implementation of the TCP protocol. From the testing I have done it seems like this implementation works very well. There are still some loose ends, but I am confident these will be easy to fix.

DNS protocol implementation

Implementing DNS support proved to be more interesting than I thought. I ended up implementing two ways to do DNS queries.

I really like the TCP queries solution. It supports the full DNS protocol and it does not rely on a non-standard SOCKS protocol extension. There are however problems with this solution. For a TCP query you need to specify a DNS server. In my code I use Google Public DNS ( This worked fine for my tests, but it probably will not scale. For a DNS server, a TCP query uses considerably more resources than a UDP query. It would not surprise me if the number of simultaneous TCP connections from a client to the DNS server has a limit. If so, this will be a problem for a Tor exit node. When loading a web page you can easily have, say, 20 simultaneous DNS connections. If 100 users connected to an exit node do the same, the DNS server sees a total of 2000 simultaneous requests coming from one IP address. Will it honor these requests? Probably not. For TCP queries to scale, the Tor exit node code needs to intercept TCP traffic to port 53 and redirect the query to its own internal DNS resolver, or better still, to a full caching name server on the same machine.
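For reference, the TCP transport for DNS is standardized in RFC 1035, section 4.2.2: the wire-format message is simply prefixed with a two-byte big-endian length field. A sketch of that framing step (the helper name is mine, not avne's):

```c
#include <stdint.h>
#include <string.h>

/* RFC 1035 4.2.2: over TCP a DNS message is prefixed with a two-byte
   big-endian length. Wrap a wire-format query accordingly.
   Returns the framed length, or 0 if it does not fit. */
size_t dns_tcp_frame(const uint8_t *query, size_t qlen,
                     uint8_t *out, size_t outlen) {
    if (qlen > 0xFFFF || outlen < qlen + 2) return 0;
    out[0] = (uint8_t)(qlen >> 8);
    out[1] = (uint8_t)(qlen & 0xFF);
    memcpy(out + 2, query, qlen);
    return qlen + 2;
}
```

The reply comes back with the same two-byte prefix, which also tells the reader how many bytes to collect from the stream before parsing.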

In a normal network environment DNS queries are fast and cheap. If a client does not get a quick reply, it assumes the query got lost or was dropped and resubmits the query. The Tor network does not have the fast response times that clients expect, which leads to extra, unnecessary queries. To counter this I have implemented a simple client-side DNS cache.
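Such a cache can be very small. The sketch below (invented names, not avne's actual code) uses a fixed-size array with linear search and TTL-based expiry:

```c
#include <stdint.h>
#include <string.h>
#include <time.h>

#define CACHE_SLOTS  64
#define NAME_MAX_LEN 255

typedef struct {
    char     name[NAME_MAX_LEN + 1];
    uint32_t addr;     /* IPv4 address, network byte order */
    time_t   expires;  /* 0 == slot unused                 */
} dns_entry;

static dns_entry cache[CACHE_SLOTS];

/* Store a name->address mapping that is valid for ttl seconds.
   Reuses the entry for the same name or an expired slot; if the
   cache is full, overwrites an arbitrary slot (round robin). */
void dns_cache_put(const char *name, uint32_t addr, int ttl, time_t now) {
    static size_t next;
    size_t slot = next++ % CACHE_SLOTS;
    for (size_t i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].expires <= now || strcmp(cache[i].name, name) == 0) {
            slot = i;
            break;
        }
    strncpy(cache[slot].name, name, NAME_MAX_LEN);
    cache[slot].name[NAME_MAX_LEN] = '\0';
    cache[slot].addr    = addr;
    cache[slot].expires = now + ttl;
}

/* Look a name up; returns 1 and fills *addr if a fresh entry exists. */
int dns_cache_get(const char *name, uint32_t *addr, time_t now) {
    for (size_t i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].expires > now && strcmp(cache[i].name, name) == 0) {
            *addr = cache[i].addr;
            return 1;
        }
    return 0;
}
```

Honoring the TTL from the DNS reply matters here: serving stale answers from a client-side cache would trade one failure mode for another.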

Source code

For a quick look at the source code of the avne program you can follow this link: source code of avne.c

AVNE needs your I/O

This has been a long article, so if you are reading this you must be interested in avne and its technologies. I hope you will give the program a try, and that you will report any problems you encounter to me. Ideas are of course welcome too. Please keep in mind that this is alpha software that is not fully developed and tested. Do not use it if you need anonymity!

AVNE is new technology that can significantly enhance usability and security for Tor users. Like all new technology, its effects must first be carefully studied before it is used. This is especially true for technology that is used in combination with anonymity software like Tor, where a flaw can have serious consequences.

Here are some questions I have:

I developed avne with Tor in mind. Therefore I think most technical discussions about avne should take place at the tor-dev mailing list. Hope to see you there!
