This article is about avne, a Linux program that runs other programs inside a virtual network environment. All network traffic from this environment is intercepted and forwarded to the Tor network. Interception is transparent and guaranteed to be 100% effective.
To run a program inside the virtual network environment, simply type avne followed by the program name.
Examples:

avne bash
avne ip link
avne iceweasel -P avne -no-remote
avne chromium --disable-cache
avne wget https://blog.torproject.org/

Note: Iceweasel = Firefox, renamed by Debian for trademark reasons.
Interception of the network traffic takes place at the IP-level. Because of this, the program started by avne does not have to be Tor-aware. It does not have to have a special Tor configuration and it does not have to be SOCKS compatible. With avne more communications programs can use the Tor network.
AVNE is currently in the alpha phase of development. This means that it is not intended for end users. Alpha versions are for developers and people interested in the technologies behind the program. Although avne works quite well it is not fully developed and tested.
So, what do I hope to achieve with this release?
First of all I want to show you some new technologies that can greatly improve the security and usability of the client-to-Tor interface. I want to get you thinking: "hey, this stuff is cool and less difficult than I first thought - maybe now we could..." Please don't be shy about sharing your ideas!
Second I want your opinion on what should be included in the first beta release. I want avne to be a program to "connect and protect". At the moment I am mainly working on the connect part of the program, but there are plenty of possibilities to add extra protection to the program. Also, connect and protect may not always be compatible. Should I limit the connect part to gain protection?
Last but not least, with this release I hope to get some feedback in the form of bug reports.
Because avne is still in the alpha phase of development there is no install package for the program. Fortunately installation is easy. It involves the following steps:
Download the source code: avne-0.5.tar.gz
Extract the avne-0.5.tar.gz file.
tar xfvz avne-0.5.tar.gz
Compile the source code.
gcc -Wall -o avne avne.c

Note: if you get an error about the setns function, simply remove this function from the source code.
Change ownership of the executable to root and set the suid bit.
chown root avne && chmod u+s avne
Make a symbolic link to the executable in /usr/local/bin
ln -s /home/rob/projects/avne/avne /usr/local/bin/avne
AVNE expects Tor to be located at localhost:9050. You can specify another address in the avne.conf file which should be in the same directory as the executable.
There are two ways avne can run a program: it can create a brand new virtual network environment, or use an existing environment. Creating a new network environment is the normal way to start a program. It is as simple as typing:
avne chromium --disable-cache
avne iceweasel -P avne -no-remote
Important: When an Iceweasel (Firefox) instance is already running, a newly started Iceweasel forks from the running process. This forked process runs inside the network environment of its parent, not inside a network environment created by avne. You can prevent this behavior by first closing all running Iceweasel instances, or by starting Iceweasel with another profile (see the example above). To create a new profile, close all running Iceweasel instances and start Iceweasel with the -ProfileManager option.
Always check if avne is enabled by requesting its status page at IP address 10.10.10.10.
Using an existing virtual network environment is needed for debugging. It allows you to run a second program (for example Wireshark) inside the network environment of another program. If you want to attach to an existing network environment you need the PID of a program that is running inside the network environment. This PID is reported by avne shortly after the program starts.
First start Iceweasel:

avne iceweasel -P avne -no-remote

avne reports the PID of the program running inside the virtual environment:

avne: child pid is 1234

Use this PID to inject Wireshark into the same network environment:

avne --use-namespace 1234 wireshark

Note: you can also choose to start Wireshark first and then Iceweasel.
After a program is started you can check if avne is active by requesting its status page located at IP address 10.10.10.10. At the moment the status page displays quite a lot of debugging information. This information is not guaranteed to be correct; development is faster than reporting! The debugging information will be removed in future versions.
During development of a program like avne logging is essential. You can find its logfile at /var/log/avne. Be aware that this logfile is very detailed and can grow quite large.
The avne program uses a combination of two technologies: User Space Networking and lightweight virtualization.
User Space Networking
With User Space Networking, network protocols like IP, UDP and TCP are implemented by code running in User Space. A normal (non-kernel) program can use this to get almost total control of the network traffic. How? Let me give an example of how avne uses User Space Networking to intercept and forward TCP/IP traffic. If avne receives a TCP SYN packet (connect request) for, let's say, IP address 188.8.131.52, it sends back a TCP SYN-ACK packet (acknowledgment) to the communications program. The communications program then thinks it is connected to a server at 188.8.131.52, while in reality it is connected to the avne program. Once the connection with the communications program is fully established, avne connects to the Tor network and starts forwarding network traffic.
Running network protocols in User Space requires a network interface that is not connected to the network protocol stack of the kernel. The tuntap device of the Linux kernel can create such an interface. Network interfaces created by a tuntap device are connected to a raw socket in User Space. Depending on the mode of the tuntap device (tun or tap), I/O on the raw socket consists of IP-datagrams (tun mode) or Ethernet packets (tap mode). avne uses the tun mode to process the network traffic of the client program.
At the moment avne implements (=intercepts) two network protocols: TCP/IP and DNS.
One problem that has plagued Tor since its start is that even if a program is configured to use its SOCKS interface, there is no guarantee that all network traffic of the program actually uses the SOCKS interface. The normal non-SOCKS interface is still accessible, and client programs can still use it. Bad client implementations and add-ons have bypassed the SOCKS interface in the past.
AVNE uses lightweight virtualization to create a network environment with only one network interface, a tun interface that is connected to the avne program. In this network environment the normal system network interface (eth0) is not available. All network traffic is guaranteed to pass through avne which in turn passes it to the Tor network.
The virtualization technology that avne uses is called Linux Kernel Namespaces. Inside the Linux kernel, all the important resources are capable of having multiple independent instances. Programs running in User Space use one of the available instances of a resource and are referred to as "running inside a namespace" of that resource.
Named after the resource they isolate, there are several different kernel namespaces:
PID namespace: A PID namespace encapsulates a process tree. Starting a new PID namespace creates a new process tree which only contains processes that are started inside the newly created namespace. The first process running inside a PID namespace has a PID of 1, and acts as the “init-process” of the namespace.
Network namespace: A network namespace represents a completely independent network stack. This includes interfaces, IP addresses, routing and iptables rules. A newly created network namespace is spick and span - not even the lo interface is configured...
Mount namespace: A mount namespace consists of a set of mount points. A new mount namespace starts with a copy of all the mount points at the moment of creation. This copy is independent - mount actions inside the namespace do not affect mounts outside the namespace.
IPC namespace: The IPC namespace isolates the System V interprocess mechanisms like message queues, semaphore sets and shared memory.
UTS namespace: The UTS namespace encapsulates the settings for host name and domain name.
User namespace: A user namespace has its own user and group IDs. These IDs can - and normally do - overlap with global IDs.
Kernel Namespaces are a great idea. Lightweight and easy to use! At the moment avne only uses the network and UTS namespaces. Future versions can significantly improve the protection of the user by using additional namespaces. For example: using a mount namespace can present a different rootfs to the program started by avne. One with no access to the home directory (or a fake home directory), different hosts, passwd, hostname files, etc.
The avne program is designed to be extensible. It is easy to add new network protocols, filters, or add support for other overlay networks than Tor. This section gives you a high-level overview of the avne implementation.
AVNE has a simple object-oriented design which basically consists of two objects. A connection object takes care of the connection with the client (for example Firefox), and an upstream object connects to a server (Tor). For TCP traffic this looks like:
client <-- fd_tun --> tcp_connection        upstream_connection <-- fd_upstream --> server
                        input_buffer          input_buffer  (== ref output_buffer of tcp_connection)
                        output_buffer         output_buffer (== ref input_buffer of tcp_connection)
                                              upstream_functions (connect, disconnect, read,
                                                write, prepare_events, handle_events)
In the diagram above the tcp and upstream connection share input- and output buffers. Data exchange between the tcp and upstream connection only takes place via these shared buffers. The tcp connection can further use a set of functions to control the upstream connection.
Like all good object-oriented designs, the tcp and upstream objects know nothing of each other's implementation. This makes the design flexible. For a tcp connection object it does not matter what the upstream object does, as long as the upstream object accepts its input and output buffers and implements a known set of functions. To illustrate this, avne has the following types of upstream connections:
- tcp_upstream_connection: A simple pass-through connection (Tip: handy for testing!).
- socks5_upstream_connection: Connects to the Tor socks interface.
- http_upstream_connection: Implements an internal status server at IP-address 10.10.10.10:80
Note that this design also makes it possible to chain multiple upstream connections. You can for example put a filter_adds object between the tcp connection and a SOCKS connection.
All I/O is event-driven and uses a simple select loop. Pseudo code for TCP looks like:
io_loop()
    clear fdset_read and fdset_write
    add fd_tun to fdset_read
    while client_running
        for each tcp_connection do tcp_connection_prepare_events(tcp_connection)
        select(fdset_read, fdset_write)
        for each tcp_connection do tcp_connection_handle_events(tcp_connection)
        tun_handle_events()

tcp_connection_prepare_events(tcp_connection)
    call tcp_connection->upstream.functions->prepare_events(tcp_connection->upstream)

tcp_connection_handle_events(tcp_connection)
    call tcp_connection->upstream.functions->handle_events(tcp_connection->upstream)
    ack bytes that have been removed from the buffer by the upstream connection
    check if the upstream connection changed to the closed state
In the io_loop, file descriptors are added to the read and write sets by calling the prepare_events function on the active tcp connections. This function in turn calls the prepare_events of the upstream connection it owns. The interesting part is that a tcp connection does not have a private file descriptor, but shares the file descriptor of the tun interface with all other I/O on the tun interface. So, the prepare_events function of a tcp connection can only add the file descriptor of the owned upstream connection to the read or write sets.
After the select function returns the function handle_events is called for each tcp connection. This function calls the handle_events function of the owned upstream connection and on return checks for changes in the state of the upstream connection. If the upstream connection has written bytes from its output buffer, those bytes are ack-ed to the client. If the upstream connection changed its state to disconnected, the tcp connection starts the TCP disconnect sequence.
TCP protocol implementation
When I started my avne project I had some worries about implementing the TCP protocol. It's a complex protocol. This complexity is mostly due to the fact that TCP must work over unreliable connections: connections that can damage packets, drop packets, duplicate packets, change the order of packets, and so on.
The tun interface that avne uses communicates over a very reliable connection (user space - kernel). With this in mind I decided to do a partial implementation of the TCP protocol. From the testing I have done it seems like this implementation works very well. There are still some loose ends, but I am confident these will be easy to fix.
DNS protocol implementation
Implementing DNS support proved to be more interesting than I thought. I ended up implementing two ways to do DNS queries.
The first way only supports DNS resolve queries. AVNE extracts the domain name from an incoming DNS message and passes it to Tor using its SOCKS resolve extension. When Tor has resolved the query, a DNS reply datagram is constructed and sent to the client.
The second implementation uses TCP for DNS queries. DNS supports both UDP and TCP to submit queries. Because a TCP query differs from a UDP query only by a two-byte length field, a UDP query can be copied blindly into a TCP message. This message can then be passed to the Tor network over a normal SOCKS TCP connection. The reply is converted back into a UDP reply and sent to the client.
I really like the TCP queries solution. It supports the full DNS protocol and does not rely on a non-standard SOCKS protocol extension. There are however problems with this solution. For a TCP query you need to specify a DNS server. In my code I use Google Public DNS, at IP address 8.8.8.8. This worked fine for my tests, but it probably will not scale. For a DNS server, a TCP query uses considerably more resources than a UDP query. It would not surprise me if the number of simultaneous TCP connections from a client to the DNS server is limited. If so, this will be a problem for a Tor exit node. When loading a web page you can easily have, say, 20 simultaneous DNS connections. If 100 users connected to an exit node do the same, the DNS server sees a total of 2000 simultaneous requests coming from one IP address. Will it honor these requests? Probably not. For TCP queries to scale, the Tor exit node code needs to intercept TCP traffic to port 53 and redirect the query to its own internal DNS resolver, or even better to a full caching name server on the same machine.
In a normal network environment DNS queries are fast and cheap. If a client does not get a quick reply it assumes the query got lost or was dropped and resubmits the query. The Tor network does not have the fast response that is expected, leading to extra unnecessary queries. To counter this I have implemented a simple client-side DNS cache.
For a quick look at the source code of the avne program you can follow this link: source code of avne.c
AVNE needs your I/O
This has been a long article, so if you are still reading you must be interested in avne and its technologies. I hope you will give the program a try and report any problems you encounter. Ideas are of course welcome too. Please keep in mind that this is alpha software that is not fully developed and tested. Do not use it if you need anonymity!
AVNE is new technology that can significantly enhance usability and security for Tor users. Like all new technology, its effects must first be carefully studied before it is put to use. This is especially true for technology that is to be used in combination with anonymity software like Tor, where a flaw can have serious consequences.
Here are some questions I have:
Currently AVNE works with all TCP client software. Is this safe or should I exclude some types of programs? For example: I think it's a bad idea to allow email clients that do not use encryption. Without encryption the password of the user can be snooped at the exit node. What other programs (protocols) should not be used in combination with Tor?
How much and what type of access should a program running under avne have to the users system? The virtualization technology that is used is very powerful. It can be (and is) used to do full userland virtualization. At the moment only the network and UTS subsystems are virtualized. The next important subsystem to virtualize would be the file system. What files/directories should be visible and what files/directories should be faked?
What should be in the first beta release? For the first beta release I want the network part of the program to be safe to use with Tor. Because proper testing will take time, I don't think it would be wise to add too much extra functionality to the program.
I developed avne with Tor in mind. Therefore I think most technical discussions about avne should take place at the tor-dev mailing list. Hope to see you there!