Darknets are usually defined as closed, often decentralized and hidden, networks that overlay a public medium such as the Internet. Although the term carries a “peer-to-peer file sharing” connotation in common usage, a more accurate depiction would be the exchange of information, frequently anonymous, across multiple parties.
Darknets are not the only way to preserve privacy online. Anonymizer proxy services, both free and commercial, are possibly the best-known vehicles for achieving a certain degree of online privacy. However, they all suffer from a key weakness of the proxy model: the entity controlling the proxy has access, either transient or permanent, to the activity records of all the users of that particular service. This poses an interesting question: is your privacy better preserved when your information is known only to a few, potentially interested, parties? The obvious response from privacy purists is that it is not.
Darknets, by nature, tend to be decentralized, lending themselves to a paradigm where no single party can control or eavesdrop on the information moving across the network. Most of them go to great lengths to encrypt data and network identifiers so that they cannot be accessed by the transporting parties. Some even provide distributed data storage, allowing fragments or whole data elements to reside, encrypted, in permanent storage contributed by participants. Based on how nodes connect and where traffic terminates, Darknets fall into three categories:
1. Closed or pure Darknets, also known as F2F (friend-to-friend) networks, characterized by the fact that connections are only established between nodes based on prior arrangement or knowledge. These clearly provide the highest confidentiality; in many cases such Darknets can operate undetected for extended periods of time. However, access is limited to those who already know one or more participants. Requestor and resource are both contained within the Darknet itself, and no traffic ever leaves it.
2. Open Darknets, where new nodes establish connections with existing nodes more or less at random. This model makes it easy for new participants to join, but also offers more opportunities for third parties intending to snoop on or subvert the network. As in the previous case, both requestors and resources are internal to the Darknet.
3. Darknets with gateways or “exit nodes”, which allow access to external services not contained in the Darknet itself. As soon as traffic leaves the Darknet, it becomes vulnerable to information leakage, whether through attacks by third parties in the path of the traffic (man-in-the-middle) or through compromise of the final recipients. SSL or any other encryption protocol can provide a veil of confidentiality over the data contained in the transmission, but it cannot prevent a third party from discovering that the transmission itself happened. The third party may not know who originally sent the request (if the Darknet operates as expected and replaces the original source network identifier with the gateway’s), but will undoubtedly learn that a transmission occurred between the gateway and the destination at a particular time, using specific network protocols and ports.
We’ll go over three of the most popular Darknets, covering their fundamentals, applicability, and limitations.
Tor

The most popular forms of Darknets are usually those that allow some type of anonymous access to external resources. Since the Internet offers a significantly larger pool of resources than any Darknet in existence, most people simply look for ways to conceal their online identities while accessing those resources on the open Internet. Tor, based on a model known as “onion routing”, which wraps every transmission in multiple concentric layers of encryption, is possibly the most prevalent form of Darknet. Thanks to these layers, nodes relaying data for other nodes within the Darknet are oblivious to both the content of the transmissions and the identity of their original senders.

Tor also offers internal resources (in “onionland”, the .onion URL domain), but accessing them requires prior knowledge of their URIs, and the absence of a central directory of resources makes finding any given resource far from simple (imagine an Internet with no search engines and only incomplete indices of sites and information). This limitation is shared by other Darknets that provide in-network services. Using Tor is extremely simple, and self-contained installation packages are available for the most common operating systems. Tor is not exempt from challenges, and two potential problems have been identified in the past:
1. DNS resolution can, if not properly routed through Tor, expose the identities of requestor and resource. This is not a weakness in Tor itself, and it has been addressed by a relatively recent update;
2. If an attacker could identify the requestor beforehand, for example by exploiting a vulnerability on the user’s system, they could trace the path within the Tor network for any future request from that user. While the former is a configuration issue rather than a weakness in Tor, the latter is a limitation of the way Tor works. This is known as the “bad apple attack”.
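The layered encryption behind onion routing can be illustrated with a short sketch. This is a toy model only: the XOR keystream below is not cryptographically sound and the relay keys are made up, but it shows how the sender wraps the message once per relay, and how each hop can strip exactly one layer without ever seeing the layers beneath it (unless it is the final hop).

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream from repeated hashing -- NOT real cryptography,
    # used only to make the layering mechanism visible.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def wrap(message: bytes, relay_keys: list[bytes]) -> bytes:
    # The sender applies one encryption layer per relay, innermost first,
    # so the entry node's layer ends up on the outside.
    for key in reversed(relay_keys):
        message = xor_layer(message, key)
    return message

def unwrap_hop(message: bytes, key: bytes) -> bytes:
    # Each relay removes only its own layer; intermediate relays see
    # neither the plaintext nor the original sender.
    return xor_layer(message, key)

keys = [b"entry", b"middle", b"exit"]   # hypothetical per-relay keys
onion = wrap(b"GET /index.html", keys)
for k in keys:                          # each relay peels one layer in turn
    onion = unwrap_hop(onion, k)
print(onion)                            # b'GET /index.html'
```

Note that only after the last relay removes its layer does the original request reappear; any earlier hop holds an opaque blob.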
The Invisible Internet Project (I2P)

The Invisible Internet Project is, in some sense, similar to Tor: it uses multiple encryption layers to encapsulate requests, and it also replaces the sender information as the message is relayed through the network. However, the two differ, particularly in the following two aspects:
1. I2P also replaces the destination information to conceal the identity of the receiver;
2. I2P is based on so-called “garlic routing”, which aggregates multiple messages together in an attempt to prevent attacks that use traffic analysis to identify the sender and receiver of a particular transmission.
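The aggregation idea behind garlic routing can be sketched as bundling several messages (“cloves”) into one container, so an outside observer sees a single transmission rather than several correlatable ones. This is a simplified illustration with invented field names; real I2P encrypts the whole bundle to the receiving router, which we stand in for here with a simple integrity digest.

```python
import hashlib
import json

def bundle_cloves(cloves: list[dict]) -> bytes:
    # Aggregate several messages ("cloves") into one garlic message so that
    # individual messages cannot be matched to senders and receivers.
    payload = json.dumps(cloves).encode()
    digest = hashlib.sha256(payload).hexdigest().encode()
    # Stand-in for encryption to the receiving router: digest + payload.
    return digest + b"|" + payload

def unbundle(garlic: bytes) -> list[dict]:
    # The receiving router splits the bundle back into its cloves,
    # checking that the payload was not altered in transit.
    digest, payload = garlic.split(b"|", 1)
    if hashlib.sha256(payload).hexdigest().encode() != digest:
        raise ValueError("bundle tampered with")
    return json.loads(payload)

cloves = [
    {"to": "routerA", "data": "msg1"},   # hypothetical destinations
    {"to": "routerB", "data": "msg2"},
    {"to": "routerC", "data": "ack"},
]
garlic = bundle_cloves(cloves)
assert unbundle(garlic) == cloves
```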
I2P also allows anonymous website publishing within the network. These sites are called “eepsites” and use the .i2p domain (similar to Tor’s .onion domain). Since I2P has not been adequately peer reviewed and has a relatively small group of participants, anonymity cannot be guaranteed.
Freenet

Originally developed by Ian Clarke in the late 90’s, Freenet advocates a different model: a censorship-resistant, anonymous information store. To achieve this goal, it combines a hashed, distributed information store with strong cryptography. Each participant voluntarily contributes permanent storage space, which is used to host encrypted data blocks. These blocks are referenced by identifiers derived from their hashes, which serve the dual purpose of validating that the data hasn’t been tampered with and indexing the specific block for later retrieval. Any new data injected into the network is decomposed into blocks, and these blocks migrate toward nodes that tend to concentrate that particular portion of the hashing space. The more a block is accessed, the more copies of it exist and the higher the availability of the corresponding data element. The distributed store behaves as an LRU (least recently used) cache, so blocks that have not been accessed recently can be overwritten to make room for new data, effectively expiring uninteresting content in favor of content in higher demand.

One interesting aspect of this approach is that the publisher can disappear almost immediately after the data has been injected into Freenet without affecting the availability of the data itself. In addition to the data store, Freenet also provides peer-to-peer communication, although latencies vary depending on the actual topology and there is no delivery assurance (though peers that are close enough will most likely be able to communicate). Freenet’s routing relies on the “small-world” network theory, which holds that the topology is such that any node can be reached in a small number of hops, with knowledge only of immediately adjacent participants.
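The combination of hash-based identifiers and LRU expiry described above can be sketched in a few lines. This is a minimal single-node model (the class name, capacity, and API are illustrative, not Freenet’s actual interfaces): a block’s key is the hash of its content, which both indexes the block and lets a reader verify it was not tampered with, and the least recently used block is evicted when space runs out.

```python
import hashlib
from collections import OrderedDict

class BlockStore:
    """Toy content-addressed store with LRU eviction, sketching Freenet's
    storage model on a single node. Not Freenet's real API."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.blocks = OrderedDict()  # key (hex hash) -> block bytes

    def put(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()   # identifier = hash of content
        self.blocks[key] = data
        self.blocks.move_to_end(key)             # mark as recently used
        while len(self.blocks) > self.capacity:  # evict least recently used
            self.blocks.popitem(last=False)
        return key

    def get(self, key: str) -> bytes:
        data = self.blocks[key]
        if hashlib.sha256(data).hexdigest() != key:
            raise ValueError("block tampered with")  # hash doubles as checksum
        self.blocks.move_to_end(key)             # popular blocks stay cached
        return data

store = BlockStore(capacity=2)
k1 = store.put(b"block one")
k2 = store.put(b"block two")
store.get(k1)                    # touching k1 makes k2 the LRU entry
k3 = store.put(b"block three")   # store is full: k2 is evicted
```

Across many nodes, the same access-driven behavior is what lets popular content replicate while unrequested content quietly expires.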
As of its latest version (0.7), Freenet can be configured in one of two modes: F2F (pure Darknet) mode, or open Darknet mode. The former provides the highest degree of anonymity, while the latter allows easier joining when there is no prior knowledge of nodes in the network.
Ethical considerations and conclusion
The rights to privacy and freedom are fundamental rights, enshrined in many countries’ privacy laws and, in some cases, constitutions. By their very nature, however, Darknets also provide fertile ground for cybercrime, as they hamper investigators’ ability to perform forensic analysis. At the same time, Darknets can be a powerful tool against totalitarian and oppressive regimes.
In the end, Darknets are just a tool: what you do with them is what counts.