What is Nebula?

On November 19, 2019, the dev team at Slack HQ announced the first open-source public release of Nebula, a “mutually authenticated peer-to-peer software defined network.” Nebula essentially acts as a VPN, creating a virtual network interface by which you can interact with the rest of the network.

Traditional VPNs provide remote access to a private network. For example, a school computer lab may have a whole network of systems in a server room at the school, and allow students access by VPN. When connected, the students can interact with the systems in the lab as if they were physically located in the server room.

Nebula, like other, similar products, takes essentially the opposite approach. When connected, linked systems can interact with each other as if they’re on the same local network, regardless of where they’re located in the world. In other words, it’s like having “private LAN parties with remote people.”

In this tutorial, we’ll be using Nebula to provide remote access to the drop box. This approach has a number of advantages over typical SSH-based Command and Control schemes, including stability and scalability, but it also has drawbacks, such as a dependence on outbound UDP, which could be blocked by a firewall.


Configuring Nebula

There are three primary steps we’ll take to get set up with Nebula:

  1. Configure Certificate Authority
  2. Configure Lighthouse(s)
  3. Configure Nodes

Configure Certificate Authority

Nebula uses the familiar public-key cryptography model (a la TLS) where a privately-controlled and trusted Certificate Authority (CA) must exist in order to create the key-pairs for each system on the network.

In this arrangement, it is absolutely vital to keep private keys private, especially the private key of the CA itself. For this reason, the following steps should be taken from a secure system, not from the drop box. For the purposes of this tutorial, I used an encrypted Virtual Machine (VM) on my laptop to generate the CA and all other key-pairs. This is sub-optimal; if my laptop were stolen, I’d lose control over my Nebula network. In the field, you’ll want to take more precautions to protect your keys.
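
If you do end up generating keys on a laptop like I did, one small precaution (a sketch, not a complete key-management strategy) is to keep the CA’s private key encrypted at rest once it exists, decrypting it only when you need to sign a new node:

# Encrypt ca.key with a passphrase and remove the plaintext copy
gpg --symmetric --cipher-algo AES256 ca.key   # produces ca.key.gpg
shred -u ca.key

# Later, when signing a new node certificate:
gpg --output ca.key --decrypt ca.key.gpg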

To create the “Master Keys” for our CA, we’ll need to download the official Nebula binary for our specific hardware and OS (amd64 Linux, in this case):

wget https://github.com/slackhq/nebula/releases/download/v1.0.0/nebula-linux-amd64.tar.gz
tar -xf nebula-linux-amd64.tar.gz
rm nebula-linux-amd64.tar.gz

With this complete, there should be two executable binaries in your working directory:

haxys@straylight:~/Projects/Nebula/demo$ ls -lah
total 22M
drwxrwx--- 1 root vboxsf 4.0K Dec  5 16:09 .
drwxrwx--- 1 root vboxsf 4.0K Dec  5 16:08 ..
-rwxrwx--- 1 root vboxsf  17M Nov 19 12:36 nebula
-rwxrwx--- 1 root vboxsf 4.6M Nov 19 12:36 nebula-cert

To generate the CA keys, use nebula-cert:

./nebula-cert ca -name "Drop Box Demo"

This generates the CA key-pair:

  • ca.crt: CA public certificate
  • ca.key: CA private key
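
If you want to double-check what was generated, nebula-cert also includes a print subcommand (at least in the releases I’ve seen) that dumps a certificate’s details, such as its name, IPs, and validity period:

./nebula-cert print -path ca.crt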

That’s all there is to it! Now we can begin configuring our Lighthouse(s).

Configure Lighthouse(s)

While Nebula is a peer-to-peer technology, it uses a central node (or set of nodes) called a “Lighthouse” to help systems find each other. These nodes should be visible from the Internet, and can be run at little to no cost from the top cloud providers.

For this tutorial, I used a free-tier Debian 10 instance on Google’s cloud. I connected the instance to a static external IP, then added a rule to the firewall to allow inbound traffic on UDP port 4242. (This is the default port for Nebula, but you can use whichever port you please.)
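
If you’re also on Google Cloud, that firewall rule can be created from the gcloud CLI. A minimal sketch (the rule name and target tag below are arbitrary examples, so adapt them to your project):

# Allow inbound UDP 4242 to instances tagged "nebula-lighthouse"
gcloud compute firewall-rules create allow-nebula-udp \
    --direction=INGRESS \
    --allow=udp:4242 \
    --target-tags=nebula-lighthouse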

It will be necessary to transfer files securely to the Lighthouse machine. There are numerous tools and methods you could use for this. Perhaps I’ll list them in a different tutorial. For this one, I used Magic Wormhole. (I’ll also use this to transfer files to the drop box later.) For installation instructions, see their website.
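
For reference, on a Debian-based system installing Magic Wormhole is typically a one-liner; either the distro package or pip should work:

sudo apt install magic-wormhole
# or, via pip:
python3 -m pip install --user magic-wormhole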

I decided to use the 10.42.42.0/24 IP range for my network. With this in mind, my lighthouse’s IP would be 10.42.42.1.

The static external IP of my Lighthouse in the cloud was 104.154.194.215.

With this in mind, I created a minimal template config.yaml that could be copied onto the various client nodes:

pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/host.crt
  key: /etc/nebula/host.key

static_host_map:
  "10.42.42.1": ["104.154.194.215:4242"]

lighthouse:
  am_lighthouse: false
  interval: 60
  hosts:
    - "10.42.42.1"

punchy: true

tun:
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false

firewall:
  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: any
      host: any

If you don’t like these settings, feel free to explore Nebula’s example config.yaml to see the various options available to you. For this demo, I tried to keep things short.

Now that the template config.yaml has been created, create the Lighthouse keypair in a new directory:

mkdir Lighthouse
./nebula-cert sign -name "Demonstration Lighthouse" -ip "10.42.42.1/24"
cp ca.crt Lighthouse/
mv Demonstration\ Lighthouse.crt Lighthouse/host.crt
mv Demonstration\ Lighthouse.key Lighthouse/host.key
cp config.yaml Lighthouse/

For any other node, that’s all it takes to prepare the configuration files. But this is the Lighthouse; we’ll need to alter the config.yaml accordingly. In your favorite text editor, open Lighthouse/config.yaml and do the following:

  1. Change am_lighthouse to true.
  2. Delete these two lines:
    hosts:
      - "10.42.42.1"
    
  3. Add these three lines at the end:
    listen:
      host: 0.0.0.0
      port: 4242
    

When it’s complete, it should look like this:

pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/host.crt
  key: /etc/nebula/host.key

static_host_map:
  "10.42.42.1": ["104.154.194.215:4242"]

lighthouse:
  am_lighthouse: true
  interval: 60

punchy: true

tun:
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false

firewall:
  outbound:
    - port: any
      proto: any
      host: any

  inbound:
    - port: any
      proto: any
      host: any

listen:
  host: 0.0.0.0
  port: 4242

With this complete, return to the parent directory (cd ..). The Lighthouse configuration files are complete! All that’s left is to get everything installed on the Lighthouse system.

If you want to have additional Lighthouse systems, follow the same steps, but set a different node IP for the second Lighthouse (and so on).
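
For example, a hypothetical second Lighthouse could be signed with the next address in the range:

# Sign a certificate for a (made-up) second Lighthouse
./nebula-cert sign -name "Demonstration Lighthouse 2" -ip "10.42.42.2/24"

You’d then add the second Lighthouse’s Nebula IP and public address to static_host_map in every config.yaml, and to the lighthouse hosts list on the non-Lighthouse nodes.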

You should now have four files in your Lighthouse directory:

haxys@straylight:~/Projects/Nebula/demo$ ls -lah Lighthouse/
total 24K
drwxrwx--- 1 root vboxsf 4.0K Dec  5 17:30 .
drwxrwx--- 1 root vboxsf 4.0K Dec  5 17:33 ..
-rwxrwx--- 1 root vboxsf  247 Dec  5 17:18 ca.crt
-rwxrwx--- 1 root vboxsf  456 Dec  5 17:30 config.yaml
-rwxrwx--- 1 root vboxsf  308 Dec  5 17:19 host.crt
-rwxrwx--- 1 root vboxsf  127 Dec  5 17:18 host.key

Compress those into a .tar.gz file:

tar -zcf lighthouse.tar.gz Lighthouse/

Transfer the file to the Lighthouse. From the CA system, use wormhole to create the transfer:

haxys@straylight:~/Projects/Nebula/demo$ wormhole send lighthouse.tar.gz
Sending 934 Bytes file named 'lighthouse.tar.gz'
On the other computer, please run: wormhole receive
Wormhole code is: 3-penetrate-guidance

On the Lighthouse system, use wormhole to receive the file:

haxys@nebula-master:~$ wormhole receive
Enter receive wormhole code: 3-penetrate-guidance
 (note: you can use <Tab> to complete words)
Receiving file (934 bytes) into: lighthouse.tar.gz
ok? (y/N): Y
  ...
Received file written to lighthouse.tar.gz

Once the transfer is complete, create the /etc/nebula directory, extract the archive, and copy the contents of the ./Lighthouse/ directory into /etc/nebula/:

sudo mkdir /etc/nebula
tar -xf lighthouse.tar.gz
sudo cp Lighthouse/* /etc/nebula/
rm -fr Lighthouse/ lighthouse.tar.gz

Next, download and extract the Nebula binary release for the Lighthouse’s OS and architecture. (I used the amd64 version once again.) Then, move the two binaries into the /usr/bin directory:

wget https://github.com/slackhq/nebula/releases/download/v1.0.0/nebula-linux-amd64.tar.gz
tar -xf nebula-linux-amd64.tar.gz
rm nebula-linux-amd64.tar.gz
sudo mv nebula* /usr/bin/

Finally, update the crontab to make Nebula start on boot:

echo "@reboot root nebula -config /etc/nebula/config.yaml" | sudo tee -a /etc/crontab
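
Before relying on the cron entry, it’s worth running Nebula once in the foreground to confirm the configuration and certificates load cleanly (Ctrl-C to stop):

# One-off foreground run to check for config or certificate errors
sudo nebula -config /etc/nebula/config.yaml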

With all of this complete, reboot the Lighthouse with sudo shutdown -r now. When the system comes back online, SSH back in, and type ip a to see information about the active network interfaces. There should be a new entry, called nebula1, which lists its IP as 10.42.42.1/24:

haxys@nebula-master:~$ ip a
  ...
3: nebula1: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1300 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none
    inet 10.42.42.1/24 scope global nebula1
       valid_lft forever preferred_lft forever
    inet6 fe80::9b6e:8c7e:1169:6b2e/64 scope link stable-privacy
       valid_lft forever preferred_lft forever

Congratulations! Time to configure some nodes.

Configure Nodes

At a minimum, you’ll need two non-Lighthouse nodes: one for the pentester, and one for the drop box. Return to the system where you set up your CA and created the Lighthouse configuration. We’ll focus on the pentester’s node first.

To create the pentester’s key-pair, run the following commands:

mkdir Pentester
./nebula-cert sign -name "Pentester's Laptop" -ip "10.42.42.100/24"
cp config.yaml Pentester/
cp ca.crt Pentester/
mv Pentester\'s\ Laptop.crt Pentester/host.crt
mv Pentester\'s\ Laptop.key Pentester/host.key
tar -zcf pentester.tar.gz Pentester/

While we’re here, let’s go ahead and make the Drop Box’s key-pair as well:

mkdir DropBox-001
./nebula-cert sign -name "Drop Box 001" -ip "10.42.42.200/24"
cp config.yaml DropBox-001/
cp ca.crt DropBox-001/
mv Drop\ Box\ 001.crt DropBox-001/host.crt
mv Drop\ Box\ 001.key DropBox-001/host.key
tar -zcf DropBox-001.tar.gz DropBox-001/

Easy, right? All we’re doing is:

  1. Creating a new directory to store the node’s files
  2. Creating a new key-pair with the nebula-cert binary
  3. Copying the CA certificate and default config.yaml into the node’s directory
  4. Moving the new key-pair into the node’s directory (renamed to host)
  5. Compressing everything into a single archive.
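
If you expect to stamp out many nodes, those five steps are easy to wrap in a small shell function. A rough sketch (make_node is a made-up helper, not part of Nebula; it assumes nebula-cert, ca.crt, and the template config.yaml sit in the current directory):

# Automate the five per-node steps described above
make_node() {
    local name="$1" ip="$2" dir="$3"
    mkdir -p "$dir"
    ./nebula-cert sign -name "$name" -ip "$ip"
    cp config.yaml ca.crt "$dir"/
    mv "$name.crt" "$dir/host.crt"
    mv "$name.key" "$dir/host.key"
    tar -zcf "$dir.tar.gz" "$dir"/
}

# Example:
# make_node "Drop Box 002" "10.42.42.201/24" DropBox-002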

If you want to make additional nodes, that’s all there is to it. Now, let’s copy the configuration files to their appropriate systems and get everything set up.

First, I copied the pentester.tar.gz file onto my pentest laptop using Wormhole. From the CA system, I did the following:

haxys@straylight:~/Projects/Nebula/demo$ wormhole send pentester.tar.gz
Sending 931 Bytes file named 'pentester.tar.gz'
On the other computer, please run: wormhole receive
Wormhole code is: 3-decadence-music

On my pentest laptop, I received the files:

haxys@freeside:/tmp$ wormhole receive
Enter receive wormhole code: 3-decadence-music
Receiving file (931 Bytes) into: pentester.tar.gz
ok? (y/N): Y
  ...
Received file written to pentester.tar.gz

As before, I created the /etc/nebula directory, extracted the archive, and copied the files to the new directory:

sudo mkdir /etc/nebula
tar -xf pentester.tar.gz
sudo mv Pentester/* /etc/nebula/
rm -fr Pentester/ pentester.tar.gz

Next, I downloaded the appropriate Nebula binaries, extracted them, and moved them into the /usr/bin directory:

wget https://github.com/slackhq/nebula/releases/download/v1.0.0/nebula-linux-amd64.tar.gz
tar -xf nebula-linux-amd64.tar.gz
rm nebula-linux-amd64.tar.gz
sudo mv nebula* /usr/bin/

Finally, I added Nebula to the system’s crontab so it’ll start on boot:

echo "@reboot root nebula -config /etc/nebula/config.yaml" | sudo tee -a /etc/crontab

After rebooting the system, I checked the output of ip a to ensure that the nebula1 interface was properly configured with the 10.42.42.100 IP address. Then I used ping to see if I could reach 10.42.42.1. Everything looked to be working fine, so I proceeded to set up the Drop Box in the same way:

  1. Transfer DropBox-001.tar.gz to the drop box using Wormhole.
  2. Make the /etc/nebula directory and move the config files into it.
  3. Download the Nebula binary for the architecture (nebula-linux-arm6.tar.gz in my case).
  4. Extract the binaries and move them into /usr/bin.
  5. Add Nebula to the crontab to start on boot.
  6. Reboot, then test the connection.

With everything set up, I visited each of the three systems (Lighthouse, Laptop, and Drop Box) and pinged each of the others to ensure that each had full connectivity.
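
A quick way to run that round of checks from any one of the nodes is a small loop over the demo’s three Nebula addresses:

# Ping each Nebula IP (addresses from this demo)
for ip in 10.42.42.1 10.42.42.100 10.42.42.200; do
    ping -c 2 -W 2 "$ip" > /dev/null && echo "$ip reachable" || echo "$ip UNREACHABLE"
done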

That’s all there is to it!

Using Nebula

With Nebula installed and configured on the three systems, it’s easy to communicate between them as if they’re sitting on the same LAN. You can use simple tools like netcat to send data between the systems, and Nebula ensures that all data sent is end-to-end encrypted. And if you want to SSH into the drop box, it’s as easy as ssh root@10.42.42.200.
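
For example, a throwaway netcat session between the laptop and the drop box might look like the following (port 9000 is arbitrary, and exact flags vary by netcat flavor):

# On the drop box (10.42.42.200): listen on an arbitrary port
# (traditional netcat syntax; OpenBSD netcat is just "nc -l 9000")
nc -l -p 9000

# On the laptop: connect over the Nebula tunnel and type away
nc 10.42.42.200 9000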

One important consideration is that both the laptop and the drop box need to be able to communicate with the Lighthouse on UDP port 4242 (or whatever port you chose in the Lighthouse config.yaml). If the drop box is behind a firewall that blocks outbound UDP, it won’t be able to connect to the Nebula network.
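
If a node seems unable to join, one quick check (assuming you can run tcpdump on the Lighthouse) is to watch whether the node’s handshake traffic ever arrives:

# On the Lighthouse: watch for inbound Nebula traffic on UDP 4242
sudo tcpdump -ni any udp port 4242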

For this reason, it’s good to have a back-up C&C method, such as the traditional AutoSSH tunnel described in the next section of the tutorial. Another idea would be to use a small cellular modem with a prepaid SIM to provide a secondary connection to the Internet, instead of depending upon the target’s network for outbound communication.

