
Glutton 1.0 Release

23 Sep 2023 Lukas Rist honeypot glutton

I’d like to announce the 1.0 release of the server-side, low-interaction honeypot Glutton!

We have built Glutton as a versatile honeypot, capable of receiving arbitrary network traffic by accepting connections on any port. Being very easy to adapt and extend, Glutton is a fantastic tool for understanding network threats.

The first commit was on July 11th, 2016; the main objective was to receive traffic on every port without having to establish 65,535 listeners. Initially, this was achieved by simply redirecting all incoming traffic, regardless of its destination port:

$ iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-port 5000
$ iptables -t nat -A PREROUTING -p udp -j REDIRECT --to-port 5001

To make an educated guess about the network protocol in use, and to set the correct source port on the response packet, it is very advantageous to know the original destination port of the request. With conntrack we were able to look up the original destination port.
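The idea can be sketched by scanning the kernel's conntrack table for the entry matching the client's source address: the first `dport=` field of a matching line is the port the client originally targeted before the redirect. The function below is illustrative, not Glutton's actual code; on a real system the table text would come from `/proc/net/nf_conntrack` or the `conntrack` CLI.

```go
package main

import (
	"fmt"
	"strings"
)

// lookupOrigDstPort scans conntrack table text (e.g. the contents of
// /proc/net/nf_conntrack) for the entry whose original tuple matches the
// client's source IP and port, and returns the original destination port.
// Name and approach are hypothetical, for illustration only.
func lookupOrigDstPort(table, srcIP, srcPort string) (string, bool) {
	for _, line := range strings.Split(table, "\n") {
		if strings.Contains(line, "src="+srcIP+" ") &&
			strings.Contains(line, "sport="+srcPort+" ") {
			// the first dport= field is the original destination port
			if i := strings.Index(line, "dport="); i >= 0 {
				rest := line[i+len("dport="):]
				if j := strings.IndexByte(rest, ' '); j >= 0 {
					return rest[:j], true
				}
			}
		}
	}
	return "", false
}

func main() {
	// a single, abbreviated conntrack entry for illustration
	table := "ipv4 2 tcp 6 431999 ESTABLISHED src=192.0.2.10 dst=198.51.100.5 sport=51234 dport=23 src=198.51.100.5 dst=192.0.2.10 sport=5000 dport=51234"
	port, ok := lookupOrigDstPort(table, "192.0.2.10", "51234")
	fmt.Println(port, ok) // the attacker originally targeted port 23
}
```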

In 2017 we participated in Google Summer of Code with Mohammad Bilal, who made many contributions. As part of this project, we added TCP and SSH proxying, allowing Glutton to be placed between the attacker and a real, potentially vulnerable system. Around the same time, the Telnet worm Mirai (and Hajime) was making the rounds. We added a simple Telnet handler that presents a login prompt and handles the rather simple checks attackers used to verify the system, and started capturing samples of the worm.

On January 26th, 2017 we released a reworked networking stack based on Freki by Jonathan Camp. Freki is a tool for manipulating packets in userspace: using iptables' raw table, packets are routed into userspace via nfqueue, where Freki takes over. A set of rules is then applied, allowing for a great deal of flexibility. For example, you can forward all TCP ports to an HTTP honeypot and log the requests, or proxy TCP port 22 into a Docker container running an SSH honeypot. We primarily used it to redirect traffic to a honeypot server and bind it to a specific handler, based on the rule.

Here are the very basic iptables rules for nfqueue handling:

$ iptables -A INPUT -j NFQUEUE --queue-num 0
$ iptables -A OUTPUT -j NFQUEUE --queue-num 0

We quickly added more protocol handlers, specifically for SMB to handle EternalBlue/WannaCry (almost to the payload, contributions welcome), Jabber, and MQTT for IoT devices. This is where we started to use the flexibility of Glutton: watching the TCP packets, we would identify an unknown protocol, investigate the destination port and payload, and implement a basic handler. Once deployed, we could incrementally expand the capabilities and the level of interaction of our honeypot. Implement, observe, increment, observe, and so on.

At the end of 2018 we introduced HPFeeds support: a basic Go client implementation of hpfeeds, a simplistic publish/subscribe protocol written by Mark Schloesser (rep) and heavily used within the Honeynet Project for internal real-time data sharing. We now use the excellent fork by Brady Sullivan. This is also how Glutton integrates into T-Pot, an open-source platform combining multiple honeypots.

In an attempt to shorten the observe-and-implement cycle, in 2019 we added logging of TCP payloads to stdout and storage of them to disk. This allows watching live traffic while tweaking the honeypot, and the stored payloads can be used to quickly search the historic data for similar payloads to cross-reference. This led to protocol handlers for Ethereum RPC, Hadoop, ADB, memcache, and VMware attacks over the following two years.

Quite often we wouldn’t be able to assume or guess the expected protocol, so in mid-2022 we made a small change: replying to TCP requests with some random data, which occasionally led the attacking client to send additional packets, providing us with more clues about the protocol in question. Think of this as random server banners, or an attempt to provoke client error behavior.

// sending some random data
randomBytes := make([]byte, 12+rand.Intn(500))
if _, err = rand.Read(randomBytes); err != nil {
	return err
}
if _, err = conn.Write(randomBytes); err != nil {
	return err
}

Next up was a bigger change: after talking to the folks from Shadowserver and running into some issues with setting up Glutton with traffic forwarding, we worked on switching from nfqueue to iptables' TPROXY, based on a Cloudflare blog post by Marek Majkowski. Cloudflare's Spectrum is surprisingly similar to how Glutton functions as a honeypot, listening on all ports and handling traffic transparently. The rules employed look more or less like this:

iptables -t mangle -I PREROUTING -p tcp -m state ! --state ESTABLISHED,RELATED -j TPROXY --on-port 5000 --on-ip 127.0.0.1

This approach is used to proxy both TCP and UDP traffic to a single listener while keeping the original destination port intact. This way we achieve what previously required the overhead of nfqueue, using only a mangle-table rule during pre-routing in iptables. The rules can also be converted to nftables if required. We also highly recommend reading this post on the intricacies of UDP traffic.

Since this has been running, we have also started producing events to our new front-end Ochi, which has a publicly accessible instance.