
Behind the Scenes: hack.lu CTF Watchdog Infrastructure

With the hack.lu 2017 CTF successfully behind us, there is some time to recap the feedback. One thing we have heard over the last years is that the availability of our challenges is really great. To quote one piece of feedback we got this year: "The CTF runs so flawlessly that I didn't realize I needed to ask for support." I think there are two reasons for this (and everyone who has ever hosted a CTF knows it is not the well-written code of the challenges ;-) ). The first and main reason is the awesome team we have: everyone is really dedicated to the hack.lu CTF and wants to make it a great experience for every participating team. The other reason, I like to think, is that we constantly monitor our challenges. And that is what this article is about.

Every challenge author writes a proof-of-concept exploit for their challenge during the creation process in order to check that the challenge works. Our idea was: since the author writes the proof-of-concept exploit anyway, why not use it for monitoring? So we created the guideline that each proof-of-concept exploit should take the address of the service as its first argument and the port as its second, and exit with a code unequal to 0 if an error occurred (there may be exceptions, but this guideline is usually sufficient). That is all a challenge author has to take care of in order to provide monitoring for their service.
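To make the convention concrete, here is a minimal sketch of what such a PoC exploit could look like. The connection check and the expected "OK" banner are placeholders I made up for illustration; a real PoC would run the actual exploit and check for the flag format instead. Only the interface matters: host as argv[1], port as argv[2], exit code 0 on success.

```python
#!/usr/bin/env python3
# Hypothetical PoC exploit skeleton following the guideline above:
# argv[1] = service address, argv[2] = port, exit code != 0 on failure.
import socket
import sys

def check_service(host, port):
    # Connect to the service and verify it answers with the expected banner.
    # (The "OK" banner is a placeholder; a real PoC would run the full
    # exploit here and verify the flag format.)
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            banner = sock.recv(1024)
            return banner.startswith(b"OK")
    except OSError:
        return False

if __name__ == "__main__" and len(sys.argv) == 3:
    host, port = sys.argv[1], int(sys.argv[2])
    sys.exit(0 if check_service(host, port) else 1)
```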

But a bunch of scripts that can exploit the services of the CTF is not sufficient on its own. We need a central entity that notifies us if anything goes wrong. For this we use the alertR monitoring system, mainly the following components:

* alertR Server
* alertR Sensor Client Executer
* alertR Alert Client Push Notification
* alertR Manager Client Database

In the following sections I will describe what these components do and how they help us keep the services up and running.


alertR Server

This is the main component of the monitoring system. Each client connects to this server and sends all information about the monitored services to it. In our case, the server has exactly one group (called an alertLevel) to which each sensor (an exploit that tests a service) and each actuator (called an alert) belongs. This alertLevel is set to always trigger an alarm, even if the monitoring system is deactivated. How the alertR infrastructure looks in detail can be read on the corresponding GitHub wiki page.


alertR Sensor Client Executer

This is the part that executes the exploits. It checks the exit code of each executed exploit and raises an alarm if it is unequal to 0 (or if the exploit times out). The interval in which the exploit is executed and a timeout value for it are given by the author of the challenge.
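The core of this check can be sketched in a few lines. This is not the actual alertR implementation, just an illustration of the logic: run the PoC exploit with the author-supplied timeout and treat a non-zero exit code, or a timeout, as a failed service.

```python
# Sketch of the Sensor Client Executer's check logic (illustrative only):
# run a PoC exploit and report failure on a non-zero exit code or a timeout.
import subprocess

def run_watchdog(exploit_path, host, port, timeout):
    """Return True if the service is healthy, False otherwise."""
    try:
        result = subprocess.run(
            [exploit_path, host, str(port)],
            timeout=timeout,
            capture_output=True,
        )
    except subprocess.TimeoutExpired:
        return False  # a hanging exploit counts as a broken service
    return result.returncode == 0
```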


alertR Alert Client Push Notification

alertR is able to send push notifications to your Android devices. For this, you have to install the official alertR app on your Android device and set up an alertR account (one account per monitoring system, not per device). From then on, every time a challenge has a problem, you get a push notification on your mobile device.


alertR Manager Client Database

This component writes the monitoring system's data into a MySQL database so that any other component can process it. With its help, we provide two services:

IRC Bot

We wrote an IRC bot that processes the alertR system data. Most of our team is online in our internal IRC channel, so it was only logical to implement an IRC bot that keeps us updated on the current state of the CTF challenges. If a challenge encounters any problems, the IRC bot posts it into the channel.
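The interesting part of such a bot is deciding when to post. A hedged sketch of that logic, assuming the bot periodically snapshots the sensor states from the MySQL database (the snapshot shape and message wording here are my own, not the actual alertR schema or our bot's exact output):

```python
# Illustrative sketch of the bot's notification logic: compare the previous
# and current sensor-state snapshots and emit one message per change.
def format_state_change(challenge, triggered):
    if triggered:
        return "[ALERT] challenge '%s' failed its watchdog check!" % challenge
    return "[OK] challenge '%s' is back up." % challenge

def diff_states(old, new):
    """Compare two {challenge: triggered} snapshots and yield IRC messages."""
    for challenge, triggered in new.items():
        if old.get(challenge) != triggered:
            yield format_state_change(challenge, triggered)
```

Keeping this as a pure comparison of two snapshots means the bot stays silent while nothing changes, instead of spamming the channel on every polling interval.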

Status Website

The website that shows the state of our challenges is used by us internally as well as by the CTF participants. It shows the state of each challenge, plain and simple. With this, participants can check whether a challenge is actually working before asking for support when they encounter problems.
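Because the Manager Client Database already puts the sensor states into MySQL, the status page itself can stay trivial. A minimal sketch of its core (the dict input stands in for the database query; markup and wording are illustrative):

```python
# Illustrative sketch of the status website's rendering: map each
# challenge's sensor state (triggered = watchdog failed) to UP/DOWN.
def render_status(states):
    lines = ["<ul>"]
    for challenge in sorted(states):
        status = "DOWN" if states[challenge] else "UP"
        lines.append("<li>%s: %s</li>" % (challenge, status))
    lines.append("</ul>")
    return "\n".join(lines)
```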


Monitoring Infrastructure

Well, the hack.lu CTF is a hacking competition, so we try to make the infrastructure as secure as possible. This also applies to the monitoring system. The alertR monitoring system is network based, which means that each component can run on a different host. Since most watchdog scripts have the flag hardcoded, we have to make sure they cannot be read by anyone. Because of this, we set up the executing mechanism on a separate host that runs only this part, keeping the attack surface of this crucial component as small as possible. The status website as well as the IRC bot also run on a different host than the alertR server. This is mostly incidental, but it certainly does not weaken the security of the system.


Final Words

I hope you enjoyed this little article about one internal aspect of the hack.lu CTF. Perhaps it is helpful to others out there who also host CTFs.
