
Android (LineageOS 16) Execute Script on Start Up

For a project I am currently working on, I needed my old mobile phone (a Motorola Moto G 2014) to start a script directly after start up. I searched a lot in the depths of the Internet but only found outdated information. Therefore, I am writing this article so others do not have the same problem (and I have notes available if I want to do it again ;-) ). Since this is a change to the operating system, I needed something like LineageOS to have access to it. So, this is a description of how to set up your LineageOS 16 to start a script on start up.


Installing LineageOS 16

The first step is to install LineageOS 16 onto your mobile phone. I will not describe how this is done, because the LineageOS website has really good tutorials for this (here for the Moto G 2014). Since my phone is no longer supported, there are no official images for it, but I did not want to build LineageOS myself. So I searched for unofficial builds and found one with the name lineage-16.0-20190420-UNOFFICIAL-titan.zip, which I used to flash my mobile phone. Additionally, please install the LineageOS SU Addon to get root permissions on the phone. When everything is working, we can start our changes to the operating system.


Execute Script on Start Up

As I said, I tested a lot of different methods I found on the Internet. The one that worked for me I found in a forum thread. In short, we have to enable the init.d mechanism so that it executes start-up scripts. Everything can be done via adb with root permissions. The steps are the following:


  1. Turn on the developer options on your mobile phone.

  2. Allow USB debugging.

  3. Allow adb to have root access.

  4. Use the command adb root on your computer to restart adb with root access.

  5. Use the command adb shell to get a shell on the phone.

  6. Remount the file system with write permissions via:

     mount -o remount,rw /system

  7. Go to the directory /system/etc/init/ and create the file init_d.rc with the following content:

     service init_d /system/bin/sh /system/bin/sysinit
         user root
         group root
         disabled
         oneshot
         seclabel u:r:sudaemon:s0

     on property:sys.boot_completed=1 && property:sys.logbootcomplete=1
         start init_d

  8. Now you can go into the directory /etc/init.d and create scripts that are executed on start up.



To give an example, I add a script that loops and checks if the mobile phone is being charged. If it has not been charged for more than 5 seconds, it shuts the mobile phone down. The following has to be done:


  1. Go to the directory /etc/init.d and execute:

     touch 99batteryshutdown

  2. Execute the following commands to set the correct permissions:

     chgrp shell 99batteryshutdown
     chmod 755 99batteryshutdown

  3. Place the following content into the file:

     #!/system/bin/sh

     # Start script in background
     /bin/batteryshutdown.sh &

  4. Now go to the directory /bin and create the file batteryshutdown.sh with the correct permissions:

     touch batteryshutdown.sh
     chmod 755 batteryshutdown.sh

  5. Place the following content into the script file:

     #!/system/bin/sh

     CTR=0
     while true; do
         STATUS=$(cat /sys/class/power_supply/battery/status)

         # Observed states: Charging, Discharging, Full
         if [ "$STATUS" = "Discharging" ]; then
             CTR=$((CTR+1))
         else
             CTR=0
         fi

         # Tested: in battery mode, after around 20 seconds the process
         # does not wake up from sleep anymore until the charger is plugged
         # in again or the mobile phone is used by the user.
         if [ "$CTR" -gt 1 ]; then
             # On Lineage 15, with 'su -c' the command returns
             # CANNOT LINK EXECUTABLE "su": cannot locate symbol
             #svc power shutdown
             # On Lineage 16, without the 'su -c' the command returns just 'Killed'
             # (perhaps SELinux settings).
             su -c 'svc power shutdown'
         fi

         sleep 5
     done



With this script, the mobile phone shuts down around 10 seconds after the charger has been removed. Please note that the Android operating system optimizes processes for battery usage. This means that as soon as the phone runs on battery, processes get suspended when the system goes to sleep. You can see my observations in the comments of the script above. Hopefully, this helps some of you to not spend hours on testing.

Android (LineageOS 15.1 and 16) Auto Boot on Charging

For a project I am currently working on, I needed an old mobile phone with Android (a Motorola Moto G 2014) to automatically boot up as soon as it gets charged. In this project, the mobile phone is always connected to a charger and as soon as this charger gets power, the phone should start its boot process. Normally, a mobile phone just goes into a special "charging screen" instead. I searched a lot in the depths of the Internet but did not find much about this topic. Therefore, I am writing this article so others do not have the same problem (and I have notes available if I want to do it again ;-) ). Since this is a change to the operating system, I needed something like LineageOS to have access to it. So, this is a description of how to set up your LineageOS 16 to boot up as soon as it gets power via the charger. However, I also tested it on LineageOS 15.1 on a Nexus 5X and it works there as well.


Installing LineageOS 16

The first step is to install LineageOS 16 onto your mobile phone. I will not describe how this is done, because the LineageOS website has really good tutorials for this (here for the Moto G 2014). Since my phone is no longer supported, there are no official images for it, but I did not want to build LineageOS myself. So I searched for unofficial builds and found one with the name lineage-16.0-20190420-UNOFFICIAL-titan.zip, which I used to flash my mobile phone. Additionally, please install the LineageOS SU Addon to get root permissions on the phone. When everything is working, we can start our changes to the operating system.


Auto Boot on Charging

As I said, I tested a lot of different methods I found on the Internet. The one that worked for me I found in a forum thread. In short, we have to change the init.rc in the boot image. However, for this we have to reflash the mobile phone. I tried to change the file directly via adb (getting write permissions to the file and editing it directly), but after each reboot it changes back to the original file. So, we have to change it in the boot image itself.

Normally, I work on Linux. However, since there is a Windows tool that does all the packing and repacking (and I actually do not care about the Android image internals), I used Windows for this part. The steps we have to do are the following:


  1. Download Android Image Kitchen. I used version 3.5.

  2. Unzip our LineageOS file (the lineage-16.0-20190420-UNOFFICIAL-titan.zip) and copy the boot.img into the Android Image Kitchen directory (next to unpackimg.bat and repackimg.bat).

  3. Open a command line in Windows in this directory and execute:

     unpackimg.bat boot.img

  4. Go into the directory ramdisk and edit the file init.rc. I would suggest using Notepad++ for this, since the normal Windows editor could mess up the line endings (e.g., by using \r\n instead of \n).

  5. Find the section that starts with on charger and change it to the following:

     #[...]
     on charger
         class_start charger
         class_stop charger
         trigger late-init
     #[...]

  6. Repack the image by opening a command line in the Android Image Kitchen directory and executing repackimg.bat. You should now have a file called image-new.img. This is our new boot image.

  7. Copy the image-new.img to your phone (I used an SD card for this).

  8. Start TWRP on your phone (you used it to flash your LineageOS onto your phone, so do the same steps to go into the recovery mode which uses TWRP).

  9. In TWRP, go to install, switch to install image and then select the image-new.img file you copied to your phone. Select the boot partition and swipe to install it. In short: install -> install image -> select image-new.img -> select boot partition -> swipe to install.

  10. Reboot.

  11. Done.



After this, the mobile phone should boot up as soon as your charger delivers power to it. If you want to check that your changes made it onto the phone, you can use adb. Do the following:


  1. Turn on the developer options on your mobile phone.

  2. Allow USB debugging.

  3. Allow adb to have root access.

  4. Use the command adb root on your computer to restart adb with root access.

  5. Use the command adb shell to get a shell on the phone.

  6. Output the init.rc file via cat init.rc and see if your changes are there.
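
For reference, the whole check can also be done non-interactively from your computer. This is only a minimal sketch and assumes the boot ramdisk is mounted at /, as it is on this device:

adb root
adb shell cat /init.rc

In the output, the on charger section should now contain the trigger late-init line.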


So, I hope this is useful for others that have the same problem as I did.

AlertR User Management Update in Version 0.503-5

AlertR User Management Update

In version 0.503-5 of the AlertR server I updated the user management. The previous user management was always a thorn in my side: every user had to be added manually to the users.csv and the server had to be restarted. Additionally, the passwords of the users were stored in cleartext in the file (since I am working in security, this was always nagging at me). Hence, updating the user management was definitely necessary.


User Management Script

So what is actually new? First of all, users are no longer added by manually editing the users.csv file. The server now has a new script called manageUsers.py which handles all the user management. It can add, delete, and modify any user and list all existing ones. To make it simpler, it prompts for the data it needs and downloads information from the central repository. For example, when adding a new user it will ask for the username and password, download the list of existing clients from the central repository, and ask you what kind of client you want to add. Adding a user then looks like the following:


alertr@towel:/home/alertr/server# python manageUsers.py -a

Please make sure that the AlertR Server is not running while adding a user.
Otherwise it can lead to an inconsistent state and a corrupted database.
Are you sure to continue?
(y/n): y

Please enter username:
client_raspi_kitchen

Please enter password:

Please verify password:

####################################################################################################
No.  | Option
####################################################################################################
---------------------------------------- Type: alert -----------------------------------------------
1.   | Use instance 'alertClientDbus'.
2.   | Use instance 'alertClientExecuter'.
3.   | Use instance 'alertClientMail'.
4.   | Use instance 'alertClientPushNotification'.
5.   | Use instance 'alertClientRaspberryPi'.
6.   | Use instance 'alertClientTemplate'.
7.   | Use instance 'alertClientXBMC'.
---------------------------------------- Type: manager ---------------------------------------------
8.   | Use instance 'managerClientConsole'.
9.   | Use instance 'managerClientDatabase'.
10.  | Use instance 'managerClientKeypad'.
---------------------------------------- Type: sensor ----------------------------------------------
11.  | Use instance 'sensorClientDevelopment'.
12.  | Use instance 'sensorClientExecuter'.
13.  | Use instance 'sensorClientFIFO'.
14.  | Use instance 'sensorClientICalendar'.
15.  | Use instance 'sensorClientLightning'.
16.  | Use instance 'sensorClientPing'.
17.  | Use instance 'sensorClientRaspberryPi'.
18.  | Use instance 'sensorClientWeatherService'.
---------------------------------------- Type: other -----------------------------------------------
19. Enter instance and node type manually.

Please choose an option: 17
 


However, it also allows you to add the same user just with a single command execution:


alertr@towel:/home/alertr/server# python manageUsers.py -a -u client_raspi_kitchen -p totally_secret_pw -t sensor -i sensorClientRaspberryPi

Please make sure that the AlertR Server is not running while adding a user.
Otherwise it can lead to an inconsistent state and a corrupted database.
Are you sure to continue?
(y/n): y
 


If the prompt asking whether the AlertR server is currently stopped annoys you as well, you can suppress it:


alertr@towel:/home/alertr/server# python manageUsers.py -a -u client_raspi_kitchen -p totally_secret_pw -t sensor -i sensorClientRaspberryPi -y
 


If you do not have an Internet connection or you do not want to connect to the central repository, you can use the -o argument to disable it.
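
For example, the fully scripted call from above could then look like the following (that -o can be combined with all the other switches in exactly this way is an assumption on my part, so check the script's help output if in doubt):

alertr@towel:/home/alertr/server# python manageUsers.py -a -u client_raspi_kitchen -p totally_secret_pw -t sensor -i sensorClientRaspberryPi -y -o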


Password Storage

The passwords are no longer stored in cleartext but hashed with bcrypt. This ensures that an adversary who is able to get hold of the users.csv file cannot read them. When updating the AlertR server from a previous version, the old users.csv file will automatically be converted into the new format, so there is nothing to change here. However, the AlertR server needs a new pip package called bcrypt to work correctly.
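
Depending on your environment, installing it is usually a single command (shown here with pip; adapt it to your setup):

pip install bcrypt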


Adding and Deleting Users

An additional new feature is that the AlertR server does not have to be restarted when adding or deleting a user. The server checks every 60 seconds if the users.csv has changed and reloads it if it has. However, this does not work correctly when modifying a user: modifying a user without stopping the server will definitely corrupt your database. This happens because of the way the users are managed internally. And since this small edge case is just too much effort to fix (in terms of a cost-benefit assessment), I added the warning prompt to manageUsers.py.

A thoughtful reader might now ask: but you also show the warning prompt when adding or deleting a user. This is correct. Since deleting a user and instantly adding the same user with different settings is effectively the same as modifying it (because the AlertR server needs up to 60 seconds before reloading the users.csv file), I also added the warning prompt to the adding and deleting options.

Build a dynamic firewall, or how to dynamically add clients to iptables

Introduction

Some weeks ago I read an article about zero trust networks. Even though I knew the concept, I thought to myself: "How much of a zero trust network can I build with easy methods?". So I started to re-model my firewall to add dynamic rules to it depending on the trust level of the client. Before I start, please note that this is just the first building block. What I have in mind for the future and a discussion of the security issues with this architecture are at the end of this article. So if you read it and want to shout "I can easily circumvent this with xyz", please see the discussion section. And if you have something I did not think of, please let me know (preferably on Twitter). In the following, I assume everyone knows how iptables works.


TL;DR

I re-modeled the iptables script on my router so that it adds clients dynamically, depending on their trust level, with the help of the isc-dhcp-server.


Infrastructure

In my home network I use a self-made router based on Debian. The iptables rules are configured with a simple bash script holding all the rules, the clients get their IP addresses from an isc-dhcp-server, and bind is used as a local DNS server. Since this router has 3 separate network interfaces, I separated my wifi from my internal network and the Internet. The internal network and the wifi have different IP address ranges. The infrastructure of the network looks something like this:



Obviously, the laptops can switch from the wifi to the internal network by plugging in an ethernet cable, so we have to keep this in mind when designing the mechanism for dynamically updating the iptables rules. The old configuration I used was a classical one: I had a pool of valid IP addresses for the clients that connected to the network and configured the iptables rules for this pool statically. When a special client needed different iptables rules, I gave it a specific IP address and configured the rules for it accordingly.


Design

As I mentioned before, I want to have different trust levels which I assign to each client. When a client connects to the network, it should get an IP address from a pool and the iptables rules should be set up according to its trust level. Also, I want to be able to reset the iptables rules without losing the configured dynamic clients (for example when I change something in the static part of the rules) and without storing some kind of state for each client. With the help of iptables chains I came up with the following design:



This is the design for the INPUT chain; the FORWARD and OUTPUT chains follow the same design principle. I separated the iptables rules into 3 different categories: static chains, transit chains, and dynamic chains.

The static chains contain all the rules for the static clients that do not change their IP address (like the router itself or the servers). For example the router allows each client in the internal and wifi network to get an IP address via DHCP.

The transit chains contain the rules that connect the static chains to the dynamic chains. In the image you can see that the INPUT chain contains a rule that jumps to the dynamic input chain. This chain contains all the jump rules to the chains for the dynamic clients.

The dynamic chains contain the rules for the dynamic clients. In the image you can see that each client that gets an IP address from the DHCP server gets its own input chain that contains the rules for it. Each client gets its own rules according to its trust level. In the image you can see, for example, that PC1 is allowed to connect to the router via SSH, whereas Laptop1 and Phone1 are not.

This design allows me to change something in the static chains of my iptables script and restart it without losing the rules for the dynamic clients. This can be done by flushing and removing all static chains, but leaving the transit chains and dynamic chains alone. Otherwise, I would have had to write something that keeps state for each dynamic client, which would complicate everything further.
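
To make the chain layout a bit more concrete, here is a minimal sketch of how the static and dynamic parts could look for the INPUT chain (the chain names, the interface-independent rules, and the example client are my own illustration and not taken from my real script):

#!/bin/bash
# --- static part: recreated every time the firewall script is restarted ---
# transit chain that connects the static INPUT rules with the dynamic client chains
iptables -N dynamic_input 2>/dev/null
iptables -F INPUT
iptables -P INPUT DROP
# example static rule: allow clients to get an IP address via DHCP
iptables -A INPUT -p udp --dport 67 -j ACCEPT
# hand everything else over to the dynamic client chains
iptables -A INPUT -j dynamic_input

# --- dynamic part: created per client when the DHCP commit event fires ---
# e.g. PC1 (high trust level) got 10.1.1.55 and may reach SSH on the router
iptables -N in_10_1_1_55
iptables -A in_10_1_1_55 -p tcp --dport 22 -j ACCEPT
iptables -A dynamic_input -s 10.1.1.55 -j in_10_1_1_55

Because the INPUT chain is flushed but dynamic_input and the per-client chains are left alone, restarting the static part does not lose the rules of the clients that are already connected.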


Implementation

As I mentioned in the design section, I do not want to write something that keeps state for each dynamic client. But in order to add and remove iptables rules for clients dynamically, something has to know the current state of the clients. For this, I decided to use my isc-dhcp-server installation. It has 3 events on which it can execute a script: commit, release and expiry. The commit event is triggered whenever a client gets an IP address from the dhcp server. The release event is triggered when a client releases its IP address, and the expiry event triggers when an IP address lease expires. We can use this to add and delete iptables rules dynamically. The architecture for this looks as follows:



The dhcp server executes a wrapper script for each event with the IP address and the MAC address of the client as arguments. The wrapper scripts exist because dhcpd cannot start a script in the background, which means a directly executed script would block the dhcp server until it is finished. To avoid this, the wrapper script does nothing other than execute the add client/remove client script in the background, just forwarding the arguments. The add client script searches in a "database" (I just use a csv file for this) for a mapping of the MAC address to a trust level. If it finds one, it executes the iptables script and passes it the IP address and the corresponding trust level. The iptables script then adds the corresponding rules for the client. The remove client script, which is executed for the release and expiry events, just passes the IP address to the iptables script, which then removes the rules for the corresponding client.

This was the high-level overview of the architecture. Now to the technical part. I assume that the scripts are stored under /etc/firewall/. The given code is a slimmed-down version of the one I use. I removed aspects such as running the dhcp server under a different user than root in order to make it easier to understand (just add an entry in /etc/sudoers for the add client/remove client scripts). The important configuration part for the isc-dhcp-server (dhcpd.conf) looks like the following:


 subnet 10.1.1.0 netmask 255.255.255.0 {
  range 10.1.1.50 10.1.1.150;
  option routers 10.1.1.1;
  option broadcast-address 10.1.1.255;
  option domain-name "h4des.org";
  option domain-name-servers 10.1.1.1;

  on commit {
   set ClientIP = binary-to-ascii(10, 8, ".", leased-address);
   set ClientMac = binary-to-ascii(16, 8, ":", substring(hardware, 1, 8 ) );
   execute ("/etc/firewall/dhcpd/on_commit_wrapper.sh", ClientIP, ClientMac);
  }

  on release {
   set ClientIP = binary-to-ascii(10, 8, ".", leased-address);
   set ClientMac = binary-to-ascii(16, 8, ":", substring(hardware, 1, 8 ) );
   execute ("/etc/firewall/dhcpd/on_release_wrapper.sh", ClientIP, ClientMac);
  }

  on expiry {
   set ClientIP = binary-to-ascii(10, 8, ".", leased-address);
   set ClientMac = binary-to-ascii(16, 8, ":", substring(hardware, 1, 8 ) );
   execute ("/etc/firewall/dhcpd/on_expiry_wrapper.sh", ClientIP, ClientMac);
  }
 }

 subnet 192.168.0.0 netmask 255.255.255.0 {
  range 192.168.0.50 192.168.0.150;
  option routers 192.168.0.1;
  option broadcast-address 192.168.0.255;
  option domain-name "h4des.org";
  option domain-name-servers 192.168.0.1;

  on commit {
   set ClientIP = binary-to-ascii(10, 8, ".", leased-address);
   set ClientMac = binary-to-ascii(16, 8, ":", substring(hardware, 1, 8 ) );
   execute ("/etc/firewall/dhcpd/on_commit_wrapper.sh", ClientIP, ClientMac);
  }

  on release {
   set ClientIP = binary-to-ascii(10, 8, ".", leased-address);
   set ClientMac = binary-to-ascii(16, 8, ":", substring(hardware, 1, 8 ) );
   execute ("/etc/firewall/dhcpd/on_release_wrapper.sh", ClientIP, ClientMac);
  }

  on expiry {
   set ClientIP = binary-to-ascii(10, 8, ".", leased-address);
   set ClientMac = binary-to-ascii(16, 8, ":", substring(hardware, 1, 8 ) );
   execute ("/etc/firewall/dhcpd/on_expiry_wrapper.sh", ClientIP, ClientMac);
  }
 }
 


The scripts needed for the rest can be downloaded here.
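
If you prefer to write them yourself instead, a stripped-down sketch of the commit wrapper and the add client script could look like this (the wrapper file name matches the dhcpd.conf above; the add_client.sh name, the CSV format, and the interface of the iptables script are only assumptions for illustration):

on_commit_wrapper.sh:

#!/bin/bash
# Only job: return immediately so dhcpd is not blocked,
# the real work happens in the background.
/etc/firewall/dhcpd/add_client.sh "$1" "$2" &
exit 0

add_client.sh:

#!/bin/bash
# Arguments: <client ip> <client mac>
CLIENT_IP="$1"
CLIENT_MAC="$2"

# "database": one line per known client, e.g. "aa:bb:cc:dd:ee:ff;high"
TRUST_LEVEL=$(grep -i "^${CLIENT_MAC};" /etc/firewall/clients.csv | cut -d ';' -f 2)

# unknown MAC address -> do not add any dynamic rules
[ -z "$TRUST_LEVEL" ] && exit 0

# let the iptables script create the chain and rules for this client
/etc/firewall/iptables.sh add_client "$CLIENT_IP" "$TRUST_LEVEL"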


Discussion and Future Work

Obviously, this is not a finished zero trust network and it still has security issues. But it is a first building block for it. The biggest security issue is relying on the MAC address to distinguish clients; of course, any adversary can easily forge this. A more sophisticated method has to be used for this (perhaps IEEE 802.1X?), but at the moment I have no idea what can be used for it in an easy way.

The next problem: even if distinguishing clients were secure, an attacker could still steal the MAC address of a client that is currently connected to the network. In order to tackle this issue, the ARP packets would have to be monitored. I do not know what tools already exist for this.

Also related to the IP addresses: if a client just disconnects from the network without releasing its IP address, the router still allows all traffic for it according to the trust level until the expiry event triggers. In this time frame, an attacker can abuse the iptables rules. This can be mitigated by lowering the lease time of an IP address.

At the moment, the iptables rules are only set on the router; the servers still do not distinguish between the clients. A better way would be to let the router tell the servers to add or remove iptables rules. Right now, I think the easiest way to do this is to use ssh and just execute a script on the server that handles everything. And it would also be way cooler if not only the router changed its permissions dynamically, but the servers as well (meaning the whole network does) :-)

I guess these were all the security issues with the current concept that I have in mind at the moment. But if you have thought of something else, please let me know (preferably on Twitter). And if you have other ideas to tackle the problems I mentioned, please let me know as well.

Behind the Scenes: hack.lu CTF Watchdog Infrastructure

After the successful hack.lu 2017 CTF, there is some time to recap the feedback. One thing we have heard over the last years is that the availability of our challenges is really great. To quote one piece of feedback we got this year: "The CTF runs so flawlessly that I didn't realized I needed to ask for support.". I think there are two reasons for this (and everyone who has ever hosted a CTF knows it is not the well-written code of the challenges ;-) ). The first and main reason is the awesome team we have. Everyone is really dedicated to the hack.lu CTF and wants to make it a great experience for each participating team. The other reason, I like to think, is that we constantly monitor our challenges. And this is what this article is about.

Every challenge author writes a proof-of-concept exploit for his challenge during the creation process in order to check that the challenge works. Our idea was: since the author writes the proof-of-concept exploit anyway, why not use it for the monitoring? So we created the guideline that each proof-of-concept exploit should use the first argument for the address of the service, the second argument for the port, and return an exit code unequal to 0 if an error occurred (there might be exceptions, but this guideline is usually sufficient). That is all a challenge author has to take care of in order to provide monitoring for his service.
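
To illustrate the guideline, a check script for a hypothetical challenge could be as simple as the following sketch (the flag format and the netcat interaction are made up; the only fixed parts are the argument order and the exit code):

#!/bin/bash
# usage: ./check_challenge.sh <address> <port>
HOST="$1"
PORT="$2"

# run the proof-of-concept exploit against the service
OUTPUT=$(echo "get_flag" | timeout 10 nc "$HOST" "$PORT")

# exit code 0 if the flag is still retrievable, otherwise raise an alarm
if echo "$OUTPUT" | grep -q "flag{"; then
    exit 0
fi
exit 1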

But a bunch of scripts that can exploit the services of the CTF is not sufficient on its own. We need a central entity that notifies us if anything goes wrong. For this we use the alertR monitoring system, mainly the following components of it:

* alertR Server
* alertR Sensor Client Executer
* alertR Alert Client Push Notification
* alertR Manager Client Database

In the following sections I will describe what these components do and how they help us keep the services up and running.


alertR Server

This is the main component of the monitoring system. Each client connects to this server and sends all information about the monitored services to it. In our case, the server has exactly one group (called alertLevel) to which each sensor (an exploit that tests a service) and each actuator (called alert) belongs. This alertLevel is set to always trigger an alarm, even if the monitoring system is deactivated. How the alertR infrastructure looks in detail can be read on the corresponding Github Wiki Page.


alertR Sensor Client Executer

This is the part that executes the exploits. It checks the exit code of each executed exploit and raises an alarm if it is unequal to 0 (or if the exploit times out). The interval in which the exploit is executed and a timeout value for it are given by the author of the challenge.


alertR Alert Client Push Notification

alertR is able to send push notifications to your Android devices. For this, you have to install the official alertR app on your Android device and set up an alertR account (one account per monitoring system, not per Android device). Now, every time a challenge has a problem, you get a push notification on your mobile device.


alertR Manager Client Database

This component provides the monitoring system's data in a MySQL database so that any other component can process it. With the help of this, we provide two services:

IRC Bot

We wrote an IRC bot which can process the alertR system data. Most of our team is online in our internal IRC channel. Therefore, it was only a logical step to implement an IRC bot which will keep us updated on the current state of the CTF challenges. If a challenge encounters any problems, the IRC bot will post it into the channel.

Status Website

The website that shows the state of our challenges is used by us internally as well as by the CTF users. This website just shows the state of each challenge, plain and simple. With this, CTF users can see if a challenge is actually working before asking for support when they encounter problems.


Monitoring Infrastructure

Well, the hack.lu CTF is a hacking competition; therefore, we try to make the infrastructure as secure as possible. This also applies to the monitoring system infrastructure. The alertR monitoring system is network-based, which means that each component can run on a different host. Since most watchdog scripts have the flag hardcoded, we have to make sure they cannot be read by anyone. Because of this, we set up the execution mechanism on a separate host that only runs this part. As a result, the attack surface for this crucial part is as small as possible. The status website as well as the IRC bot also run on a different host than the alertR server. This is mostly circumstantial, but it certainly does not weaken the security of the system.


Final Words

I hope you enjoyed this little article about one internal aspect of the hack.lu CTF. Perhaps it is helpful to some people out there that are also hosting CTFs.