Tuesday 24 November 2020

Magnet Weekly CTF writeup - Week 7

This week was a short and easy week as promised, so let's jump right into the questions!

This week's challenge is split into three parts, but the answers can all be obtained from the networking file /etc/network/interfaces.


Part 1
What is the IP address of the HDFS primary node?

Taking 'HDFS primary node' to refer to the HDFS master node, we zoom in on the /etc/network/interfaces file and see its IP.

Answer: 192.168.2.100


Part 2
Is the IP address on HDFS-Primary dynamically or statically assigned?

Referencing the same network file, we see that the interface is configured with a static IP address.

Answer: statically


Part 3
What is the interface name for the primary HDFS node?

Within the same network interface file, we can see the interface name for the primary network interface.

Answer: ens33

Note: There is another interface, ens36, present in the same file, which is configured for DHCP. Ideally, one should check the logs to see which of the network interfaces was/were active and their respective IP addresses, but for this week's challenge, I simply went with the first likely answer and got lucky.
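For reference, the relevant stanzas in /etc/network/interfaces would look roughly like the sketch below. The interface names and the address are the ones recovered above; the netmask value is a typical placeholder and not taken from the image.

```
auto ens33
iface ens33 inet static
    address 192.168.2.100
    netmask 255.255.255.0

auto ens36
iface ens36 inet dhcp
```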

Tuesday 17 November 2020

Magnet Weekly CTF writeup - Week 6

We are in the second week of the Linux Hadoop images by Ali Hadi and Magnet has also kindly posted a recap for October's challenges. On to this week's challenge question!

This week's challenge is split into two parts. The first question:

Challenge 6 (Nov. 9-16) The Elephant in the Room
Part One: Hadoop is a complex framework from Apache used to perform distributed processing of large data sets. Like most frameworks, it relies on many dependencies to run smoothly. Fortunately, it's designed to install all of these dependencies automatically. On the secondary nodes (not the MAIN node) your colleague recollects seeing one particular dependency failed to install correctly. Your task is to find the specific error code that led to this failed dependency installation.

This question initially had me looking in the /usr/local/hadoop directories for program logs, but there was nothing noteworthy other than some messages relating to failure to connect to the master server. Reading the Hadoop documentation also did not turn up any information on automatic installation of dependencies or where such logs would be stored. So I did the next best thing I could and retraced what the user did on the box.

Checking the /home/hadoop/.bash_history logs to see what I could find, I noted that the user attempted to install Java via apt-get, but did not see any command related to automatic installations. My next best guess was the /var/log/ directory, and that is where I found what we were seeking.

In the /var/log/apt/history.log screenshot above, we see some failures in the installation of Java at 2017-11-08 01:17:04. It so happens that Java is required software for Hadoop (i.e. a dependency), which led me to think I was close.

Checking the corresponding /var/log/apt/term.log around the time in question, per the above image, we see that the Java installation failure was due to an HTTP 404: Not Found error. So I tried the answer "404" and bingo!

Answer (part 1): 404


Challenge 6 Part 2 (Nov. 9-16) The Elephant in the Room
Don't panic about the failed dependency installation. A very closely related dependency was installed successfully at some point, which should do the trick. Where did it land? In that folder, compared to its binary neighbors nearby, this particular file seems rather an ELFant. Using the error code from your first task, search for symbols beginning with the same number (HINT: leading 0's don't count). There are three in particular whose name share a common word between them. What is the word?

I understood the 'ELFant' reference to be about the size of the file, and that this was related to whichever version of Java the user managed to successfully install. A quick search for Java on the system pointed me to the /usr/local/jdk1.8.0_151 directory.

Searching for the largest 'binary' file (I took it to mean 'non-text' initially) had me poking in the src.zip file for symbols beginning with 404. Finding nothing that could be the answer, I re-read the question and it finally clicked that the 'ELFant' reference referred to both the size of the file and the type of file (ELF file)! Nice wordplay by Magnet Forensics there. With that, it was straightforward to locate the largest ELF file (unpack200) and search for the appropriate symbols using the readelf command with the -s option to display the symbol table (ignoring leading zeroes per the hint).
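The actual command against the mounted image was along the lines of `readelf -s /mnt/hdfs/usr/local/jdk1.8.0_151/bin/unpack200 | grep -E ': 0*404'` (the /mnt/hdfs mount point is from my setup). The snippet below demonstrates the same filter on readelf-style sample lines; the symbol names in it are made up for illustration and are not from the actual binary.

```shell
# readelf -s prints one symbol per line as: index: value size type bind vis ndx name.
# The pattern ': 0*404' keeps symbols whose value begins with 404 once
# leading zeros are ignored, per the hint.
printf '%s\n' \
  '    10: 0000000000404a10   120 FUNC    LOCAL  DEFAULT   14 sample_one' \
  '    11: 0000000000404b90    64 FUNC    LOCAL  DEFAULT   14 sample_two' \
  '    12: 0000000000604c20    32 OBJECT  LOCAL  DEFAULT   25 no_match' \
  | grep -E ': 0*404'
```

Only the first two sample lines survive the filter; on the real binary, three of the surviving symbol names share the word we are after.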

Answer: deflate

Note: I initially solved the challenge by working on the HDFS-Slave2.E01 image, but the write-up was done by retracing my steps on the HDFS-Slave1.E01 image. So it doesn't matter which of the two slave nodes one worked with; the answer is the same for both.

Tuesday 10 November 2020

Magnet Weekly CTF writeup - Week 5

We are switching over to a Linux image this month; the image used is Ali Hadi's Linux image from an HDFS cluster.

This week's question:

Had-A-Loop Around the Block
What is the original filename for block 1073741825?

Well well, once again I was stumped at the start of the question. Where do we start looking? There were a total of three sets of E01 images provided, and, unsure of what to expect, I loaded each of them up in FTK Imager and satisfied myself that all three images were of Linux systems. (They almost looked like clones of each other too!) But with three system images, where do I start looking? Taking a hint from the image names, I decided to research what HDFS is.

The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. From the linked HDFS architecture guide, HDFS has a master/slave architecture with a single NameNode (master server) that manages the file system namespace, together with a number of DataNodes that manage the storage. In particular, I noted the following regarding the persistence of file system metadata on an HDFS cluster:

The entire file system namespace, including the mapping of blocks to files and file system properties, is stored in a file called the FsImage. The FsImage is stored as a file in the NameNode’s local file system.

Could this FsImage file contain the secrets we are looking for? First we had to try and locate the file on our NameNode, so I mounted the HDFS-Master.E01 image at /mnt/hdfs to commence the search. Note also that this image appeared to have a dirty log and required the "norecovery" option to be mounted.

First, I tried searching for the FsImage file, as well as the EditLog. A case-insensitive regex search was used for the find command as my initial searches did not turn up anything, and the output was piped to grep to filter out the Hadoop HTML documentation files.

# find /mnt/hdfs -iregex ".*FsImage.*" -print | grep -v ".html"

Ignoring the .md5 files and those in the /tmp/ directory for the time being, I focused my search on the three fsimage files found in the /usr/local/hadoop and /opt/hadoop directories and peeked at their contents.

It quickly became apparent that some help was needed to decode the contents of the file, and I thankfully chanced upon this answer by Jing Wang on Stack Overflow that pointed me to the HDFS Offline Image Viewer utility. I downloaded and unpacked the Hadoop 2.x release and queried the fsimage files. (Note that the HDFS utilities require the JAVA_HOME variable to be configured.)

# /opt/hadoop/bin/hdfs oiv -p XML -i /mnt/hdfs/opt/hadoop/hadoop/dfs/name/current/fsimage_0000000000000000000 -o fsimage_00.xml
# /opt/hadoop/bin/hdfs oiv -p XML -i /mnt/hdfs/usr/local/hadoop/hadoop2_data/hdfs/namenode/current/fsimage_0000000000000000024 -o fsimage_24.xml
# /opt/hadoop/bin/hdfs oiv -p XML -i /mnt/hdfs/usr/local/hadoop/hadoop2_data/hdfs/namenode/current/fsimage_0000000000000000026 -o fsimage_26.xml

Looking through the resultant XML files, I found the name of the file occupying block 1073741825 present in both fsimage_0000000000000000024 and fsimage_0000000000000000026.
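For context, the OIV XML output lists each file as an inode entry whose blocks section carries the block IDs, so matching the block number back to a filename is a simple text search. The fragment below is a trimmed sketch of that shape; the inode id and genstamp values are illustrative, while the name and block ID are the ones from this challenge.

```
<inode>
  <id>16400</id>
  <type>FILE</type>
  <name>AptSource</name>
  <blocks>
    <block>
      <id>1073741825</id>
      <genstamp>1001</genstamp>
    </block>
  </blocks>
</inode>
```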

Answer: AptSource



Update 24 Nov 2020: From the answer reveal by Magnet, it appears that the HDFS EditLog files were named edits_* and can be parsed by Hadoop's oev tool. No wonder I couldn't find them previously.

Sunday 8 November 2020

Mounting E01 images in Linux

Some quick notes on mounting EWF images in Linux. The Expert Witness Format (EWF) is commonly used by EnCase and other forensic tools. This format divides the physical bit-stream data of the disk into data chunks interlaced with CRCs for each chunk. The first chunk of data is created with the file extension 'E01', with subsequent chunks named in running sequential order (e.g. 'E02', 'E03', etc.).

The following commands were tested on an Ubuntu 20.04 LTS system with the ewf-tools package installed.

Print EWF image information

# ewfinfo image.e01

Mount EWF container and check disk layout

# ewfmount image.E01 /mnt/e01/
# fdisk -l /mnt/e01/ewf1

Note the sector size as well as the starting sector of the partition to be mounted. In the example image above, the 78GB Linux partition is at an offset of 2048*512 = 1,048,576 bytes.
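The byte offset is simply the starting sector multiplied by the sector size, which the shell can compute directly from the fdisk output:

```shell
# byte offset = starting sector * sector size (values from the example above)
echo $((2048 * 512))
# prints 1048576
```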

Attach disk image file to loop device (Optional)

# losetup --show -f /mnt/e01/ewf1


Mount image disk partition

# mount -o ro,loop,offset=<offset> <loop-device/disk-image> <mount-point>

In our example, the command we will use is:

# mount -o ro,loop,offset=1048576 /dev/loop0 /mnt/partition1

Or:

# mount -o ro,loop,offset=1048576 /mnt/e01/ewf1 /mnt/partition1

Occasionally one may get an error saying "cannot mount block device /dev/loop read-only" because the filesystem has a dirty log that needs to be replayed, but the read-only option prevents that. In this situation, add the 'norecovery' option to overcome the error.

Note also that one can add the '-t' option to specify the filesystem type if required. In my experience, Linux is fairly adept at auto-detecting and mounting NTFS, exFAT, and ext2/3/4 filesystems correctly even without the '-t' option. Other useful options include 'noexec' to prevent accidental execution of malicious binaries in the image file.

Unmount partitions and devices

# umount <mount-point>
# losetup -D
# umount /mnt/e01

Tuesday 3 November 2020

Magnet Weekly CTF writeup - Week 4

We are on to week 4 of the Magnet Weekly CTF Challenge, and the final question for the Android image from week 1.

Animals That Never Forget
Chester likes to be organized with his busy schedule. Global Unique Identifiers change often, just like his schedule but sometimes Chester enjoys phishing. What was the original GUID for his phishing expedition?

Okay, I had absolutely no idea where to start for this week's challenge, so I went ahead with parsing the Android image using the fantastic ALEAPP from Alexis Brignoni for some leads. A friend guessed that it might be related to the Calendar or some scheduling app, as the question mentioned "organized" and "busy schedule".

Looking through the information parsed by ALEAPP, we see something of interest in the Recent Activity related to Evernote app:


Knowing that Evernote is frequently used as a notes organizer and more, we might be on to something here. So the next step was to extract the app directory for Evernote and take a look at what we have within.

$ tar -xf MUS_Android.tar data/data/com.evernote

Poking around the contents of the app directory, I spied a database at data/data/com.evernote/databases/user213777210-1585004951163-Evernote.db that looked promising, so I opened it up with DB Browser for SQLite for a more detailed look. Within this database, we have a table named guid_updates, as well as a note within the notes table with the very suspicious title of "Phishy Phish phish". From here, it was straightforward to get the answer we needed using a simple SQL statement:
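A hedged sketch of such a query: the guid_updates column names (old_guid, new_guid) below are my assumptions and are not confirmed against the real Evernote schema. A scratch in-memory database mimics the shape; the two GUIDs are the ones recovered in this writeup.

```shell
# Build a throwaway table shaped like the assumed guid_updates schema,
# then look up the original GUID for the note whose current GUID we know.
sqlite3 :memory: <<'SQL'
CREATE TABLE guid_updates (old_guid TEXT, new_guid TEXT);
INSERT INTO guid_updates VALUES
  ('7605cc68-8ef3-4274-b6c2-4a9d26acabf1',
   'c80ab339-7bec-4b33-8537-4f5a5bd3dd25');
SELECT old_guid FROM guid_updates
 WHERE new_guid = 'c80ab339-7bec-4b33-8537-4f5a5bd3dd25';
SQL
```

Against the real database, the same SELECT (with whatever the actual column names are) run in DB Browser for SQLite returns the original GUID.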


We can also confirm the contents of the note in the XML file with matching GUID filename:

$ cat data/data/com.evernote/files/user-213777210/notes/c80/c80ab339-7bec-4b33-8537-4f5a5bd3dd25/content.enml 
<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE en-note SYSTEM "http://xml.evernote.com/pub/enml2.dtd"><en-note><div>Esteemed entrenepeur,</div><div><br /></div><div>My name is Chestnut Russman and I am indeed interested in a sourie with you to discuss potential investment opportunities to your fine establishment.</div><div><br /></div><div>A little more about me:</div><ul><li><div>I'm worked on Wall Street for 10 years and have made my money and retired at age 30. </div></li><li><div>I have large investments in Disney, Uber, Tesla, Microsoft, and many others.</div></li><li><div>I am an inventory with over 25 worldwide patents</div></li><li><div>And I own several very "legal" establishments" that make me a plethora of money every day.</div></li></ul><div><br /></div><div>I believe that together, we can make even more money.</div><div><br /></div><div>Attached is my CV.</div><div><br /></div><div>Graciously</div><div><br /></div><div>Chestnut Russman</div><div><br /></div><div>[Insert malware here]</div><div><br /></div></en-note>

Answer: 7605cc68-8ef3-4274-b6c2-4a9d26acabf1

Fun fact: The question title "Animals That Never Forget" likely refers to the generalization that elephants have incredible memories, and is probably a hint for the Evernote app, whose icon is an elephant.
