Saturday 7 May 2022

Magnet Summit 2022 Virtual CTF - Windows

Magnet Forensics recently concluded their Virtual CTF for the Magnet Summit 2022. 

Participants were provided with the following three images to process prior to the start of the Capture-the-Flag (CTF) challenge, as well as a trial key for the newly launched AXIOM 6.

  1. Pixel image containing what appears to be a full file system extraction of a Pixel 3 running Android 9;
  2. HP Image containing a full disk image of a Windows 11 system; and
  3. Google Takeout image of the account used in the CTF, rafaelshell24@gmail.com.

The questions for the CTF are split into three sections, and the write-ups for each section are as follows:

  1. Windows
  2. Android
  3. Egg Hunt

 

Tuesday 3 May 2022

Magnet Summit 2022 Virtual CTF - Android

Magnet Forensics recently concluded their Virtual CTF for the Magnet Summit 2022. 

Participants were provided with the following three images to process prior to the start of the Capture-the-Flag (CTF) challenge, as well as a trial key for the newly launched AXIOM 6.

  1. Pixel image containing what appears to be a full file system extraction of a Pixel 3 running Android 9;
  2. HP Image containing a full disk image of a Windows 11 system; and
  3. Google Takeout image of the account used in the CTF, rafaelshell24@gmail.com.

The questions for the CTF are split into three sections, and the write-ups for each section are as follows:

  1. Windows
  2. Android
  3. Egg Hunt

Sunday 1 May 2022

Magnet Summit 2022 Virtual CTF - Egg Hunt

Magnet Forensics recently concluded their Virtual CTF for the Magnet Summit 2022.

Participants were provided with the following three images to process prior to the start of the Capture-the-Flag (CTF) challenge, as well as a trial key for the newly launched AXIOM 6.

  1. Pixel image containing what appears to be a full file system extraction of a Pixel 3 running Android 9;
  2. HP Image containing a full disk image of a Windows 11 system; and
  3. Google Takeout image of the account used in the CTF, rafaelshell24@gmail.com.

The questions for the CTF are split into three sections, and the write-ups for each section are as follows:

  1. Windows
  2. Android
  3. Egg Hunt

Wednesday 30 December 2020

Magnet Weekly CTF writeup - Week 12

It's the final week of the Magnet Weekly CTF Challenge. The past 12 weeks have been fun, learning Android, Linux, and Windows memory forensics along the way. Many thanks to the great team at Magnet Forensics and their guests for the weekly challenges. Without further ado, let's get to the final challenge for the year!


Challenge 12 (Dec. 21-28)
What is the PID of the application where you might learn "how hackers hack, and how to stop them"?

Format: ####
Warning: Only 1 attempt allowed!

Phew, the first part of the final challenge and we are only given a single attempt! While I did get a likely answer fairly quickly, attempting to verify the answer took much longer.

The first step was similar to part 6 of week 9's challenge, where I searched for the string "how hackers hack, and how to stop them" in the memory image using the strings command, followed by the strings plugin in Volatility to locate the corresponding process where the string was found.
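
For reference, the search went along these lines (a sketch; memdump.mem and hits.txt are placeholder names, and the second pass catches UTF-16 strings):

> strings --radix=d memdump.mem | grep "how hackers hack" > hits.txt
> strings --radix=d -el memdump.mem | grep "how hackers hack" >> hits.txt
> vol.py -f memdump.mem --profile=Win7SP1x64 strings -s hits.txt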

With the Volatility strings plugin, some entries fell in unallocated memory, leaving only one valid process, which the pslist plugin confirmed belonged to Internet Explorer. However, as we only had one attempt, I wanted to be sure of the owning process before submission. As the strings search showed, the results appeared to be part of an HTML page, so I dumped all the files for the Internet Explorer process using the dumpfiles plugin and confirmed the answer as part of the video search results in the cached 'search[1].htm' page.

Answer: 4480


Challenge 12 (Dec. 21-28) Part 2
What is the product version of the application from Part 1?

Format: XX.XX.XXXX.XXXXX

Using the procdump plugin and running the dumped executable through exiftool, we find the product version in the format requested.
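
Something along these lines does the trick (a sketch; the dump directory is a placeholder, and procdump names its output after the PID):

> vol.py -f memdump.mem --profile=Win7SP1x64 procdump -p 4480 -D dump
> exiftool dump/executable.4480.exe | grep -i "product version"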

Answer: 11.00.9600.18858


And that wraps up the Magnet Weekly CTF Challenge for 2020. Looking forward to more CTFs next year!

Tuesday 22 December 2020

Magnet Weekly CTF writeup - Week 11

It's been a fun last quarter of 2020 and we are now in the second-last week of the Magnet Weekly CTF Challenge. This is a short week with a two-part challenge, compared to the multi-part challenges of the last two weeks, so let's go!


Challenge 11 (Dec 14-21)
What is the IPv4 address that myaccount.google.com resolves to?

Considering that the question revolves around IP addresses and name resolution, my first step was to dump the available network packets from the memory image using the networkpackets plugin for Volatility, before analyzing the resultant pcap file with Wireshark.

> vol.py -f memdump.mem --profile=Win7SP1x64 networkpackets -D networkdump

Opening up the pcap file from networkpackets, I checked for resolved addresses under the Statistics menu but did not see any reference to myaccount.google.com. I then did a string search across the network packets and found the answer.
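
The string search itself can be as simple as the following sketch (assuming the plugin's pcap output lands in networkdump/; DNS names appear as plain ASCII labels inside the dumped packets):

> strings networkdump/*.pcap | grep -i myaccount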

Answer: 172.217.10.238


Challenge 11 (Dec 14-21) Part 2
What is the canonical name (cname) associated with Part 1?

Looking at the highlighted packet above, we can see the CNAME associated with myaccount.google.com.

Answer: www3.l.google.com


And that wraps up week 11!

Wednesday 16 December 2020

Magnet Weekly CTF writeup - Week 10

Whew, we are in the 10th week of the Magnet Weekly CTF Challenge. It's another lengthy memory forensics week, so let's get to it without further ado.

Challenge 10 (Dec 7-14)
At the time of the RAM collection (20-Apr-20 23:23:26 - imageinfo) there was an established connection to a Google Server.
What was the Remote IP address and port number? Format: "xxx.xxx.xx.xxx:xxx"

This would have been straightforward with the netscan Volatility plugin and grepping for established connections, but I could not resolve some of the IP addresses with nslookup. Thankfully a quick search on who.is for the 4 IP addresses with established connections confirmed the answer.

(Note: I redirected stderr to /dev/null to suppress warning messages for deprecated Python2 packages.)
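
For reference (a sketch):

> vol.py -f memdump.mem --profile=Win7SP1x64 netscan 2>/dev/null | grep ESTABLISHED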

Answer: 172.253.63.188:443


Challenge 10 (Dec 7-14) Part 2
What was the Local IP address and port number? Same format as part 1.

From the netscan output above, we have the answer.

Answer: 192.168.10.146:54282


Challenge 10 (Dec 7-14) Part 3
What was the URL?

This question had me pulling out my hair and I must admit, I only got past it on what I felt was a lucky guess. While trying to determine the process that owned the connection in parts 1 and 2, I came across a post by Axlotl Dunk Tank that describes the same symptom I was observing - that established TCP connections were showing up with a PID of -1. Fixing the tcpip_vtypes.py overlay for Win7 x64 from 0x238 to 0x248 per the post gave me the corrected PID of 3604.
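
The change itself is a one-line offset fix in Volatility's Win7 x64 _TCP_ENDPOINT definition (a sketch based on the referenced post; the exact surrounding lines may differ between Volatility versions):

# In volatility/plugins/overlays/windows/tcpip_vtypes.py, under the Win7 x64
# _TCP_ENDPOINT structure, bump the Owner offset from 0x238 to 0x248:
'Owner' : [ 0x248, ['pointer', ['_EPROCESS']]],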

Strangely enough, the Volatility chromehistory plugin did not manage to recover any history for me. Next, I tried pulling strings from a memory dump of the chrome.exe process but ended up with far too many URLs for a brute-force attempt. Feeling defeated, I had a look at the file handles for the Chrome process and noticed a handle to the Chrome Cookies database.

Dumping the Cookies database file and opening it in DB Browser for SQLite, we see a few likely candidates based on creation times. Making an educated guess of HTTPS on port 443, combined with the domain (host_key) and path from the Cookies table, I finally arrived at the correct answer after a few tries. (Thankfully there was no limit on the number of attempts for this challenge.)
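
The creation times can also be decoded on the command line; a sketch, assuming the cookies table schema Chrome used at the time (creation_utc is microseconds since 1601-01-01):

> sqlite3 Cookies "SELECT host_key, path, datetime(creation_utc/1000000 - 11644473600, 'unixepoch') FROM cookies ORDER BY creation_utc DESC;"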

Answer: https://www.google.com/


Challenge 10 (Dec 7-14) Part 4
What user was responsible for this activity based on the profile?

This was easy after the hair-pulling previous question, as we've already seen the username from the Cookies path.

Answer: Warren


Challenge 10 (Dec 7-14) Part 5
How long was this user looking at this browser with this version of Chrome? Format: X:XX:XX.XXXXX (Hint: down to the last second)

Hint: Solving this challenge takes FOCUS & time

If I thought part 3 was hard, part 5 almost had me losing my mind. The phrasing of the question had me thinking this might be related to the System Resource Usage Monitor (SRUM) database, but that feature wasn't introduced until Windows 8 while this was a Windows 7 image. I then tried a few guesses by manually calculating the duration based on the Chrome process's start time (taken from the pslist output) versus the image time, but the answer format had me realizing this was not the correct answer or method.

The lightbulb moment came when discussing the question with a mentor and colleague, with the help of a hint for 5 points and Magnet AXIOM. We then learnt the little-known fact that the UserAssist registry key tracks not just the run count of applications, but also the focus time each application had. Knowing that focus time is tracked in the UserAssist key, we can then use Volatility's userassist plugin to arrive at the same answer.
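
A sketch (the grep context count may need adjusting to capture the "Time Focused" field in the output):

> vol.py -f memdump.mem --profile=Win7SP1x64 userassist 2>/dev/null | grep -i -A5 chrome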

Answer: 3:36:47.30100
Note that the focus time from Volatility has six decimal places, but the question requested the time accurate to five decimal places.

Wednesday 9 December 2020

Magnet Weekly CTF writeup - Week 9

This week marks the start of the final image for the Magnet Weekly CTF Challenge - memory forensics. For those who have yet to download the image, you can get it from here.

This week's challenge is a lengthy one that is split over 7 parts.


Challenge 9 (Nov 30 - Dec 7) Part 1
The user had a conversation with themselves about changing their password. What was the password they were contemplating changing to? Provide the answer as a text string.

The first step we have to take is to determine the profile to use for processing the image. Using Volatility's imageinfo plugin, I decided to go with the first suggested profile: Win7SP1x64. I then ran the pstree plugin for a quick look at the likely processes we should investigate further. (Note: the following screenshot of processes was taken from MemProcFS, but the same process list is obtainable from Volatility's pstree plugin.)
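
For reference (a sketch, with the image name as a placeholder):

> vol.py -f memdump.mem imageinfo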

From the output, I felt that the answer was likely hidden in Slack or Microsoft Word (WINWORD.EXE). Since the question mentioned that the user was talking to themselves, I decided to investigate the Word process first.

Dumping the files for the WINWORD.EXE process using the dumpfiles plugin, with the -p and -n options to restrict output to WINWORD.EXE's process ID and include the extracted filename, I did a grep search for the keyword "password" and was pleasantly surprised to find a likely answer candidate in the form of an AutoRecovery save file.
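
The commands went roughly like this (a sketch; substitute WINWORD.EXE's actual PID, and the output directory is a placeholder):

> vol.py -f memdump.mem --profile=Win7SP1x64 dumpfiles -p <WINWORD_PID> -n -D worddump
> grep -ril "password" worddump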

Viewing the hexdump of the AutoRecovery save file and searching for the keyword "password" as above, we found our answer to the first part of the challenge.

Answer: wow_this_is_an_uncrackable_password


Challenge 9 (Nov 30 - Dec 7) Part 2
What is the md5 hash of the file which you recovered the password from?

This is thankfully straightforward as we just needed to use md5sum to calculate the hash.

Answer: af1c3038dca8c7387e47226b88ea6e23


Challenge 9 (Nov 30 - Dec 7) Part 3
What is the birth object ID for the file which contained the password?

This question is solved using Volatility's mftparser plugin. A quick grep search through the plugin output for the AutoRecovery save file gives us the answer.
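
Roughly (a sketch; the grep context count may need tweaking so the $OBJECT_ID attribute that follows the filename shows up):

> vol.py -f memdump.mem --profile=Win7SP1x64 mftparser 2>/dev/null > mft.txt
> grep -i -A20 "AutoRecovery" mft.txt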

Answer: 31013058-7f31-01c8-6b08-210191061101


Challenge 9 (Nov 30 - Dec 7) Part 4
What is the name of the user and their unique identifier which you can attribute the creation of the file document to?
Format: #### (Name)

From the previous screenshot of the MFT entry, the file is found in Warren's AppData folder. Using Volatility's getsids plugin for the Microsoft Word process, we can get his RID.
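
A sketch (substitute WINWORD.EXE's actual PID; the RID is the final component of the user's SID):

> vol.py -f memdump.mem --profile=Win7SP1x64 getsids -p <WINWORD_PID>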

Answer: 1000 (Warren)


Challenge 9 (Nov 30 - Dec 7) Part 5
What is the version of software used to create the file containing the password?
Format ## (Whole version number, don't worry about decimals)

For this question, I dumped the binary for the WINWORD.EXE process via the procdump plugin and checked the executable's version information using exiftool.

Answer: 15


Challenge 9 (Nov 30 - Dec 7) Part 6
What is the virtual memory address offset where the password string is located in the memory image?
Format: 0x########

Referencing this post from Context Information Security, I decided to use the strings plugin for Volatility, which maps physical offsets to virtual addresses. Note that the strings plugin requires the physical offsets to be in decimal. (I submitted a wrong answer on the first try as I had the offsets in octal.)

Using the strings command with the --radix=d option to ensure my output was in decimal, I grepped for the password to find its physical offset in the image. That information is then fed to Volatility's strings plugin to determine its virtual address.

Answer: 0x02180a2d


Challenge 9 (Nov 30 - Dec 7) Part 7
What is the physical memory address offset where the password string is located in the memory image?
Format: 0x#######

It turns out I already had the answer from the steps in part 6, just not in the required format. A simple bash printf converted the decimal offset to hexadecimal.
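
With the decimal physical offset recovered in part 6:

> printf '0x%08x\n' 183577133    # -> 0x0af12a2d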

Answer: 0x0af12a2d

Tuesday 1 December 2020

Magnet Weekly CTF writeup - Week 8

We are on the final question for Linux this week in the Magnet Weekly CTF Challenge. Next month's challenge will be memory analysis based so go ahead and download the memory image here. Now on to the solves!

Part 1
What package(s) were installed by the threat actor? Select the most correct answer!

This question was a little hard initially due to the term 'threat actor'. I was trying to see what a potentially malicious actor would do on the box, and kept rummaging through the .bash_history logs for the hadoop and root accounts on all three boxes but could not find anything damning. There was a little ELF file in the hadoop user's home directory that appeared malicious in nature, but I could not figure out how it got there, or which installed package was responsible for it.

Eventually I decided to try the installed packages from /var/log/dpkg.log that didn't look like they belonged in a standard Hadoop setup, and got lucky.
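
A sketch, with the image mounted read-only at a placeholder path:

# grep " install " /mnt/image/var/log/dpkg.log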

Answer: PHP


Part 2
Why?
  • hosting a database
  • serving a webpage
  • to run a php webshell
  • create a fake systemd service

We were only given two attempts for this question, so it was important to make them count. I had discounted the first two options as it didn't seem likely that a threat actor would install PHP just to host a database or serve a webpage, plus I did not see any indications of database packages being installed.

Looking around in the /etc/systemd/system directory, I noted that the cluster.service starts PHP and had a look at the PHP file referenced within.

Seeing keywords such as socket_bind and shell_exec, I immediately jumped on the third option, 'to run a php webshell', but it turned out to be wrong. I then backtracked a step and tried the fourth option, and thankfully that turned out to be the right answer.

Answer: create a fake systemd service

Tuesday 24 November 2020

Magnet Weekly CTF writeup - Week 7

This week was a short and easy week as promised, so let's jump right in to the questions!

This week's challenge is split into three parts, but the answers can all be obtained from the networking file /etc/network/interfaces.
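
For context, the relevant stanzas looked along these lines (a reconstruction consistent with the answers below; the netmask value is a placeholder):

auto ens33
iface ens33 inet static
    address 192.168.2.100
    netmask 255.255.255.0

auto ens36
iface ens36 inet dhcp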


Part 1
What is the IP address of the HDFS primary node?

Taking 'HDFS primary node' to refer to the HDFS master node, we zoom in on the /etc/network/interfaces file and see its IP.

Answer: 192.168.2.100


Part 2
Is the IP address on HDFS-Primary dynamically or statically assigned?

Referencing the same network file, we see that the interface is configured with a static IP address.

Answer: statically


Part 3
What is the interface name for the primary HDFS node?

Within the same network interface file, we can see the interface name for the primary network interface.

Answer: ens33

Note: There is another interface, ens36, present in the same file, which is configured for DHCP. Ideally one should check the logs to see which of the network interfaces was/were active and their respective IP addresses, but for this week's challenge I simply went with the first likely answer and got lucky.

Tuesday 17 November 2020

Magnet Weekly CTF writeup - Week 6

We are in the second week of the Linux Hadoop images by Ali Hadi and Magnet has also kindly posted a recap for October's challenges. On to this week's challenge question!

This week's challenge is split into two parts. The first question:

Challenge 6 (Nov. 9-16) The Elephant in the Room
Part One: Hadoop is a complex framework from Apache used to perform distributed processing of large data sets. Like most frameworks, it relies on many dependencies to run smoothly. Fortunately, it's designed to install all of these dependencies automatically. On the secondary nodes (not the MAIN node) your colleague recollects seeing one particular dependency failed to install correctly. Your task is to find the specific error code that led to this failed dependency installation.

This question had me looking in the /usr/local/hadoop directories for program logs initially, but there was nothing noteworthy other than some messages relating to failure to connect to the master server. Reading the Hadoop documentation also did not turn up any information on automatic installation of dependencies or where such logs would be kept. So I did the next best thing and retraced what the user did on the box.

Checking the /home/hadoop/.bash_history logs to see what I could find, I noted that the user attempted to install Java via apt-get but did not see any command related to automatic installations. My next best guess was the /var/log/ directory, and that is where I found what we were seeking.

In the /var/log/apt/history.log screenshot above, we see some failures for the installation of Java at 2017-11-08 01:17:04. It so happens that Java is required software for Hadoop (i.e. a dependency), which leads me to think I am close.

Checking the corresponding /var/log/apt/term.log around the time in question per the above image, we see that the Java installation failure was due to an HTTP 404: Not Found error. So I tried the answer "404" and bingo!
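
A sketch, with the slave image mounted at a placeholder path:

# grep -n -B5 "404" /mnt/slave1/var/log/apt/term.log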

Answer (part 1): 404


Challenge 6 Part 2 (Nov. 9-16) The Elephant in the Room
Don't panic about the failed dependency installation. A very closely related dependency was installed successfully at some point, which should do the trick. Where did it land? In that folder, compared to its binary neighbors nearby, this particular file seems rather an ELFant. Using the error code from your first task, search for symbols beginning with the same number (HINT: leading 0's don't count). There are three in particular whose name share a common word between them. What is the word?

I understood the 'ELFant' reference was referring to the size of the file and that this was related to whichever version of Java the user managed to successfully install. A quick search for Java on the system pointed me to the /usr/local/jdk1.8.0_151 directory.

Searching for the largest 'binary' file (I took it to mean 'non-text' initially) had me poking in the src.zip file for symbols beginning with 404. Finding nothing that could be the answer, I re-read the question and it finally clicked that the 'ELFant' reference was referring to both the size of the file and the type of file (an ELF file)! Nice wordplay by Magnet Forensics there. With that, it was straightforward to locate the largest ELF file (unpack200) and search for the appropriate symbols using the readelf command with the -s option to display the symbol table (ignoring leading zeroes per the hint).
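
The search went roughly as follows (a sketch with placeholder mount paths; the second column of readelf -s output is the symbol value, matched against 404 after stripping leading zeroes):

# ls -lS /mnt/slave1/usr/local/jdk1.8.0_151/bin | head
# readelf -s /mnt/slave1/usr/local/jdk1.8.0_151/bin/unpack200 | awk '$2 ~ /^0*404/'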

Answer: deflate

Note: I initially solved the challenge by working on the HDFS-Slave2.E01 image, but the write-up was done by retracing my steps on the HDFS-Slave1.E01 image. So it doesn't matter which of the two slave nodes one works with; the answer is the same for both.

Tuesday 10 November 2020

Magnet Weekly CTF writeup - Week 5

We are switching over to a Linux image this month, and the image used is Ali Hadi's Linux image from an HDFS cluster.

This week's question:

Had-A-Loop Around the Block
What is the original filename for block 1073741825?

Well well, once again I was stumped at the start of the question. Where do we start looking? There were a total of three sets of E01 images provided and, unsure of what to expect, I loaded each of them up in FTK Imager and satisfied myself that all three images were of Linux systems. (They almost looked like clones of each other too!) But with three system images, where do I start looking? Taking a hint from the image names, I decided to research what HDFS is.

The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. From the linked HDFS architecture guide, HDFS has a master/slave architecture with a single NameNode (master server) which manages the file system namespace, together with a number of DataNodes that manage the storage. In particular, I noted the following regarding the persistence of file system metadata on an HDFS cluster:

The entire file system namespace, including the mapping of blocks to files and file system properties, is stored in a file called the FsImage. The FsImage is stored as a file in the NameNode’s local file system.

Could this FsImage file contain the secrets we are looking for? First we had to try and locate the file on our NameNode, so I mounted the HDFS-Master.E01 image at /mnt/hdfs to commence the search. Note also that this image appeared to have a dirty log and required the "norecovery" option to be mounted.
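
The mount went roughly like this (a sketch: ewfmount exposes the raw image from the E01 container, and the partition offset shown is the typical 2048-sector start, an assumption):

# ewfmount HDFS-Master.E01 /mnt/ewf
# mount -o ro,loop,norecovery,offset=$((2048*512)) /mnt/ewf/ewf1 /mnt/hdfs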

First, I tried searching for the FsImage file, as well as the EditLog. A case-insensitive regex search was used for the find command as my initial searches did not turn up anything, and the output was piped to grep to filter out the Hadoop HTML documentation files.

# find /mnt/hdfs -iregex ".*FsImage.*" -print | grep -v ".html"

Ignoring the .md5 files and those in the /tmp/ directory for the time being, I focused my search on the three fsimage files found in the /usr/local/hadoop and /opt/hadoop directories and peeked at their contents.

It quickly became apparent that some help was needed to decode the contents of the file, and I thankfully chanced upon this answer by Jing Wang on Stack Overflow that pointed me to the HDFS Offline Image Viewer utility. I downloaded and unpacked the Hadoop 2.x release and queried the fsimage files. (Note that the HDFS utilities require the JAVA_HOME variable to be configured.)
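
For example (the JDK path here is a placeholder):

# export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64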

# /opt/hadoop/bin/hdfs oiv -p XML -i /mnt/hdfs/opt/hadoop/hadoop/dfs/name/current/fsimage_0000000000000000000 -o fsimage_00.xml
# /opt/hadoop/bin/hdfs oiv -p XML -i /mnt/hdfs/usr/local/hadoop/hadoop2_data/hdfs/namenode/current/fsimage_0000000000000000024 -o fsimage_24.xml
# /opt/hadoop/bin/hdfs oiv -p XML -i /mnt/hdfs/usr/local/hadoop/hadoop2_data/hdfs/namenode/current/fsimage_0000000000000000026 -o fsimage_26.xml

Looking through the resultant XML files, I found the name of the file occupying block 1073741825 present in both fsimage_0000000000000000024 and fsimage_0000000000000000026.
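
A sketch of the search (the context-line count may need adjusting so the inode name shows up alongside the block ID):

# grep -B8 "1073741825" fsimage_24.xml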

Answer: AptSource



Update 24 Nov 2020: From the answer reveal by Magnet, it appears that the HDFS EditLogs were named edits_* and can be parsed by Hadoop's oev tool. No wonder I couldn't find them previously.
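
For completeness, those edit logs can be parsed the same way (a sketch; the edits_* filename is a placeholder):

# /opt/hadoop/bin/hdfs oev -p XML -i edits_0000000000000000001-0000000000000000024 -o edits.xml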
