Wednesday, 30 December 2020

Magnet Weekly CTF writeup - Week 12

It's the final week of the Magnet Weekly CTF Challenge. The past 12 weeks have been fun, learning Android, Linux, and Windows memory forensics along the way. Many thanks to the great team at Magnet Forensics and their guests for the weekly challenges. Without further ado, let's get to the final challenge for the year!


Challenge 12 (Dec. 21-28)
What is the PID of the application where you might learn "how hackers hack, and how to stop them"?

Format: ####
Warning: Only 1 attempt allowed!

Phew, the first part of the final challenge and we are only given a single attempt! While I did get a likely answer fairly quickly, attempting to verify the answer took much longer.

The first step was similar to part 6 of week 9's challenge, where I searched for the string "how hackers hack, and how to stop them" in the memory image using the strings command, followed by the strings plugin in Volatility to locate the corresponding process where the string was found.

With the Volatility strings plugin, some of the hits fell in unallocated memory, leaving only one valid process, which the pslist plugin confirmed to belong to Internet Explorer. However, as we only had one attempt, I wanted to be sure of the owning process before submission. As the strings search showed, the results appeared to be part of an HTML page, so I dumped all the files for the Internet Explorer process using the dumpfiles plugin and confirmed the answer as part of the video search results in the cached 'search[1].htm' page.
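A sketch of that verification flow, using the same image and profile as the rest of this month's challenges (directory names are mine):

$ vol.py -f memdump.mem --profile=Win7SP1x64 pslist 2>/dev/null | grep -i iexplore
$ vol.py -f memdump.mem --profile=Win7SP1x64 dumpfiles -p 4480 -n -D ie_files 2>/dev/null
$ grep -rli "how hackers hack" ie_files/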

Answer: 4480


Challenge 12 (Dec. 21-28) Part 2
What is the product version of the application from Part 1?

Format: XX.XX.XXXX.XXXXX

Using the procdump plugin and running the dumped executable through exiftool, we find the product version in the requested format.
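A minimal sketch of those two steps (Volatility 2's procdump names its output executable.<PID>.exe; the dump directory is mine):

$ vol.py -f memdump.mem --profile=Win7SP1x64 procdump -p 4480 -D dump_exe 2>/dev/null
$ exiftool dump_exe/executable.4480.exe | grep -i "product version"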

Answer: 11.00.9600.18858


And that wraps up the Magnet Weekly CTF Challenge for 2020. Looking forward to more CTFs next year!

Tuesday, 22 December 2020

Magnet Weekly CTF writeup - Week 11

It's been a fun last quarter of 2020 and we are now in the second-last week of the Magnet Weekly CTF Challenge. This is a short week with a two-part challenge, compared to the multi-part challenges of the last two weeks, so let's go!


Challenge 11 (Dec 14-21)
What is the IPv4 address that myaccount.google.com resolves to?

Considering that the question revolves around IP addresses and name resolution, my first step was to dump the available network packets from the memory image using the networkpackets plugin for Volatility, then analyze the resulting pcap file with Wireshark.

> vol.py -f memdump.mem --profile=Win7SP1x64 networkpackets -D networkdump

Opening up the pcap file from networkpackets, I checked for resolved addresses under the Statistics menu but did not see any reference to myaccount.google.com. I then did a string search across the network packets and found the answer.
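For a command-line alternative to Wireshark, tshark can pull the DNS answers directly. A sketch, assuming networkpackets wrote a pcap named packets.pcap into the output directory:

$ tshark -r networkdump/packets.pcap -Y 'dns.qry.name contains "myaccount"' -T fields -e dns.qry.name -e dns.cname -e dns.a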

Answer: 172.217.10.238


Challenge 11 (Dec 14-21) Part 2
What is the canonical name (cname) associated with Part 1?

Looking at the highlighted packet above, we can see the CNAME associated with myaccount.google.com.

Answer: www3.l.google.com


And that wraps up week 11!

Wednesday, 16 December 2020

Magnet Weekly CTF writeup - Week 10

Whew, we are in the 10th week of the Magnet weekly CTF challenge. It's another lengthy memory forensics week, so let's get to it without further ado.

Challenge 10 ( Dec 7 - 14 )
At the time of the RAM collection (20-Apr-20 23:23:26- Imageinfo) there was an established connection to a Google Server.
What was the Remote IP address and port number? format: "xxx.xxx.xx.xxx:xxx"

This would have been straightforward with the netscan Volatility plugin and grepping for established connections, but I could not resolve some of the IP addresses with nslookup. Thankfully, a quick search on who.is for the four IP addresses with established connections confirmed the answer.

(Note: I redirected stderr to /dev/null to suppress warning messages for deprecated Python2 packages.)
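The invocation boiled down to something like:

$ vol.py -f memdump.mem --profile=Win7SP1x64 netscan 2>/dev/null | grep ESTABLISHED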

Answer: 172.253.63.188:443


Challenge 10 ( Dec 7 - 14 ) Part 2
What was the Local IP address and port number? same format as part 1

From the netscan output above, we have the answer.

Answer: 192.168.10.146:54282


Challenge 10 ( Dec 7 - 14 ) Part 3
What was the URL?

This question had me pulling out my hair and, I must admit, I only got past it on what felt like a lucky guess. While trying to determine the process that owned the connection in parts 1 and 2, I came across a post by Axlotl Dunk Tank that describes the same symptom I was observing: established TCP connections showing up with a PID of -1. Fixing the tcpip_vtypes.py overlay for Win7x64 from 0x238 to 0x248 per the post gave me the corrected PID of 3604.

Strangely enough, the Volatility chromehistory plugin did not manage to recover any history for me. Next, I tried pulling strings from a memdump of the chrome.exe process but ended up with far too many URLs for a brute-force attempt. Feeling defeated, I had a look at the file handles for the Chrome process and noticed a handle to the Chrome Cookies database.
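The handles check looked roughly like this, using the corrected PID from above:

$ vol.py -f memdump.mem --profile=Win7SP1x64 handles -p 3604 -t File 2>/dev/null | grep -i cookies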

Dumping the Cookies database file and opening it in DB Browser for SQLite, we see a few likely candidates based on creation times. Making an educated guess of HTTPS on port 443, combined with the domain (host_key) and path from the cookies table, I finally arrived at the correct answer after a few tries. (Thankfully there was no limit on the number of attempts for this challenge.)
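For those who prefer the sqlite3 CLI over DB Browser, a sketch of an equivalent query (Chrome stores creation_utc as microseconds since 1601-01-01, hence the conversion; the dumped file is assumed to be named Cookies):

$ sqlite3 Cookies "SELECT host_key, path, datetime(creation_utc/1000000 - 11644473600, 'unixepoch') FROM cookies ORDER BY creation_utc DESC LIMIT 10;"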

Answer: https://www.google.com/


Challenge 10 ( Dec 7 - 14 ) Part 4
What user was responsible for this activity based on the profile?

This was easy after the hair-pulling previous question, as we had already seen the username in the Cookies path.

Answer: Warren


Challenge 10 ( Dec 7 - 14 ) Part 5
How long was this user looking at this browser with this version of Chrome?
Format: X:XX:XX.XXXXX (Hint: down to the last second)

Hint: Solving this challenge takes FOCUS & time

If I thought part 3 was hard, part 5 almost had me losing my mind. The phrasing of the question had me thinking this might be related to the System Resource Usage Monitor (SRUM) database, but that feature wasn't introduced until Windows 8 and this was a Windows 7 image. I then tried a few guesses by manually calculating the duration from the Chrome process's start time (taken from the pslist output) to the image time, but the answer format made me realize this was not the correct answer or method.

The lightbulb moment came when discussing the question with a mentor and colleague, with the help of a hint for 5 points and Magnet AXIOM. We then learnt the little-known fact that the UserAssist registry key tracks not just the run count of applications, but also the focus time each application had. Knowing that focus time is tracked in the UserAssist key, we can then use Volatility's userassist plugin to arrive at the same answer.
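A sketch of the plugin run; on Windows 7 the userassist output includes a Time Focused field for each entry:

$ vol.py -f memdump.mem --profile=Win7SP1x64 userassist 2>/dev/null | grep -i -A 5 chrome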

Answer: 3:36:47.30100
Note that the focus time from Volatility has 6 decimal places, but the question requested the time accurate to 5 decimal places.

Wednesday, 9 December 2020

Magnet Weekly CTF writeup - Week 9

This week marks the start of the final image for the Magnet weekly CTF challenge: memory forensics. For those who have yet to download the image, you can get it from here.

This week's challenge is a lengthy one that is split over 7 parts.


Challenge 9 ( Nov 30 - Dec 7 ) Part 1
The user had a conversation with themselves about changing their password. What was the password they were contemplating changing to? Provide the answer as a text string.

The first step we have to take is to determine the profile to use for processing the image. Using Volatility's imageinfo plugin, I decided to go with the first suggested profile: Win7SP1x64. I then ran the pstree plugin for a quick look at the likely processes we should investigate further. (Note: the following screenshot of processes was taken from MemProcFS, but the same process list is obtainable from Volatility's pstree plugin.)
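For reference, the profile identification step was simply:

$ vol.py -f memdump.mem imageinfo 2>/dev/null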

From the output, I felt that the answer was likely hidden in Slack or Microsoft Word (WINWORD.EXE). Since the question mentioned that the user was talking to themselves, I decided to investigate the Word process first.

Dumping the files for the WINWORD.EXE process using the dumpfiles plugin, with the -p option to restrict output to WINWORD.EXE's process ID and -n to include the extracted filename in the output, I did a grep search for the keyword "password" and was pleasantly surprised to find a likely answer candidate in the form of an AutoRecovery save file.
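A sketch of that step (substitute the WINWORD.EXE PID from the pstree output; the dump directory name is mine):

$ vol.py -f memdump.mem --profile=Win7SP1x64 dumpfiles -p <WINWORD_PID> -n -D word_files 2>/dev/null
$ grep -rli "password" word_files/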

Viewing the hexdump of the AutoRecovery save file and searching for keyword "password" as above, we found our answer to the first part of the challenge.

Answer: wow_this_is_an_uncrackable_password


Challenge 9 ( Nov 30 - Dec 7 ) Part 2
What is the md5 hash of the file which you recovered the password from?

This was thankfully straightforward, as we just needed md5sum to calculate the hash.

Answer: af1c3038dca8c7387e47226b88ea6e23


Challenge 9 ( Nov 30 - Dec 7 ) Part 3
What is the birth object ID for the file which contained the password?

This question is solved using Volatility's mftparser plugin. A quick grep through the plugin output for the AutoRecovery save file gives us the answer.
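Roughly, saving the full output first since mftparser produces a lot of it:

$ vol.py -f memdump.mem --profile=Win7SP1x64 mftparser 2>/dev/null > mft.txt
$ grep -i -A 20 "autorecovery" mft.txt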

Answer: 31013058-7f31-01c8-6b08-210191061101


Challenge 9 ( Nov 30 - Dec 7 ) Part 4
What is the name of the user and their unique identifier which you can attribute the creation of the file document to?
Format: #### (Name)

From the previous screenshot of the MFT entry, the file is found in Warren's AppData folder. Using Volatility's getsids plugin on the Microsoft Word process, we can get his RID.
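A sketch, with the same WINWORD.EXE PID placeholder as before (the RID is the last component of the user's SID):

$ vol.py -f memdump.mem --profile=Win7SP1x64 getsids -p <WINWORD_PID> 2>/dev/null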

Answer: 1000 (Warren)


Challenge 9 ( Nov 30 - Dec 7 ) Part 5
What is the version of software used to create the file containing the password?
Format ## (Whole version number, don't worry about decimals)

For this question, I dumped the binary for the WINWORD.EXE process via the procdump plugin and checked the executable's version information using exiftool.

Answer: 15


Challenge 9 ( Nov 30 - Dec 7 ) Part 6
What is the virtual memory address offset where the password string is located in the memory image?
Format: 0x########

Referencing this post from Context Information Security, I decided to use the strings plugin for Volatility, which maps physical offsets to virtual addresses. Note that the strings plugin requires the physical offsets to be in decimal. (I submitted a wrong answer on the first try as I had the offsets in octal.)

Using the strings command with the --radix=d option to ensure the output was in decimal, I grepped for the password to find its physical offset in the image. That information is then fed to Volatility's strings plugin to determine its virtual address.
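The pipeline looked roughly like this (hits.txt is my name for the intermediate file; the strings plugin consumes the decimal-offset lines that strings --radix=d emits):

$ strings --radix=d memdump.mem | grep "wow_this_is_an_uncrackable_password" > hits.txt
$ vol.py -f memdump.mem --profile=Win7SP1x64 strings -s hits.txt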

Answer: 0x02180a2d


Challenge 9 ( Nov 30 - Dec 7 ) Part 7
What is the physical memory address offset where the password string is located in the memory image?
Format: 0x#######

It turns out I already had the answer from the steps in part 6, just not in the required format. A simple bash printf converts the decimal offset to hexadecimal.
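With the decimal offset from part 6 (183577133, the decimal form of 0x0af12a2d):

$ printf '0x%08x\n' 183577133
0x0af12a2d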

Answer: 0x0af12a2d

Tuesday, 1 December 2020

Magnet Weekly CTF writeup - Week 8

We are on the final Linux question in the Magnet Weekly CTF Challenge this week. Next month's challenges will be memory-analysis based, so go ahead and download the memory image here. Now on to the solves!

Part 1
What package(s) were installed by the threat actor? Select the most correct answer!

This question was a little hard initially because of the phrase 'threat actor'. I was trying to see what a potentially malicious actor would do on the box, and kept rummaging through the .bash_history logs for the hadoop and root accounts on all three boxes but could not find anything damning. There was a little ELF file in the hadoop user's home directory that appeared malicious in nature, but I could not figure out how it got there or which installed package was responsible for it.

Eventually I decided to look through the packages installed per /var/log/dpkg.log for anything that didn't belong in a standard Hadoop setup, and got lucky.
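A sketch of that search, assuming the image is mounted at /mnt/image:

$ grep " install " /mnt/image/var/log/dpkg.log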

Answer: PHP


Part 2
Why?
  • hosting a database
  • serving a webpage
  • to run a php webshell
  • create a fake systemd service

We were only given two attempts for this question, so it was important to make them count. I discounted the first two options as it seemed unlikely that a threat actor would install PHP just to host a database or serve a webpage, and I did not see any indications of database packages being installed.

Looking around in the /etc/systemd/system directory, I noted that the cluster.service starts PHP and had a look at the PHP file referenced within.
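The check itself was simple (again assuming a mount at /mnt/image):

$ cat /mnt/image/etc/systemd/system/cluster.service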

Seeing keywords such as socket_bind and shell_exec, I immediately jumped on the third option, 'to run a php webshell', but it turned out to be wrong. I then backtracked a step and tried the fourth option, which thankfully turned out to be the right answer.

Answer: create a fake systemd service

Tuesday, 24 November 2020

Magnet Weekly CTF writeup - Week 7

This week was a short and easy one as promised, so let's jump right into the questions!

This week's challenge is split into three parts, but the answers can all be obtained from the networking file /etc/network/interfaces.
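For reference, the relevant stanza looks something like the following, reconstructed from the answers below (the netmask line is an assumption):

# /etc/network/interfaces (excerpt)
auto ens33
iface ens33 inet static
    address 192.168.2.100
    netmask 255.255.255.0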


Part 1
What is the IP address of the HDFS primary node?

Taking 'HDFS primary node' to refer to the HDFS master node, we zoom in on the /etc/network/interfaces file and see its IP.

Answer: 192.168.2.100


Part 2
Is the IP address on HDFS-Primary dynamically or statically assigned?

Referencing the same network file, we see that the interface is configured with a static IP address.

Answer: statically


Part 3
What is the interface name for the primary HDFS node?

Within the same network interface file, we can see the interface name for the primary network interface.

Answer: ens33

Note: There is another interface, ens36, present in the same file, which is configured for DHCP. Ideally one should check the logs to see which of the network interfaces were active and their respective IP addresses, but for this week's challenge I simply went with the first likely answer and got lucky.

Tuesday, 17 November 2020

Magnet Weekly CTF writeup - Week 6

We are in the second week of the Linux Hadoop images by Ali Hadi and Magnet has also kindly posted a recap for October's challenges. On to this week's challenge question!

This week's challenge is split into two parts. The first question:

Challenge 6 (Nov. 9-16) The Elephant in the Room
Part One: Hadoop is a complex framework from Apache used to perform distributed processing of large data sets. Like most frameworks, it relies on many dependencies to run smoothly. Fortunately, it's designed to install all of these dependencies automatically. On the secondary nodes (not the MAIN node) your colleague recollects seeing one particular dependency failed to install correctly. Your task is to find the specific error code that led to this failed dependency installation.

This question initially had me looking in the /usr/local/hadoop directories for program logs, but there was nothing noteworthy other than some messages relating to failures to connect to the master server. Reading the Hadoop documentation also did not turn up any information on automatic installation of dependencies or where such logs would be kept. So I did the next best thing I could and retraced what the user did on the box.

Checking the /home/hadoop/.bash_history logs to see what I could find, I noted that the user attempted to install Java via apt-get, but saw no command related to automatic installations. My next best guess was the /var/log/ directory, and that is where I found what we were seeking.

In the /var/log/apt/history.log screenshot above, we see some failures in the installation of Java at 2017-11-08 01:17:04. It so happens that Java is required software for Hadoop (i.e. a dependency), which led me to think I was close.

Checking the corresponding /var/log/apt/term.log around the time in question, per the above image, we see that the Java installation failed due to an HTTP 404 Not Found error. So I tried the answer "404" and bingo!
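A sketch of that check (mount point is mine):

$ grep -n -i "not found" /mnt/image/var/log/apt/term.log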

Answer (part 1): 404


Challenge 6 Part 2 (Nov. 9-16) The Elephant in the Room
Don't panic about the failed dependency installation. A very closely related dependency was installed successfully at some point, which should do the trick. Where did it land? In that folder, compared to its binary neighbors nearby, this particular file seems rather an ELFant. Using the error code from your first task, search for symbols beginning with the same number (HINT: leading 0's don't count). There are three in particular whose name share a common word between them. What is the word?

I understood the 'ELFant' reference to allude to the size of the file, and that this was related to whichever version of Java the user managed to successfully install. A quick search for Java on the system pointed me to the /usr/local/jdk1.8.0_151 directory.

Searching for the largest 'binary' file (I initially took it to mean 'non-text') had me poking in the src.zip file for symbols beginning with 404. Finding nothing that could be the answer, I re-read the question and it finally clicked that the 'ELFant' reference covered both the size of the file and its type (an ELF file)! Nice wordplay by Magnet Forensics there. With that, it was straightforward to locate the largest ELF file (unpack200) and search for the appropriate symbols using the readelf command with the -s option to display the symbol table (ignoring leading zeroes per the hint).
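The final steps looked roughly like this, assuming the slave image is mounted at /mnt/image; the awk filter matches symbol values beginning with 404 after leading zeroes:

$ ls -lS /mnt/image/usr/local/jdk1.8.0_151/bin | head -5
$ readelf -sW /mnt/image/usr/local/jdk1.8.0_151/bin/unpack200 | awk '$2 ~ /^0*404/'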

Answer: deflate

Note: I initially solved the challenge on the HDFS-Slave2.E01 image, but this write-up retraces my steps on the HDFS-Slave1.E01 image. It doesn't matter which of the two slave nodes one works with; the answer is the same for both.

Tuesday, 10 November 2020

Magnet Weekly CTF writeup - Week 5

We are switching over to a Linux image this month: the image used is Ali Hadi's Linux image from an HDFS cluster.

This week's question:

Had-A-Loop Around the Block
What is the original filename for block 1073741825?

Well well, once again I was stumped at the start of the question. Where do we start looking? There were a total of three sets of E01 images provided and, unsure of what to expect, I loaded each of them up in FTK Imager and satisfied myself that all three were images of Linux systems. (They almost looked like clones of each other too!) But with three system images, where do I start looking? Taking a hint from the image names, I decided to research what HDFS is.

The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. From the linked HDFS architecture guide, HDFS has a master/slave architecture with a single NameNode (master server) that manages the file system namespace, together with a number of DataNodes that manage the storage. In particular, I noted the following regarding the persistence of file system metadata on an HDFS cluster:

The entire file system namespace, including the mapping of blocks to files and file system properties, is stored in a file called the FsImage. The FsImage is stored as a file in the NameNode’s local file system.

Could this FsImage file contain the secrets we are looking for? First we had to locate the file on our NameNode, so I mounted the HDFS-Master.E01 image at /mnt/hdfs to commence the search. Note that this image appeared to have a dirty log and required the "norecovery" option to be mounted.
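The mount itself was along these lines (a sketch; verify the partition offset with fdisk -l as described in my E01 mounting notes below):

# ewfmount HDFS-Master.E01 /mnt/e01
# mount -o ro,loop,norecovery,offset=1048576 /mnt/e01/ewf1 /mnt/hdfs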

First, I searched for the FsImage file, as well as the EditLog. A case-insensitive regex search was used with the find command, as my initial searches did not turn up anything, and the output was piped to grep to filter out the Hadoop HTML documentation files.

# find /mnt/hdfs -iregex ".*FsImage.*" -print | grep -v ".html"

Ignoring the .md5 files and those in the /tmp/ directory for the time being, I focused my search on the three fsimage files found in the /usr/local/hadoop and /opt/hadoop directories and peeked at their contents.

It was quickly apparent that some help was needed to decode the contents of the file, and I thankfully chanced upon this answer by Jing Wang on Stack Overflow that pointed me to the HDFS Offline Image Viewer utility. I downloaded and unpacked the Hadoop 2.x release and queried the fsimage files. (Note that the HDFS utilities require the JAVA_HOME variable to be configured.)

# /opt/hadoop/bin/hdfs oiv -p XML -i /mnt/hdfs/opt/hadoop/hadoop/dfs/name/current/fsimage_0000000000000000000 -o fsimage_00.xml
# /opt/hadoop/bin/hdfs oiv -p XML -i /mnt/hdfs/usr/local/hadoop/hadoop2_data/hdfs/namenode/current/fsimage_0000000000000000024 -o fsimage_24.xml
# /opt/hadoop/bin/hdfs oiv -p XML -i /mnt/hdfs/usr/local/hadoop/hadoop2_data/hdfs/namenode/current/fsimage_0000000000000000026 -o fsimage_26.xml

Looking through the resultant XML files, I found the name of the file occupying block 1073741825 present in both fsimage_0000000000000000024 and fsimage_0000000000000000026.
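One way to pull the name straight out of the XML with xmllint (a sketch, assuming the oiv output nests each inode's blocks under its inode element):

$ xmllint --xpath '//inode[.//block/id="1073741825"]/name/text()' fsimage_24.xml
AptSource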

Answer: AptSource



Update 24 Nov 2020: From the answer reveal by Magnet, it appears that the HDFS EditLogs were named edits_* and can be parsed by Hadoop's oev tool. No wonder I couldn't find them previously.

Sunday, 8 November 2020

Mounting E01 images in Linux

Some quick notes on mounting EWF images in Linux. The Expert Witness Format (EWF) is commonly used by EnCase and other forensic tools. This format divides the physical bitstream data of the disk into chunks interlaced with CRCs for each chunk. The first chunk is created with the file extension 'E01', with subsequent chunks in running sequential order (e.g. 'E02', 'E03', etc.).

The following commands were tested on an Ubuntu 20.04 LTS system with the ewf-tools package installed.

Print EWF image information

# ewfinfo image.e01

Mount EWF container and check disk layout

# ewfmount image.E01 /mnt/e01/
# fdisk -l /mnt/e01/ewf1

Note the sector size as well as the starting sector of the partition to be mounted. In the example image above, the 78GB Linux partition is at an offset of 2048*512 = 1,048,576 bytes.

Attach disk image file to loop device (Optional)

# losetup --show -f /mnt/e01/ewf1


Mount image disk partition

# mount -o ro,loop,offset=<offset> <loop-device/disk-image> <mount-point>

In our example, the command we will use is:

# mount -o ro,loop,offset=1048576 /dev/loop0 /mnt/partition1

Or:

# mount -o ro,loop,offset=1048576 /mnt/e01/ewf1 /mnt/partition1

Occasionally one may get an error saying "cannot mount block device /dev/loop read-only" because the filesystem has a dirty log that needs to be replayed, but the read-only option prevents that. In this situation, add the 'norecovery' option to overcome the error.

Note also that one can add the '-t' option to specify the filesystem type if required. In my experience, Linux is fairly adept at auto-detecting and mounting NTFS, exFAT, and ext2/3/4 filesystems correctly even without the '-t' option. Another useful option is 'noexec', which prevents accidental execution of malicious binaries in the image file.

Unmount partitions and devices

# umount <mount-point>
# losetup -D
# umount /mnt/e01

Tuesday, 3 November 2020

Magnet Weekly CTF writeup - Week 4

We are on to week 4 of the Magnet Weekly CTF Challenge, and the final question for the Android image from week 1.

Animals That Never Forget
Chester likes to be organized with his busy schedule. Global Unique Identifiers change often, just like his schedule but sometimes Chester enjoys phishing. What was the original GUID for his phishing expedition?

Okay, I had absolutely no idea where to start for this week's challenge, so I went ahead with parsing the Android image using the fantastic ALEAPP from Alexis Brignoni for some leads. A friend guessed that it might be related to the Calendar or some scheduling app, as the question mentioned "organized" and "busy schedule".

Looking through the information parsed by ALEAPP, we see something of interest in the Recent Activity related to the Evernote app:


Knowing that Evernote is frequently used as a notes organizer and more, we might be on to something here. So the next step was to extract the app directory for Evernote and take a look at what we have within.

$ tar -xf MUS_Android.tar data/data/com.evernote

Poking around the contents of the app directory, I spied a promising database at data/data/com.evernote/databases/user213777210-1585004951163-Evernote.db, so I opened it with DB Browser for SQLite to have a more detailed look. Within this database we have a table named guid_updates, as well as a note in the notes table with the very suspicious title of "Phishy Phish phish". From here it is straightforward to get the answer we need using a simple SQL statement:
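A rough equivalent from the sqlite3 CLI, for those not using DB Browser (a sketch; check the .schema output for the actual column names in guid_updates before querying):

$ sqlite3 data/data/com.evernote/databases/user213777210-1585004951163-Evernote.db ".schema guid_updates"
$ sqlite3 data/data/com.evernote/databases/user213777210-1585004951163-Evernote.db "SELECT * FROM guid_updates;"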


We can also confirm the contents of the note in the XML file with matching GUID filename:

$ cat data/data/com.evernote/files/user-213777210/notes/c80/c80ab339-7bec-4b33-8537-4f5a5bd3dd25/content.enml 
<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE en-note SYSTEM "http://xml.evernote.com/pub/enml2.dtd"><en-note><div>Esteemed entrenepeur,</div><div><br /></div><div>My name is Chestnut Russman and I am indeed interested in a sourie with you to discuss potential investment opportunities to your fine establishment.</div><div><br /></div><div>A little more about me:</div><ul><li><div>I'm worked on Wall Street for 10 years and have made my money and retired at age 30. </div></li><li><div>I have large investments in Disney, Uber, Tesla, Microsoft, and many others.</div></li><li><div>I am an inventory with over 25 worldwide patents</div></li><li><div>And I own several very "legal" establishments" that make me a plethora of money every day.</div></li></ul><div><br /></div><div>I believe that together, we can make even more money.</div><div><br /></div><div>Attached is my CV.</div><div><br /></div><div>Graciously</div><div><br /></div><div>Chestnut Russman</div><div><br /></div><div>[Insert malware here]</div><div><br /></div></en-note>

Answer: 7605cc68-8ef3-4274-b6c2-4a9d26acabf1

Fun fact: The question title of "Animals That Never Forget" likely refers to the generalization of elephants having incredible memories and is probably a hint for the Evernote app, which has an icon of an elephant.

Tuesday, 27 October 2020

Magnet Weekly CTF writeup - Week 3

And we're on to week 3 of the Magnet Weekly CTF Challenge! This week's question again references the Android image from week 1:

Which exit did the device user pass by that could have been taken for Cargo?

This week's question had me stumped initially, and adding to the difficulty was the three-attempt answer limit. Thankfully, Magnet Forensics was generous enough to give a hint on Cache Up, which pointed players to one of their webinars on mobile artifact comparison.

From the webinar hint, my instinct told me that this had to do with the Pixel equivalent of 'live photos', a.k.a. motion photos, where the phone records and trims up to 3 seconds of video when taking a photo with motion enabled.

So I started looking at the MVIMG*.jpg files in the DCIM folder:

$ ls data/media/0/DCIM/Camera/ | grep MVIMG
MVIMG_20200305_145544.jpg
MVIMG_20200306_151636.jpg
MVIMG_20200307_130221.jpg
MVIMG_20200307_130237.jpg
MVIMG_20200307_130326.jpg
MVIMG_20200307_185225.jpg
MVIMG_20200307_201453.jpg
MVIMG_20200310_133405.jpg

There were 8 motion photos and I needed a way to extract the embedded video within. A quick Google search did not disappoint, and I found a ready-made script by Jerry Peek on Stack Overflow that does exactly what we needed.

#!/bin/bash
# extract-mvimg: Extract .mp4 video and .jpg still image from a Pixel phone
# camera "motion video" file with a name like MVIMG_20191216_153039.jpg
# to make files like IMG_20191216_153039.jpg and IMG_20191216_153039.mp4
#
# Usage: extract-mvimg MVIMG*.jpg [MVIMG*.jpg...]

for srcfile
do
  case "$srcfile" in
  MVIMG_*_*.jpg) ;;
  *)
    echo "extract-mvimg: skipping '$srcfile': not an MVIMG*.jpg file?" 2>&1
    continue
    ;;
  esac

  # Get base filename: strip leading MV and trailing .jpg
  # Example: MVIMG_20191216_153039.jpg becomes IMG_20191216_153039
  basefile=${srcfile#MV}
  basefile=${basefile%.jpg}

  # Get byte offset. Example output: 2983617:ftypmp4
  offset=$(grep -F --byte-offset --only-matching --text ftypmp4 "$srcfile")
  # Strip trailing text. Example output: 2983617
  offset=${offset%:*}

  # If $offset isn't an empty string, create .mp4 file and
  # truncate a copy of input file to make .jpg file.
  if [[ $offset ]]
  then
    dd status=none "if=$srcfile" "of=${basefile}.mp4" bs=$((offset-4)) skip=1
    cp -ip "$srcfile" "${basefile}.jpg" || exit 1
    truncate -s $((offset-4)) "${basefile}.jpg"
  else
    echo "extract-mvimg: can't find ftypmp4 in $srcfile; skipping..." 2>&1
  fi
done

Running the script against the MVIMG*.jpg files earlier and looking through the extracted videos, I noted an interesting frame extracted from MVIMG_20200307_130326.jpg:


The video appears to have captured a signboard on a highway, with the keyword 'Cargo' on it. Unfortunately, the video quality isn't the best (or maybe it's just my screen) and I could not clearly make out what was on the signboard.

Checking the EXIF metadata of the image gives us the following information:

$ exiftool data/media/0/DCIM/Camera/MVIMG_20200307_130326.jpg 
ExifTool Version Number         : 12.00
File Name                       : MVIMG_20200307_130326.jpg
Directory                       : data/media/0/DCIM/Camera
File Modification Date/Time     : 2020:03:07 07:03:28-05:00
File Type                       : JPEG
File Type Extension             : jpg
MIME Type                       : image/jpeg
Make                            : Google
Camera Model Name               : Pixel 3
Modify Date                     : 2020:03:07 13:03:26
Date/Time Original              : 2020:03:07 13:03:26
Create Date                     : 2020:03:07 13:03:26
GPS Version ID                  : 2.2.0.0
GPS Altitude                    : 246.8 m Above Sea Level
GPS Date/Time                   : 2020:03:07 12:03:26Z
GPS Latitude                    : 60 deg 11' 38.70" N
GPS Longitude                   : 11 deg 5' 46.65" E

Looking up the GPS coordinates on Google Maps places us within Gardermoen Airport in Norway, next to a Starbucks: not quite what I expected, since the motion photo clearly showed the device user on the move outdoors.

Refusing to be daunted, I checked the EXIF data of the images sequentially before and after the motion photo of interest and noted that MVIMG_20200307_185225.jpg places the user in Gamle Oslo, Norway. Since the motion photos suggest the device user was on a bus, I used Google Maps to plot directions from Gardermoen Airport to Gamle Oslo and followed the route on Street View. My persistence finally paid off when I found the signboard at 60°10'14.3"N 11°06'13.8"E.


Answer: E16

Fun fact: E16 is actually the route, not the exit, unlike what the question suggests. From Wikipedia: European route E16 is the designation of a main west-east road through Northern Ireland, Scotland, Norway and Sweden.

Tuesday, 20 October 2020

Magnet Weekly CTF writeup - Week 2

Magnet is hosting a weekly DFIR challenge until the end of 2020. Head on over to https://magnetweeklyctf.ctfd.io to sign up if you haven't already done so!


This week's question:

What domain was most recently viewed via an app that has picture-in-picture capability?

This question is based on the Android image we were given in week 1. To start, we have to determine which apps support picture-in-picture (PIP). Per the Android Developers' guide, apps have to declare support for PIP by registering their video activity in the manifest with android:supportsPictureInPicture set to true.

I unpacked the given MUS_Android.tar image and used apkanalyzer from the Android SDK to print out the manifest files from all the Android packages (APKs) in data/app.

$ find data/app -name "base.apk" -print0 | xargs -0 -i apkanalyzer manifest print {} > manifest-all

Stringing a bunch of grep commands together to search for packages with android:supportsPictureInPicture="true" from the printed manifests in the previous step gives us the following 7 out of 79 packages with PIP capability:

$ grep -E "package=|PictureInPicture=\"true\"" manifest-all | grep -B 1 "PictureInPicture" | grep package
    package="com.google.android.apps.maps"
    package="com.facebook.orca"
    package="com.google.android.apps.tachyon"
    package="com.google.android.youtube"
    package="com.android.chrome"
    package="com.google.android.videos"
    package="com.google.android.gms"

Of the above, the most likely candidate to start with is Chrome, as the other apps are not typically used to view arbitrary domains. The Chrome app history is located in the database at data/data/com.android.chrome/app_chrome/Default/History. Opening up the History SQLite database with DB Browser for SQLite, a quick join of the visits and urls tables sorted by visit_time gives us:
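An equivalent query from the sqlite3 CLI would be something like this (Chrome records visit_time as microseconds since 1601-01-01):

$ sqlite3 History "SELECT datetime(v.visit_time/1000000 - 11644473600, 'unixepoch') AS visited, u.url FROM visits v JOIN urls u ON u.id = v.url ORDER BY v.visit_time DESC LIMIT 5;"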


Answer: malliesae.com

Friday, 16 October 2020

Installing Android SDK command-line tools on Linux

Recently I had to use some tools from the Android SDK on Linux, and it was surprisingly not as straightforward as I thought it would be. Perhaps Google was trying to convince folks to use Android Studio...

Anyways, the first step is to download the command-line tools from Android Developers. (Click on "Download Options" for more options.) Grab the zip file for the command-line tools and unzip it to a folder of your choice. You should see a tools folder, with the necessary binaries in the bin subfolder.

If you get a "Warning: Could not create settings" error when trying to run tools/bin/sdkmanager, that's because the tools directory hierarchy changed starting with Android SDK Command-line Tools 1.0.0 (6200805). (Refer to the excellent answer by Jing Li on Stack Overflow here.) The solution, therefore, is to move the unzipped tools folder into a cmdline-tools subfolder and add cmdline-tools/tools/bin to your path.

Assuming the sdkmanager is located at /home/<username>/cmdline-tools/tools/bin/sdkmanager, one would add the following to their bash profile (~/.bashrc):

export PATH=$PATH:/home/<username>/cmdline-tools/tools/bin

Exit and restart the terminal, and the Android SDK command-line tools should work now. Some useful commands:

$ sdkmanager --help
$ sdkmanager --list
$ sdkmanager --install <package>

Note: to use certain tools such as apkanalyzer, one would also need to install the necessary packages with sdkmanager.

Tuesday, 13 October 2020

Magnet Weekly CTF writeup - Week 1

Magnet Forensics has recently launched a weekly capture-the-flag (CTF) challenge that will run through the last quarter of 2020! Head on over to their blog for more details on the challenge and how to sign up.

For challenge one, we are provided with an Android image: a tar file containing what appears to be a filesystem extraction of an Android phone. The challenge question was:

What time was the file that maps names to IP's recently accessed?
(Please answer in this format in UTC: mm/dd/yyyy HH:MM:SS)

I first had to figure out which file maps names to IP addresses on Android. According to this answer on Stack Overflow, it is no different than on a standard Linux system, i.e. the /etc/hosts file. However, I could not find an /etc/hosts file in the given Android tar image.

Running a search for an "etc/hosts" file in the tarball pointed me to data/adb/modules/hosts/system/etc/hosts.

$ tar -tvf MUS_Android.tar | grep "etc/hosts"
-rw-r--r-- 0/0                85 2020-03-05 05:50 data/adb/modules/hosts/system/etc/hosts

A quick check of the contents of the file after extracting confirms it to be the one we are after.

$ cat data/adb/modules/hosts/system/etc/hosts
127.0.0.1       localhost
::1             ip6-localhost
184.171.152.175 malliesae.com

Based on the above output, the file was last modified on 5th March 2020 at 05:50 UTC but we also need the seconds for the answer. A quick search on the internet indicates that the --full-time option is available for both the ls and tar commands, giving us timestamp information in ISO format.

So listing the specific file in our tarball with the --full-time option gives us:

$ tar --full-time -tvf MUS_Android.tar 'data/adb/modules/hosts/system/etc/hosts'
-rw-r--r-- 0/0              85 2020-03-05 05:50:18 data/adb/modules/hosts/system/etc/hosts

While the challenge technically asked for the last accessed time rather than last modified, I could not find any other timestamp. Checking with 7-Zip also revealed only a single modified timestamp.


Answer: 03/05/2020 05:50:18

Friday, 15 May 2020

Installing Volatility 2.x on Windows 10

Quick documentation on getting Volatility 2.x set up on Windows 10.

Volatility Foundation (https://www.volatilityfoundation.org/) offers pre-compiled binaries for Volatility 2.6 on Windows, but the executable was last updated in 2016 and is missing many of the newer Windows memory profiles. A Google search revealed plenty of folks having trouble getting Volatility to work on Windows 10 from source, and the closest fix I found was Mike Cary's post on installing Volatility on Windows. There were still a bunch of errors when I followed his steps, so here's a quick write-up of what I did to get Volatility 2.6.1 working.

Step 1: Download and install Python 2.7 from https://www.python.org/downloads/.

Step 2: Download and install Microsoft Visual C++ Compiler for Python 2.7 from https://www.microsoft.com/en-us/download/details.aspx?id=44266.

Step 3: Install Volatility 2.6.1 dependencies per https://github.com/volatilityfoundation/volatility/wiki/Installation:
  • Distorm3: install version 3.3.4, as newer versions don't seem to support Python 2 or work with Volatility 2.
    pip install distorm3==3.3.4
  • Yara: I tested version 3.8.1, which works. You can also try other versions, but note that there was an error about a missing 'stdbool.h' with version 4.0.0.
    pip install yara-python==3.8.1
  • PyCrypto: no major issues encountered with the current versions of pycrypto 2.6.1 and pycryptodome 3.9.7 under Python 2.
    pip install pycrypto pycryptodome
  • OpenPyxl: again, no issues with Python 2 support for the current version 2.6.4.
    pip install openpyxl
  • ujson: it seems they have stopped supporting Python 2 in newer releases, so I installed version 1.35.
    pip install ujson==1.35

Step 4: Finally, download (or git clone) Volatility 2 from https://github.com/volatilityfoundation/volatility.


Now you should have a working copy of Volatility 2 on Windows 10 with the latest profiles included. Refer to the Volatility Usage wiki for additional plugins (e.g. the memory baseline plugin by csababarta).

Last but not least, with end of support for Python 2, do consider switching to Volatility 3!
