Forensics: Beverages Aside, A Look at Incident Response Tools

There Was COFEE

In November, Microsoft’s forensics tool COFEE (Computer Online Forensic Evidence Extractor) was leaked on torrents for download. The news coverage was much ado about nothing, as many free tools already out there exceed COFEE in features and functionality. However, that did not stop statements such as “now that COFEE has leaked, hackers can reverse engineer to see what it does.” Well, I can save them the time and tell them: it launches OS commands and Sysinternals tools to collect information, using a simple method that law enforcement can easily launch from a thumb drive. I also hesitate to call it Microsoft’s tool, as I believe more of its development comes from the National White Collar Crime Center (NW3C.org) than from Microsoft. OK, let’s move on to DECAF.

Then There Was DECAF

Just recently, with the COFEE hype behind us, a tool called DECAF was released to combat the use of COFEE. It is a VB.NET application that detects the use of COFEE and reacts in ways configurable by the user, such as shutting down the system, clearing event logs, or disabling the network, USB, CD-ROM, and more. The authors of DECAF shared my distaste for COFEE and its hype, and though the press dubbed them hackers, they informed me that they are developers with a passion for security, forensics, privacy, and the free flow of information online.

Let’s Talk Tools

I want to put aside the media hoopla over COFEE and DECAF and discuss some great forensic analysis tools that deserve attention. I will focus on volatile data collection (grabbing important information from a live, running system), but many of the tools mentioned can be used in offline analysis as well. If you are familiar with digital forensics, you have most likely used these tools in many cases; if you are new to this area, I hope this provides some groundwork for trying them out.

The List

Before getting into it, I want to share this Excel spreadsheet, which contains a good number of tools that can be used in the forensic analysis process. Any prices listed have either been found online or are estimates from VARs, so please check with the specific vendors for exact pricing. The tools discussed throughout the article are in this spreadsheet along with links to their respective websites. Also note that this list is Windows focused and by no means complete, but I feel it’s a good start for anyone interested in forensic analysis.

Don’t use a Sledgehammer to Hang a Picture – Use this comprehensive list of tools for reference

One last note before discussing the tools: it is important to know your situation and choose the right tool for the task at hand. You may grab the Helix CD, test it, and become so familiar with it that it becomes your tool of choice; but know that it may not be suitable for all situations, and you should keep as many options as possible and stay familiar with what is available so you can be prepared with the right instruments. For instance, inserting the Helix CD may autorun the GUI menu system, and clicking through its menus to run acquisition tools generates many changes to the contents of memory, whereas a method that immediately runs a memory acquisition tool has less of an impact.

Frameworks

Let’s start by talking about what I refer to as forensics frameworks. These are programs or scripts that act as wrappers around the commands used to collect data. They organize a collection of common tools, handle the output of those tools, verify the tools are trusted, and provide some basic reporting. The Helix collection from e-fense includes several frameworks to choose from, including the Incident Response Collection Report (IRCR) by John McLeod, the Windows Forensic Toolchest (WFT) by Foolmoon Software, and more. Another popular framework is the Forensic Server Project (FSP) by Harlan Carvey, author of Windows Forensic Analysis (Syngress Publishing) and the Windows IR blog; it uses a client (FRUC) that runs the collection of tools and sends the output to a listening server (FSU).

I’ve also written a framework that collates various features from the tool sets mentioned above along with some ideas of my own. The common theme in these, as in COFEE, is that they collect data using a suite of tools, including commands available with the OS (such as netstat, net, systeminfo), Sysinternals utilities (such as pslist, listdlls, handle), and well-known, freely available utilities (such as fport, autorunsc, pmdump, etc.).
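
To make the idea concrete, here is a minimal sketch of such a wrapper as a batch file run from a thumb drive. The tool names are real, but the drive letter and paths are assumptions, and a real framework layers tool verification, output hashing, and reporting on top of this:

@echo off
rem Minimal volatile data collection sketch (assumes the thumb drive is E:)
set OUTDIR=E:\output\%COMPUTERNAME%
mkdir "%OUTDIR%"
echo Collection started: %DATE% %TIME% > "%OUTDIR%\collection_log.txt"
rem Commands available with the OS
netstat -ano > "%OUTDIR%\netstat.txt"
systeminfo > "%OUTDIR%\systeminfo.txt"
rem Trusted Sysinternals copies carried on the drive
E:\tools\pslist.exe > "%OUTDIR%\pslist.txt"
E:\tools\listdlls.exe > "%OUTDIR%\listdlls.txt"
echo Collection finished: %DATE% %TIME% >> "%OUTDIR%\collection_log.txt"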

Dealing with Memory

Any actions on a system, whether generated by the operating system or the user, constantly change the contents of memory. Thus, if the first thing you do on a live system is run tools, you will significantly modify the memory contents. A good, detailed primer on physical memory analysis by Mariusz Burdach can be found here. It is also important to note that hardware methods exist to collect the contents of memory without interacting with the operating system. The tools I list in the spreadsheet for this purpose are software based, so their execution and the changes they make to memory will be present in the image that is captured.

Acquisition

To acquire an image containing the contents of memory, start by looking at the following two tools: WinDD by Matthieu Suiche and MDD by ManTech International. Both are CLI tools that can be incorporated into your preferred framework and used to create an image of physical memory before running additional tools. WinDD can create a raw dump or a crash dump file, the latter of which can be analyzed with standard debugging tools like WinDbg from Microsoft. A commercial tool with a nice price point, FastDump Pro from HBGary, acquires memory and includes probing features for malware analysis. The folks at HBGary state that FastDump has a lighter footprint than other tools and acquires the contents of all physical memory (a community version is available which works on 32-bit systems only).
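
As a quick, hedged example, MDD takes an output path with -o (the destination path below is an assumption; check the flags against the version you download):

mdd.exe -o E:\output\memdump.dd

Running the acquisition tool first, before anything else in your framework, keeps its footprint in the captured image as small as possible.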

Analysis

Memory analysis has come a long way since running “strings” against an image created from a memory dump. This presentation notes how strings can produce 50 to 80 megabytes of unusable text from a 512MB memory dump. One exciting project, founded by Aaron Walters, is the Volatility Framework, an amazing collection of tools written in Python and used for analyzing memory dumps. With it, you can extract very specific data from the memory dump files obtained using the tools mentioned earlier (MDD, WinDD, etc.). The screenshot shows how Volatility pulls the process list from a memory dump called mal.dmp. Notice the last process on the list is actually MDD.

Volatility Example

Volatility can extract the following information (a sample invocation follows the list):

  • Image date and time
  • Running processes
  • Open network sockets
  • Open network connections
  • DLLs loaded for each process
  • Open files for each process
  • Open registry handles for each process
  • A process’ addressable memory
  • OS kernel modules
  • Mapping physical offsets to virtual addresses (strings to process)
  • Virtual Address Descriptor information
  • Scanning examples: processes, threads, sockets, connections, modules
  • Extraction of executables from memory samples
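
As an illustration, pulling a few of these from the mal.dmp image above might look like the following under Volatility 1.x (the module names come from the framework’s standard command set, but the exact invocation can vary between releases):

python volatility pslist -f mal.dmp
python volatility connections -f mal.dmp
python volatility dlllist -f mal.dmp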

The framework is open source, written entirely in Python, and modular through its use of plugins. Michael Hale Ligh has produced some great plugins, including malfind2, which helps detect hidden/injected code in usermode processes. Here are some more plugins, and here is a plugin that can help find TrueCrypt passphrases and suspicious processes.

Registry

You wouldn’t spend time poking around in the registry during live analysis (many CLI tools, such as autorunsc.exe, will automatically pull pertinent information from the registry), but I wanted to include this section to talk about another great tool. This one is also by Harlan Carvey and is called RegRipper. RegRipper is intended for use against offline registry hive files to extract information helpful to your analysis. For example, you can extract data from the registry to determine which USB disks were previously used on the system or which wireless networks were joined. The examples are numerous, and the use of plugins to extract particular keys and values makes the tool very extensible. Harlan and many others have written various plugins for RegRipper.
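
RegRipper also includes a command line companion, rip, for running individual plugins. As a hedged sketch (the hive path is an assumption, and available plugin names depend on your plugin set), listing the plugins and then pulling USB device history from an offline SYSTEM hive might look like:

rip.exe -l
rip.exe -r C:\case\SYSTEM -p usbstor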

F-Response


A tool that deserves a section of its own is F-Response, which comes in several flavors (Enterprise, Consultant, Field Kit, and Tactical Editions). In a nutshell, F-Response provides a client executable that is launched on the target machine and then connected to using Microsoft’s iSCSI Initiator, providing read-only access to physical drives across the network. On 32-bit Windows systems, physical memory can be captured as well. This is very beneficial in that you can run any tools which analyze data on the hard drive remotely and in read-only mode. This video demonstrates how a target was inspected using F-Response and RegRipper.

Disk Imaging

There are many options for disk imaging, both live and offline, and several popular commercial suites are available (see the spreadsheet above for names and pricing).

Choose the platforms that suit you, as each package has its benefits; here, however, I will go over a method that uses the freely available dd.exe with netcat. Yes, it is free, but this option may not suit every situation, such as attempting to image large disks within a limited time frame.

You need a computer which will have netcat listening and retrieve the disk image. On this machine, run netcat with the following options:

nc.exe -l -p 8888 -w 5 > diskimage.dd

The -l puts netcat in listen mode, -p specifies the port number (8888 in the example) and -w specifies the timeout for connects and final net reads. Be sure that if this host has a firewall enabled, the port you specify is open for incoming connections.

On the workstation which you are taking a disk image from, you need to have dd.exe and nc.exe, which can be stored on a CD (such as Helix) or a USB thumb drive for use. If you are imaging an entire disk, you need the physical drive number for the dd command. In this example, we are imaging the OS drive, which is physical drive 0, and sending to a listening netcat instance created in the previous step, which has an IP address of 192.168.100.25:

dd if=\\.\PHYSICALDRIVE0 conv=noerror bs=1024 | nc.exe 192.168.100.25 8888

The if parameter specifies the input file to be imaged, in this case PHYSICALDRIVE0. The conv=noerror parameter tells dd to continue processing after read errors, and bs=1024 specifies a block size of 1024 bytes (1 kilobyte). Since no output file is specified (of), we pipe the data to netcat, which sends it to the IP address listening on port 8888.

Evidence Handling

An excerpt from Government Computer News notes that because digital data is easily altered, and because it is difficult to distinguish original data from copies, extracting, securing, and documenting digital evidence requires special attention. The guidelines lay out the following general principles for handling digital evidence:

  1. The process of collecting digital evidence should not alter it or raise questions about its integrity.
  2. Examination of digital evidence should be done by trained personnel.
  3. All actions in processing the evidence should be documented and preserved for review.
  4. Examination should be conducted on a copy of the original evidence. The original should be preserved intact.


The numbering above is not meant to signify priority; it is simply for reference in discussing each point. Starting with number one, I’ve been a part of many discussions about which tools are admissible in a court of law, and the answer is that evidence collected in a reliable manner and obtained legally is admissible. The reliable manner is where the tool becomes important. For example, if you are a hobbyist developer and wrote a tool to list processes with Visual Studio, the accuracy of the process list you’ve collected can be challenged. If you used pslist.exe from Sysinternals, verified the MD5 hash of the executable, and properly tagged, timestamped, labeled, and handled its output, you would have a better case in proving your process list is accurate and reliable.
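
For example, a hedged illustration of verifying a tool before use with md5deep, one freely available hashing utility (comparing the result against a vendor-published or previously recorded value is the assumed next step):

md5deep pslist.exe

The resulting hash should be logged in your case notes alongside the known-good value it was compared against.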

Point number two specifies that trained personnel should be responsible for evidence examination. The point here is that systems administration or related operating system expertise is not equivalent to being trained in forensic examination. Additionally, such internal IT resources may have difficulty withstanding questioning and cross-examination in a court of law. Someone experienced specifically in digital forensics is better able to handle evidence and participate in the litigation.

Points three and four are related and involve documentation and the processing and handling of the evidence. Every step taken in the analysis must be meticulously documented and timestamped, and you should have a standard, repeatable process for this. A UK-based firm offers an editor-style application called Forensic CaseNotes to assist in documenting and tracking your case notes. In addition to careful documentation, examination and analysis should be performed on duplicates. It is not a dramatic step to take the original hard disk, plus one additional hard disk containing an untouched block-by-block copy, and seal them in plastic bags marked with the time, date, identification numbers, and who collected the drives. A third hard disk with a block-by-block copy can then be used for further examination.

Proof of preservation can be maintained with MD5 hashing. In the exercise where we acquired an image of the hard disk, we can obtain an MD5 hash of the image file created and log it in our case notes. If that image is tampered with, the MD5 hash will change, making the evidence unreliable and subject to dismissal. Output logs from the various tools run during an analysis should be hashed as well.
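
Continuing the dd example above, a hedged sketch of recording and later re-checking the image hash with md5deep (the file name comes from the earlier example; any trusted MD5 utility works):

md5deep diskimage.dd > diskimage.dd.md5
md5deep diskimage.dd

The first command, run at acquisition time, records the hash to be logged in the case notes; re-running the second command later should produce the identical value.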

Conclusion

There is no conclusion to learning about digital forensics, as the world of analysis techniques evolves and continuously changes. New operating system releases (Windows 7 and 2008 R2), progress in anti-forensics technologies, and the growing sophistication of malware and rootkits continue to challenge forensic investigators. My purpose for this primer is to cut through the sensationalism of COFEE being released, and of DECAF to counter it, and to take a look at some of the great forensic tools that are out there and continue to grow.

Updates

The intention of this article was to reflect on some of the great tools that have been around and growing since before any word of COFEE. I feel it’s important to understand what is available and how it works, but one thing I did not touch on is that the tools are just a subset of the overall process, and it is the process you use in your investigation that is critical to your analysis. Harlan provides some good examples of this in his latest blog entry.
